Likelihood Asymptotics in Nonregular Settings: A Review with Emphasis on the Likelihood Ratio

Alessandra R. Brazzale and Valentina Mameli

arXiv:2206.15178 (https://export.arxiv.org/pdf/2206.15178v3.pdf)

Abstract. This paper reviews the most common situations where one or more regularity conditions which underlie classical likelihood-based parametric inference fail. We identify three main classes of problems: boundary problems, indeterminate parameter problems, which include non-identifiable parameters and singular information matrices, and change-point problems. The review focuses on the large-sample properties of the likelihood ratio statistic. We emphasize analytical solutions and acknowledge software implementations where available. We furthermore give summary insight into the possible tools to derive the key results. Other approaches to hypothesis testing and connections to estimation are listed in the annotated bibliography of the Supplementary Material.
Likelihood Asymptotics in Nonregular Settings: A Review with Emphasis on the Likelihood Ratio

April 19, 2023

Alessandra R. Brazzale and Valentina Mameli

Key words and phrases: boundary point, change-point, finite mixture, first order theory, identifiability, large-sample inference, singular information.

Introduction

The likelihood ratio or Wilks statistic is the oldest of the three classical approaches of likelihood-based inference to hypothesis testing, which also include the asymptotically equivalent Wald and score statistics. Modern quests still advocate Wilks' test to identify new rare events and/or to detect a sudden change in a data generation process. It is commonly believed that under the null hypothesis its finite-sample distribution approaches the chi-squared distribution as the sample size goes to infinity. However, the application of Wilks' (1938) theorem to calculate measures of significance, and of subsequent and related results of large-sample asymptotic theory, requires a number of regularity conditions which are often not met.
Though there has always been awareness of nonregular problems, practitioners especially may be less familiar with the resulting limiting distributions and how these are derived. The aim of this paper is to resume a seemingly bygone statistical problem, whose inferential issues are highly relevant for a number of modern applications. We present an overview of the situations in which Wilks' (1938) theorem can fail and of how to construct valid inferences. We focus on the likelihood ratio statistic and its limiting distribution, motivated by its widespread use for hypothesis testing, model selection and other related purposes. Analogies with alternative test statistics and/or nonparametric and semiparametric models, within the frequentist but also the Bayesian paradigm, are listed in the annotated bibliography of the Supplementary Material. Asymptotic theory is an essential part of statistical methodology. It provides, first of all, approximate answers where exact ones are unavailable. Beyond this, it serves to check whether a proposed inferential solution provides a sensible answer when the amount of information in the data increases without limit. Given the tremendous advances in computer age statistical inference (Efron and Hastie, 2016), one could be tempted to by-pass the often rather demanding algebraic derivations of asymptotic approximations. Gaining insight into what happens to the limiting distribution of likelihood-based test statistics when one or more regularity conditions fail is central to deciding whether, and to which extent, to rely upon simulation. The following simple example makes the point. Example 1.1 (Testing for homogeneity in a von Mises mixture). Suppose we observe a random sample y_1, . . . , y_n from the mixture model (1 − p) f(y_i; 0, κ) + p f(y_i; µ, κ),  (1.1) where 0 ≤ p ≤ 1 is the mixing proportion.
Furthermore, f(y_i; µ, κ) denotes the von Mises distribution with mean direction |µ| ≤ π and concentration parameter κ ≥ 0. Fu et al. (2008) prove that the asymptotic null distribution of the likelihood ratio statistic for testing the hypothesis p = 0 is the squared supremum of a truncated Gaussian process. The quantiles of the process can in principle be approximated to desirable precision by simulation, this way overcoming the algebraic difficulties of the exact solution. However, the same authors also show that if a suitable penalisation term is used, the distribution of the corresponding modified likelihood ratio statistic converges to the simple χ²_1 distribution for n → ∞. This is wholly different from what happens in the Gaussian case. If the component densities f(y_i; µ, κ) in (1.1) represent normal distributions with unknown mean µ ∈ R and variance κ > 0, the distribution of the likelihood ratio statistic for testing model homogeneity diverges to infinity unless suitable constraints are imposed (Chen and Chen, 2003). This is because normal mixtures with unknown variance are not identifiable, unlike the von Mises mixture model (1.1); see Section 5.2.2. Trying to simulate the limiting distribution in this case would lead to totally misleading results, as the likelihood ratio tends to infinity with probability one. The finite-sample distribution of the likelihood ratio statistic, however, can be approximated using the parametric bootstrap as in McLachlan (1987). The required regularity conditions, which are typically of Cramér type (Cramér, 1946, §33.3), include, among others, differentiability with respect to the parameters of the underlying joint probability or density function up to a suitable order and finiteness of the Fisher information matrix. Models which satisfy these requirements are said to be 'regular' and cover a wide range of applications. However, there are many important cases where one or more conditions break down.
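The parametric bootstrap recipe just mentioned is simple to sketch in code. The snippet below is our own minimal illustration on a deliberately simple regular model (testing the rate of an exponential distribution, not the mixture of Example 1.1); the function names are ours, and in a regular model like this the bootstrap distribution merely mimics the χ²_1 limit. Its appeal in nonregular cases is that the same recipe applies when the limit is intractable.

```python
import numpy as np

def lr_stat(y):
    """W = 2{l(lambda_hat) - l(1)} for H0: rate = 1 in an exponential
    model, where l(lambda) = n log(lambda) - lambda * sum(y) and the
    MLE is lambda_hat = n / sum(y)."""
    n, s = len(y), y.sum()
    lam_hat = n / s
    return 2 * ((n * np.log(lam_hat) - lam_hat * s) - (-s))

def bootstrap_pvalue(y, B=1000, seed=None):
    """Parametric bootstrap of the null distribution of W, in the spirit
    of McLachlan (1987): simulate B samples from the fitted null model
    (rate = 1) and compare the observed statistic with the simulated ones."""
    rng = np.random.default_rng(seed)
    w_obs = lr_stat(y)
    w_null = np.array([lr_stat(rng.exponential(1.0, size=len(y)))
                       for _ in range(B)])
    return (w_null >= w_obs).mean()
```

The p-value is simply the fraction of bootstrap statistics at least as large as the observed one; no knowledge of the limiting distribution is needed.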
A highly cited review of nonregular problems is Smith (1989); see also the discussion paper by Cheng and Traylor (1995). Further examples can be found in Barndorff-Nielsen and Cox (1994, §3.8), Davison (2003, §4.6) and Cox (2006, Chapter 7). A classical example, which is traditionally used to demonstrate the failure of parametric likelihood theory, is Neyman and Scott's (1948) paradox. Example 1.2 (Growing number of parameters). Let (X_1, Y_1), . . . , (X_n, Y_n) denote n independent pairs of mutually independent and normally distributed random variables such that for each i = 1, . . . , n, X_i and Y_i have mean μ_i and common variance σ². The maximum likelihood estimator of σ² is σ̂²_n = (1/2n) Σ_{i=1}^n {(X_i − μ̂_i)² + (Y_i − μ̂_i)²}, with μ̂_i = (X_i + Y_i)/2. Straightforward calculation shows that, for n → ∞, σ̂²_n converges in probability to σ²/2 instead of the true value σ². The reason is that only a finite number of observations, in fact two, is available for estimating each of the unknown means μ_i. This violates a major requirement which underlies the consistency of the maximum likelihood estimator, namely that the uncertainty of all parameter estimates goes to zero. Example 1.2 is an early formulation of an incidental parameters problem. Other examples of this type are reviewed in Lancaster (2000), who also discusses the relevance of the Neyman-Scott paradox in statistics and economics. A recent contribution is Feng et al. (2012). Non-regularity may also arise when the parameter value under the null hypothesis is not an interior point of the parameter space, or when some of the parameters disappear under the null hypothesis. The following simple example shows what may happen when the support of the distribution depends on the parameter θ. Example 1.3 (Translated exponential distribution). Let X_1, . . . , X_n be an independent and identically distributed sample from an exponential distribution with rate equal to 1.
Consider the translation Y_i = X_i + θ, with θ > 0 unknown. Given the minimum observed value Y_(1), the likelihood ratio statistic for testing the hypothesis that θ = θ_0 is W(θ_0) = 2n(Y_(1) − θ_0). Straightforward calculation proves that under the null hypothesis W(θ_0) has a χ²_2 distribution, not the classical χ²_1 limiting distribution. Furthermore, the maximum likelihood estimator of θ is no longer asymptotically normal. Indeed, it is easy to show that Y_(1) − θ follows exactly an exponential distribution with rate n. The left panel of Figure 1 shows the χ²_1 quantile plot of the likelihood ratio statistic observed in 10,000 exponential samples of size n = 50 generated with rate equal to 1 and translated by θ_0 = 3. The finite-sample distribution of W(3) is visibly far from the theoretical χ²_1 approximation represented by the dotted diagonal line. The right panel reports the empirical distribution of the likelihood ratio statistic with the χ²_2 density superimposed (solid line). These situations are not mere mathematical artifacts, but include many models of practical interest, such as mixture distributions and change-point problems, in genetics, reliability, econometrics, and many other fields. There is, indeed, a rich literature on this topic. The majority of existing results consider the failure of one condition at a time, but failure of two assumptions simultaneously has also received attention. In the absence of a unifying theory, most of the individual problems have been treated on their own. After careful consideration, we decided to group them into three broad classes: boundary problems, indeterminate parameter problems and change-point problems. We furthermore restrict our attention to the key results. Figure 1 depicts a personal selection of these.
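The χ²_2 limit of Example 1.3 is easy to check by simulation. The short script below is our own illustration (variable names are arbitrary); it regenerates the experiment behind Figure 1 and compares the empirical moments of W(θ_0) with those of the χ²_2 distribution, which has mean 2 and variance 4.

```python
import numpy as np

rng = np.random.default_rng(42)
n, theta0, reps = 50, 3.0, 20000

# Y_i = X_i + theta0 with X_i ~ Exp(1); the MLE of theta is Y_(1),
# and the likelihood ratio statistic is W(theta0) = 2 n (Y_(1) - theta0).
y_min = theta0 + rng.exponential(1.0, size=(reps, n)).min(axis=1)
w = 2 * n * (y_min - theta0)

# Since Y_(1) - theta0 ~ Exp(n) exactly, W is exactly chi-squared with
# 2 degrees of freedom, whatever the sample size n.
print(w.mean(), w.var())
```

The printed moments should sit close to 2 and 4; a χ²_1 reference, with mean 1, would be visibly wrong, exactly as in the quantile plot of Figure 1.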
The corresponding prototype derivations are provided in Appendix A, while Appendix B of the Supplementary Material lists further contributions, such as analogies with alternative test statistics and/or nonparametric and semiparametric models. The paper is organised as follows. First order parametric inference based on the likelihood function of a regular model is reviewed in Section 2, together with the conditions upon which it is based. Section 3 treats the first nonregular setting and embraces, in particular, testing for a value of the parameter which lies on the boundary of the parameter space. Section 4 concerns models where one part of the parameter vanishes when the remaining part is set to a particular value. The best-studied indeterminate parameter problems are finite mixture models. Given their widespread use in statistical practice, and their closeness to boundary problems, we consider them separately in Section 5. The third broad class of nonregular models, that is, change-point problems, is reviewed in Section 6. Most articles investigate the consequences of the failure of one regularity condition at a time; mixture distributions and change-point problems deserve special attention as they represent situations where two conditions fail simultaneously. Section 7 reviews cases which do not fit into the above three broad model classes, but still fall under the big umbrella of nonregular problems. These include, among others, shape constrained inference, a genre of nonparametric problem which leads to highly nonregular models. Despite the many remarkable theoretical developments in likelihood-based asymptotic theory for nonregular parametric models, one may wonder why the corresponding results are little known, especially among practitioners. We believe there are at least two reasons. The first is that the results are highly scattered, in time and scope, which makes it difficult to get the general picture.
The second reason is that the limiting distributions are often fairly complex in their derivation and implementation. Section 8 reviews the few software implementations we are aware of. The paper then closes with the short summary discussion of Section 9.

2 Likelihood Asymptotics

2.1 First order theory

2.1.1 General notation.

Consider a parametric statistical model F = {f(y; θ), Θ, Y}, where y = (y_1, . . . , y_n) are n observations from the d-dimensional random variable Y = (Y_1, . . . , Y_n), d ≥ 1, with probability density or mass function f(y; θ) whose support is Y ⊆ R^d. Furthermore, the p-dimensional parameter θ takes values in a subset Θ ⊆ R^p, p ≥ 1. Throughout the paper we will treat y as an independent and identically distributed random sample unless stated otherwise. Furthermore, there may be situations where the model is specified by a function different from f(y; θ), such as the cumulative distribution function F(y; θ). Let L(θ) = L(θ; y) ∝ f(y; θ) and l(θ) = log L(θ) denote the likelihood and the log-likelihood functions, respectively. The large interest in this inferential tool is motivated by the idea that L(θ) will be larger for values of θ near the true value θ_0 of the distribution which generated the data. The maximum likelihood estimate (MLE) θ̂ of θ is the value of θ which maximises L(θ) or, equivalently, l(θ). That is, it is the value to which the data assign maximum evidence. Under mild regularity conditions on the log-likelihood function, to be discussed in Section 2.2, θ̂ solves the score equation u(θ) = 0, where u(θ) = ∂l(θ)/∂θ is the score function. We furthermore define the observed information function j(θ) = −∂²l(θ)/∂θ∂θ⊤, where θ⊤ denotes transposition of θ, and the expected or Fisher information i(θ) = E[j(θ; Y)].

2.1.2 No nuisance parameter.
The three classical likelihood-based statistics for testing θ = θ_0 are the standardized MLE, (θ̂ − θ_0)⊤ j(θ̂)(θ̂ − θ_0), the score statistic, u(θ_0)⊤ j(θ̂)⁻¹ u(θ_0), and the likelihood ratio, W(θ_0) = 2{l(θ̂) − l(θ_0)}, where the observed information j(θ̂) is at times replaced by the Fisher information i(θ̂). These statistics are also known under the names of Wald's, Rao's and Wilks' tests, respectively. If the parametric model is regular, the finite-sample null distribution of the above three statistics converges to a χ²_p distribution to the order O(n⁻¹) as n → ∞. For θ scalar, inference may be based on the corresponding signed versions, that is, on the signed Wald statistic, (θ̂ − θ_0) j(θ̂)^{1/2}, the signed score statistic, u(θ_0) j(θ_0)^{−1/2}, and the likelihood root, r(θ_0) = sign(θ̂ − θ_0)[2{l(θ̂) − l(θ_0)}]^{1/2}, whose finite-sample distributions converge to the standard normal distribution to the order O(n^{−1/2}).

Nuisance parameters.

Suppose now that the parameter θ = (ψ, λ) ∈ Ψ × Λ is partitioned into a p_0-dimensional parameter of interest, ψ ∈ Ψ ⊆ R^{p_0}, and a vector of nuisance parameters λ ∈ Λ ⊆ R^{p−p_0} of dimension p − p_0. Large-sample inference for ψ is commonly based on the profile log-likelihood function l_p(ψ) = sup_{λ∈Λ} l(ψ, λ), which maximises the log-likelihood l(ψ, λ) with respect to λ for fixed ψ. The profile likelihood ratio statistic for testing ψ ∈ Ψ_0 is W_p(ψ_0) = 2{sup_{ψ∈Ψ} l_p(ψ) − sup_{ψ∈Ψ_0} l_p(ψ)}, where Ψ_0 ⊂ Ψ is the parameter space specified under the null hypothesis. If the null hypothesis is ψ = ψ_0, the finite-sample distribution of W_p(ψ_0) converges to the χ²_{p_0} distribution to the order O(n⁻¹) for n → ∞. If there exists a closed-form expression for the constrained maximum likelihood estimate λ̂_ψ of λ for given ψ, the profile log-likelihood function may be written as l_p(ψ) = sup_{λ∈Λ} l(ψ, λ) = l(ψ, λ̂_ψ).
(2.1)

A typical situation where λ̂_ψ is not available in closed form is when the nuisance parameter λ vanishes under the null hypothesis, as will be addressed in Section 4.2. If (2.1) holds, we may define the profile Wald, score and likelihood ratio statistics for testing ψ = ψ_0 as in Section 2.1.2, but now in terms of the profile log-likelihood l_p(ψ), with u_p(ψ) = ∂l_p(ψ)/∂ψ and j_p(ψ) = −∂²l_p(ψ)/∂ψ∂ψ⊤ being the profile score and profile observed information functions. The asymptotic null distribution of these statistics is a χ²_{p_0} distribution up to the order O(n⁻¹). If ψ is scalar, the distributions of the corresponding signed versions, (ψ̂ − ψ_0) j_p(ψ̂)^{1/2}, u_p(ψ_0) j_p(ψ_0)^{−1/2}, and r_p(ψ_0) = sign(ψ̂ − ψ_0)[2{l_p(ψ̂) − l_p(ψ_0)}]^{1/2},  (2.2) may be approximated by standard normal distributions up to the order O(n^{−1/2}).

2.2 Regularity conditions

Definition.

The first step in the derivation of the large-sample approximations and statistics of Section 2.1 is typically a Taylor series expansion of the log-likelihood function l(θ), or of quantities derived thereof, around the maximum likelihood estimate θ̂. We illustrate this by considering the derivation of the asymptotic distribution of the likelihood ratio statistic W(θ) = 2{l(θ̂) − l(θ)} for the scalar parameter case. Example 2.1 (Asymptotic distribution of the likelihood ratio). Let p = 1 and l_m = l_m(θ) = d^m l(θ)/dθ^m be the derivative of order m = 2, 3, . . . of l(θ), the log-likelihood function for θ in a regular parametric model. Recall that −l_2(θ; y) = j(θ) represents the observed information, while E[−l_2(θ; Y)] = i(θ) is the expected Fisher information. Taylor series expansion of l(θ) around θ̂ yields l(θ) = l(θ̂) − (1/2) j(θ̂)(θ̂ − θ)² − (1/6)(θ̂ − θ)³ l_3(θ̃), where θ̃ is such that |θ̃ − θ̂| < |θ − θ̂|. Suitable rearrangement of the terms leads to W(θ) = j(θ̂)(θ̂ − θ)² + (1/3) l_3(θ̃)(θ̂ − θ)³ = {j(θ̂)/i(θ)} i(θ)(θ̂ − θ)² + (1/3) l_3(θ̃)(θ̂ − θ)³.
(2.3)

Now, under suitable regularity conditions on l(θ) and its first three derivatives, θ̂ →p θ (Serfling, 1980, Statement (i) of Theorem on p. 145), j(θ̂)/i(θ) →p 1 and i(θ)^{1/2}(θ̂ − θ) →d Z, where Z has the standard normal distribution (Serfling, 1980, Lemma B and Lemma A(ii)). Furthermore, by the law of large numbers, n⁻¹ l_3(θ̃) →p c < +∞. Invoking Slutsky's lemma, the leading term in (2.3) hence converges in distribution to the χ²_1 distribution, while the second addend is of order o_p(1). This leads to the well-known result for Wilks' statistic. The derivation of Example 2.1 requires that the model under consideration is regular. This implies, first of all, that the log-likelihood function can be differentiated at least to the third order, but also that the expected values of log-likelihood derivatives are finite and that their asymptotic order is proportional to the sample size. Wald (1949), who is generally acknowledged for having provided the earliest mathematically correct proof of consistency of the maximum likelihood estimator, furthermore emphasized the importance of the compactness of the parameter space Θ and of the uniqueness of the maximum likelihood estimator. Indeed, the former condition was missing in Cramér's (1946) and Huzurbazar's (1948) proofs. In this paper, by the term "regularity conditions" we mean the assumptions on the parametric statistical model F that ensure the validity of classical asymptotic theory. These may be formulated in several ways; see, e.g., Cox and Hinkley (1974, p. 281), Barndorff-Nielsen and Cox (1994, §3.8), Azzalini (1996, §3.2.3), Severini (2000, §4.7), van der Vaart (2000, Chap. 5), Davison (2003, §4.6) and Hogg, McKean and Craig (2019, §6.1, §6.2 and A.1). We will assume that the following five conditions on F = {f(y; θ), Θ, Y} and related likelihood quantities hold.

Condition 1. All components of θ are identifiable.
That is, the probability density or mass functions f(y; θ_1) and f(y; θ_2) defined by any two different values θ_1 ≠ θ_2 of θ are distinct almost surely.

Condition 2. The support Y of f(y; θ) does not depend on θ.

Condition 3. The parameter space Θ is a compact subset of R^p, for a fixed positive integer p, and the true value θ_0 of θ is an interior point of Θ.

Condition 4. The partial derivatives of the log-likelihood function l(θ; y) with respect to θ up to order three exist in a neighbourhood of the true parameter value θ_0 almost surely. Furthermore, in such a neighbourhood, n⁻¹ times the absolute values of the log-likelihood derivatives of order three are bounded above by a function of Y whose expectation is finite.

Condition 5. The first two Bartlett identities hold, which imply that E[u(θ; Y)] = 0 and i(θ) = Var[u(θ; Y)], in addition to 0 < Var[u(θ; Y)] < ∞.

Failure of regularity.

Conditions 1-5 are relevant in many important models of practical interest, and can fail in as many ways. For instance, from the perspective of significance testing, Condition 1 fails when parameters which are defined in the full model become undefined, and therefore inestimable, under the null hypothesis. We already mentioned this situation when introducing the profile log-likelihood function; non-identifiability of the parameters will be addressed in Section 4.2. Further examples are treated in Sections 4.3 and 5. Failure of Condition 2 is addressed in Hirano and Porter (2003) and Severini (2004). Failure of Condition 3 characterises the first and most extensively explored nonregular setting, that is, boundary problems; see Section 3. Boundary problems furthermore include the only contribution we are aware of which explores the higher order properties of likelihood-based test statistics in a nonregular setting (del Castillo and Lopez-Ratera, 2006).
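Failure of Condition 1 is easy to exhibit numerically. The sketch below is our own stdlib-only illustration: it evaluates the density of a two-component normal mixture at p = 0, where the component mean µ drops out of the model, so any two values of µ index the same distribution.

```python
import math

def mixture_pdf(y, p, mu):
    """Density of the two-component normal mixture (1-p) N(0,1) + p N(mu,1)."""
    phi = lambda x, m: math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2 * math.pi)
    return (1 - p) * phi(y, 0.0) + p * phi(y, mu)

# Under the null p = 0 the likelihood is flat in mu: mu is not identifiable,
# since the second component never contributes to the density.
grid = [-3.0, -1.0, 0.0, 1.0, 3.0]
print([mixture_pdf(y, 0.0, 1.0) - mixture_pdf(y, 0.0, 5.0) for y in grid])
```

Every difference is identically zero: the data carry no information whatsoever about µ when p = 0, which is exactly the indeterminate-parameter scenario of Sections 4 and 5.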
The compactness condition, in particular, can be omitted, provided it is replaced by some other requirements; see, for instance, Pfanzagl (2017, p. 119). This will also be the case for a number of the large-sample results derived for nonregular models; see, for instance, Section 5. Condition 4 typically does not hold in change-point problems, which will be treated in Section 6. A further prominent example where Condition 4 is not satisfied is the double exponential, or Laplace, distribution, which arises in quantile regression. For a book-length review of this topic we refer the reader to Koenker et al. (2017). Condition 5 is guaranteed if standard results on interchanging integration and differentiation hold, Condition 2 is satisfied, and the log-likelihood derivatives are continuous functions of θ. A typical situation where this condition fails is when the data under analysis are derived from a probability density which does not belong to the model f(y; θ), a topic of much investigation in robustness (Huber and Ronchetti, 2009). A remedy is provided by Godambe's theory of estimating equations (Godambe, 1991). Conditions 4 and 5, as used by Cramér (1946), Wald (1949) and others, require the existence of at least three derivatives of the log-likelihood function together with some uniform integrability restrictions. Condition 4, in particular, embraces both the existence of the partial derivatives of l(θ) and their asymptotic order. An example for which this latter condition does not hold is the Pearson Type III (or translated Gamma) distribution (Blischke et al., 1969), which generalizes Example 1.3. In this latter case, |dl(θ; y)/dθ| = n is not dominated by an integrable function on (θ, +∞).
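The role of the Laplace distribution as a counterexample to Condition 4 can be made concrete: its log-likelihood in the location parameter is piecewise linear, with kinks at the observations, and is maximised by the sample median rather than by the root of a smooth score equation. A stdlib-only sketch of ours:

```python
import statistics

def laplace_loglik(theta, y):
    """Laplace (unit scale) log-likelihood up to a constant: -sum |y_i - theta|.
    It is piecewise linear in theta, so the three derivatives required by
    Condition 4 do not exist at the observed data points."""
    return -sum(abs(yi - theta) for yi in y)

y = [0.3, 1.2, 2.7, 3.1, 5.0]
med = statistics.median(y)

# Maximise over a fine grid: the maximiser coincides with the sample median.
grid = [i / 100 for i in range(0, 601)]
best = max(grid, key=lambda t: laplace_loglik(t, y))
print(med, best)
```

With an odd number of observations the maximiser is unique, yet no Taylor expansion of the kind used in Example 2.1 is available at it.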
Condition 4 ensures the consistency of the maximum likelihood estimator θ̂, and the existence of a quadratic approximation to the log-likelihood ratio l(θ̄) − l(θ), for θ̄ in the Euclidean n^{−1/2}-neighbourhood of θ, of the form l(θ̄) − l(θ) = (θ̄ − θ)⊤ u(θ; y) − (1/2)(θ̄ − θ)⊤ i(θ)(θ̄ − θ) + o_p(1),  (2.4) which involves the score function u(θ; y) and the expected Fisher information i(θ). However, these conditions do not have by themselves any direct statistical interpretation. LeCam (1970) presents a different type of regularity assumption, differentiability in quadratic mean, which requires differentiation of order one only and may be justified from a statistical point of view.

Local asymptotics.

To give a glimpse of LeCam's ideas, assume, without loss of generality, that θ ∈ R is scalar. A statistical model F = {f(y_i; θ), θ ∈ Θ} is said to be differentiable in quadratic mean (DQM) at θ if there exists a random function u̇(θ; Y_i) such that, as h → 0, E_θ[{√f(Y_i; θ + h) − √f(Y_i; θ) − (h/2) u̇(θ; Y_i) √f(Y_i; θ)}²] = o(‖h‖²).  (2.5) The expectation is taken with respect to the distribution indexed by θ, and the function u̇(θ; Y_i) is said to be the quadratic mean derivative of the square root √f(Y_i; θ) of the probability density function (Lehmann and Romano, 2005, Chapter 12). As shown in LeCam's (1970) paper, the regularity conditions of Cramér type imply differentiability in quadratic mean. Indeed, for every differentiable f(y_i; θ), ∂√f(y_i; θ)/∂θ = (1/2) u(θ; y_i) √f(y_i; θ),  (2.6) where u̇(θ; y_i) = u(θ; y_i) is the score function as defined in Section 2.1. The opposite does not hold true, a prominent counterexample being the Laplace distribution. Differentiability in quadratic mean hence generalizes Condition 5 in a natural way, as it is possible to show that E_θ[u̇(θ; Y_i)] = 0 and that the equivalent of the unit expected Fisher information, ï_1(θ) = E_θ[{u̇(θ; Y_i)}²], is finite.
Using differentiability in quadratic mean, LeCam gives rise to a radically different type of asymptotic inference called local asymptotics. The word 'local' is meant to indicate that one looks at a sequence of alternative hypotheses of the form θ_n = θ + ε/√n, where ε is any given real number. The properties of the likelihood-based procedures are hence studied in a small neighbourhood θ ± ε/√n of the fixed parameter θ defined by ε, where 'small' means of size O(1/√n). The motivation for studying a local approximation is that, usually, asymptotically, the 'true' parameter value can be known with unlimited precision. The real difficulty is therefore to distinguish between values which are 'close'. Closeness in this case is measured in terms of the Hellinger distance H²(θ) = (1/2) E_θ[{√λ(θ; Y_i) − 1}²], with λ(θ; Y_i) = f(Y_i; θ + h)/f(Y_i; θ), whose definition can be linked to the notion of differentiability in quadratic mean as H²(θ) = (h²/8) E_θ[{u̇(θ; Y_i)}²] + o(‖h‖²) = (h²/8) ï_1(θ) + o(‖h‖²) if the model satisfies the DQM condition (2.5). Indeed, in this case it is possible to show that the log-likelihood ratio function of a random sample, y = (y_1, . . . , y_n), of size n, lr(θ; y) = Σ_{i=1}^n log λ(θ; y_i) with h = ε/√n, is locally asymptotically quadratic in that lr(θ; y) = (ε/√n) u̇(θ; y) − (1/2)(ε²/n) ï(θ) + o_p(1). Note how this expression mimics the quadratic approximation (2.4) of classical likelihood-based asymptotics, where θ̂_n − θ = O_p(1/√n) and the score and the expected Fisher information functions are replaced by u̇(θ; Y) and ï(θ) = n ï_1(θ). For large n we can locally approximate the distribution of lr(θ; y) by the normal distribution N(−(1/2) ε² ï_1(θ), ε² ï_1(θ)), which then serves as the basis for the derivation of the limiting distributions of estimators and test statistics. For further details see the two monographs by van der Vaart (2000, Chapter 7) and LeCam and Yang (1970).
In the remainder of the paper, we review the most common situations where one or more of Conditions 1-5 fail. We will also provide some summary insight into the main prototype derivations of the corresponding finite-sample or asymptotic results. The vast majority of the proofs require conditions of Cramér type; on some occasions, as for instance in Section 4.2, LeCam's local asymptotic theory will be used.

3 Boundary Problems

Definition.

A boundary problem arises when the value θ_0 specified by the null hypothesis, or part of it, is not an interior point of the parameter space. In general terms, the "boundary" of the parameter space Θ is the set of values θ such that every neighbourhood of θ contains at least one interior point of Θ and at least one point which is not in Θ. Informally, the methodological difficulties in likelihood-based inference occur because the maximum likelihood estimate can only fall 'on the side' of θ_0 that belongs to the parameter space Θ. This implies that if the maximum occurs on the boundary, the score function need not be zero and the distributions of the related likelihood statistics will not converge to the typical normal or chi-squared distributions. Because of the difficulties inherent in the derivation of the limiting distribution of the likelihood ratio statistic, practitioners especially tend to ignore the boundary problem and to proceed as if all parameters were interior points of Θ. This is commonly called the naïve approach. An alternative approach is to suitably enlarge the parameter space so as to guarantee that the likelihood ratio maintains the common limiting distribution; see, for instance, Feng and McCulloch (1992). However, this idea works only as long as the null hypothesis is uniquely identified. The following example gives a flavour of the statistical issues. Example 3.1 (Bivariate normal).
Consider a single observation y = (y_1, y_2) from the bivariate normal random variable Y = (Y_1, Y_2) ∼ N_2(θ, I_2), where θ = (θ_1, θ_2), with θ_1 ≥ 0 and θ_2 ≥ 0, and I_2 is the 2 × 2 identity matrix. Straightforward calculation shows that the null distribution of the likelihood ratio statistic for θ_0 = (0, 0), versus the alternative hypothesis that at least one equality does not hold, converges to a mixture of a point mass χ²_0 at 0 and two chi-squared distributions, χ²_1 and χ²_2 (DasGupta, 2008, Example 21.3).

[Figure 3: Under the null hypothesis θ_0 = (0, 0) the parameter space collapses to the origin. The asymptotic distribution of the corresponding likelihood ratio statistic is a mixture of χ²_0, χ²_1 and χ²_2 distributions with weights (0.25, 0.5, 0.25). The quadrants of R² correspond to W(θ_0) = Y_1² + Y_2² ~ χ²_2, W(θ_0) = Y_1² ~ χ²_1, W(θ_0) = Y_2² ~ χ²_1 and W(θ_0) = 0 ~ χ²_0.]

Figure 3 provides a graphical representation of the problem. Because the parameter space is restricted, we have that θ̂_1 = max(y_1, 0) and θ̂_2 = max(y_2, 0). The grey shaded area is the parameter space into which the MLE is bound to fall. However, the random observation Y = (Y_1, Y_2) can fall into any of the four quadrants of R² with equal probability 1/4. When Y falls into the first quadrant, that is, when y_1, y_2 > 0, the likelihood ratio statistic is W(θ_0) = Y_1² + Y_2² and follows the common χ²_2 distribution. However, if y_1 > 0 and y_2 < 0, or if y_1 < 0 and y_2 > 0, we have W(θ_0) = Y_1² ∼ χ²_1 and W(θ_0) = Y_2² ∼ χ²_1, respectively. Lastly, when Y lies in the third quadrant, W(θ_0) = 0 and its distribution is a point mass at 0. Summing up, we can informally write W(θ_0) ∼ (1/4) χ²_0 + (1/2) χ²_1 + (1/4) χ²_2.
(3.1)

Distribution (3.1) is a special case of the so-called chi-bar squared distribution (Kudô, 1963), denoted by χ̄²(ω, N), with cumulative distribution function Pr(χ̄² ≤ c) = Σ_{ν=0}^{N} ω_ν Pr(χ²_ν ≤ c), which corresponds to a mixture of chi-squared distributions with degrees of freedom ν from 0 to N. In some cases, explicit and computationally feasible formulae are available for the weights ω = (ω_0, . . . , ω_N). Extensive discussion of their computation and use, with special emphasis on inequality constrained testing, is given in Robertson et al. (1988, Chapters 2 and 3), Wolak (1987), Shapiro (1985, 1988) and Sun (1988).

General results.

The research on boundary problems was initiated by Chernoff (1954), who derived the asymptotic null distribution of the likelihood ratio statistic for testing whether θ lies on one or the other side of a smooth (p − 1)-dimensional surface in a p-dimensional space when the true parameter value lies on the surface. Using a geometrical argument, Chernoff established that this distribution is equivalent to the distribution of the likelihood ratio statistic for testing suitable restrictions on the mean of a multivariate normal distribution, with covariance matrix given by the inverse of the Fisher information matrix, using a single observation. In particular, Chernoff proved that the limiting distribution is a χ̄²(ω, 1) distribution, with ω = (0.5, 0.5), that is, a mixture of a point mass at zero and a χ²_1, with equal weights. This generalizes Wilks' (1938) result to the case where the parameter space under the null hypothesis is not a hyperplane. The cornerstone contribution, which inspired many researchers and fuelled an enormous literature, is no doubt the highly cited article by Self and Liang (1987). In Chernoff (1954), the parameter spaces Θ_0 and Θ_1, specified by the null and the alternative hypotheses, are assumed to have the same dimension.
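Once the weights ω are known, the chi-bar squared CDF above is straightforward to evaluate. Below is a small stdlib-only helper of our own (for general degrees of freedom one would instead call scipy.stats.chi2.cdf), specialised to ν ∈ {0, 1, 2} via the closed forms Pr(χ²_1 ≤ c) = erf(√(c/2)) and Pr(χ²_2 ≤ c) = 1 − e^{−c/2}.

```python
import math

def chi2_cdf(c, nu):
    """CDF of chi^2_nu for nu in {0, 1, 2}; chi^2_0 is a point mass at 0."""
    if c < 0:
        return 0.0
    if nu == 0:
        return 1.0
    if nu == 1:
        return math.erf(math.sqrt(c / 2.0))
    if nu == 2:
        return 1.0 - math.exp(-c / 2.0)
    raise ValueError("closed form implemented only for nu = 0, 1, 2")

def chibar_cdf(c, weights):
    """Pr(chibar^2 <= c) = sum_nu omega_nu Pr(chi^2_nu <= c),
    with weights = (omega_0, ..., omega_N)."""
    return sum(w * chi2_cdf(c, nu) for nu, w in enumerate(weights))

# Weights of Example 3.1: the (1/4, 1/2, 1/4) mixture of chi^2_0, chi^2_1, chi^2_2.
print(chibar_cdf(0.0, (0.25, 0.5, 0.25)))  # the point mass at zero contributes 1/4
```

The function increases from ω_0 at c = 0 to 1 as c grows; critical values are found by inverting it numerically.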
Furthermore, the true parameter value falls on the boundary of both Θ_0 and Θ_1, while it is still an interior point of the global parameter space Θ = Θ_0 ∪ Θ_1. Using geometrical arguments similar to those of Chernoff (1954), Self and Liang (1987) study the asymptotic null distribution of the likelihood ratio statistic for testing the null hypothesis θ ∈ Θ_0 against the alternative θ ∈ Θ_1 = Θ \ Θ_0. This time, the true parameter value θ_0 no longer needs to be an interior point, but can fall onto the boundary of Θ. The two sets Θ and Θ_0 must be regular enough to be approximated by two cones, C_Θ and C_{Θ_0}, with vertex at θ_0 (Chernoff, 1954, Definition 2). Under this scenario, and provided their Assumptions 1-4 hold (which translate into our Conditions 1-2 and 4-5, with likelihood derivatives taken from the appropriate side), Self and Liang (1987, Theorem 3) show that the distribution of the likelihood ratio converges to the distribution of

sup_{θ ∈ C_{Θ−θ_0}} {−(Z̃ − θ)^T i_1(θ_0)(Z̃ − θ)} − sup_{θ ∈ C_{Θ_0−θ_0}} {−(Z̃ − θ)^T i_1(θ_0)(Z̃ − θ)}.  (3.2)

Here, C_{Θ−θ_0} and C_{Θ_0−θ_0} are the translations of the cones C_Θ and C_{Θ_0}, such that their vertices are at the origin, and Z̃ is a multivariate Gaussian variable with mean 0 and covariance matrix given by i_1(θ_0)^{−1}, where i_1(θ_0) is the Fisher information matrix for a single observation. If we transform the random variable Z̃ so that it follows a multivariate standard Gaussian distribution Z, we can re-express Equation (3.2) as

inf_{θ ∈ C̃_0} ||Z − θ||² − inf_{θ ∈ C̃} ||Z − θ||² = ||Z − P_{C̃_0}(Z)||² − ||Z − P_{C̃}(Z)||²,  (3.3)

where C̃ and C̃_0 are the corresponding transformations of the cones C_{Θ−θ_0} and C_{Θ_0−θ_0} and ||·|| is the Euclidean norm. Finding the null distribution requires working out the two projections P_{C̃}(Z) and P_{C̃_0}(Z) of Z onto the cones C̃ and C̃_0. This must be done on a case-by-case basis, as shown by the following revisitation of Example 3.1.

Example 3.2 (Bivariate normal revisited).
In Example 3.1 we faced a typical non-standard situation where both components of the parameter θ are of interest and both lie on the boundary of the parameter space. Here, the Fisher information matrix is the identity matrix, which is why Z̃ = Z = Y and the original two sets Θ and Θ_0 agree with the approximating cones. That is, the grey shaded region [0, ∞) × [0, ∞) in Figure 3 represents the sets Θ = C_Θ = C_{Θ−θ_0} = C̃, while the origin {0} corresponds to the sets Θ_0 = C_{Θ_0} = C_{Θ_0−θ_0} = C̃_0. The derivation of the second term of (3.3) depends on the projection of Z onto C̃, which is

P_{C̃}(Z) = Z = (Z_1, Z_2) if Z_1, Z_2 > 0; (0, Z_2) if Z_1 < 0, Z_2 > 0; 0 if Z_1, Z_2 < 0; (Z_1, 0) if Z_1 > 0, Z_2 < 0,

while P_{C̃_0}(Z) = 0. As shown in Example 3.1, P_{C̃}(Z) takes on the four possible values with equal probability 1/4. By simple algebra, we can prove that the distribution of the likelihood ratio statistic is given by the mixture of Equation (3.1). Self and Liang (1987) present a number of special cases in which the representations (3.2) and (3.3) are used to derive the asymptotic null distribution of the likelihood ratio statistic. In most cases, the limiting distribution is a chi-bar squared distribution whose weights depend, at times in a rather tricky way, on the partition of the parameter space induced by the geometry of the cones. A sketch of the derivation of Equation (3.2) is given in Appendix A.1. The proof consists of two steps. We first consider a quadratic Taylor series expansion of the log-likelihood l(θ) around θ_0, the true value of the parameter. The asymptotic distribution of the likelihood ratio statistic is then derived as in Chernoff (1954) by approximating the sets Θ and Θ_0 using the cones C_Θ and C_{Θ_0}. A further major step forward in likelihood asymptotics for boundary problems was marked by Kopylev and Sinha (2011) and Sinha et al. (2012). Now, the null distribution of the likelihood ratio statistic is derived by using algebraic arguments.
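Returning to Examples 3.1-3.2, the projection representation (3.3) is easy to check numerically. The sketch below (sample size and seed are illustrative choices) projects Z onto the nonnegative orthant C̃, recovers W(θ_0) from (3.3), and verifies the mixture weights (0.25, 0.5, 0.25) by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

def proj_cone(z):
    # Projection of z onto the nonnegative orthant [0, inf)^2, the cone C-tilde
    # of Example 3.2: each coordinate is truncated at zero.
    return np.maximum(z, 0.0)

# Monte Carlo check of the mixture (3.1) via representation (3.3):
# W = ||Z - P_{C0}(Z)||^2 - ||Z - P_C(Z)||^2 with P_{C0}(Z) = 0.
n_sim = 200_000
Z = rng.standard_normal((n_sim, 2))
P = proj_cone(Z)
W = np.sum(Z**2, axis=1) - np.sum((Z - P)**2, axis=1)

# Direct quadrant-by-quadrant computation from Example 3.1
W_direct = np.maximum(Z[:, 0], 0)**2 + np.maximum(Z[:, 1], 0)**2
print("representations agree:", np.allclose(W, W_direct))
print("P(W = 0) ≈", np.mean(W == 0))          # point mass, close to 0.25
# Theoretical CDF at the chi2_1 95% point 3.841:
# 0.25 + 0.5 * 0.95 + 0.25 * (1 - exp(-3.841 / 2)) ≈ 0.9384
print("P(W <= 3.841) ≈", np.mean(W <= 3.841))
```

Note that the naive χ²_1 critical value 3.841 would give an actual size of about 6.2% rather than 5% here, since part of the null mass sits on the χ²_2 component.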
From the technical point of view, the derivation of a closed-form expression for the limiting distribution of the likelihood ratio becomes the more difficult the more nuisance parameters lie on the boundary of the parameter space. In particular, the derivation of the limiting distribution becomes awkward when there are more than four boundary points and/or the Fisher information matrix is not diagonal. Sinha et al. (2012) furthermore show that when one or more nuisance parameters are on the boundary, following the naïve approach can result in inferences which are anticonservative. In general, the asymptotic distribution turns out to be a chi-bar squared distribution with weights that depend on the number of parameters of interest and of nuisance parameters, and on where these lie in Θ. However, limiting distributions other than the χ̄² distribution are found as well; see, for instance, Theorem 2.1 of Sinha et al. (2012). A concise review of the cases considered in Self and Liang (1987), Kopylev and Sinha (2011) and Sinha et al. (2012), with some interesting examples and an account of the areas of interest in genetics and biology, is given by Kopylev (2012). The following two sections treat two special cases, namely testing for a zero variance component and constrained one-sided tests. We will mention the mainstream contributions, while further related work can be found in Appendix B of the Supplementary Material. This includes, for instance, alternatives which avoid the calculation of the mixing weights of the χ̄² distribution and/or lead to the classical χ² limiting distribution.

Null variance components

In linear and generalized linear mixed models a boundary problem arises as soon as we want to assess the significance of one or more variance components. The two reference papers are Crainiceanu and Ruppert (2004) and Stram and Lee (1994). Both consider a linear mixed effects model and test for a zero scalar variance component.
However, Stram and Lee (1994) assume that the data vector can be partitioned into a large number of independent and identically distributed sub-vectors, which needs not hold for Crainiceanu and Ruppert (2004). The limiting distributions are derived from the spectral decomposition of the likelihood ratio statistic. More precisely, assume the following model holds,

Y = Xβ + Zb + ε,

where Y is a vector of observations of dimension n, X is an n × p fixed effects design matrix and β is a p-dimensional vector of fixed effects. In addition, Z is an n × k random effects design matrix and b is a k-dimensional vector of random effects which are assumed to follow a multivariate Gaussian distribution with mean 0 and covariance matrix σ²_b Σ of order k × k. The error term ε is assumed to be independent of b and distributed as a normal random vector with zero mean and covariance matrix σ²_ε I_n, where I_n is the identity matrix. Suppose we are interested in testing

H_0: β_{p+1−q} = β⁰_{p+1−q}, ..., β_p = β⁰_p, σ²_b = 0 against H_1: β_{p+1−q} ≠ β⁰_{p+1−q}, ..., β_p ≠ β⁰_p, or σ²_b > 0,

for some positive value of q ∈ {1, ..., p}. Nonregularity arises as, under the null hypothesis, σ²_b = 0 falls on the boundary of the parameter space. Furthermore, the alternative hypothesis that σ²_b > 0 induces dependence among the observations Y. Crainiceanu and Ruppert (2004, Theorem 1) show that the finite-sample distribution of the likelihood ratio statistic agrees with the distribution of

n log(1 + Σ_{s=1}^{q} u²_s / Σ_{s=1}^{n−p} w²_s) + sup_{λ≥0} f_n(λ),  (3.4)

where u_s, for s = 1, ..., q, and w_s, for s = 1, ..., n − p, are independent standard normal variables, λ = σ²_b/σ²_ε, and

f_n(λ) = n log{1 + N_n(λ)/D_n(λ)} − Σ_{s=1}^{k} log(1 + λξ_{s,n}),

where

N_n(λ) = Σ_{s=1}^{k} {λµ_{s,n}/(1 + λµ_{s,n})} w²_s, and D_n(λ) = Σ_{s=1}^{k} w²_s/(1 + λµ_{s,n}) + Σ_{s=k+1}^{n−p} w²_s.

Here, µ_{s,n} and ξ_{s,n} are the k eigenvalues of the matrices Σ^{1/2} Z^T P_0 Z Σ^{1/2} and Σ^{1/2} Z^T Z Σ^{1/2}, respectively.
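The finite-sample representation (3.4) can be simulated directly once the eigenvalues are available. The sketch below implements it for an illustrative balanced one-way random-intercept design (m groups of size r, Σ = I_k, intercept-only fixed effect; all design choices are assumptions of this example), taking q = 0 so that only the variance component is tested and the first term of (3.4) vanishes. P_0 denotes the fixed-effects projection matrix discussed next.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy balanced one-way design: m groups of size r, a random intercept per group.
m, r = 10, 5
n, k, p = m * r, m, 1
X = np.ones((n, p))                       # intercept-only fixed effect
Z = np.kron(np.eye(m), np.ones((r, 1)))   # group-membership random-effects design

P0 = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
mu = np.linalg.eigvalsh(Z.T @ P0 @ Z)     # eigenvalues mu_{s,n}  (Sigma = I_k)
xi = np.linalg.eigvalsh(Z.T @ Z)          # eigenvalues xi_{s,n}

lam_grid = np.concatenate(([0.0], np.logspace(-3, 3, 200)))

def lrt_draw():
    # One draw from (3.4) with q = 0: sup over lambda of f_n(lambda).
    w2 = rng.standard_normal(n - p) ** 2
    Nn = np.array([np.sum(lam * mu / (1 + lam * mu) * w2[:k]) for lam in lam_grid])
    Dn = np.array([np.sum(w2[:k] / (1 + lam * mu)) + np.sum(w2[k:]) for lam in lam_grid])
    fn = n * np.log1p(Nn / Dn) - np.array([np.sum(np.log1p(lam * xi)) for lam in lam_grid])
    return fn.max()                       # f_n(0) = 0, so the statistic is >= 0

sims = np.array([lrt_draw() for _ in range(2000)])
print("P(LRT = 0) ≈", np.mean(sims <= 1e-12))
```

For small designs such as this one the simulated null mass at zero lies well above the 0.5 suggested by the iid χ̄² limit, which is exactly the point made by Crainiceanu and Ruppert (2004).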
The matrix P_0 = I_n − X(X^T X)^{−1} X^T is the matrix which projects onto the orthogonal complement of the subspace spanned by the columns of the design matrix X. Theorem 2 of Crainiceanu and Ruppert (2004) shows that the asymptotic null distribution of the likelihood ratio statistic depends on the asymptotic behaviour of the eigenvalues µ_{s,n} and ξ_{s,n}. The limiting distribution, in general, differs from the chi-bar squared distribution which often holds for independent and identically distributed data. Formula (3.4) represents the spectral decomposition of the likelihood ratio statistic. A similar result is also derived for the restricted likelihood ratio (Crainiceanu and Ruppert, 2004, Formula 9). The unquestioned advantage of these two results is that they allow us to simulate the finite-sample null distribution of the two test statistics once the eigenvalues are calculated. Furthermore, this simulation is more efficient than bootstrap resampling, as the speed of the algorithm only depends on the number of random effects k, and not on the number of observations n. Applications of Crainiceanu and Ruppert's (2004) results include testing for level- or subject-specific effects in a balanced one-way ANOVA, testing for polynomial regression versus a general alternative described by P-splines and testing for a fixed smoothing parameter in a P-spline regression.

Constrained one-sided tests

Multistage dose-response models are a further example of boundary problem. A K-stage model is characterised by a dose-response function of the form

ψ(d; β) = ψ(β_0 + β_1 d + β_2 d² + ··· + β_K d^K),

where d is the tested dose and ψ(·) is a function of interest such as, for instance, the probability of developing a disease. The coefficients β_k ≥ 0, for k = 1, ..., K, are often constrained to be nonnegative so that the dose-response function will be non-decreasing.
There is no limit on the number of stages K, though in practice this is usually specified to be no larger than the number of non-zero doses. Testing whether β_k = 0 results in a boundary problem and requires the application of a so-called constrained one-sided test. Apart from clinical trials, constrained one-sided tests are common in a number of other areas, where the constraints on the parameter space are often natural, such as testing for over-dispersion, for the presence of clusters and for homogeneity in stratified analyses. All these instances amount to having the parameter value lying on the boundary of the parameter space under the null hypothesis. Despite their importance in statistical practice, few contributions are available on the asymptotic behaviour of the most commonly used test statistics, and of the likelihood ratio in particular. A first contribution which evaluates the asymptotic properties of constrained one-sided tests is Andrews (2001), who establishes the limiting distributions of the Wald, score, quasi-likelihood and rescaled quasi-likelihood ratio statistics under the null and the alternative hypotheses. The results are used to test for no conditional heteroscedasticity in a GARCH(1,1) regression model and for zero variances in random coefficient models. Sen and Silvapulle (2002) review refinements of likelihood-based inferential procedures for a number of parametric, semiparametric, and nonparametric models when the parameters are subject to inequality constraints. Special emphasis is placed on their applicability, validity, computational flexibility and efficiency. Again, the chi-bar squared distribution plays a central role in characterising the limiting null distribution of the test statistics, while the corresponding proofs require tools of convex analysis, such as projections onto cones. See Silvapulle and Sen (2005) for a book-length account of constrained statistical inference.
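A minimal illustration of a constrained one-sided test (a simplified stand-in for the dose-response setting above, not taken from the papers cited) is testing H_0: µ = 0 against H_1: µ ≥ 0 for a N(µ, 1) sample. The constrained MLE is max(Ȳ, 0), so the likelihood ratio statistic is W = n max(Ȳ, 0)², whose null law is Chernoff's (1954) fifty-fifty χ̄²(ω, 1) mixture.

```python
import numpy as np

rng = np.random.default_rng(3)

# One-sided test of H0: mu = 0 vs H1: mu >= 0 for N(mu, 1) data.
# Constrained MLE: mu_hat = max(ybar, 0), hence W = n * max(ybar, 0)^2.
n, n_sim = 50, 100_000
ybar = rng.standard_normal((n_sim, n)).mean(axis=1)
W = n * np.maximum(ybar, 0.0) ** 2

print("P(W = 0) ≈", np.mean(W == 0))        # weight of the point mass, close to 0.5
# The 5% critical value of the (0.5, 0.5) mixture is the chi2_1 90% point 2.706:
print("P(W > 2.706) ≈", np.mean(W > 2.706)) # close to 0.05
```

Using the usual two-sided χ²_1 value 3.841 instead would give an actual size of 2.5%, half the nominal level: the one-sided mixture halves the tail probability.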
4 Indeterminate parameter problems

Definition

An "indeterminate parameter" problem occurs when setting one of the components of the parameter θ = (θ_1, θ_2) to a particular value, say θ_1 = θ_10, leads to the disappearance of some or all components of θ_2. The model is no longer identifiable, as all probability density or mass functions f(y; θ) with θ_1 = θ_10 and arbitrary θ_2 identify the same distribution. The following simple example illustrates this point.

Example 4.1 (Loss of identifiability in jump regression). Consider the model

Y = θ_11 + θ_12 1(X > θ_2) + ε, ε ∼ f(ε),

where Y is a continuous response, X a corresponding covariate and 1(X > θ_2) represents the indicator function which takes value 1 if X > θ_2 and zero otherwise. Furthermore, θ_1 = (θ_11, θ_12) is a real-valued vector of regression coefficients, while θ_2 ∈ R defines the point at which the jump occurs. Assume that ε is a zero-mean error term with density function f(ε). The mean of the variable Y is θ_11 for values of X less than or equal to θ_2 and equals θ_11 + θ_12 for values of X larger than θ_2. Under the null hypothesis of no jump, θ_11 remains arbitrary, but setting θ_12 = 0 makes the parameter θ_2 disappear; the model is no longer identifiable, as arbitrary values of θ_2 identify the same distribution for the variable Y.

Loss of identifiability occurs in areas as diverse as econometrics, reliability theory and survival analysis (Prakasa Rao, 1992), and has been the subject of intensive research. Rothenberg (1971) studied the conditions under which a general stochastic model whose probability law is determined by a finite number of parameters is identifiable. Paulino and Pereira (1994) present a systematic and unified description of the aspects of the theory of identifiability. As illustrated in Example 2.1, the classical theory of asymptotic inference heavily relies on the quadratic approximation of the log-likelihood function.
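Before turning to the general theory, the flat direction of Example 4.1 can be made concrete numerically; the Gaussian error, sample size and grid below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Gaussian log-likelihood of the jump model of Example 4.1 (up to a constant).
def loglik(theta11, theta12, theta2, x, y):
    mu = theta11 + theta12 * (x > theta2)
    return -0.5 * np.sum((y - mu) ** 2)

# Data generated under the null hypothesis of no jump (theta12 = 0):
x = rng.uniform(0.0, 1.0, 100)
y = 1.0 + rng.standard_normal(100)

# With theta12 = 0 the likelihood does not depend on theta2 at all:
vals = [loglik(1.0, 0.0, t2, x, y) for t2 in np.linspace(0.1, 0.9, 9)]
print("range of log-likelihood over theta2:", np.ptp(vals))  # exactly 0
```

Every value of θ_2 yields the same log-likelihood once θ_12 = 0, which is precisely the loss of identifiability described above.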
Indeed, if the value θ_0 specified by the null hypothesis is unique, we can use (2.4) to approximate twice the likelihood ratio function by

2{l(θ) − l(θ_0)} = 2√n (θ − θ_0)^T ν_n(u(θ_0; Y_i)) − n (θ − θ_0)^T i_1(θ_0)(θ − θ_0) + o_p(1),  (4.1)

for θ belonging to an n^{−1/2}-neighbourhood of θ_0. Here, ν_n(u) = n^{−1/2} Σ_{i=1}^{n} {u(θ; Y_i) − E_{θ_0}[u(θ; Y_i)]} is a random process defined for any integrable score function u(θ; Y_i). The asymptotic null distribution of W(θ_0) can be obtained by maximizing this quadratic form in θ. When the parameter which indexes the true distribution is not unique, various difficulties may arise. For instance, the maximum likelihood estimator may not converge to any point in the parameter space specified by the null hypothesis. Or, the Fisher information matrix degenerates. Typically, the limiting distribution of the likelihood ratio statistic will not be chi-squared. In the remainder of the section we will consider two special cases: non-identifiable parameters and singular information matrices. We will report the main research strains; related contributions are listed in Appendix B of the Supplementary Material.

Non-identifiable parameters

The general framework for deriving the asymptotic null distribution of the likelihood ratio statistic when some of the parameters are not identifiable under the null hypothesis was developed by Liu and Shao (2003). They address the common hypothesis testing problem H_0: θ ∈ Θ_0 against H_1: θ ∈ Θ \ Θ_0, where Θ_0 = {θ ∈ Θ : F_θ = F_0}, with F_θ the distribution function indexed by θ and F_0 the true distribution. The true distribution is hence unique and H_0 is a simple null hypothesis. However, the set Θ_0 may contain more than one value. When the true parameter value θ_0 is not unique, the classical quadratic approximation of the likelihood ratio function in a Euclidean neighbourhood of θ_0 no longer holds.
Liu and Shao (2003) by-pass this problem by establishing a general quadratic approximation of the likelihood ratio function, this time in the so-called Hellinger neighbourhood of F_0,

Θ_ε = {θ ∈ Θ : 0 < H(θ) ≤ ε},

where H²(θ) is the squared Hellinger distance between F_θ and F_0. The rationale is that, instead of using the Euclidean distance between two parameter values θ and θ_0, closeness between the two models defined by F_θ and F_0 is now measured in terms of a distance which is valid with or without loss of identifiability of the true distribution F_0. This is closely related to, and indeed generalizes, LeCam's (1970) local asymptotic theory which we briefly discussed in Section 2.3. The proof is detailed in Appendix A.2. Here, we sketch the main steps in its derivation. As in Liu and Shao (2003), we express the likelihood ratio function based on a random sample of n observations y = (y_1, ..., y_n),

lr(θ) = Σ_{i=1}^{n} log{λ_i(θ)},

in terms of the Radon-Nikodym derivative λ_i(θ) = λ(θ; y_i) = dF_θ/dF_0, evaluated at y_i, for i = 1, ..., n, and recall the definition H²(θ) = (1/2) E_{F_0}[{√λ_i(θ) − 1}²] of the squared Hellinger distance. As lr(θ) may diverge to −∞ for some θ ∈ Θ_ε, it can be difficult to find a quadratic approximation of the likelihood ratio function with a uniform residual term o_p(1) in Θ_ε. This is why we rewrite the likelihood ratio statistic as

W(H_0) = 2 sup_{θ ∈ Θ\Θ_0} {lr(θ) ∨ 0},  (4.2)

where a ∨ b = max(a, b), and maximize lr(θ) ∨ 0, which generally has a quadratic approximation. Liu and Shao (2003) show that such an approximation exists if, for some ε > 0, the trio {S_i(θ), H(θ), R_i(θ)}, with E_{F_0}[S_i(θ)] = E_{F_0}[R_i(θ)] = 0, satisfies

h_i(θ) = √λ_i(θ) − 1 = H(θ)S_i(θ) − H²(θ) + H²(θ)R_i(θ),

for all θ ∈ Θ_ε, in addition to the generalized differentiable in quadratic mean (GDQM) condition (Liu and Shao, 2003, Definition 2.3).
We may then approximate twice the likelihood ratio function by

2 lr(θ) = 2√n H(θ) ν_n(S_i(θ)) − nH²(θ){2 + E_{F_n}[S²_i(θ)]} + o_p(1),  (4.3)

where ν_n(S_i) = n^{−1/2} Σ_{i=1}^{n} {S_i(θ) − E_{F_0}[S_i(θ)]} is now defined in terms of the expectations taken with respect to the empirical distribution function F_n(·) and the true distribution F_0. Expansion (4.3) is then used to prove that the distribution of the likelihood ratio statistic (4.2) converges to the distribution of the supremum of a squared left-truncated centered Gaussian process with uniformly continuous sample paths. Though S_i(θ) and R_i(θ) may not be unique, they yield the same limiting distribution of the likelihood ratio statistic under suitable conditions. In principle, the distribution of the Gaussian process can be approximated by simulation, since its covariance kernel is known. The most crucial aspect, however, is the derivation of the set which contains the L_2 limits of the generalized score function

S_i(θ) / {1 + E_{F_0}[S²_i(θ)]/2},

over which the supremum is to be taken. This needs be worked out on a case-by-case basis. The GDQM expansion always exists and reduces to LeCam's DQM expansion h_i(θ) = (θ − θ_0)^T u(θ_0; y_i) if θ_0 is unique. Furthermore, Liu and Shao (2003) show how (4.3) is equivalent to (4.1) by rewriting the latter as

2 lr(θ) = 2√n D(θ_0) ν_n(S_i(θ_0)) − nD²(θ_0) + o_p(1),

in terms of the squared Pearson-type L_2 distance

D²(θ) = E_{θ_0}[{λ_i(θ) − 1}²] = (θ − θ_0)^T i_1(θ_0)(θ − θ_0) + o_p(1),

if ||θ − θ_0|| = O(n^{−1/2}), and where S_i(θ) = {λ_i(θ) − 1}/D(θ), for θ ∈ Θ \ Θ_0, defines the generalized score function.

Singular information matrix

A further case of indeterminate parameter problem arises when the expected Fisher information matrix is singular at the true value θ_0 of the parameter.

Example 4.2 (Singular information). Consider a sample of size n from a normal random variable Y with mean θ^q, for a given odd integer q > 0, and variance 1.
The information function i(θ) = nq²θ^{2(q−1)} is non-singular in an open neighbourhood of θ_0, but vanishes for θ_0 = 0, which violates Condition 5. For scalar θ, zero information implies a null score statistic with probability 1. The left panel of Figure 4 plots the score functions of three different samples of size n = 10 for θ_0 = 0 and q = 3. The right panel shows the corresponding normalised log-likelihood functions. The score function vanishes at the origin and at the maximum likelihood estimate θ̂ = ȳ^{1/q}. The log-likelihood function hence admits a global maximum in the neighbourhood of the true parameter value and an inflection point at θ_0 = 0. Standard techniques to prove consistency of the maximum likelihood estimator and to derive the limiting distribution of the likelihood ratio statistic, such as Expansion (4.1), won't apply, as both u(θ_0; y) = 0 and i(θ_0) = 0 at θ_0 = 0. Generally speaking, the singularity of the Fisher information matrix prevents the use of the usual second-order expansions of the log-likelihood function. The, to our knowledge, earliest contribution which addresses this type of problem is Silvey (1959). The author proposes to modify the curvature of the quadratic approximation of the likelihood ratio by replacing the inverse of the Fisher information matrix with a generalized inverse matrix obtained by imposing suitable constraints on the model parameters. The however cornerstone contribution to the development of the theory of singular information matrices is Rotnitzky et al. (2000), who derive the asymptotic null distribution of the likelihood ratio statistic for testing the null hypothesis H_0: θ = θ_0 versus H_1: θ ≠ θ_0, when θ is a p-dimensional parameter of an identifiable parametric model and the information matrix is singular at θ_0 and has rank p − 1. Indeed, Rotnitzky et al. (2000) derive a suitable approximation for the likelihood ratio function l(θ) − l(θ_0) from a higher-order Taylor expansion.
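In Example 4.2 the MLE satisfies θ̂^q = ȳ, so the fitted mean equals ȳ and the likelihood ratio statistic is W(0) = n ȳ², which is exactly χ²_1 distributed despite the singular information at zero. A quick simulation (q = 3 and the sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Example 4.2 with q = 3: Y ~ N(theta^3, 1), true theta_0 = 0.
# MLE: theta_hat = ybar^(1/3), so the fitted mean is exactly ybar and
# W(0) = n * ybar^2 is exactly chi-squared with one degree of freedom.
n, n_sim = 10, 100_000
ybar = rng.standard_normal((n_sim, n)).mean(axis=1)
W = n * ybar ** 2

print("P(W > 3.841) ≈", np.mean(W > 3.841))  # close to 0.05, the chi2_1 tail
print("E[W] ≈", W.mean())                    # close to 1, the chi2_1 mean
```

The nonregularity here affects the estimator (θ̂ converges at a slower rate at θ_0 = 0), not the likelihood ratio, which is the odd-m_0 case discussed next.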
The theory is developed only for independent and identically distributed random variables, though the authors point out that it may straightforwardly be extended to non-identically distributed observations. When θ is scalar, the asymptotic properties of the maximum likelihood estimator and of the likelihood ratio statistic depend on the integer m_0, which represents the order of the first derivative of the log-likelihood function which does not vanish at θ = θ_0; see Theorems 1 and 2 of Rotnitzky et al. (2000). If m_0 is odd, the distribution of the likelihood ratio converges under the null hypothesis to a χ²_1 distribution, while for even m_0 it converges to a χ̄²(ω, 1) with ω = (0.5, 0.5). As far as Example 4.2 goes, m_0 = 3 and l_3(0; y) = 6nȳ ≠ 0 almost surely. Indeed, the likelihood ratio statistic W(0) = nȲ² is distributed exactly as a chi-squared distribution with one degree of freedom. Extensions of these results to the case where the parameter θ is p-dimensional are also provided. These are generally based on suitable re-parametrizations of the model which remove the specific causes of the singularity, but are difficult to generalize as they are ad-hoc solutions. Further contributions which propose to penalise the likelihood function so as to guarantee the consistency and normality of the maximum likelihood estimator are mentioned in Appendix B of the Supplementary Material.

5 Finite mixture models

Background

Finite mixtures deserve special attention because of their widespread use in statistical practice, but also because of the methodological challenges posed by the derivation of their asymptotic properties. They probably represent the best-studied indeterminate parameter problem, though we may also treat them as a boundary case. This is best illustrated by the two-component mixture model

f(y; η) = (1 − π)f_1(y; θ_1) + πf_2(y; θ_2),  (5.1)

with η = (π, θ_1, θ_2).
Here, the probability density or mass functions f_1(y; θ_1) and f_2(y; θ_2), with θ_1 ∈ Θ_1 ⊆ R^{p_1} and θ_2 ∈ Θ_2 ⊆ R^{p_2}, represent the mixture components, while 0 ≤ π ≤ 1 is the mixing probability. The null hypothesis of homogeneity can be written in different ways. We may set π = 0, which corresponds to H_0: f_0 = f_1(y; θ_1), where f_0 represents the true unknown distribution, or, alternatively, π = 1 and H_0: f_0 = f_2(y; θ_2). If the two components, f_1(y; θ_1) and f_2(y; θ_2), are known, then the limiting distribution is a χ̄²(ω, 1) with ω = (0.5, 0.5) (Lindsay, 1995, p. 75). Otherwise, for f_1(y; θ) = f_2(y; θ), a third possibility arises: in this case homogeneity assumes that H_0: θ_1 = θ_2. Whatever choice is made, some model parameters, that is, θ_2 and θ_1, respectively, in the first two cases and π in the third, vanish under the null hypothesis. This contradicts classical likelihood theory, where the parameter which characterises the true distribution is typically assumed to be a unique point θ_0 in the open subset Θ ⊆ R^p. Under this scenario, the asymptotic distribution of the likelihood ratio statistic does not follow the commonly believed chi-squared distribution. Indeed, in many cases the finite-sample distribution converges to the supremum of a Gaussian process. The remainder of the section outlines the mainstream contributions for this class of models, with special emphasis on homogeneity testing using the likelihood ratio. This represents a subset of all available and related contributions on finite mixtures, whose treatment would easily fill a book-length account. Two general references for finite mixture models are the monographs by Lindsay (1995) and McLachlan and Peel (2000). The large-sample properties of a number of classical and recent likelihood-based test statistics for assessing the number of components of a finite mixture model are reviewed in Chen (2017).
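When both components are known, the χ̄²(ω, 1) limit with ω = (0.5, 0.5) quoted above is easy to reproduce numerically. The sketch below takes f_1 = N(0, 1) and f_2 = N(0.5, 1) as illustrative known components and maximises the log-likelihood over π by a grid search; sample sizes and grids are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(6)

# LR test of H0: pi = 0 in f = (1 - pi) N(0,1) + pi N(0.5,1), components known.
def lrt(y, grid=np.linspace(0.0, 1.0, 201)):
    # density ratio f2/f1 for N(0.5, 1) vs N(0, 1): exp(0.5*y - 0.125)
    r = np.exp(0.5 * y - 0.125)
    # log-likelihood relative to the common N(0,1) factor, on a grid of pi values
    ll = np.array([np.sum(np.log1p(p * (r - 1.0))) for p in grid])
    return 2.0 * (ll.max() - ll[0])

W = np.array([lrt(rng.standard_normal(100)) for _ in range(2000)])
print("P(W = 0) ≈", np.mean(W <= 1e-10))     # close to 0.5
print("P(W > 2.706) ≈", np.mean(W > 2.706))  # close to 0.05 (chi-bar 5% point)
```

In finite samples the point mass and tail probability only approximate their limiting values (0.5 and 0.05), which already hints at the slow convergence documented later for the unknown-parameter case.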
Further related work is listed in Appendix B of the Supplementary Material.

Testing for homogeneity

General theory

The first discussion of asymptotic theory for testing homogeneity of model (5.1) when all parameters are unknown was provided by Ghosh and Sen (1985), who characterise the limiting distribution of the likelihood ratio statistic under the assumption that Θ_2 is a closed bounded interval of R, while Θ_1 ⊆ R^{p_1}, p_1 ≥ 1. There is an additional major difficulty in dealing with finite mixture models: though the mixture itself may be identifiable, the parameters π, θ_1 and θ_2 may not be. Indeed, if f_1(y; θ) = f_2(y; θ) in (5.1), the equality f(y; η) = f(y; η′) holds for η = η′ = (π′, θ′_1, θ′_2), but also for η = (1 − π′, θ′_2, θ′_1). That is, under the alternative hypothesis there is a second set of parameters which gives rise to the same distribution, while under the null hypothesis of homogeneity the model is represented by the three curves π = 1, π = 0 and θ_1 = θ_2. Choosing an identifiable parametrisation doesn't bring any improvement, as the density is then no longer differentiable. Ghosh and Sen (1985) bypass this difficulty in two ways: by requiring either strong identifiability of the finite mixture or by imposing a separation condition on the parameters. Strong identifiability holds if the equality f(y; η) = f(y; η′) implies that π = π′, θ_1 = θ′_1 and θ_2 = θ′_2. The distribution of the likelihood ratio statistic for testing H_0: π = 0 then converges to the distribution of T² I{T > 0}, where T = sup_{θ_2} {Z(θ_2)} and Z(θ_2) is a zero-mean Gaussian process on Θ_2 whose covariance function depends on the true value of the parameters under the null hypothesis (Ghosh and Sen, 1985, Theorem 2.1). This results from proceeding in two steps.
We first approximate the log-likelihood function by a quadratic expansion with respect to π and θ_1 which, under the null hypothesis, converges to the square of a Gaussian random process indexed by the non-identifiable parameter θ_2. The supremum of this process with respect to θ_2 is then taken. The sketch of this proof is given in Appendix A.3. A similar result holds if the finite mixture is not strongly identifiable, such as when f_1(y; θ) = f_2(y; θ) in (5.1). In this case, a separation condition between θ_1 and θ_2 of the form ||θ_1 − θ_2|| ≥ ε, for a fixed quantity ε > 0, needs be imposed, so that H_0 is described by either π = 0 or π = 1 (Ghosh and Sen, 1985, §5). The proof outlined in Appendix A.3 still applies, with the exception that now the non-identifiable parameter θ_2 varies in a subset of Θ_2 which depends on the given ε.

Gaussian mixtures

Theoretical results are particularly generous if the two-component model is a normal mixture, which is justified by the widespread use of the normal distribution in a wide variety of situations. Milestone contributions are Goffinet et al. (1992) and Chen and Chen (2001, 2003), who consider a two-component mixture of normal densities φ(·; µ, σ) of mean µ and variance σ². This class of models deserves special attention because of the technical challenges posed by some undesirable properties of the Gaussian distribution. As discussed in Chen and Li (2009), the expected Fisher information for the mixing proportion is not finite unless a bound is placed on the variance of the corresponding component density. Furthermore, the derivatives of the log-density become linearly dependent, since ∂²φ(x; µ, σ)/∂µ² = 2∂φ(x; µ, σ)/∂σ². Last, but not least, the log-likelihood function is unbounded in case of heterogeneous components, and the maximum likelihood estimator may not exist (Hartigan, 1985).
These deficiencies generally invalidate the standard quadratic approximation of the likelihood ratio function, which needs be expanded further. This is why most contributions on normal mixtures assume homogeneous variances. Several solutions have been proposed to overcome these shortcomings; in addition to Chen and Li (2009), see also Kasahara and Shimotsu (2015) and Wichitchan et al. (2019). The finite-sample distribution of the likelihood ratio statistic often converges to the distribution of

{sup_{|t|≤M} Z(t)}² + W,  (5.2)

where Z(t), t ∈ [−M, M], is a Gaussian process and W is an independent chi-squared random variable with suitable degrees of freedom. The Gaussian process Z(t) has zero mean and known covariance function. As mentioned in Section 2.2, the compactness of the parameter space is a necessary condition to avoid that the distribution of the likelihood ratio statistic diverges to infinity. This was already proved by Hartigan (1985) and is an immediate implication of the fact that {sup_{|t|≤M} Z(t)}² tends in probability to infinity if M → ∞. The proofs of the theorems in Chen and Chen (2001, 2003) essentially are suitable adaptations of the prototype derivation for finite mixture models reported in Appendix A.3. All passages are detailed in the original contributions, to which we refer the interested reader. As in most cases the asymptotic distribution of the likelihood ratio statistic is related to a Gaussian random field, the computation of percentile points becomes tricky or impossible. That is why other tests or methods have been proposed. Reviewing all these would go beyond the scope of the paper. Let us mention, here, the most fruitful research strain initiated by Li et al. (2009), who propose an EM-test for homogeneity, further developed by Chen and Li (2009).

Alternative approaches

Several authors have addressed how to remove the separation condition of Ghosh and Sen (1985).
Three lines of research emerged: reparametrization of the probability density or mass function, penalization of the likelihood to ensure identifiability, and simulation. Reparametrization does not change the model, which is why the limiting distribution generally remains the supremum of a Gaussian process. The first contribution, to our knowledge, which uses reparametrization is Chernoff and Lander (1995), who heuristically study several versions of the two-component binomial mixture model. Formal proofs and extensions to finite mixtures with contaminated densities are provided in Pons (1997, 1999), while Ciuperca (2002) considers the case of translated mixture components. This latter contribution has the further merit of highlighting how Condition 3 of Section 2.2 is necessary, but not sufficient. Indeed, the limiting distribution of the likelihood ratio statistic converges to a fifty-fifty mixture of a point mass at zero and of a distribution which diverges in probability to +∞, and this despite the fact that all parameters are assumed to belong to a compact set. The unboundedness behaviour of the likelihood ratio of Ciuperca (2002) can be explained by means of the theory of "locally conic" reparametrizations proposed by Gassiat (1997, 1999). A rather different route is to penalise the log-likelihood function of model (5.1) with f_1(y; θ) = f_2(y; θ),

l(π, θ; y) + c log{4π(1 − π)},  (5.3)

where the degree of penalisation is controlled by the constant term c. As the authors of this proposal point out, the penalisation term can be justified from the Bayesian perspective, as a prior on the mixing proportion π. It furthermore guarantees that the maximum likelihood estimate of the mixing proportion, 0 < π̂ < 1, will not fall on the boundary of the parameter space and that the maximum likelihood estimators of all parameters are consistent under the null hypothesis π = 0.
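The boundary-repelling effect of the penalty in (5.3) is immediate to visualise. The sketch below profiles the penalised log-likelihood over π only, for a two-component normal mean mixture with fixed, illustrative component means and c = 1; all numerical choices are assumptions of this example, not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(7)

# Penalised log-likelihood (5.3) for (1 - pi) N(0,1) + pi N(1,1), profiled over
# pi with the component means held fixed for illustration, penalty constant c = 1.
y = rng.standard_normal(200)          # data generated under homogeneity
r = np.exp(y - 0.5)                   # density ratio N(1,1) / N(0,1)

pi_grid = np.linspace(1e-6, 1 - 1e-6, 2001)
loglik = np.array([np.sum(np.log1p(p * (r - 1.0))) for p in pi_grid])
penalised = loglik + 1.0 * np.log(4.0 * pi_grid * (1.0 - pi_grid))

pi_hat = pi_grid[np.argmax(penalised)]
print("penalised estimate of pi:", pi_hat)
```

Since log{4π(1 − π)} diverges to −∞ as π approaches 0 or 1, the maximiser of the penalised criterion is always interior, which is the mechanism behind the χ̄² limit discussed next.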
Provided Conditions 1-5 of their paper hold, the distribution of the modified likelihood ratio statistic derived from (5.3) converges to a χ̄²(ω, 1) distribution with ω = (0.5, 0.5); see also Example 1.1. Indeed, regularising the likelihood function does change the problem at hand, and the limiting distribution no longer is the supremum of a squared truncated Gaussian random process. The third route to investigate the asymptotic null distribution of the likelihood ratio statistic for finite mixture models is by simulation. Thode et al. (1988) consider testing the hypothesis that the sample comes from a normal random variable with unknown mean and unknown variance against the alternative that the sample comes from a two-component Gaussian mixture with unequal means and common variance. All model parameters are assumed to be unknown. Their extensive numerical investigation shows that the distribution of the likelihood ratio statistic converges very slowly to a limiting distribution, if any exists, and is rather unstable even for sample sizes as large as n = 1,000. For very large sample sizes, the empirical distributions rather closely agree with the commonly assumed χ²₂, though this may be too liberal for small to moderate n. Böhning et al. (1994) investigate numerically the asymptotic properties of the likelihood ratio statistic for testing homogeneity in the two-component mixture model (5.1) when the component distributions f_k(y; θ_k), k = 1, 2, are binomial, Poisson, exponential or Gaussian with known common variance. Lo (2008) shows that the commonly used χ² approximation for testing the null hypothesis of a homoscedastic normal mixture against the alternative that the data arise from a heteroscedastic model is reasonable only for samples as large as n = 2,000 and component distributions that are well separated under the alternative.
Furthermore, the restrictions of Hathaway (1985) need be imposed to ensure that the likelihood is bounded and to rule out spurious maxima under the alternative. Otherwise, the author suggests use of parametric resampling. Very recently, Cong and Yao (2021) study the behaviour of the likelihood ratio statistic for multivariate normal mixtures. Similarly to Lo (2008), they recommend using parametric bootstrap resampling.

6 Change-point problems

6.1 Definition

A change-point problem arises whenever the regime of random events suddenly changes. A modification in the data generating process generally implies that the log-likelihood function is no longer differentiable with respect to some values of the parameter. This typically leads to the failure of Condition 4 of Section 2.2. The most basic change-point problem tries to identify patterns in a random sequence. For instance, given n independent observations y_1, . . . , y_n, listed in the order they occurred, Page (1955, 1957) considered the problem of verifying whether these were generated by a random variable with distribution function F(y; θ) against the alternative that only the first τ, 0 ≤ τ < n, observations are generated from F(y; θ), while the remaining n − τ come from F(y; θ′) with θ′ ≠ θ and τ unknown. Since Page's pioneering papers, change-point problems have been the subject of intensive research owing to their pervasiveness in all major domains of application. Summarizing all contributions can easily fill book-length accounts. A first annotated bibliography of change-point problems is Shaban (1980). Krishnaiah and Miao (1988) give an overview of change-point estimation up to their time of writing; Csörgö and Horváth (1997) focus their review monograph on limit theorems for change-point analysis. Khodadadi and Asgharian (2008) is a more than 200-page annotated bibliography of change-point problems in regression.
Lee (2010) summarizes the most recent literature and gives a comprehensive bibliography for five major types of change-point problems. A book-length account of change-point problems with examples from medicine, genetics and finance is Chen and Gupta (2012). The discussion papers of Horváth and Rice (2014a,b) mention, in addition to classical methods, also modern lines of research in functional data and high dimensions. Research on change-point analysis has seen a revival over the last decade and a half, especially as far as the detection of multiple changes goes (Niu et al., 2016; Yau and Zhao, 2016; Dette and Gösmann, 2020). The proposed inferential solutions range from parametric to nonparametric techniques and include frequentist and Bayesian approaches. Most recently, Sofronov et al. (2020) edited a special issue of the journal Statistical Papers dedicated to change-point detection. Generally speaking, two questions are of interest in change-point analysis: identifying the potential number of changes, and, once identified, estimating where these occur, together with further quantities of interest such as the size of the change. In the remainder of this section we will focus our attention on the first problem, that is, the identification of a change by means of the likelihood ratio statistic. As highlighted by Chen and Gupta (2012), the majority of reference models which have been proposed for change-point detection assume normality of the observations. These will be treated extensively in Sections 6.2 and 6.3 with special emphasis on regression type problems. In particular, Section 6.2 addresses the issue of detecting possible shifts in the location and/or the scale of the distribution. Section 6.3 extends the treatment to linear regression and piecewise linear models. Given the breadth of the available solutions, each section contains a selection of contributions which illustrate the main currents of research.
A further deeply explored class of models is continuous exponential families (Worsley, 1986), though some results are available for the binomial (Worsley, 1983) and Poisson cases. Further contributions are listed in Appendix B of the Supplementary Material.

6.2 Shifts in location and scale

The reference model for testing a change in the mean value of a random variable can generally be written as

y_i = η_i + ε_i,   i = 1, . . . , n,   (6.1)

where the ε_i's are independent zero-mean random errors. All observations are considered in the order they appear, an assumption which will hold for the whole section. The function η_i may change K times,

η_i = µ_1,   0 < i ≤ τ_1,   (6.2)
    = µ_2,   τ_1 < i ≤ τ_2,
    . . .
    = µ_{K+1},   τ_K < i ≤ n,

where the change-points τ_k can only assume integer values. Unless differently stated, both the K + 1 different mean values µ_k and the K change-points τ_k are supposed to be unknown, though the very early contributions focus on the simpler setting where one or both pieces of information are given. Assuming K = 1, Hawkins (1977) considers testing the null hypothesis of no change in the mean η_i when the ε_i ∼ N(0, σ²) are centered normal variables with constant variance σ² > 0, that is,

H_0 : Y_i ∼ N(µ, σ²),   i = 1, . . . , n,

against the alternative that there exists a 0 < τ < n at which the unknown mean switches from µ to µ′ ≠ µ. The variance σ² is assumed to be known and we set it to one without loss of generality. This is a non-standard problem because the change-point appears only under the alternative hypothesis, but not under the null. The corresponding likelihood ratio statistic is a function of

W_τ = τ(Ȳ_τ − Ȳ)² + (n − τ)(Ȳ_{n−τ} − Ȳ)²,

where Ȳ_τ and Ȳ_{n−τ} are the partial means, computed using the first τ and the last n − τ observations. Since the change-point τ is unknown, the likelihood ratio W = W_{τ*} = max_{1≤τ<n} W_τ maximises W_τ over all possible values of τ, and is usually referred to as a "maximally selected likelihood ratio".
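A minimal Python sketch of the maximally selected statistic, with hypothetical data: since W_τ = T_τ² with T_τ = {n/(τ(n − τ))}^{1/2} Σ_{i≤τ}(Y_i − Ȳ), the statistic U = max_{1≤τ<n} |T_τ| equals W^{1/2}. The Monte Carlo critical value below is a simulation shortcut of our own, not Hawkins' exact distribution.

```python
import numpy as np

def max_abs_T(y):
    """Maximally selected statistic U = max_{1<=tau<n} |T_tau| for a
    mean shift in an N(mu, 1) sequence, with
    T_tau = sqrt(n / (tau * (n - tau))) * sum_{i<=tau} (y_i - ybar),
    so that the maximally selected likelihood ratio is W = U^2."""
    y = np.asarray(y, dtype=float)
    n = y.size
    tau = np.arange(1, n)                    # candidate change-points
    s = np.cumsum(y - y.mean())[:-1]         # centred partial sums
    return np.max(np.abs(np.sqrt(n / (tau * (n - tau))) * s))

# Monte Carlo approximation of the null distribution of U
rng = np.random.default_rng(1)
n, B = 50, 2000
u_null = np.array([max_abs_T(rng.standard_normal(n)) for _ in range(B)])
crit = np.quantile(u_null, 0.95)             # approximate 5% critical value

# a clear mid-sample mean shift should exceed the critical value
y_shift = np.concatenate([rng.standard_normal(25), 3 + rng.standard_normal(25)])
print(max_abs_T(y_shift) > crit)             # should print True
```

The vectorised partial sums make each evaluation O(n), so even crude Monte Carlo calibration of the critical value is cheap for moderate n.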
To derive its exact null distribution, Hawkins (1977) re-expresses W_τ as W_τ = T_τ², where

T_τ = {n/(τ(n − τ))}^{1/2} Σ_{i=1}^{τ} (Y_i − Ȳ)   (6.3)

has standard normal distribution. It follows that the finite-sample distribution of

U = W_{τ*}^{1/2} = max_{1≤τ<n} |T_τ|   (6.4)

agrees with the distribution of the maximum absolute value attained by a Gaussian process in discrete time having zero mean, unit variance and autocorrelation function given by Expression (3.2) of Hawkins (1977). In particular, the null distribution of U has density function

f_U(u) = 2φ(u) Σ_{τ=1}^{n−1} g_τ(u) g_{n−τ}(u),   (6.5)

where φ(u) is the density of the standard normal, g_1(u) = 1 for u ≥ 0 and g_τ(u) is a recursive function such that

g_τ(u) = Pr(|T_i| < u, i = 1, . . . , τ − 1 | |T_τ| = u).   (6.6)

The sketch of the proof of (6.5) is given in Appendix A.4. Yao and Davis (1986, Theorem 2.1) show that a suitably normalized version of U converges, however slowly (Jarušková, 1997; Csörgö and Horváth, 1997), under H_0 to the double exponential, or Gumbel, distribution, which provides approximate quantiles. See also Irvine (1986). The finite-sample null distribution of the likelihood ratio statistic for unknown σ² was worked out by Worsley (1979). Further generalizations, such as to the multivariate case and/or to account for a possible change in the scale of the distribution, can be found in the book-length account of Chen and Gupta (2012, §§2.2-2.3 and 3.2-3.3). See also the selection of references given in Appendix B of the Supplementary Material.

6.3 Change-point detection in regression

A further extension of Model (6.2) with respect to location,

η_i = α_1 + β_1 x_i,   0 < i ≤ τ_1,   (6.7)
    = α_2 + β_2 x_i,   τ_1 < i ≤ τ_2,
    . . .
    = α_{K+1} + β_{K+1} x_i,   τ_K < i ≤ n,

is used for change-point detection in simple linear regression.
The early contributions by Quandt (1958, 1960) derive the likelihood ratio statistic under the null hypothesis of no switch against the alternative that the model possibly obeys two separate regimes, under the assumption of independent and zero-mean normal error terms ε_i. Under the alternative hypothesis, the variance is furthermore allowed to switch from σ₁² to σ₂² at instant τ, when the linear predictor η_i undergoes a structural change. The likelihood ratio statistic W = max_{3≤τ≤n−3} W_τ, with

W_τ = −2 log{ σ̂₁^τ σ̂₂^{n−τ} / σ̂^n },

is a function of the least squares estimators σ̂₁² and σ̂₂² of σ₁² and σ₂², respectively, computed using the corresponding subsets of observations, and of the MLE σ̂² of the common variance σ² = σ₁² = σ₂² based upon the entire sample. Quandt (1958) initially conjectured that the asymptotic distribution of W may be χ²₄ under the null hypothesis of no change. However, the numerical investigation he reported in a later publication for the three sample sizes n = 20, 40, 60 (Quandt, 1960, Table 3) revealed that the finite-sample distribution depends on the number of observations n. Change-point detection in simple linear regression using the likelihood ratio is also the subject of Kim and Siegmund (1989). These authors consider two situations: where only the intercept is allowed to change, and where both the intercept and the slope change while the variance remains constant. Again, the Brownian bridge process is central to the derivation of the corresponding limiting distributions, as in Yao and Davis (1986). Approximations for the corresponding tail probabilities are given by Kim and Siegmund (1989) under reasonably general assumptions. Model (6.7) can be extended to account for changes in the covariates, which is known as piecewise linear regression.
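Quandt's maximally selected statistic for a single structural break can be sketched as follows; the data-generating parameters, sample size and seed are our own choices for illustration, and the statistic is written in the equivalent log-variance form W_τ = n log σ̂² − τ log σ̂₁² − (n − τ) log σ̂₂².

```python
import numpy as np

def quandt_lr(x, y):
    """Quandt's statistic W = max_{3<=tau<=n-3} W_tau, where
    W_tau = n*log(s2) - tau*log(s2_1) - (n - tau)*log(s2_2)
    and s2, s2_1, s2_2 are the ML residual variances of the single-line
    fit and of the two segment fits, respectively."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size

    def ml_var(xs, ys):
        # maximum likelihood (n-divided) residual variance of a line fit
        X = np.column_stack([np.ones_like(xs), xs])
        coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
        return np.mean((ys - X @ coef) ** 2)

    s2 = ml_var(x, y)
    stats = [n * np.log(s2)
             - tau * np.log(ml_var(x[:tau], y[:tau]))
             - (n - tau) * np.log(ml_var(x[tau:], y[tau:]))
             for tau in range(3, n - 2)]      # 3 <= tau <= n - 3
    return max(stats)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)
y_null = 1 + 2 * x + 0.1 * rng.standard_normal(40)          # one regime
y_break = y_null + np.where(x > 0.5, 4 * (x - 0.5), 0.0)    # slope change
print(quandt_lr(x, y_null), quandt_lr(x, y_break))
```

Because the two-regime model nests the single line, every W_τ is nonnegative, and a genuine slope change inflates the maximally selected value well beyond its no-change level.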
This type of model is very popular in a large number of disciplines, including among others environmental sciences (Piegorsch and Bailer, 1997, Section 2.2; Muggeo, 2008a), medical sciences (Smith and Cook, 1980; Muggeo et al., 2014), epidemiology (Ulm, 1991) and econometrics (Zeileis, 2006). A review of likelihood ratio testing for piecewise linear regression up to his time of writing can be found in Bhattacharya (1994). See also the annotated bibliography for a selection of related contributions.

7 Beyond parametric inference

This section reviews cases of interest which do not fit into the previously mentioned three broad model classes, but still fall under the big umbrella of nonstandard problems. In particular we will focus on shape constrained inference, a genre of nonparametric problem which leads to highly nonregular models. As brought to our attention by an anonymous Referee, the asymptotic theory of semiparametric and nonparametric inference has interesting analogues to the classical parametric likelihood theory reviewed in Section 2. Indeed, the parameter space of a semiparametric model is an infinite-dimensional metric space. This makes the model non-standard, as we typically consider a real parameter of interest in the presence of an infinitely large nuisance parameter. Despite this departure from regularity, the likelihood ratio statistic still behaves as we would expect it to. van der Vaart (1997, 2000), for instance, show that the corresponding limiting distribution is chi-squared also when we profile out the infinite-dimensional nuisance parameter. The behavior of this likelihood ratio statistic under local alternative hypotheses is studied in Banerjee (2005). The classical approximations of Section 2 also hold for the asymptotic theory of empirical likelihood (Owen, 1990, 1991); see Chen and Van Keilegom (2009) for a review. These results are quite remarkable given that the underlying distributional assumptions are much less strict.
An area of research which has received much attention in the last decade is nonparametric inference under shape constraints (Samworth and Sen, 2018). Shape constraints originate as a natural modelling assumption and lead to highly nonregular models. As highlighted by Groeneboom and Jongbloed (2018), the probability density or mass functions of many of the widely used parametric models satisfy shape constraints. For example, the exponential density is decreasing, the Gaussian density is unimodal, while the Gamma density can be both, depending on whether its shape parameter is smaller or larger than one. Estimation under shape constraints leads to an M-estimation problem where the parameter vector typically has the same length as the sample size and is constrained to lie in a convex cone. Non-regularity arises since the M-estimator typically falls on the face of the cone. As for boundary problems, convex geometry is an essential tool to treat shape constrained problems. The field of shape constraint problems originated from 'monotone' estimation problems, where functions are estimated under the condition that they are monotone. The maximum likelihood estimator typically converges at the rate n^{−1/3} if reasonable conditions hold, that is, at a slower pace than the n^{−1/2} rate attained by regular problems. Moreover, the maximum likelihood estimator has a nonstandard limiting distribution known as Chernoff's distribution (Groeneboom and Wellner, 2001). A considerable body of work has studied the asymptotic properties of the nonparametric likelihood ratio statistic under monotonicity. In particular, Banerjee and Wellner (2001) initiated the research strain of testing whether a monotone function ψ assumes the particular value ψ(t_0) = ψ_0 at a fixed point t_0.
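The 'monotone' estimation problem mentioned above can be illustrated with a minimal pool-adjacent-violators (PAVA) sketch of the least squares monotone fit; this is our own toy implementation of the classical algorithm, not any of the cited implementations.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators algorithm: the least squares fit of a
    nondecreasing sequence to y. The monotone estimator is piecewise
    constant, obtained by averaging over pooled 'violating' blocks."""
    values, weights = [], []
    for v in np.asarray(y, dtype=float):
        values.append(v)
        weights.append(1)
        # pool the last two blocks while they violate monotonicity
        while len(values) > 1 and values[-2] > values[-1]:
            w = weights[-2] + weights[-1]
            values[-2:] = [(weights[-2] * values[-2]
                            + weights[-1] * values[-1]) / w]
            weights[-2:] = [w]
    return np.repeat(values, weights)

print(pava([1.0, 3.0, 2.0, 4.0]))  # the violating pair (3, 2) is pooled to 2.5
```

The piecewise constant form of the solution reflects the cone geometry discussed above: the estimator sits on a face of the monotone cone whenever blocks are pooled.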
An extension to regression is given by Banerjee (2007), who assumes that the conditional distribution p(y, θ(x)) of the response variable Y, given the covariate X = x, belongs to a regular parametric model, where the parameter θ, or part of it, is specified by a monotone function θ(x) ∈ Θ of x. Other types of shape constraint problems have emerged in the meantime which entail concavity or convexity and uni-modality of the functions to be estimated; see Appendix B of the Supplementary Material. Shape constraints arise also in many high-dimensional problems, which opens frontiers for research in nonregular settings; see for example Bellec (2018). Most recently, Doss and Wellner (2019) showed that the likelihood ratio statistic is asymptotically pivotal if the density is log-concave. The class of log-concave densities has many attractive properties from a statistical viewpoint; an account of the key aspects is given in Samworth (2018). Non-standard limiting distributions characterize shape constrained inferential problems. Generally, the likelihood ratio statistic converges to a limiting distribution which can be described by a functional of a standard Brownian motion plus a quadratic drift. In addition, the limiting distribution is asymptotically pivotal, that is, it does not depend on the nuisance parameters, as happens for the common χ² distribution of regular parametric problems. Recent work on high-dimensional asymptotics of likelihood ratio tests under convexity constraints is discussed in Han et al. (2022).

8 Computational aspects and software

Deriving the asymptotic distribution of the likelihood ratio statistic under non-standard conditions is generally a cumbersome task. In some cases the limiting distribution is well defined and usable, as for instance when it boils down to a chi or chi-bar squared distribution. Quite often, however, the analytical approximation is intractable, as when we have to determine the percentiles of a Gaussian random field.
This fact has motivated the development of alternative test statistics whose null distribution presents itself in a more manageable form; see, for instance, the contributions mentioned in Section 5.3. Or, we may rely upon simulation, using Monte Carlo or the bootstrap, as mentioned in passing in Sections 3.3, 4.2, 5.3 and 6.2. Bootstrapping, in particular, allows us to recover the finite-sample null distribution of the test statistic very naturally, provided that the bootstrap resamples from a consistently estimated density (Titterington, 1990; Feng and McCulloch, 1996). In general terms, this requires that the maximum likelihood estimator converges to the possibly non-identifiable subset of the parameter space to which the true parameter belongs. An example of inconsistency of the bootstrap when a parameter is on the boundary of the parameter space, together with further counterexamples that are already in the literature, is provided in Andrews (2000). Bootstrap likelihood ratio tests for finite-mixture models are reviewed in Feng and McCulloch (1996). A most recent application for boundary points is Cavaliere et al. (2022), while Kirch (2008), Hušková and Kirch (2012), and the very recent papers by Chen and Cabrera (2020) and Yu and Chen (2022) bootstrap the critical values of change-point tests. Permutation is also used to derive the critical values for test statistics in change-point analysis; see for example Kirch and Steinebach (2006) and references therein. A compromise between analytical approximation and simulation is the hybrid approach described in Brazzale et al. (2007, Section 7.7), where parts of the analytical approximation are obtained by simulation. However, simulation becomes useless if the limiting distribution diverges to infinity, as already mentioned in Example 1.1. A non-exhaustive list of examples is provided in paragraphs 5.2.2 and 5.3 of Appendix B of the Supplementary Material.
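The parametric bootstrap recipe can be illustrated on the simplest boundary problem, testing θ = 0 against θ ≥ 0 for N(θ, 1) data, where the likelihood ratio statistic has the closed form W = {max(√n ȳ, 0)}²; the function names and settings below are ours, for illustration only. Under the null hypothesis the bootstrap distribution reproduces the 50:50 mixture of a point mass at zero and a χ²₁.

```python
import numpy as np

def lr_boundary(y):
    """LR statistic for H0: theta = 0 versus H1: theta >= 0 with
    Y_i ~ N(theta, 1): W = {max(sqrt(n) * ybar, 0)}^2."""
    y = np.asarray(y, dtype=float)
    return max(np.sqrt(y.size) * y.mean(), 0.0) ** 2

def bootstrap_pvalue(y, B=5000, seed=0):
    """Parametric bootstrap p-value: resample from the fitted null model
    N(0, 1) and compare the observed statistic with the bootstrap
    distribution of W."""
    rng = np.random.default_rng(seed)
    w_obs = lr_boundary(y)
    w_boot = np.array([lr_boundary(rng.standard_normal(len(y)))
                       for _ in range(B)])
    return np.mean(w_boot >= w_obs)

rng = np.random.default_rng(3)
y_obs = 0.4 + rng.standard_normal(40)
print(bootstrap_pvalue(y_obs))   # a positive true mean should give a small p-value
```

Note that roughly half of the bootstrap replicates yield W = 0, so a negative sample mean automatically receives a p-value of one; this is the boundary effect that a naive χ²₁ reference would miss.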
Substantive applications in which the approximations have been found useful and details of how to implement the methods in standard computing packages are generally missing. Reviewing all software contributions which implement likelihood ratio based inference for nonregular problems in a more or less formalized way is beyond the scope of this paper. In the following we try to give a selected list of packages for the numerical computing environment R (R Core Team, 2020). We will again group them into the three broad classes reviewed in the previous Sections 3-6, that is, boundary problems, mixture models and change-point problems. Crainiceanu and Ruppert's (2004) proposal, which tests for a null variance component, is implemented in the RLRsim package by Scheipl et al. (2008). We furthermore mention the varTestnlme package by Baey and Kuhn (2019) and the lmeVarComp package by Zhang (2018). The first again tests for null variance components in linear and nonlinear mixed effects models, while the second implements the method proposed by Zhang et al. (2016) for testing additivity in nonparametric regression models. An account of some early software implementations to handle mixture models can be found in Haughton (1997), in the Appendix of McLachlan and Peel (2000) and also in the Software section of the recent review paper by McLachlan et al. (2019). A most recent implementation for use in astrostatistics is the TOHM package by Algeri and van Dyk (2020), which implements a computationally efficient approximation of the likelihood ratio statistic for a multidimensional two-component finite-mixture model. The package is also available for the Python programming language. The code provided by Chauveau et al. (2018) for testing a two-component Gaussian mixture versus the null hypothesis of homogeneity using the EM test is available through the MixtureInf package by Li et al. (2016).
Maximum likelihood estimation in finite mixture models based on the EM algorithm is furthermore addressed in the mixR package by Yu (2018), which also considers different information criteria and bootstrap resampling. The clustBootstrapLRT function of the mclust package by Scrucca et al. (2016) also implements bootstrap inference for the likelihood ratio to test the number of mixture components. A further implementation of the likelihood ratio test for mixture models is the mixtools package by Benaglia et al. (2009). All R packages linked to finite mixture models are listed on the CRAN Task View webpage for Cluster Analysis & Finite Mixture Models. The changepoint package by Killick and Eckley (2014) considers a variety of test statistics for detecting change-points, among which the likelihood ratio. The strucchange package by Zeileis et al. (2002) provides methods for detecting changes in linear regression models. We may furthermore mention the segmented package by Muggeo (2008b) for change-point detection in piecewise linear models, the bcp package by Erdman and Emerson (2007) for Bayesian analysis of a single change in univariate time series, and the CPsurv package by Brazzale et al. (2019) for nonparametric change-point estimation in survival data.

9 Discussion

Non-regularity can arise in many different ways, though all entail the failure of one, at times even two, regularity conditions. Many problems can be dealt with straightforwardly; others require sophisticated tools, including limit theorems and extreme value theory for random fields. The wealth of contributions which has been produced during the last 70 years, synthesized in Figure 1, testifies that the interest in this type of problems has not faded since they made their entrance back in the early 1950s. The best-studied nonregular cases are boundary problems. Common examples of application are testing for a zero variance component in mixed effect models and constrained one-sided tests.
The limiting distribution of the likelihood ratio is generally a chi-bar squared distribution with a number of components and mixing weights that depend on the number of parameters which fall on the boundary. This is also the only type of problem for which higher order results are available. Indeterminate parameter problems are far more heterogeneous. Apart from finite mixtures, the remaining cases can be put under the two umbrellas of non-identifiable parameters and singular information matrix. The methodological difficulties increase as the limiting distributions depend on the parametric family and on the unknown parameters. If θ is scalar and we want to test homogeneity against a two-component mixture, the distribution of the likelihood ratio converges to the distribution of the supremum of a Gaussian process. For a larger number of mixture components and/or multidimensional θ, this becomes the distribution of the supremum of a Gaussian random field. In these cases, simulation-based approaches are often needed to obtain the required tail probabilities. Moreover, constraints must be imposed to guarantee identifiability of the mixture parameters. These constraints may act on the parameter space, by bounding it or imposing suitable separation conditions among the parameters, or on the alternative hypotheses, which must be contiguous. A further possibility is to penalize the likelihood function so that the limiting distribution of the corresponding modified likelihood ratio statistic is chi-squared or well approximated by a chi-bar squared distribution. Change-point problems range from the simple situation of detecting an alteration in the regime of a random sequence to identifying a structural break in multiple linear regression with possibly correlated errors. Although in the latter case the change-point can assume any value, in the first situation it must lie in a discrete set.
In addition, there is a tie between change-point problems and indeterminate parameter problems whenever setting one of the components of the model to a particular value can make other components, or parts of them, disappear, as shown in Example 4.1. Generally, the limiting distribution of the likelihood ratio statistic for detecting a change either converges to a Gaussian process or can be adjusted to converge to a Gumbel type distribution (Horváth and Rice, 2014a). This technique was first applied by Darling and Erdős (1956) to derive the limiting distribution of the maximum of independent random variables, and has further been extended to dependent data; see Aue and Horváth (2013) for a review. Approximate critical values of the test statistics can be obtained from Bonferroni's inequality, by using asymptotic arguments or simulation. In some situations the likelihood ratio statistic for the unknown change-point is unbounded. From the more practical point of view, use of the asymptotic distribution of the likelihood ratio statistic loses its appeal once it goes beyond the common χ² distribution. As a result, simulation-based tests that circumvent the asymptotic theory are often used. Indeed, simulation may nowadays be used to establish the desired empirical distributions of the estimators and to compute approximations for p-values obtained from Wald-type statistics. For the most intricate situations, the authors suggest to use resampling-based techniques, such as parametric and nonparametric bootstrapping, to explore the finite-sample properties of likelihood-based statistics. Methodological difficulties, such as the possible divergence of the likelihood ratio statistic, and prohibitive computational costs limit, however, this possibility to specific applications. The review has focused on frequentist hypothesis testing using the likelihood ratio statistic.
Maximum likelihood estimation for a class of nonregular cases, which includes the three-parameter Weibull, the gamma, log-gamma and beta distributions, is considered in Smith (1985). A significant literature has grown since then, parts of which culminated in the book-length account of techniques for parameter estimation in non-standard settings by Cheng (2017). Most of the difficulties encountered in nonregular settings vanish if the model is analysed using Bayes' rule, though one has always to be cautious. Bayesian and nonparametric contributions with suitable links to their frequentist counterparts are mentioned in Appendix B of the Supplementary Material.

A Prototype demonstrations

Proof sketch A.1. Boundary problem (Self and Liang, 1987, Theorem 3) Let y_1, . . . , y_n be n independent observations on the random variable Y, and let l(θ) denote the associated log-likelihood function, where θ takes values in the parameter space Θ, a subset of R^p. We want to test whether the true value of θ lies in the subset of Θ denoted by Θ_0, versus the alternative that it falls in the complement of Θ_0 in Θ, denoted by Θ_1. Let θ_0 be the true value of θ, which may fall on the boundary of Θ. First, expand 2{l(θ) − l(θ_0)} around θ_0,

2{l(θ) − l(θ_0)} = 2(θ − θ_0)ᵀu(θ_0) − (θ − θ_0)ᵀi(θ_0)(θ − θ_0) + O_p(||θ − θ_0||³),

where u(θ) is the score function, i(θ) the Fisher information matrix and || · || represents the Euclidean norm. Rewrite this expansion as a function of the variable Z̃_n = n^{−1} i_1(θ_0)^{−1} u(θ_0), where i(θ_0) = n i_1(θ_0) and i_1(θ_0) is the Fisher information matrix associated with a single observation. This yields

2{l(θ) − l(θ_0)} = −{√n Z̃_n − √n(θ − θ_0)}ᵀ i_1(θ_0) {√n Z̃_n − √n(θ − θ_0)} + u(θ_0)ᵀ i(θ_0)^{−1} u(θ_0) + O_p(||θ − θ_0||³).
Consider now the likelihood ratio statistic

W = 2{ sup_{θ∈Θ} l(θ) − sup_{θ∈Θ_0} l(θ) }
  = sup_{θ∈Θ} [ −{√n Z̃_n − √n(θ − θ_0)}ᵀ i_1(θ_0) {√n Z̃_n − √n(θ − θ_0)} ]
  − sup_{θ∈Θ_0} [ −{√n Z̃_n − √n(θ − θ_0)}ᵀ i_1(θ_0) {√n Z̃_n − √n(θ − θ_0)} ] + O_p(||θ − θ_0||³).

Approximate the two sets Θ and Θ_0 by the cones C_{Θ−θ_0} and C_{Θ_0−θ_0} centered at θ_0, respectively. Now, given that √n Z̃_n converges in distribution to a multivariate normal distribution with mean zero and covariance matrix i_1(θ_0)^{−1}, for all θ such that θ − θ_0 = O_p(n^{−1/2}), the limiting distribution of W becomes

sup_{θ∈C̃} [ −(Z − θ)ᵀ(Z − θ) ] − sup_{θ∈C̃_0} [ −(Z − θ)ᵀ(Z − θ) ],

or equivalently as in Expression (3.3), where C̃ and C̃_0 are the corresponding transformations of the cones C_{Θ−θ_0} and C_{Θ_0−θ_0}, respectively, and Z is multivariate standard normal.

Proof sketch A.2. Non-identifiable parameter (Liu and Shao, 2003, Theorem 2.3) Let Y_1, . . . , Y_n be n independent and identically distributed random observations from the true distribution function F_0. Suppose that we want to test H_0 : θ ∈ Θ_0 against H_1 : θ ∈ Θ \ Θ_0, where Θ_0 = {θ ∈ Θ : F_θ = F_0}, with F_θ the distribution function indexed by θ. Assume the generalized differentiable in quadratic mean (GDQM) expansion

h_i(θ) = H(θ)S_i(θ) − H²(θ) + H²(θ)R_i(θ),   (A.1)

with h_i(θ) = λ_i(θ) − 1, where λ_i(θ) denotes the likelihood ratio between F_θ and F_0 evaluated at Y_i. H(θ) is the Hellinger distance between F_θ and F_0, defined as H²(θ) = E_{F_0}[{√λ_i(θ) − 1}²]/2, and S_i(θ) and R_i(θ) are such that E_{F_0}[S_i(θ)] = E_{F_0}[R_i(θ)] = 0. Furthermore assume that sup_{θ∈Θ_{c/√n}} |ν_n(S_i(θ))| = O_p(1) and sup_{θ∈Θ_{c/√n}} |E_{F_n}[R_i(θ)]| = o_p(1), for all c > 0, where F_n(·) indicates the empirical distribution function and ν_n(g) = n^{−1/2}(n E_{F_n} − n E_{F_0})[g] is a random process defined for any integrable function g. Here, Θ_ε = {θ ∈ Θ | 0 < H(θ) ≤ ε} defines the Hellinger neighbourhood of F_0.
Now, using the GDQM expansion and a Taylor series expansion of 2 log{1 + h_i(θ)}, the log-likelihood ratio function lr(θ) can be expressed as

lr(θ) = 2 Σ_{i=1}^n log{1 + h_i(θ)} = 2√n H(θ) ν_n(S_i(θ)) − n H²(θ) {2 + E_{F_n}[S_i²(θ)]}/2 + o_p(1),   (A.2)

in Θ_{c/√n} for all c > 0. Under some general conditions on the trio {S_i(θ), H(θ), R_i(θ)} (Liu and Shao, 2003, Theorem 2.2), the quadratic expansion in (A.2) holds uniformly in θ ∈ Θ_ε for some small enough ε > 0. Direct maximization of (A.2) with respect to √n H(θ) allows us to approximate the likelihood ratio statistic by the quadratic form

{ν_n(S_i(θ)) ∨ 0}² / {1 + E_{F_n}[S_i²(θ)]/2} ≈ {ν_n(S*_i(θ)) ∨ 0}².

Let S be the set of all L² limits of the standardized score function

S*_i(θ) = S_i(θ) / {1 + E_{F_0}[S_i²(θ)]/2}^{1/2}

as H(θ) → 0. To complete the proof, we assume there exists a centered Gaussian process {G_S : S ∈ S} on the same probability space of the empirical process ν_n, with uniformly continuous sample paths and covariance kernel E_{F_0}[G_{S_1} G_{S_2}] = E_{F_0}[S_1 S_2], for all S_1, S_2 belonging to S. Using results from statistical limit theory, it is possible to prove the following two inequalities

W(H_0) ≤ sup_{S∈S} {G_S ∨ 0}² + o_p(1),
W(H_0) ≥ sup_{S∈S} {G_S ∨ 0}² + o_p(1),

which imply that lim_{n→∞} W(H_0) = sup_{S∈S} {G_S ∨ 0}².

Proof sketch A.3. Finite mixture model (Ghosh and Sen, 1985, Theorem 2.1) Let y_1, . . . , y_n be a sample of n i.i.d. observations from the strongly identifiable mixture model (5.1) and let

l(η) = Σ_{i=1}^n log{(1 − π)f_1(y_i; θ_1) + πf_2(y_i; θ_2)},

with η = (π, θ_1, θ_2), be the corresponding log-likelihood function. Suppose that H_0 : π = 0 is true, so the true model density is f_1(y; θ_1⁰), where θ_1⁰ is the true value of θ_1. Unless differently stated, all functions and expectations will be evaluated under this assumption, that is, for η_0 = (0, θ_1⁰, θ_2), with arbitrary θ_2.
Let $W(H_0)$ be the likelihood ratio statistic
\[
W(H_0) = 2\Bigl\{\sup_{\pi\in[0,1],\,\theta_1\in\Theta_1,\,\theta_2\in\Theta_2} l(\eta) - \sup_{\pi=0,\,\theta_1\in\Theta_1,\,\theta_2\in\Theta_2} l(\eta)\Bigr\}
= \sup_{\theta_2\in\Theta_2} 2\Bigl\{\sup_{\pi\in[0,1],\,\theta_1\in\Theta_1} l(\eta) - \sup_{\pi=0,\,\theta_1\in\Theta_1} l(\eta)\Bigr\}. \tag{A.3}
\]
Expand $l(\eta)$ with respect to the first two components of $\eta = (\pi, \theta_1, \theta_2)$ around $\pi = 0$ and $\theta_1 = \theta_1^0$. This yields
\[
l(\eta) = l_1(\theta_1^0) + A_n(\eta) + o_p(1), \tag{A.4}
\]
where $l_1(\theta_1) = \sum_{i=1}^n \log f_1(y_i;\theta_1)$ and
\[
A_n(\eta) = \pi l_\pi + (\theta_1-\theta_1^0)^\top l_{\theta_1}
+ \tfrac12\bigl\{\pi^2 l_{\pi\pi} + 2\pi(\theta_1-\theta_1^0)^\top l_{\pi\theta_1}
+ (\theta_1-\theta_1^0)^\top l_{\theta_1\theta_1}(\theta_1-\theta_1^0)\bigr\}.
\]
Here, the two indexes $\pi$ and $\theta_1$ denote differentiation with respect to the corresponding parameter components. As shown in Ghosh and Sen (1985), in virtue of the Kuhn--Tucker--Lagrange theorem the unconstrained supremum of $A_n(\eta)$ becomes
\[
\sup_{\pi\in[0,1],\,\theta_1\in\Theta_1} A_n(\eta) =
\begin{cases}
\tfrac12\,\bigl(u_0(\theta_2), u_1^\top\bigr)\, i(\theta_2)^{-1}\, \bigl(u_0(\theta_2), u_1^\top\bigr)^\top, & \text{if } Z_n(\theta_2) \ge 0, \\
\tfrac12\, u_1^\top i_{11}^{-1} u_1, & \text{if } Z_n(\theta_2) < 0,
\end{cases}
\]
where we define
\[
Z_n(\theta_2) = \frac{u_0(\theta_2)\,i^{00}(\theta_2) + i^{01}(\theta_2)\,u_1}{\{i^{00}(\theta_2)\}^{1/2}}.
\]
In the previous expressions, $u_0(\theta_2) = l_\pi(\eta^0)$, $u_1 = l_{\theta_1}(\eta^0)$, $i$ represents the expected information matrix with respect to $\pi$ and $\theta_1$, $i_{jk}(\theta_2)$ denotes the $(j,k)$-th block of $i$, for $j = 0,1$ and $k = 0,1$, while $i^{jk}(\theta_2)$ denotes the $(j,k)$-th block of $i^{-1}$. Similarly, the constrained supremum of $A_n(\eta)$ is
\[
\sup_{\pi=0,\,\theta_1\in\Theta_1} A_n(\eta) = \tfrac12\, u_1^\top i_{11}^{-1} u_1.
\]
Using known results on the inversion of block matrices, the likelihood ratio statistic (A.3) reduces to
\[
W(H_0) = \sup_{\theta_2\in\Theta_2} Z_n^2(\theta_2)\, I_{\{Z_n(\theta_2)\ge 0\}} + o_p(1).
\]
To ensure the convergence of $Z_n(\theta_2)$ to the zero-mean Gaussian process $Z(\theta_2)$, the set $\Theta_2$ needs to be bounded and a Lipschitz condition has to hold for the $u_0$ component of the score vector which, in turn, implies tightness of $u_0$. These conditions furthermore guarantee that the remainder term in expansion (A.4) is $o_p(1)$ over the two bounded sets of $\pi$ and $\theta_1$ and uniformly in $\theta_2$.

Proof sketch A.4.
Shift in location for the Gaussian model (Hawkins, 1977, Theorem 1)

Given $n$ independent Gaussian observations, we want to test whether $Y_i \sim N(\mu, \sigma^2)$, $i = 1, \dots, n$, against the alternative that there exists a $0 < \tau < n$ at which the unknown mean $\mu$ switches to $\mu' \neq \mu$. The variance $\sigma^2$ is assumed to be known; we set it to one without loss of generality. Recall from Section 6.2 that the likelihood ratio statistic can be re-expressed as a function of $U = \max_{1\le\tau<n} |T_\tau|$, where
\[
T_\tau = \sqrt{\frac{n}{\tau(n-\tau)}} \sum_{i=1}^{\tau} (Y_i - \bar{Y}).
\]
The null distribution of $U$ is given at (6.5). The proof considers the events $A_\tau = \{|T_\tau| \in (u, u+du)\}$, $B_\tau = \{|T_i| < |T_\tau|,\ \forall i \in (1, \dots, \tau-1)\}$ and $C_\tau = \{|T_i| < |T_\tau|,\ \forall i \in (\tau+1, \dots, n)\}$. Then
\[
F_U(u+du) - F_U(u) = \Pr\{U \in (u, u+du)\}
= \Pr\Bigl[\bigcup_{\tau=1}^{n-1} \{|T_\tau| \in (u, u+du)\} \cap \{|T_\tau| > |T_i|,\ i \neq \tau\}\Bigr]
= \sum_{\tau=1}^{n-1} \Pr(A_\tau \cap B_\tau \cap C_\tau)
= \sum_{\tau=1}^{n-1} \Pr(A_\tau)\,\Pr(B_\tau \mid A_\tau)\,\Pr(C_\tau \mid A_\tau \cap B_\tau).
\]
Since $T_\tau \sim N(0,1)$, we have that $\Pr(A_\tau) = 2\phi(u)\,du + o(du)$. Moreover,
\[
\Pr(B_\tau \mid A_\tau) = \Pr(|T_i| < |T_\tau|,\ \forall i \in (1, \dots, \tau-1) \mid |T_\tau| = u)
= \Pr(|T_i| < u,\ \forall i \in (1, \dots, \tau-1) \mid |T_\tau| = u) + O(du)
= g_\tau(u) + O(du), \tag{A.5}
\]
where $g_1(u) = 1$ for $u \ge 0$ and $g_\tau(u)$ is given in (6.6). Since the series $\{T_1, T_2, \dots, T_{n-1}\}$ is Markovian, $\{T_1, T_2, \dots, T_{\tau-1}\}$ and $\{T_{\tau+1}, T_{\tau+2}, \dots, T_{n-1}\}$ are conditionally independent given $T_\tau$. It follows that the events $B_\tau$ and $C_\tau$ are independent given $T_\tau = u$, that is, $\Pr(C_\tau \mid A_\tau \cap B_\tau) = \Pr(C_\tau \mid A_\tau)$. By the probability symmetry between $B_\tau$ and $C_\tau$ (Chen and Gupta, 2012, §2.1.1), and arguing as for $\Pr(B_\tau \mid A_\tau)$, it follows that
\[
\Pr(C_\tau \mid A_\tau) = g_{n-\tau}(u) + O(du). \tag{A.6}
\]
Combining (A.5) and (A.6), we obtain
\[
\Pr\{U \in (u, u+du)\} = 2\phi(u) \sum_{\tau=1}^{n-1} g_\tau(u)\,g_{n-\tau}(u)\,du + o(du),
\]
which corresponds to Expression (6.5).
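The statistic $U$ is straightforward to compute from the cumulative sums of the centred observations. The following sketch (sample sizes, seed and the size of the shift are our own illustrative choices) contrasts a sample without a change and one with a mean shift at $\tau = 50$:

```python
import numpy as np

def hawkins_statistic(y):
    """U = max_{1 <= tau < n} |T_tau|, with T_tau the standardized partial
    sum sqrt(n / (tau * (n - tau))) * sum_{i <= tau} (y_i - ybar)."""
    n = len(y)
    cs = np.cumsum(y - y.mean())             # partial sums of centred data
    tau = np.arange(1, n)
    T = np.sqrt(n / (tau * (n - tau))) * cs[:-1]
    k = int(np.argmax(np.abs(T)))
    return float(np.abs(T[k])), int(tau[k])

rng = np.random.default_rng(2)
U0, _ = hawkins_statistic(rng.normal(0.0, 1.0, 100))      # no change
y = np.concatenate([rng.normal(0.0, 1.0, 50),
                    rng.normal(2.0, 1.0, 50)])            # shift at tau = 50
U1, tau_hat = hawkins_statistic(y)
print(U1 > U0, tau_hat)
```

Under a genuine shift the maximising index also provides the usual point estimate of the change-point $\tau$.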
SUPPLEMENTARY MATERIAL

B Annotated bibliography

The subsequent section-wise list of references supplements the work cited in the main text in an attempt to provide a comprehensive overview of the asymptotic properties of the likelihood ratio statistic in nonregular problems.

Böhning, D. and Dietz, E. (1995). Discussion of the paper by Cheng and Traylor (1995). Journal of the Royal Statistical Society, Series B (Methodological), 57, 33-34. [§3.1] Present a counterexample for finite mixtures.

Chant, D. (1974). On asymptotic tests of composite hypotheses in non standard conditions. Biometrika, 61, 291-298. [§3.2] Investigates the limiting distribution of the maximum likelihood estimator when θ lies on the boundary of a closed parameter space.

Chen, Y., Huang, J., Ning, Y., Liang, K.-Y. and Lindsay, B. G. (2015). A conditional composite likelihood ratio test with boundary constraints. Biometrika, 105, 225-232. [§3.2] Generalize Susko's (2013) results to composite likelihoods.

Chen, Y., Ning, J., Ning, Y., Liang, K.-Y. and Bandeen-Roche, K. (2017). On pseudolikelihood inference for semiparametric models with boundary problems. Biometrika, 104, 165-179. [§3.2] Establish the asymptotic behaviour of the pseudo likelihood ratio statistic under semiparametric models when testing the hypothesis that the parameter of interest lies on the boundary of its parameter space.

Drton, M. (2009). Likelihood ratio tests and singularities. The Annals of Statistics, 37, 979-1012. [§3.2] Uses tools from algebraic geometry to study the asymptotic distribution of the likelihood ratio statistic when the true parameter value is a singularity, as for instance a cusp.

Feder, P. I. (1968). On the distribution of the log likelihood ratio test statistic when the true parameter is near the boundaries of the hypothesis regions. The Annals of Mathematical Statistics, 39, 2044-2055. [§3.2] Extends Chernoff's (1954) results to the case where the true parameter value is 'near' the boundary of Θ0 and Θ1. Assumes that the true parameter value θ⁰ₙ = θ⁰ + o(1) is a sequence of points, not necessarily in Θ0 or Θ1, such that θ⁰ₙ approaches the boundary given by the intersection Θ̄0 ∩ Θ̄1 of their complementary subsets Θ̄0 and Θ̄1.

Moran, P. A. P. (1971). Maximum likelihood estimation in non standard conditions. Mathematical Proceedings of the Cambridge Philosophical Society, 70, 441-450. [§3.2] Investigates the limiting distribution of the maximum likelihood estimator when θ lies on the boundary of a closed parameter space.

Susko, E. (2013). Likelihood ratio tests with boundary constraints using data-dependent degrees of freedom. Biometrika, 100, 1019-1023. [§3.2] Makes the same assumptions as Self and Liang (1987), but requires that all nuisance parameters are interior points of Θ. Proposes a data-dependent solution which avoids the calculation of the mixing weights of the chi-bar squared limit distribution and performs well in terms of power and type I error.

Vu, H. T. V. and Zhou, S. (1997). Generalization of likelihood ratio tests under nonstandard conditions. The Annals of Statistics, 25, 897-916. Derive the large-sample distribution of estimators obtained from estimating functions for models involving covariates. The non-standard asymptotic distribution of the likelihood ratio statistic for the two-way nested variance components model is derived as an example.

Wood, S. N. (2013). A simple test for random effects in regression models. Biometrika, 100, 1005-1010. [§3.3] Extends Crainiceanu and Ruppert (2004) to generalized linear mixed models with multiple variance components. Exploits the link between random effects and penalized regression to develop a simple simulation-free test for a null variance component based on the restricted likelihood ratio, which under the null hypothesis follows a weighted sum of squared independent standard normal random variables.

Zhang, D. and Lin, X. (2008). Variance component testing in generalized linear mixed models for longitudinal/clustered data and other related topics. In Random Effect and Latent Variable Model Selection (D. B. Dunson, Editor), p. 19-36, Lecture Notes in Statistics, Springer-Verlag, New York. [§3.3] Extend Stram and Lee (1994) ideas to generalized linear mixed effects models to test if between-subject variation is absent.

Claeskens, G., Nguti, R. and Janssen, P. (2008). One-sided tests in shared frailty models. Test, 17, 69-82. [§3.4] Propose likelihood ratio tests in frailty models. Extend Maller and Zhou (2003) to allow for the presence of covariates. Consider also the limiting null distribution of the score statistic.

Maller, R. A. and Zhou, X. (2003). Testing for individual heterogeneity in parametric models for event-history data. Mathematical Methods of Statistics, 12, 276-304. [§3.4] Show that under minimal conditions on the censoring mechanism, the likelihood ratio statistic for homogeneity testing asymptotically distributes as a χ̄²(ω, 1) with ω = (0.5, 0.5).

Molenberghs, G. and Verbeke, G. (2007). Likelihood ratio, score and Wald tests in a constrained parameter set. The American Statistician, 61, 22-27. [§3.4] Compare the performance of the Wald, score and likelihood ratio statistics in multivariate one-sided testing. Suggest to consider the likelihood ratio as the default choice.

Andrews, D. W. K. and Ploberger, W. (1994). Optimal tests when a nuisance parameter is present only under the alternative. Econometrica, 62, 1383-1414. [§4.2] Discuss asymptotically optimal tests for the linear model with unknown error variance. Use their results to test for a one-time structural change with unknown change-point and discuss several other examples.

Bowden, R. (1973). The theory of parametric identification. Econometrica, 41, 1069-1074. [§4.2] Sets out a general criterion for the identifiability of a statistical system based on Kullback's information integral.

Davies, R. B. (1977). Hypothesis testing when a nuisance parameter is present only under the alternative. Biometrika, 64, 247-254. [§4.2] Investigates the construction of optimal likelihood-based tests under loss of identifiability for a two-parameter model when the test statistic follows a normal distribution.

Davies, R. B. (1987). Hypothesis testing when a nuisance parameter is present only under the alternative. Biometrika, 74, 33-43. [§4.2] Extends Davies (1977) to the case when the model distribution is chi-squared.

Davies, R. B. (2002). Hypothesis testing when a nuisance parameter is present only under the alternative: Linear model case. Biometrika, 89, 484-489. [§4.2] Extends Davies (1977) and Davies (1987) to the linear model with unknown error variance.

Fortunati, S., Gini, F., Greco, M., Farina, A., Graziano, A. and Giompapa, S. (2012). An identifiability criterion in the presence of random nuisance parameters. Proceedings of the 20th European Signal Processing Conference (EUSIPCO 2012), Bucharest, August 27-31, 2012. [§4.2] Extend Bowden's (1973) result, which connects parameter identifiability to non-singularity of the information matrix, to the nuisance parameter case.

Ritz, C. and Skovgaard, Ib M. (2005). Likelihood ratio tests in curved exponential families with nuisance parameters present only under the alternative. Biometrika, 92, 507-517. [§4.2] Derive the asymptotic distribution of the likelihood ratio and of the related score statistic for a general curved exponential family for which some nuisance parameters vanish under the null hypothesis. Their results are derived without the need to assume compactness of the parameter space, a condition which is generally required when some parameters vanish under the null hypothesis.

Song, R., Kosorok, M. R. and Fine, J. P. (2009). On asymptotically optimal tests under loss of identifiability in semiparametric models. The Annals of Statistics, 37, 2409-2444. [§4.2] Consider tests of hypothesis when the parameters are not identifiable under the null hypothesis in the context of semiparametric models.

Aitchison, J. and Silvey, S. D. (1960). Maximum likelihood procedures and associated tests of significance. Journal of the Royal Statistical Society, Series B (Methodological), 22, 154-171. [§4.3] Address the problem of a singular information matrix when the null hypothesis is specified by constraints on the parameters and the outcome of the test dictates whether it is necessary to provide estimates of these parameters.

Barnabani, M. (2006). Inference in the indeterminate problem. STATISTICA, 1, 59-75. [§4.3] Proposes to maximise a suitably penalized log-likelihood function which guarantees that the corresponding estimator of the parameter is consistent and asymptotically normal. Allows one to construct a Wald-type test statistic which has a limiting chi-squared distribution both under the null and the alternative hypothesis.

Ekvall, K. O. and Bottai, M. (2022). Confidence regions near singular information and boundary points with applications to mixed models. The Annals of Statistics, 50, 1806-1832. [§4.3] Propose confidence regions with asymptotically correct uniform coverage probability for parameters whose Fisher information matrix can be singular at important points of the parameter set.

El-Helbawy, A. T. and Hassan, T. (1994). On the Wald, Lagrangian multiplier and likelihood ratio tests when the information matrix is singular. Journal of the Italian Statistical Society, 1, 51-60. [§4.3] Build upon Silvey (1959) and develop modified formulae for the Wald, score and likelihood ratio statistics which, under standard regularity conditions, asymptotically follow a chi-squared distribution with degrees of freedom specified by the number of constraints.

Jin, F. and Lee, L.-F. (2018). Lasso maximum likelihood estimation of parametric models with singular information matrices. Econometrics, 6, 8. [§4.3] Propose to fit the parameters of models with singular information matrix by adaptive lasso while allowing the true parameter vector to lie on the boundary of the parameter space.

Azaïs, J.-M., Gassiat, É. and Mercadier, C. (2006). Asymptotic distribution and local power of the log-likelihood ratio test for mixtures: bounded and unbounded cases. Bernoulli, 12, 775-799. [§5.2] Provide the asymptotic distribution of the likelihood ratio statistic under the null hypothesis of a K₀-component model and under contiguous alternatives for a general mixture of parametric populations for a bounded parameter space.

Azaïs, J.-M., Gassiat, É. and Mercadier, C. (2009). The likelihood ratio test for general mixture models with or without a structural parameter. ESAIM: Probability and Statistics, 13, 301-327. [§5.2] Consider the likelihood ratio for testing homogeneity in the general K-component model, with application to Gaussian, Poisson and binomial distributions, and testing for the number of components of a finite mixture with or without a nuisance parameter. A number of conditions need be imposed to avoid divergence of the limiting distribution of the likelihood ratio test.

Chen, J. and Kalbfleisch, J. D. (2005). Modified likelihood ratio test in finite mixture models with a structural parameter. Journal of Statistical Planning and Inference, 129, 93-107. [§5.2] Study a modification of the likelihood ratio statistic similar to that proposed by .

Testing homogeneity in a mixture of von Mises distributions with a structural parameter. The Canadian Journal of Statistics, 36, 129-142. [§5.2] Derive the asymptotic null distribution of the modified likelihood ratio test introduced in Chen et al. (2001) and of a further modification, called the iterative modified likelihood ratio test, for testing model homogeneity against the alternative that the model is a two-component von Mises mixture with unknown mean directions, without and with nuisance parameters.

Jeffries, N. O. (2003). A note on "Testing the number of components in a normal mixture". Biometrika, 90, 991-994. [§5.2] Disproves the Lo et al. (2001) result, based on the fact that it requires conditions on the structure of the parameter space that are generally not met when the null hypothesis of a K₀-component model holds, with K₀ ≥ 1.

Lo, Y., Mendell, N. R. and Rubin, D. B. (2001). Testing the number of components in a normal mixture. Biometrika, 88, 767-778. [§5.2] Prove that in the Gaussian case the distribution of the likelihood ratio statistic based on the Kullback-Leibler information criterion converges under the null hypothesis to a weighted sum of χ²₁ distributions.

Garel, B. (2001). Likelihood ratio test for univariate Gaussian mixture. Journal of Statistical Planning and Inference, 96, 325-350. [§5.2.2] Discusses seven distinct cases of homogeneity testing using the likelihood ratio for the general two-component Gaussian mixture model by imposing different restrictions on the means and the variances.

Lei, Q. L. and Qin, Y. S. (2009). A modified likelihood ratio test for homogeneity in bivariate normal mixtures of two samples. Journal of Systems Science and Complexity, 22, 460-468. [§5.2.2] Extend Qin and Smith (2006) to the two-sample problem by using the modified likelihood ratio statistic.

Liu, X. and Shao, Y. (2004). Asymptotics for likelihood ratio test in a two-component normal mixture model. Journal of Statistical Planning and Inference, 123, 61-81. [§5.2.2] Derive the asymptotic distribution of the likelihood ratio statistic for a two-component mixture model. Furthermore, show that the likelihood ratio diverges to infinity in probability at rate O{log(log n)} if the mean parameters are unbounded, in accordance with .

Polymenis, A. and Titterington, D. M. (1999). A note on the distribution of the likelihood ratio statistic for normal mixture with known proportions. Journal of Statistical Computation and Simulation, 64, 167-175. [§5.2.2] Analyse empirically the d = 1 scenarios treated by Goffinet et al. (1992) and give a heuristic explanation for the slow convergence. Propose to refer to the χ̄²(ω̂, 1) distribution with suitably defined mixing proportions ω̂, instead of the theoretical value ω = (0.5, 0.5), to improve the approximation in finite samples.

Qin, Y. S. and Smith, B. (2004). Likelihood ratio test for homogeneity in normal mixtures in the presence of a structural parameter. Statistica Sinica, 14, 1165-1177. [§5.2.2] Consider a mixture of two normal distributions as in Chen and Chen (2003) with the restrictions on the mean parameters given by . Identifiability is guaranteed by setting π ≤ 0.5. In addition, the mixing proportion needs to satisfy min(π, 1 − π) ≥ ε for some positive ε < 1/2. The likelihood ratio asymptotically follows a fifty-fifty mixture of a χ²₁ and a χ²₂ distribution under the hypothesis of homogeneity.

Qin, Y. S. and Smith, B. (2006). The likelihood ratio test for homogeneity in bivariate normal mixtures. Journal of Multivariate Analysis, 97, 474-491. [§5.2.2] Generalize Qin and Smith (2004) to a bivariate normal mixture model with known covariance matrix under the condition min(π, 1 − π) ≥ ε for some positive ε < 1/2 (Theorem 1). In practice, the limiting distribution must be found numerically, though an approximation is provided in their Section 4.

Chen, H. and Chen, J. (2001b). The likelihood ratio test for homogeneity in finite mixture models. The Canadian Journal of Statistics, 29, 201-215. [§5.3] Consider the same setting as Böhning et al. (1994), though the component distributions are allowed to belong to a generic parametric family. They show that under suitable conditions, which guarantee identifiability of the mixture and regularity of the component distributions, the limiting distribution of the likelihood ratio is the distribution of the squared supremum of a left-truncated standard Gaussian process, whose autocorrelation function is explicitly presented.

… to verify the hypothesis of a homogeneous model against the alternative of a Gaussian mixture of two or more components with a common and unknown variance. Show, in particular, that the χ²₂ distribution represents a stochastic upper bound to the limiting null distribution of the test statistic.

Chen, J., Li, P. and Fu, Y. (2008). … Hartigan (1985) and Chen and Chen (2001, Theorem 2).

… decline in the case of a two-component Gaussian mixture. A most recent treatment is Chauveau et al. (2018).

Andrews, D. W. K., Lee, I. and Ploberger, W. (1996). Optimal changepoint tests for normal linear regression. Journal of Econometrics, 70. Extend Andrews (1993) and determine a class of finite-sample optimal tests for the existence of a single or multiple changes at unknown time points in multiple linear regression with normal errors and known variance. Exact critical values are obtained straightforwardly on a case-by-case basis using simulation. Simulation is furthermore used to compare the power of the proposed test statistics.

Aue, A., Horváth, L., Hušková, M. and Kokoszka, P. (2008). …

… Hinkley (1969), investigates two cases: (i) there is no change (α₁ = α₂ and β₁ = β₂), and (ii) the response variable is constant until the change-point τ (β₁ = 0). On empirical grounds, suggests to approximate the respective finite-sample distributions by an F₁,ₙ₋₄. He furthermore develops confidence intervals for the change-point τ and joint confidence regions for the change-point and the model parameters.

… Andrews and Ploberger (1994) for the general regression setting with time trend regressors. Critical values are obtained via simulation.

Kim, H.-J. (1994). Tests for a change-point in linear regression. In Change-point Problems. IMS …

Figure 1: Example 1.3: Translated exponential distribution. Values of the likelihood ratio W(3) observed in 10,000 exponential samples of size n = 50 generated with rate equal to 1 and translated by θ₀ = 3. Left: χ²₁ quantile plot. The diagonal dotted line is the theoretical χ²₁ approximation. Right: histogram and superimposed χ²₂ density (solid line).

Figure 2: Publishing timeline of a selection of contributions on the large-sample properties of the likelihood ratio statistic in nonregular settings.

Figure 3: Example 3.1: Bivariate normal. The grey shaded area represents the parameter space Θ.

Figure 4: Example 4.2: Singular information. The three solid curves represent the score functions (left panel) and normalised log-likelihood functions (right panel) for three samples of size n = 10 for θ₀ = 0 and q = 3.

http://cran.r-project.org/web/views/Cluster.html

Acknowledgements

Previous and more extended versions of this manuscript can be found on arXiv:2206.15178. We thank the Editors, Associate Editors and all anonymous Referees for their valuable suggestions, which greatly helped us improve many aspects of the paper. It is furthermore a pleasure to acknowledge discussion with Prof. Ruggero Bellio, Prof. Anthony C. Davison and Prof. Nancy Reid. This research was supported by University of Padova grant no. CPDA101912 "Large- and small-sample inference under non-standard conditions" (Progetto di Ricerca di Ateneo 2010).
S Algeri, J Aalbers, K D Mora, J Conrad, Nature Review Physics. 2Algeri, S., Aalbers, J., Mora, K. D. and Conrad, J. (2020). Searching for new phenom- ena with profile likelihood ratio tests. Nature Re- view Physics, 2, 245-252. Testing one hypothesis multiple times: the multidimensional case. S Algeri, D A Van Dyk, Journal of Computational and Graphical Statistics. 29Algeri, S. and van Dyk, D. A. (2020). Test- ing one hypothesis multiple times: the multidi- mensional case. Journal of Computational and Graphical Statistics, 29, 358-371. Inconsistency of the bootstrap when a parameter is on the boundary of the parameter space. D W K Andrews, Econometrica. 68Andrews, D. W. K. (2000). Inconsistency of the bootstrap when a parameter is on the boundary of the parameter space. Econometrica, 68, 399- 405. Testing when a parameter is on the boundary of the maintained hypothesis. D W K Andrews, Econometrica. 69Andrews, D. W. K. (2001). Testing when a pa- rameter is on the boundary of the maintained hypothesis. Econometrica, 69, 683-734. Structural breaks in time series. A Aue, L Horváth, Journal of Time Series Analysis. 34Aue, A. and Horváth, L. (2013). Structural breaks in time series. Journal of Time Series Analysis, 34, 1-16. Statistical Inference Based on the likelihood. A Azzalini, Chapman & HallLondonAzzalini, A. (1996). Statistical Inference Based on the likelihood. Chapman & Hall, London. varTestnlme: variance components testing in mixedeffect models. C Baey, E Kuhn, Baey, C. and Kuhn, E. (2019). varTestnlme: variance components testing in mixed- effect models. https://github.com/baeyc/ varTestnlme Likelihood ratio tests under local alternatives in regular semiparametric models. M Banerjee, Statistica Sinica. 15Banerjee, M. (2005). Likelihood ratio tests un- der local alternatives in regular semiparametric models. Statistica Sinica, 15, 635-644. Likelihood based inference for monotone response models. M Banerjee, The Annals of Statistics. 35Banerjee, M. (2007). 
Likelihood based inference for monotone response models. The Annals of Statistics, 35, 931-956. Likelihood Ratio Tests for Monotone Functions. M Banerjee, J A Wellner, The Annals of Statistics. 29Banerjee, M. and Wellner, J. A. (2001). Like- lihood Ratio Tests for Monotone Functions. The Annals of Statistics, 29, 1699-1731. Inference and Asymptotics. O E Barndorff-Nielsen, D R Cox, Chapman & HallLondonBarndorff-Nielsen, O. E. and Cox, D. R. (1994). Inference and Asymptotics. Chapman & Hall, London. Sharp oracle inequalities for Least Squares estimators in shape restricted regression. P C Bellec, The Annals of Statistics. 46Bellec, P. C. (2018). Sharp oracle inequalities for Least Squares estimators in shape restricted regression. The Annals of Statistics, 46, 745-780. mixtools: An R Package for Analyzing Finite Mixture Models. T Benaglia, D Chauveau, D R Hunter, Y Derek, Journal of Statistical Software. 32Benaglia, T., Chauveau, D., Hunter, D. R., Derek, Y. (2009). mixtools: An R Package for Analyzing Finite Mixture Models. Journal of Statistical Software, 32, 1-29. Some aspects of change-point analysis. In Change-point Problems. P K Bhattacharya, IMS Lecture Notes -Monograph Series. E. Carlstein, H.-G. Möller and D. Siegmud23IMSBhattacharya, P. K. (1994). Some aspects of change-point analysis. In Change-point Prob- lems. IMS Lecture Notes -Monograph Series (edited by E. Carlstein, H.-G. Möller and D. Sieg- mud), 23, pp. 28-56. IMS, Hayward. On non-regular estimation. W R Blischke, A J Truelove, P B Mundle, Blischke, W. R., Truelove, A. J. and Mun- dle, P. B. (1969). On non-regular estimation. Variance bounds for estimators of location parameters. I , Journal of the American Statistical Association. 64I. Variance bounds for estimators of location pa- rameters. Journal of the American Statistical As- sociation, 64, 1056-1072. The distribution of the likelihood ratio for mixture of densities from the one-parameter exponential family. 
D Böhning, E Dietz, R Schaub, P Schlattmann, B G Lindsay, The Annals of the Institute of Statistical Mathematics. 46Böhning, D., Dietz, E., Schaub, R., Schlattmann, P. and Lindsay, B. G. (1994). The distribution of the likelihood ratio for mix- ture of densities from the one-parameter expo- nential family. The Annals of the Institute of Sta- tistical Mathematics, 46, 373-388. Applied Asymptotics: Case Studies in Small Sample Statistics. A R Brazzale, A C Davison, N Reid, Cambridge University PressCambridgeBrazzale, A. R., Davison, A. C. and Reid, N. (2007). Applied Asymptotics: Case Studies in Small Sample Statistics. Cambridge Univer- sity Press, Cambridge. Nonparametric change point estimation for survival distributions with a partially constant hazard rate. A R Brazzale, H Küchenhoff, S Krügel, T S Schiergens, H Trentzsch, W Hartl, Lifetime Data Analysis. 25Brazzale, A. R., Küchenhoff, H., Krügel, S., Schiergens, T. S., Trentzsch, H. and Hartl, W. (2019). Nonparametric change point estimation for survival distributions with a par- tially constant hazard rate. Lifetime Data Anal- ysis, 25, 301-321. Bootstrap inference on the boundary of the parameter space, with application to conditional volatility models. G Cavaliere, H B Nielsen, R S Pedersen, A Rahbek, Journal of Econometrics. 227Cavaliere, G., Nielsen, H. B., Pedersen, R. S. and Rahbek, A. (2022). Bootstrap inference on the boundary of the parameter space, with ap- plication to conditional volatility models. Jour- nal of Econometrics, 227, 241-263. Testing for univariate Gaussian mixture in practice. URL: hal.archives-ouvertes.fr/hal-01659771. D Chauveau, B Garel, S Mercier, Chauveau, D., Garel, B. and Mercier, S. (2018). Testing for univariate Gaussian mixture in practice. URL: hal.archives-ouvertes.fr/hal-01659771/, Version 2. On finite mixture models. Statistical Theory and Related Fields. J Chen, 1Chen, J. (2017). On finite mixture models. Statis- tical Theory and Related Fields, 1, 15-27. 
Large sample distribution of the likelihood ratio test for normal mixtures. H Chen, J Chen, Statistics & Probability Letters. 52Chen, H. and Chen, J. (2001). Large sample dis- tribution of the likelihood ratio test for normal mixtures. Statistics & Probability Letters, 52, 125-133. Tests for homogeneity in normal mixtures in the presence of a structural parameter. H Chen, J Chen, Statistica Sinica. 13Chen, H. and Chen, J. (2003). Tests for homo- geneity in normal mixtures in the presence of a structural parameter. Statistica Sinica, 13, 351- 365. A modified likelihood ratio test for homogeneity in finite mixture models. H Chen, J Chen, J D Kalbfleisch, Journal of the Royal Statistical Society, Series B (Methodological). 63Chen, H., Chen, J. and Kalbfleisch, J. D. (2001). A modified likelihood ratio test for ho- mogeneity in finite mixture models. Journal of the Royal Statistical Society, Series B (Method- ological), 63, 19-29. Parametric Statistical Change Point Analysis with Application to Genetics. J Chen, A K Gupta, Medicine and Finance. BostonBirkhäuser2nd ed.Chen, J. and Gupta, A. K. (2012). Paramet- ric Statistical Change Point Analysis with Appli- cation to Genetics, Medicine and Finance (2nd ed.). Birkhäuser, Boston. Hypothesis test for normal mixture models: the EM approach. J Chen, P Li, The Annals of Statistics. 37Chen, J. and Li, P. (2009). Hypothesis test for normal mixture models: the EM approach. The Annals of Statistics, 37, 2523-2542. Bootstrap confidence intervals using the likelihood ratio test in changepoint detection. R Chen, J Cabrera, Chen, R. and Cabrera, J. (2020). Bootstrap confidence intervals using the likelihood ratio test in changepoint detection. Available on https: //arxiv.org/abs/2011.03718. A review on empirical likelihood methods for regression. S X Chen, I Van Keilegom, TEST. 18Chen, S. X. and Van Keilegom, I. (2009). A review on empirical likelihood methods for re- gression. TEST, 18, 415-447. 
Non-Standard Parametric Statistical Inference. R Cheng, Oxford University PressNew YorkCheng, R. (2017). Non-Standard Parametric Sta- tistical Inference. Oxford University Press, New York. Nonregular maximum likelihood problems (with Discussion). R C H Cheng, L Traylor, Journal of the Royal Statistical Society, Series B (Methodological). 57Cheng, R. C. H. and Traylor, L. (1995). Non- regular maximum likelihood problems (with Dis- cussion). Journal of the Royal Statistical Society, Series B (Methodological), 57, 3-44. On the distribution of the likelihood ratio. H Chernoff, The Annals of Mathematical Statistics. 54Chernoff, H. (1954). On the distribution of the likelihood ratio. The Annals of Mathematical Statistics, 54, 573-578. Asymptotic distribution of the likelihood ratio test that a mixture of two binomials is a single binomial. H Chernoff, E Lander, Journal of Statistical Planning and Inference. 43Chernoff, H. and Lander, E. (1995). Asymp- totic distribution of the likelihood ratio test that a mixture of two binomials is a single binomial. Journal of Statistical Planning and Inference, 43, 19-40. Likelihood ratio statistic for exponential mixtures. G Ciuperca, The Annals of the Institute of Statistical Mathematics. 54Ciuperca, G. (2002). Likelihood ratio statistic for exponential mixtures. The Annals of the Institute of Statistical Mathematics, 54, 585-594. A Likelihood Ratio Test of a Homoscedastic Multivariate Normal Mixture Against a Heteroscedastic Multivariate Normal Mixture. L Cong, W Yao, Econometrics and Statistics. 18Cong, L., and Yao, W. (2021). A Likelihood Ra- tio Test of a Homoscedastic Multivariate Normal Mixture Against a Heteroscedastic Multivariate Normal Mixture. Econometrics and Statistics, 18, 79-88. Principles of Statistical Inference. D R Cox, Cambridge University PressNew YorkCox, D. R. (2006). Principles of Statistical Infer- ence. Cambridge University Press, New York. Theoretical Statistics. 
D R Cox, D V Hinkley, Chapman & HallLondonCox, D. R. and Hinkley, D. V. (1974). Theoret- ical Statistics. Chapman & Hall, London. Likelihood ratio tests in linear mixed models with one variance component. C M Crainiceanu, D Ruppert, Journal of the Royal Statistical Society, Series B (Methodological. 66Crainiceanu, C. M. and Ruppert, D. (2004). Likelihood ratio tests in linear mixed models with one variance component. Journal of the Royal Statistical Society, Series B (Methodologi- cal), 66, 165-185. Mathematical Methods of Statistics. H Cramér, Princeton University PressPrincetonCramér, H. (1946). Mathematical Methods of Statistics. Princeton University Press, Princeton. Limit Theorems in Change-point Analysis. M Csörgö, L Horváth, WileyNew YorkCsörgö, M. and Horváth, L. (1997). Limit Theorems in Change-point Analysis. Wiley, New York. Testing in locally conic models, and application to mixture models. D Dacunha-Castelle, É Gassiat, ESAIM: Probability and Statistics. 1Dacunha-Castelle, D. and Gassiat,É. (1997). Testing in locally conic models, and applica- tion to mixture models. ESAIM: Probability and Statistics, 1, 285-317. A limit theorem for the maximum of normalized sums of independent random variables. D A Darling, P Erdős, Duke Mathematical Journal. 23Darling, D.A. and Erdős, P. (1956). A limit theorem for the maximum of normalized sums of independent random variables. Duke Mathemat- ical Journal, 23, 143-155. Testing the order of a model using locally conic parametrization: population mixtures and stationary ARMA processes. D Dacunha-Castelle, É Gassiat, The Annals of Statistics. 27Dacunha-Castelle, D. and Gassiat,É. (1999). Testing the order of a model using locally conic parametrization: population mixtures and sta- tionary ARMA processes. The Annals of Statis- tics, 27, 1178-1209. Asymptotic Theory of Statistics and Probability. A Dasgupta, Springer-VerlagNew YorkDasGupta, A. (2008). Asymptotic Theory of Statistics and Probability. 
Springer-Verlag, New York. A C Davison, Statistical Models. Cambridge University Press. CambridgeDavison, A. C. (2003). Statistical Models. Cam- bridge University Press, Cambridge. Saddlepoint approximation in exponential models with boundary points. J Del Castillo, A Lopez-Ratera, Biometrika. 12del Castillo, J. and Lopez-Ratera, A. (2006). Saddlepoint approximation in exponen- tial models with boundary points. Biometrika, 12, 491-500. A likelihood ratio approach to sequential change point detection for a general class of parameters. H Dette, J Gösmann, Journal of the American Statistical Association. 115Dette, H. and Gösmann, J. (2020). A likelihood ratio approach to sequential change point detec- tion for a general class of parameters. Journal of the American Statistical Association, 115, 1361- 1377. Inference for the mode of a log-concave density. C R Doss, J A Wellner, The Annals of Statistics. 47Doss, C. R. and Wellner, J. A. (2019). Infer- ence for the mode of a log-concave density. The Annals of Statistics, 47, 2950-2976. Computer Age Statistical Inference: Algorithms, Evidence, and Data Science. B Efron, T Hastie, Cambridge University PressUSAEfron, B. and Hastie, T. (2016). Computer Age Statistical Inference: Algorithms, Evidence, and Data Science. Cambridge University Press, USA. bcp: An R Package for Performing a Bayesian Analysis of Change Point Problems. C Erdman, J W Emerson, Journal of Statistical Software. 23Erdman, C. and Emerson, J. W. (2007). bcp: An R Package for Performing a Bayesian Analy- sis of Change Point Problems. Journal of Statis- tical Software, 23, 1-13. Statistical inference using maximum likelihood estimation and the generalized likelihood ratio when the true parameter is on the boundary of the parameter space. Z Feng, C E Mcculloch, Statistics & Probability Letters. 13Feng, Z. and McCulloch, C. E. (1992). 
Sta- tistical inference using maximum likelihood esti- mation and the generalized likelihood ratio when the true parameter is on the boundary of the pa- rameter space. Statistics & Probability Letters, 13, 325-332. Using bootstrap likelihood ratios in finite mixture models. Z Feng, C E Mcculloch, Journal of the Royal Statistical Society, Series B (Methodological). 58Feng, Z. and McCulloch, C. E. (1996). Using bootstrap likelihood ratios in finite mixture mod- els. Journal of the Royal Statistical Society, Se- ries B (Methodological), 58, 609-617. The asymptotic distribution of a likelihood ratio test statistic for the homogeneity of poisson distribution. C Feng, H Wang, X M Tu, Sankhyā A. 74Feng, C., Wang, H. and Tu, X. M. (2012). The asymptotic distribution of a likelihood ratio test statistic for the homogeneity of poisson distribu- tion. Sankhyā A, 74, 263-268. Modified likelihood ratio test for homogeneity in a mixture of von Mises distributions. Y Fu, J Chen, P Li, Journal of Statistical Planning and Inference. 138Fu, Y., Chen, J. and Li, P. (2008). Modified like- lihood ratio test for homogeneity in a mixture of von Mises distributions. Journal of Statistical Planning and Inference, 138, 667-681. Recent asymptotic results in testing for mixtures. B Garel, Computational Statistics & Data analysis. 51Garel, B. (2007). Recent asymptotic results in testing for mixtures. Computational Statistics & Data analysis, 51, 5295-5304. On the asymptotic performance of the log likelihood ratio statistic for the mixture model and related results. J K Ghosh, K P Sen, Proceedings of the Berkeley Confer. L. LeCam, R. A. Olshen and C.-Sthe Berkeley ConferGhosh, J. K. and Sen, K. P. (1985). On the asymptotic performance of the log likelihood ra- tio statistic for the mixture model and related results. In Proceedings of the Berkeley Confer- ence in Honor of Jerzy Neyman and Jack Kiefer (edited by L. LeCam, R. A. Olshen and C.-S. . 
Cheng, Wadsworth Advanced Books & SoftwareIIMontereyCheng), Vol. II, pp. 789-806. Wadsworth Ad- vanced Books & Software, Monterey. Estimating Functions. V P Godambe, Oxford University PressOxfordGodambe, V. P. (1991). Estimating Functions. Oxford University Press, Oxford. Testing in normal mixture models when the proportions are known. B Goffinet, P Loisel, B Laurent, Biometrika. 79Goffinet, B., Loisel, P. and Laurent, B. (1992). Testing in normal mixture models when the proportions are known. Biometrika, 79, 842- 846. . P Groeneboom, G Jongbloed, Groeneboom, P. and Jongbloed, G. (2018). Some Developments in the Theory of Shape Constrained Inference. Statistical Science. 33Some Developments in the Theory of Shape Con- strained Inference. Statistical Science, 33, 473- 492. . P Groeneboom, J A Wellner, Groeneboom, P. and Wellner, J. A. (2001). Computing Chernoff's distribution. Journal of Computational and Graphical Statistics. 10Computing Chernoff's distribution. Journal of Computational and Graphical Statistics, 10, 388- 400. Highdimensional asymptotics of likelihood ratio tests in the Gaussian sequence model under convex constraints. Q Han, B Sen, Y Shen, The Annals of Statistics. 50Han, Q., Sen, B. and Shen, Y. (2022). High- dimensional asymptotics of likelihood ratio tests in the Gaussian sequence model under convex constraints. The Annals of Statistics, 50, 376- 406. Distribution problems in clustering. J A Hartigan, Classification and Clustering. New YorkAcademic PressHartigan, J. A. (1977). Distribution problems in clustering. In Classification and Clustering (edited by J. van Ryzin), pp. 45-72. Academic Press, New York. A failure of likelihood asymptotics for normal mixtures. J A Hartigan, Proceedings of the Berkeley Conference in Honor of Jerzy Neyman and Jack Kiefer. L. LeCam, R. A. Olshen and C.-S. Chengthe Berkeley Conference in Honor of Jerzy Neyman and Jack KieferMontereyWadsworth Advanced Books & SoftwareIIHartigan, J. A. (1985). 
A failure of likelihood asymptotics for normal mixtures. In Proceedings of the Berkeley Conference in Honor of Jerzy Neyman and Jack Kiefer (edited by L. LeCam, R. A. Olshen and C.-S. Cheng), Vol. II, pp. 807- 810. Wadsworth Advanced Books & Software, Monterey. A constrained formulation of maximum-likelihood estimation for normal mixture distributions. The Annals of Statistics. R J Hathaway, 13Hathaway, R. J. (1985). A constrained formula- tion of maximum-likelihood estimation for nor- mal mixture distributions. The Annals of Statis- tics, 13, 795-800. Packages for estimating finite mixtures: a review. The American Statistician. D Haughton, 51Haughton, D. (1997). Packages for estimating fi- nite mixtures: a review. The American Statisti- cian, 51, 194-205. Testing a sequence of observations for a shift in location. D M Hawkins, Journal of the American Statistical Association. 72Hawkins, D. M. (1977). Testing a sequence of ob- servations for a shift in location. Journal of the American Statistical Association, 72, 180-186. Asymptotic efficiency in parametric structural models with parameter-dependent support. K Hirano, J R Porter, Econometrica. 71Hirano, K. and Porter, J. R. (2003). Asymp- totic efficiency in parametric structural models with parameter-dependent support. Economet- rica, 71, 1307-1338. R V Hogg, J W Mckean, A T Craig, Introduction to Mathematical Statistics. BostonPearson8th ed.Hogg, R. V., McKean, J. W. and Craig, A. T. (2019). Introduction to Mathematical Statis- tics (8th ed.). Pearson, Boston. Extensions of some classical methods in change point analysis (with Discussion). L Horváth, G Rice, TEST. 23Horváth, L. and Rice, G. (2014a). Extensions of some classical methods in change point analysis (with Discussion). TEST, 23, 219-255. Rejoinder on: Extensions of some classical methods in change point analysis. L Horváth, G Rice, TEST. 23Horváth, L. and Rice, G. (2014b). Rejoinder on: Extensions of some classical methods in change point analysis. 
TEST, 23, 287-290. P J Huber, E M Ronchetti, Robust Statistics. Hoboken, New JerseyJohn Wiley & Sons2nd ed.Huber, P. J. and Ronchetti, E. M. (2009). Ro- bust Statistics (2nd ed.). John Wiley & Sons, Hoboken, New Jersey. Bootstrapping sequential change-point tests for linear regression. M Hušková, C Kirch, Metrika. 75Hušková, M. and Kirch, C. (2012). Bootstrap- ping sequential change-point tests for linear re- gression. Metrika, 75, 673-7088. The likelihood equation, consistency and the maxima of the likelihood function. V S Huzurbazar, The Annals of Eugenics. 14Huzurbazar, V. S. (1948). The likelihood equa- tion, consistency and the maxima of the likeli- hood function. The Annals of Eugenics, 14, 185- 200. The asymptotic distribution of the likelihood ratio test for a change in the mean. J M Irvine, CENSUS/SRD/RR-86/10Statistical Research Division Report Series. Bureau of the CensusIrvine, J. M. (1986). The asymptotic distribu- tion of the likelihood ratio test for a change in the mean. Statistical Research Division Report Series, CENSUS/SRD/RR-86/10, Bureau of the Census, Washington, D.C. 20233. Some Problems with application of change-point detection methods to environmental data. D Jarušková, Environmetrics. 8Jarušková, D.(1997). Some Problems with appli- cation of change-point detection methods to en- vironmental data. Environmetrics, 8, 469-483. Testing the number of components in normal mixture regression models. H Kasahara, K Shimotsu, Journal of the American Statistical Association. 110Kasahara, H., and Shimotsu, K. (2015). Test- ing the number of components in normal mixture regression models. Journal of the American Sta- tistical Association, 110, 1632-1645. . A Khodadadi, M Asgharian, Khodadadi, A. and Asgharian, M. (2008). Change-point problem and regression: an annotated bibliography. COBRA Preprint Series, Working Paper 44Change-point problem and regression: an an- notated bibliography. COBRA Preprint Se- ries, Working Paper 44. https://biostats. 
bepress.com/cobra/art44 changepoint: An R Package for Changepoint Analysis. R Killick, I A Eckley, Journal of Statistical Software. 58Killick, R. and Eckley, I.A. (2014). change- point: An R Package for Changepoint Analysis. Journal of Statistical Software, 58, 1-19. The likelihood ratio test for a change-point in simple linear regression. H J Kim, D Siegmund, Biometrika. 76Kim, H. J. and Siegmund, D. (1989). The likeli- hood ratio test for a change-point in simple linear regression. Biometrika, 76, 409-423. Permutation principles for the change analysis of stochastic processes under strong invariance. C Kirch, J Steinebach, Journal of Computational and Applied Mathematics. 186Kirch, C. and Steinebach, J. (2006). Permuta- tion principles for the change analysis of stochas- tic processes under strong invariance. Journal of Computational and Applied Mathematics, 186, 64-88. Bootstrapping sequential change-point tests. C Kirch, Sequential Analysis. 27Kirch, C. (2008). Bootstrapping sequential change-point tests. Sequential Analysis, 27, 330- 349. Handbook of Quantile Regression. R Koenker, V Chernozhukov, X He, L Peng, Chapman and Hall/CRCBoca Raton, FLKoenker, R., Chernozhukov, V., He, X. and Peng, L. (2017). Handbook of Quantile Regres- sion. Chapman and Hall/CRC, Boca Raton, FL. Constrained parameters in applications: Review of issues and approaches. L Kopylev, International Scholarly Research Network ISRN Biomathematics. Kopylev, L. (2012). Constrained parameters in applications: Review of issues and approaches. International Scholarly Research Network ISRN Biomathematics. On the asymptotic distribution of likelihood ratio test when parameters lie on the boundary. L Kopylev, B Sinha, Sankhyā B. 73Kopylev, L. and Sinha, B. (2011). On the asymptotic distribution of likelihood ratio test when parameters lie on the boundary. Sankhyā B, 73, 20-41. Review about estimation of change points. P R Krishnaiah, B Q Miao, Handbook of Statistics. P. R. Krishnaiah and C. R. 
RaoAmsterdam7Krishnaiah, P. R. and Miao, B. Q. (1988). Re- view about estimation of change points. In Hand- book of Statistics (edited by P. R. Krishnaiah and C. R. Rao), Vol. 7, pp. 375-402. Elsevier, Ams- terdam. A multivariate analogue of the one-sided test. A Kudô, Biometrika. 50Kudô, A. (1963). A multivariate analogue of the one-sided test. Biometrika, 50, 404-418. The incidental parameter problem since 1948. T Lancaster, Journal of Econometrics. 95Lancaster, T. (2000). The incidental parame- ter problem since 1948. Journal of Econometrics, 95, 391-413. On the assumptions used to prove asymptotic normality of maximum likelihood estimates. L Lecam, The Annals of Mathematical Statistics. 41LeCam, L. (1970). On the assumptions used to prove asymptotic normality of maximum like- lihood estimates. The Annals of Mathematical Statistics, 41, 802-828. Asymptotics in Statistics: Some Basic Concepts. L Lecam, G L Yang, Springer-VerlagNew YorkLeCam, L. and Yang, G. L. (1990). Asymptotics in Statistics: Some Basic Concepts, Springer- Verlag, New York. Change-point problems: bibliography and review. T.-S Lee, Journal of Statistical Theory and Practice. 4Lee, T.-S. (2010). Change-point problems: bibli- ography and review. Journal of Statistical Theory and Practice, 4, 643-662. E L Lehman, J P Romano, Testing Statistical Hypotheses. New YorkSpringer-Verlag3rd ed.Lehman, E. L. and Romano, J. P. (2005). Test- ing Statistical Hypotheses (3rd ed.). Springer- Verlag, New York. Likelihood ratio tests for genetic linkage. M Lemdani, O Pons, Statistics and Probability Letters. 33Lemdani, M. and Pons, O. (1997). Likelihood ra- tio tests for genetic linkage. Statistics and Prob- ability Letters, 33, 15-22. Likelihood ratio tests in contamination models. M Lemdani, O Pons, Bernoulli. 5Lemdani, M. and Pons, O. (1999). Likelihood ratio tests in contamination models. Bernoulli, 5, 705-719. MixtureInf: Inference for Finite Mixture Models. 
S Li, J Chen, P Li, R package version 1.1.Li, S., Chen, J. and Li, P. (2016). MixtureInf: Inference for Finite Mixture Models. R pack- age version 1.1. https://CRAN.R-project.org/ package=MixtureInf Nonfinite Fisher information and homogeneity: an EM approach. P Li, J Chen, P Marriot, Biometrika. 96Li, P., Chen, J. and Marriot, P. (2009). Non- finite Fisher information and homogeneity: an EM approach. Biometrika, 96, 411-426. B G Lindsay, Mixture Models: Theory, Geometry and Applications. Institute of Mathematical Statistics. HaywardLindsay, B. G. (1995). Mixture Models: Theory, Geometry and Applications. Institute of Mathe- matical Statistics, Hayward. Asymptotics for likelihood ratio tests under loss of identifiability. X Liu, Y Shao, The Annals of Statistics. 31Liu, X. and Shao, Y. (2003). Asymptotics for like- lihood ratio tests under loss of identifiability. The Annals of Statistics, 31, 807-832. A likelihood ratio test of a homoscedastic normal mixture against a heteroscedastic normal mixture. Y Lo, Statistics and Computing. 18Lo, Y. (2008). A likelihood ratio test of a homoscedastic normal mixture against a het- eroscedastic normal mixture. Statistics and Com- puting, 18, 233-240. On bootstrapping the likelihood ratio test statistic for the number of components in a normal mixture. G J Mclachlan, Applied Statistics. 36McLachlan, G. J. (1987). On bootstrapping the likelihood ratio test statistic for the number of components in a normal mixture. Applied Statis- tics, 36, 318-324. Finite mixture models. G Mclachlan, S X Lee, S Rathnayake, Annual Review of Statistics and Its Application. 6McLachlan, G., Lee, S. X. and Rathnayake, S. (2019). Finite mixture models. Annual Review of Statistics and Its Application, 6, 355-378. G Mclachlan, D Peel, Finite Mixture Models. New YorkJohn Wiley & SonsMcLachlan, G. and Peel, D. (2000). Finite Mixture Models. John Wiley & Sons, New York. segmented: an R Package to Fit Regression Models with Broken-Line Relationships. 
V M R Muggeo, R News, 8/1, 20-25Muggeo, V. M. R. (2008b). segmented: an R Package to Fit Regression Models with Broken- Line Relationships. R News, 8/1, 20-25. https: //cran.r-project.org/doc/Rnews/ Segmented mixed models with random changepoints: a maximum likelihood approach with application to treatment for depression study. V M R Muggeo, D C Atkins, R J Gallop, S Dimidjian, Statistical Modelling. 14Muggeo, V. M. R., Atkins, D. C., Gallop, R. J. and Dimidjian, S. (2014). Segmented mixed models with random changepoints: a maximum likelihood approach with application to treat- ment for depression study. Statistical Modelling, 14, 293-313. Semiparametric likelihood ratio inference. S Murphy, A Van Der Vaart, The Annals of Statistics. 25Murphy, S. and Van der Vaart, A. (1997). Semiparametric likelihood ratio inference. The Annals of Statistics, 25, 1471-1509. On Profile Likelihood. S Murphy, A Van Der Vaart, Journal of the American Statistical Association. 95Murphy, S. and Van der Vaart, A. (2000). On Profile Likelihood. Journal of the American Sta- tistical Association, 95, 449-465. Consistent estimates based on partially consistent observations. J Neyman, E L Scott, Econometrica. 16Neyman, J. and Scott, E. L. (1948). Consistent estimates based on partially consistent observa- tions. Econometrica, 16, 11-32. Multiple change-point detection: A selective overview. Y S Niu, N Hao, H Zhang, Statistical Science. 31Niu, Y.S., Hao, N. and Zhang, H. (2016). Multi- ple change-point detection: A selective overview. Statistical Science, 31, 611-623. Empirical likelihood confidence regions. A B Owen, The Annals of Statistics. 18Owen, A. B. (1990). Empirical likelihood confi- dence regions. The Annals of Statistics, 18, 90- 120. Empirical likelihood for linear models. A B Owen, The Annals of Statistics. 19Owen, A. B. (1991). Empirical likelihood for lin- ear models. The Annals of Statistics, 19, 1725- 1747. A test for a change in a parameter occurring at an unknown point. 
E S Page, Biometrika. 42Page, E. S. (1955). A test for a change in a param- eter occurring at an unknown point. Biometrika, 42, 523-527. On problems in which a change in a parameter occurs at an unknown point. E S Page, Biometrika. 44Page, E. S. (1957). On problems in which a change in a parameter occurs at an unknown point. Biometrika, 44, 248-252. On identifiability of parametric statistical models. C D M Paulino, C A B Pereira, Journal of the Italian Statistical Society. 1Paulino, C. D. M. and Pereira, C. A. B. (1994). On identifiability of parametric statistical models. Journal of the Italian Statistical Society, 1, 125-151. J Pfanzagl, Mathematical Statistics: Essays on History and Methodology. Berlin HeidelbergSpringer-VerlagPfanzagl, J. (2017). Mathematical Statistics: Essays on History and Methodology. Springer- Verlag, Berlin Heidelberg. Statistics for Environmental Biology and Toxicology. W W Piegorsch, A J Bailer, Chapman & HallLondonPiegorsch, W. W. and Bailer, A. J. (1997). Statistics for Environmental Biology and Toxi- cology. Chapman & Hall, London. Identifiability in Stochastic Models: Characterization of Probability Distributions. Prakasa Rao, B L S , Academic PressLondonPrakasa Rao, B. L. S. (1992). Identifiability in Stochastic Models: Characterization of Probabil- ity Distributions. Academic Press, London. The estimation of the parameters of a linear regression system obeying two separate regimes. R E Quandt, Journal of the American Statistical Association. 53Quandt, R. E. (1958). The estimation of the pa- rameters of a linear regression system obeying two separate regimes. Journal of the American Statistical Association, 53, 873-880. Tests of the hypothesis that a linear regression system obeys two separate regimes. R E Quandt, Journal of the American Statistical Association. 55Quandt, R. E. (1960). Tests of the hypothesis that a linear regression system obeys two sepa- rate regimes. Journal of the American Statistical Association, 55, 324-330. 
; T R Core Team, F T Wright, R L Dykstra, R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. Vienna, Austria; New YorkJohn Wiley & SonsOrder Restricted InferenceR Core Team (2020). R: A Language and En- vironment for Statistical Computing. R Founda- tion for Statistical Computing, Vienna, Austria. https://www.R-project.org/ Robertson, T., Wright, F. T. and Dykstra, R. L. (1988). Order Restricted Inference. John Wiley & Sons, New York. Identification in parametric models. T J Rothenberg, Econometrica. 39Rothenberg, T. J. (1971). Identification in para- metric models. Econometrica, 39, 577-591. Likelihood-based inference with singular information matrix. A Rotnitzky, D R Cox, M Bottai, J Robins, Bernoulli. 6Rotnitzky, A., Cox, D. R., Bottai, M. and Robins, J. (2000). Likelihood-based inference with singular information matrix. Bernoulli, 6, 243-284. Recent progress in logconcave density estimation. R J Samworth, Statistical Science. 33Samworth, R. J. (2018). Recent progress in log- concave density estimation. Statistical Science 33, 493-509. Special Issue on Nonparametric Inference Under Shape Constraints. Statistical Science. Samworth, R. J. and Bodhisattva, S.33Samworth, R. J. and Bodhisattva, S. (Eds.) (2018). Special Issue on Nonparametric Inference Under Shape Constraints. Statistical Science, 33. Size and power of tests for a zero random effect variance or polynomial regression in additive and linear mixed models. F Scheipl, S Greven, H Küchenhoff, Computational Statistics & Data Analysis. 52Scheipl, F., Greven, S. and Küchenhoff, H. (2008). Size and power of tests for a zero ran- dom effect variance or polynomial regression in additive and linear mixed models. Computational Statistics & Data Analysis, 52, 3283-3299. mclust 5: clustering, classification and density estimation using Gaussian finite mixture models. L Scrucca, M Fop, T B Murphy, A E Raftery, The R Journal. 8Scrucca, L., Fop, M., Murphy, T. B. 
and Raftery, A. E. (2016). mclust 5: clustering, classification and density estimation using Gaus- sian finite mixture models. The R Journal, 8, 289-317. Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under non standard conditions. S G Self, K Liang, Journal of the American Statistical Association. 82Self, S. G. and Liang, K. (1987). Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under non standard condi- tions. Journal of the American Statistical Asso- ciation, 82, 605-610. An appraisal of some aspects of statistical inference under inequality constraints. P K Sen, M J Silvapulle, Journal of Statistical Planning and Inference. 107Sen, P. K. and Silvapulle, M. J. (2002). An appraisal of some aspects of statistical inference under inequality constraints. Journal of Statisti- cal Planning and Inference, 107, 3-43. Approximation Theorems of Mathematical Statistics. R J Serfling, John Wiley & SonsNew YorkSerfling, R. J. (1980). Approximation Theorems of Mathematical Statistics. John Wiley & Sons, New York. T Severini, Likelihood Methods in Statistics. OxfordOxford University PressSeverini, T. (2000). Likelihood Methods in Statis- tics. Oxford University Press, Oxford. A modified likelihood ratio statistic for some nonregular models. T Severini, Biometrika. 91Severini, T. (2004). A modified likelihood ratio statistic for some nonregular models. Biometrika, 91, 603-612. Change point problems and two-phase regression: an annotated bibliography. S A Shaban, International Statistical Review. 48Shaban, S. A. (1980). Change point problems and two-phase regression: an annotated bibliography. International Statistical Review, 48, 83-93. Asymptotic distribution of test statistics in the analysis of moment structures under inequality constraints. A Shapiro, Biometrika. 72Shapiro, A. (1985). Asymptotic distribution of test statistics in the analysis of moment struc- tures under inequality constraints. 
Biometrika, 72, 133-144. Towards a unified theory of inequality constrained testing in multivariate analysis. A Shapiro, International Statistical Review. 56Shapiro, A. (1988). Towards a unified theory of inequality constrained testing in multivari- ate analysis. International Statistical Review, 56, 49-62. M J Silvapulle, P K Sen, Constrained Statistical Inference. New YorkJohn Wiley & SonsSilvapulle, M. J. and Sen, P. K. (2005). Con- strained Statistical Inference. John Wiley & Sons, New York. The Lagragian multiplier test. S D Silvey, The Annals of Mathematical Statistics. 30Silvey, S. D. (1959). The Lagragian multiplier test. The Annals of Mathematical Statistics, 30, 382-407. Some new aspects of dose-response multi-stage models with applications. B Sinha, L Kopylev, J Fox, Pakistan Journal of Statistics and Operation Research. 8Sinha, B., Kopylev, L. and Fox, J. (2012). Some new aspects of dose-response multi-stage models with applications. Pakistan Journal of Statistics and Operation Research, 8, 441-478. Maximum likelihood estimation in a class of non regular cases. R L Smith, Biometrika. 72Smith, R. L. (1985). Maximum likelihood estima- tion in a class of non regular cases. Biometrika, 72, 67-92. A survey of non-regular problems. R L Smith, Proceedings of the 47th Session of the International Statistical Institute. the 47th Session of the International Statistical InstituteParisSmith, R. L. (1989). A survey of non-regular prob- lems. Proceedings of the 47th Session of the In- ternational Statistical Institute, Paris, August 29 -September 6, 1989, pp. 353-372. Straight lines with a change point: A Bayesian analysis of some renal transplant data. A M F Smith, D G Cook, Applied Statistics. 29Smith, A. M. F. and Cook, D. G. (1980). Straight lines with a change point: A Bayesian analysis of some renal transplant data. Applied Statistics, 29, 180-189. Small sample distribution of the likelihood ratio effects in the random effects model. 
H Sørensen, Journal of Statistical Planning and Inference. 138Sørensen, H. (2008). Small sample distribution of the likelihood ratio effects in the random ef- fects model. Journal of Statistical Planning and Inference, 138, 1605-1614. Variance components testing in the longitudinal mixed rffects model. D O Stram, J W Lee, Biometrics. 50Stram, D. O. and Lee, J. W. (1994). Variance components testing in the longitudinal mixed rf- fects model. Biometrics, 50, 1171-1177. A FORTRAN subroutine for computing normal orthant probabilities of dimensions up to nine. Communication in Statistics -Computation and Simulation. H.-J Sun, 17Sun, H.-J. (1988). A FORTRAN subroutine for computing normal orthant probabilities of di- mensions up to nine. Communication in Statis- tics -Computation and Simulation, 17, 1097- 1111. Part 1: Special issue on change point detection (first 10 articles). Statistical Papers. G Sofronov, M Wendler, Liebscher, V.61Sofronov, G., Wendler, M. and Liebscher, V. (Eds.) (2020). Part 1: Special issue on change point detection (first 10 articles). Statistical Pa- pers, 61, 1347-1588. Simulated percentage points for the null distribution of the likelihood ratio test for a mixture of two normals. H C ThodeJr, S J Finch, N R Mendell, Biometrics. 44Thode, H. C. Jr., Finch, S. J. and Mendell, N. R. (1988). Simulated percentage points for the null distribution of the likelihood ratio test for a mixture of two normals. Biometrics, 44, 1195-1201. Some recent research in the analysis of mixture distributions. D M Titterington, Statistics. 21Titterington, D. M. (1990). Some recent re- search in the analysis of mixture distributions. Statistics, 21, 619-641. A statistical method for assessing a threshold in epidemiological studies. K W Ulm, Statistics in Medicine. 10Ulm, K. W. (1991). A statistical method for as- sessing a threshold in epidemiological studies. Statistics in Medicine, 10, 341-349. A W Van Der Vaart, Asymtptotic Statistics. 
van der Vaart, A. W. (2000). Asymptotic Statistics. Cambridge University Press, New York.

Wald, A. (1949). Note on the consistency of the maximum likelihood estimate. Annals of Mathematical Statistics, 20, 595-601.

Wichitchan, S., Yao, W. and Yang, G. (2019). Hypothesis testing for finite mixture models. Computational Statistics & Data Analysis, 132, 180-189.

Wilks, S. S. (1938). The large sample distribution of the likelihood ratio for testing composite hypotheses. The Annals of Mathematical Statistics, 9, 60-62.

Wolak, F. A. (1987). An exact test for multiple inequality and equality constraints in the linear regression model. Journal of the American Statistical Association, 82, 782-793.

Worsley, K. J. (1979). On the likelihood ratio test for a shift in location of normal populations. Journal of the American Statistical Association, 74, 365-367.

Worsley, K. J. (1983). The power of likelihood ratio and cumulative sum tests for a change in a binomial probability. Biometrika, 70, 455-464.

Worsley, K. J. (1986). Confidence regions and tests for a change point in a sequence of exponential family random variables. Biometrika, 73, 91-104.

Yao, Y.-C. and Davis, R. A. (1986). The asymptotic behavior of the likelihood ratio statistic for testing a shift in mean in a sequence of independent normal variates. Sankhyā A, 48, 339-353.

Yau, Y. and Zhao, Z. (2016). Inference for multiple change points in time series via likelihood ratio scan statistics. Journal of the Royal Statistical Society Series B (Methodological), 78, 895-916.

Yu, Y. (2018). mixR: Finite Mixture Modeling for Raw and Binned Data. R package version 0.1.1. https://CRAN.R-project.org/package=mixR

Yu, M. and Chen, X. (2022). A robust bootstrap change point test for high-dimensional location parameter. Electronic Journal of Statistics, 16, 1096-1152.

Zeileis, A. (2006). Implementing a class of structural change tests: an econometric computing approach. Computational Statistics & Data Analysis, 50, 2987-3008.

Zeileis, A., Leisch, F., Hornik, K. and Kleiber, C. (2002). strucchange: An R package for testing for structural change in linear regression models. Journal of Statistical Software, 7, 1-38.

Zhang, Y., Staicu, A.-M. and Maity, A. (2016). Testing for additivity in non-parametric regression. Canadian Journal of Statistics, 44, 445-462.

Zhang, Y. (2018). lmeVarComp: Testing for a Subset of Variance Components in Linear Mixed Models. R package version 1.1. https://CRAN.R-project.org/package=lmeVarComp

Garel, B. (2007). Recent asymptotic results in testing for mixtures. Computational Statistics & Data Analysis, 51, 5295-5304.
[§5.3] Provides an overview of asymptotic results for testing homogeneity for the general two-component mixture model (5.1) and illustrates some new results, such as the calculation of tail probabilities and asymptotic power for both bounded and unbounded parameter spaces.

Liu, X., Pasarica, C. and Shao, Y. (2003). Testing homogeneity in gamma mixture models. Scandinavian Journal of Statistics, 30, 227-239.
[§5.3] Characterise the asymptotic behaviour of the likelihood ratio for testing model homogeneity against a two-component gamma mixture with known shape and unknown rate parameters. Show that under the null hypothesis the asymptotic distribution agrees with the distribution of the square of Davies's (1977) statistic. Further show that if the unknown rate parameter belongs to an unbounded set, the likelihood ratio diverges to infinity in probability at rate O{log(log n)}, in accordance with Hartigan (1985).

Mendell, N. R., Thode, H. C. Jr. and Finch, S. J. (1991). The likelihood ratio test for the two-component normal mixture problem. Biometrics, 47, 1143-1148.
[§5.3] Investigate numerically the distribution of the likelihood ratio under the alternative hypothesis of a two-component Gaussian mixture with unequal means and common variances for a wide range of mixing proportions. The authors conjecture that the limiting distribution is a non-central χ²₂ distribution.

Gombay, E. and Horváth, L. (1994b). An application of the maximum likelihood test to the change-point problem. Stochastic Processes and their Applications, 50, 161-171.
[§6.1] Prove that a suitably centered and rescaled likelihood ratio statistic converges to a Gumbel distribution under the null hypothesis of no change.

Gombay, E. and Horváth, L. (1996). On the rate of approximations for maximum likelihood tests in change-point models. Journal of Multivariate Analysis, 56, 120-152.
[§6.1] Generalise Gombay and Horváth (1994b) to the case where a fixed nuisance parameter is present.

Gombay, E. and Horváth, L. (1999). Changepoints and bootstrap. Environmetrics, 10, 725-736.
[§6.1] Detect possible changes in the distribution of a random vector by using the weighted bootstrap.

Haccou, P., Meelis, E. and van de Geer, S. (1987). The likelihood ratio test for the change point problem for exponentially distributed random variables. Stochastic Processes and their Applications, 27, 121-139.
[§6.1] Show that under the null hypothesis of no change in the rate parameter of an exponential distribution, the distribution of the likelihood ratio statistic converges to an extreme value distribution. The limiting distribution is obtained using the theory of the uniform quantile process.

Henderson, R. (1990). A problem with the likelihood ratio test for a change-point hazard rate model. Biometrika, 77, 835-843.
[§6.1] Considers modified likelihood ratio tests for survival data as in Worsley (1988).

Hinkley, D. V. (1970). Inference about the change-point in a sequence of random variables. Biometrika, 57, 1-17.
[§6.1] Derives limit theorems for the likelihood ratio for a change in a sequence of binomial distributions.

Horváth, L. (1989). The limit distributions of the likelihood ratio and cumulative sum tests for a change in binomial probability. Journal of Multivariate Analysis, 31, 148-159.
[§6.1] Derives limit theorems for the likelihood ratio for a change in a sequence of binomial distributions.

Loader, C. R. (1992). A log-linear model for a Poisson process change point. The Annals of Statistics, 20, 1391-1411.
[§6.1] Tests for the presence of a change point in a non-homogeneous Poisson process. Large deviation techniques are used to approximate the significance level, and approximations for the power function are provided. A coal mining accident data set is used to illustrate the methodology.

Robbins, M. W., Gallagher, C. M. and Lund, R. B. (2016). A general regression changepoint test for time series data. Journal of the American Statistical Association, 111, 670-683.
[§6.1] Develop likelihood ratio tests for the detection of a single change-point in time series regression, considering both the time-varying situation and multi-phase regression as in Section 6.3.

Robbins, M., Gallagher, C., Lund, R. and Aue, A. (2011). Mean shift testing in correlated data. Journal of Time Series Analysis, 32, 498-511.
[§6.1] Address the problem of change-point detection in time series. Point out that the likelihood ratio statistic converges to infinity if there is no continuity constraint on the regression function at the change-point.

Sadooghi-Alvandi, S. M., Nematollahi, A. R. and Habibi, R. (2011). Test procedures for change point in a general class of distributions. Journal of Data Science, 9, 111-126.
[§6.1] Consider change point detection for a general class of distributions. Derive the exact and asymptotic null distributions of the quasi-Bayes and likelihood ratio statistics using results from the theory of Brownian motion and bridge processes.

Said, K. K., Ning, W. and Tian, Y. (2017). Likelihood procedure for testing changes in skew normal model with applications to stock returns. Communications in Statistics - Simulation and Computation, 46, 6790-6802.
[§6.1] Using the results of Csörgö and Horváth (1997), investigate a trimmed version of the likelihood ratio statistic for detecting changes in the parameters of the skew normal distribution. The asymptotic distribution of the test statistic is again a Gumbel distribution.

Siegmund, D. (1988). Confidence sets in change-point problems. International Statistical Review, 56, 31-48.
[§6.1] Discusses several methods, based on the likelihood ratio, for the construction of a confidence interval for the change-point in a sequence of independent observations from completely specified distributions. The results are generalised to the construction of confidence regions for the change-point and the parameters which index the exponential family from which the independent observations are drawn.

Visek, T. (2003). The likelihood ratio method for testing changes in the parameters of double exponential observations. Journal of Statistical Planning and Inference, 113, 79-111.
[§6.1] Constructs procedures for testing a change in the distribution of a sequence of independent and identically distributed random variables which follow a double exponential law. The change can occur in the location of the distribution, in the scale, or in both.

Wang, T., Tian, W. and Ning, W. (2020). Likelihood ratio test change-point detection in the skew slash distribution. Published online in Communications in Statistics - Simulation and Computation.
[§6.1] Using the results of Csörgö and Horváth (1997), investigate a trimmed version of the likelihood ratio statistic for detecting changes in the parameters of the skew slash distribution. The asymptotic distribution of the test statistic is again a Gumbel distribution.

Worsley, K. J. (1988). Exact percentage points of the likelihood-ratio test for a change-point hazard-rate model. Biometrics, 44, 259-263.
[§6.1] Considers tests for detecting a change in the hazard function. The likelihood ratio statistic is shown to be unbounded, but the exact null distribution of a suitably modified likelihood ratio test is provided.

Chen, J. and Gupta, A. K. (1997). Testing and locating variance change points with application to stock prices. Journal of the American Statistical Association, 92, 739-747.
[§6.2] Test and locate multiple change-points in the variance of a series of independent normal observations with known mean using the Schwarz information criterion.

Chen, J. and Gupta, A. K. (2004). Statistics, 38, 17-28.
[§6.2] Generalize Chen and Gupta (1997) to the multivariate case.

Hawkins, D. M. (1992). Detecting shifts in functions of multivariate location and covariance parameters. Journal of Statistical Planning and Inference, 33, 233-244.
[§6.2] Generalizes his 1977 paper to study eight procedures (which, however, do not include the likelihood ratio) for monitoring possible shifts in the mean vector or covariance matrix of an arbitrary multivariate random variable.

Horváth, L. (1993). The maximum likelihood method for testing changes in the parameters of normal observations. The Annals of Statistics, 21, 671-680.
[§6.2] Derives the asymptotic distribution of the likelihood ratio statistic for testing whether the mean and/or the variance of a sequence of normal observations changed over time at an unknown point τ.

Hinkley, D. V. (1970). Inference about the change-point in a sequence of random variables. Biometrika, 57, 1-17.
[§6.2] Determines the asymptotic distribution of the maximum likelihood estimator of τ and of the likelihood ratio statistic for testing the null hypothesis H₀ : τ = τ₀, that is, that the change occurred at a given time point τ₀ in a sequence of normal variables.

Inclán, C. (1993). Detection of multiple changes of variance using posterior odds. Journal of Business and Economic Statistics, 11, 289-300.
[§6.2] Detects a single possible change-point in the variance of a sequence of independent Gaussian random variables with known common mean using a Bayesian approach.

James, B., James, K. L. and Siegmund, D. (1987). Tests for a change-point. Biometrika, 74, 71-83.
[§6.2] Compare various test statistics for detecting mean shifts in univariate normal distributions, which also include the likelihood ratio.

James, B., James, K. L. and Siegmund, D. (1992). Asymptotic approximation for likelihood ratio test and confidence regions for a change point in mean of a multivariate normal distribution. Statistica Sinica, 2, 69-90.
[§6.2] Compare various test statistics for detecting mean shifts as in James et al. (1987), but this time for the multivariate case.

Sen, A. and Srivastava, M. S. (1975). Some one-sided tests for change in level. Technometrics, 17, 61-64.

Srivastava, M. S. and Worsley, K. J. (1986). Likelihood ratio tests for a change in the multivariate mean. Journal of the American Statistical Association, 81, 199-204.
[§6.2] Consider change-point detection in location for the multivariate normal distribution. The likelihood ratio statistic is shown to be equivalent to the maximum of Hotelling's two-sample statistic, and the same statistic can be used to test for extra-multinomial variation in a contingency table.

Tang, J. and Gupta, A. K. (1988). On testing homogeneity of variances for Gaussian models. Journal of Statistical Computation and Simulation, 27(2), 155-173.
[§6.2] Use Bartlett's statistic to detect a single possible change-point in the variance of a sequence of independent Gaussian random variables with known common mean.

Andrews, D. W. K. (1993). Tests for parameter instability and structural change with unknown change point. Econometrica, 61, 821-856.
[§6.3] Considers Wald, score and likelihood ratio type tests based on the generalized method of moments to detect a possible structural change in multiple regression with unknown change-point. Tables of critical values are provided. If the change-point is fixed, the test statistics follow a chi-squared distribution with p degrees of freedom.

In Change-point Problems. IMS Lecture Notes - Monograph Series (edited by E. Carlstein, H.-G. Möller and D. Siegmund), 23, pp. 170-176. IMS, Hayward.
[§6.3] Extend Kim and Siegmund (1989) to multiple linear regression.

Kim, H. J. and Cai, L. (1993). Robustness of the likelihood ratio test for a change in simple linear regression. Journal of the American Statistical Association, 88, 864-871.

Knowles, M., Siegmund, D. and Zhang, H. P. (1991). Confidence regions in semilinear regression. Biometrika, 78, 15-31.
[§6.3] Provide confidence intervals and joint confidence regions based on the likelihood ratio statistic for the change-point in a broken-line regression model with K = 1.

Koul, H. L. and Qian, L. (2002). Asymptotics of maximum likelihood estimator in a two-phase linear regression model. Journal of Statistical Planning and Inference, 108, 99-119.
[§6.3] Consider two-phase linear regression with arbitrary error distribution and a fixed jump in the linear predictor at the true change-point τ. The maximum likelihood estimator of τ is shown to be consistent, and the finite-sample distribution of the standardized maximum likelihood estimator to converge weakly to the distribution of a compound Poisson process.

Lund, R. and Reeves, J. (2002). Detection of undocumented changepoints: A revision of the two-phase regression model. Journal of Climate, 15, 2547-2554.
[§6.3] Consider the same problem as Hinkley (1969) and conjecture that the asymptotic approximation of the finite-sample distribution of the statistic may involve the Gumbel distribution.

Luo, X., Turnbull, B. W. and Clark, L. C. (1997). Likelihood ratio tests for a changepoint with survival data. Biometrika, 84, 555-565.
[§6.3] Derive the asymptotic distribution of the likelihood ratio statistic to test for a possible time-lag effect in covariates in the presence of right-censored observations.

Reeves, J., Chen, J., Wang, X. L., Lund, R. B. and Lu, Q. (2007). A review and comparison of changepoint detection techniques for climate data. Journal of Applied Meteorology and Climatology, 46, 900-915.
[§6.3] Give an application for the identification of a possible shift in mean temperature values in climate data.

Robison, D. E. (1964). Estimates for the points of intersection of two polynomial regressions. Journal of the American Statistical Association, 59, 214-224.
[§6.3] Generalizes Sprent (1961) to polynomial regression.

Siegmund, D. O. and Zhang, H. (1994). Confidence regions in broken-line regression. In Change-point Problems. IMS Lecture Notes - Monograph Series (edited by E. Carlstein, H.-G. Möller and D. Siegmund), 23, pp. 292-316. IMS, Hayward.
[§6.3] Provide, as Knowles et al. (1991), confidence intervals and joint confidence regions based on the likelihood ratio statistic for the change-point in a broken-line regression model with K = 1.

Sprent, P. (1961). Some hypotheses concerning two-phase regression lines. Biometrics, 17, 634-645.
[§6.3] For a known change-point τ, he uses the likelihood ratio to test a number of hypotheses on the relationship between the two straight lines which form the broken-line regression model.

Banerjee, M. (2008). Estimating monotone, unimodal and U-shaped failure rates using asymptotic pivots. Statistica Sinica, 18(2), 467-492.
[§7] Proposes a method, based on asymptotic pivots, for constructing nonparametric confidence sets for a monotone failure rate, and for unimodal or U-shaped hazards.

Groeneboom, P. and Jongbloed, G. (2015). Nonparametric confidence intervals for monotone functions. The Annals of Statistics, 43(5), 2019-2054.
[§7] Obtain confidence intervals for distribution functions and monotone densities by inverting the acceptance region of the nonparametric likelihood ratio test.
[ "Quantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Quantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Center for the Genetics of Host Defense\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Quantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Department of Pathology\nNational Cancer Center/Cancer Hospital\nChinese Academy of Medical Sciences (CHCAMS)\nChina", "Quantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Department of Translational Molecular Pathology\nUniversity of Texas MD Anderson Cancer Center\nHoustonTX", "Quantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Quantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Department of Thoracic Surgery\nNational Cancer Center/Cancer Hospital\nChinese Academy of Medical Sciences (CHCAMS)\nChina", "Quantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Quantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Department of Pathology\nDivision of Pathology/Lab Medicine\nThe University of Texas MD Anderson Cancer Center\nHoustonTX", "Department of Pathology\nDivision of Pathology/Lab Medicine\nThe University of Texas MD Anderson Cancer Center\nHoustonTX", "Department of Pathology\nDivision of Pathology/Lab Medicine\nThe University of Texas MD Anderson Cancer 
Center\nHoustonTX", "Department of Internal Medicine and Department of Pharmacology\nHamon Center for Therapeutic Oncology Research\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Harold C. Simmons Comprehensive Cancer Center\nQuantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUT Southwestern Medical Center\nUniversity of Texas Southwestern Medical Center\nHarold C. Simmons Comprehen-sive Cancer Center75390Dallas, DallasTX, TX", "Quantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Department of Bioinformatics\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Harold C. Simmons Comprehensive Cancer Center\nQuantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUT Southwestern Medical Center\nUniversity of Texas Southwestern Medical Center\nHarold C. Simmons Comprehen-sive Cancer Center75390Dallas, DallasTX, TX", "Department of Translational Molecular Pathology\nUniversity of Texas MD Anderson Cancer Center\nHoustonTX", "Department of Thoracic Surgery\nNational Cancer Center/Cancer Hospital\nChinese Academy of Medical Sciences (CHCAMS)\nChina", "Quantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Department of Bioinformatics\nUniversity of Texas Southwestern Medical Center\nDallasTX", "Harold C. Simmons Comprehensive Cancer Center\nQuantitative Biomedical Research Center\nDepartment of Population and Data Sciences\nUT Southwestern Medical Center\nUniversity of Texas Southwestern Medical Center\nHarold C. Simmons Comprehen-sive Cancer Center75390Dallas, DallasTX, TX" ]
[]
Background: The spatial distributions of different types of cells could reveal a cancer cell's growth pattern, its relationships with the tumor microenvironment and the immune response of the body, all of which represent key "hallmarks of cancer". However, the process by which pathologists manually recognize and localize all the cells in pathology slides is extremely labor intensive and error prone.

Methods: In this study, we developed an automated cell type classification pipeline, ConvPath, which includes nuclei segmentation, convolutional neural network-based tumor cell, stromal cell, and lymphocyte classification, and extraction of tumor microenvironment-related features for lung cancer pathology images. To facilitate users in leveraging this pipeline for their research, all source scripts for the ConvPath software are available at https://qbrc.swmed.edu/projects/cnn/.

Findings: The overall classification accuracy was 92.9% and 90.1% in the training and independent testing datasets, respectively. By identifying cells and classifying cell types, this pipeline can convert a pathology image into a "spatial map" of tumor, stromal and lymphocyte cells. From this spatial map, we can extract features that characterize the tumor micro-environment. Based on these features, we developed an image feature-based prognostic model and validated the model in two independent cohorts. The predicted risk group serves as an independent prognostic factor, after adjusting for clinical variables that include age, gender, smoking status, and stage.

Interpretation: The analysis pipeline developed in this study could convert the pathology image into a "spatial map" of tumor cells, stromal cells and lymphocytes. This could greatly facilitate and empower comprehensive analysis of the spatial organization of cells, as well as their roles in tumor progression and metastasis.
DOI: 10.1016/j.ebiom.2019.10.033
PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/1f/70/main.PMC6921240.pdf
arXiv: 1809.10240
Available online 22 November 2019. Article history: Received 30 May 2019; Revised 16 October 2019; Accepted 16 October 2019.
Recently, deep learning-based algorithms have made remarkable achievements in pathology image analysis. Several deep learning models have been proposed for lung cancer H&E-stained pathology images. Furthermore, several deep learning methods have been developed to characterize the tumor micro-environment, since the tumor micro-environment plays an important role in tumor progression and response to treatment. The major cell types in malignant lung tissue include tumor cells, stromal cells, and lymphocytes. Stromal cells are connective tissue cells such as fibroblasts and pericytes, and their interaction with tumor cells plays an important role in cancer progression and metastasis inhibition. For example, the crosstalk between cancer cells and stromal cells is needed for invasive growth and metastasis. Spatial heterogeneity of TILs is associated with the tumor molecular profile and patient prognosis. How to automatically classify different types of cells is a major technical challenge in studying the tumor microenvironment.

Added value of this study

In this study, we developed a pathological image analysis and cell classification pipeline, which can perform nuclei segmentation, CNN-based cell type prediction, and feature extraction. This pipeline successfully visualizes the spatial distributions of tumor, stromal, and lymphocyte cells in the ROI of lung ADC pathology images.
Implications of all the available evidence

Quantifying the distribution of lymphocytes and their interaction with tumor or stromal cells can potentially provide a way to evaluate immune response status and serve as a biomarker for immunotherapy response. The analysis pipeline developed in this study could convert the pathology image into a "spatial map" of tumor cells, stromal cells and lymphocytes. This could greatly facilitate and empower comprehensive analysis of cell spatial organization, as well as its role in tumor progression and metastasis.

Hematoxylin and Eosin (H&E)-stained tissue whole-slide image (WSI) scanning is becoming a routine clinical procedure that produces massive pathology images with histological details in high resolution. Tumor pathology images contain not only essential information for tumor grade and subtype classification [1], but also information on the tumor microenvironment and the spatial distributions of different types of cells. Tumor tissues are complex structures with cancer cells and surrounding non-malignant cells (such as stromal cells and lymphocytes) that form the tumor micro-environment [2]. Understanding the interactions among these cells can provide critical insights into tumor initiation, progression, metastasis and potential therapeutic targets. For example, the crosstalk between cancer cells and stromal cells is needed for invasive growth and metastasis [3,4]. However, the major technical challenge to studying cell spatial organization is how to classify different types of cells from tumor tissues. It is impractical for a pathologist to manually recognize and localize every individual cell in a pathology slide. In recent years, convolutional neural networks (CNNs), one of the deep learning strategies, have achieved great success in image recognition tasks [5][6][7]. In this study, we developed a CNN model to automatically classify tumor cells, stromal cells, and lymphocytes for lung adenocarcinoma (ADC) pathology images.
Furthermore, we developed an automated image analysis pipeline, ConvPath, to facilitate researchers in studying the spatial interactions of different types of cells and their roles in tumor progression and metastasis. The ConvPath pipeline is composed of nuclei segmentation, cell type recognition, microenvironment characterization, and prognosis (Fig. 1). The prognostic performance of the model was validated in two independent lung ADC cohorts.

Methods

Datasets

H&E-stained histology images and corresponding clinical data for lung ADC patients were collected from four independent cohorts: The Cancer Genome Atlas lung ADC project LUAD data (referred to as the TCGA dataset), the National Lung Screening Trial project (the NLST dataset), the University of Texas Special Program of Research Excellence (SPORE) in Lung Cancer project (the SPORE dataset), and the National Cancer Center/Cancer Hospital of Chinese Academy of Medical Sciences, China (the CHCAMS dataset). The TCGA data, including 1337 tumor images from 523 patients, were obtained from the TCGA image portal (https://wiki.cancerimagingarchive.net/display/Public/TCGA-LUAD). All TCGA images were captured at 20X or 40X magnification and included both frozen and Formalin-Fixed, Paraffin-Embedded (FFPE) slides. The NLST data, including 345 tumor images from 201 patients, were acquired from the National Lung Screening Trial, which was performed by the National Cancer Institute. All NLST images were FFPE slides and captured at 40X magnification. The CHCAMS data, including 102 images from 102 stage I ADC patients, were obtained from the National Cancer Center/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (CHCAMS), China. All CHCAMS images were FFPE slides and captured at 20X magnification. The SPORE data, including 130 images from 112 patients, were acquired from the UT Lung SPORE tissue bank.
All SPORE images were FFPE slides and captured at 20X magnification. The characteristics of the four datasets used in this study are summarized in Supplemental Table 1.

Extraction of image patches centering at nuclei centroids

A pathologist, Dr. Lin Yang, reviewed the H&E-stained pathology image slides and manually labeled Region of Interest (ROI) boundaries using the annotation tool of ImageScope (Leica Biosystem, Fig. 2a). ROIs were defined by the main malignant area within the pathology images. ConvPath randomly selected 10 sampling regions from each selected ROI. The sampling regions were sized 5000×5000 or 3000×3000 pixels in 40X or 20X magnification images, respectively. In each sampling region, ConvPath further extracted 80×80 pixel image patches (for 40X magnification images, 160×160 pixel image patches were extracted first and resized to 80×80 pixels) centering at nuclei centroids (Fig. 2b, Supplemental Figure 1). In order to extract the image patches, RGB color space was first converted to H&E color space with the deconvolution matrix set as [0.550 0.758 0.351; 0.398 0.634 0.600; 0.754 0.077 0.652] [8]. Morphological operations consisting of opening and closing were adopted to process the hematoxylin channel image [9]. Then, ConvPath detected nuclei boundaries using a level set segmentation technique [10,11]. In this segmentation method, the initial contour was randomly given, the value of sigma in the Gaussian filter was 1, the number of iterations was 30, and the velocity term was 60. Next, nuclei centroids were detected as the moments of centroids of connected targets in a binary image, where the foreground was the regional maximum locations in a distance map of the segmented image. Here, Euclidean distance was utilized for the distance transform and regional maximums were searched within 8-connected neighborhoods. Finally, image patches using the detected nuclei centroids as centers were extracted from the original pathological RGB image (Fig. 2b).
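The patch-extraction step above can be sketched in a few lines of NumPy: stain deconvolution with the matrix quoted in the text, followed by cropping 80×80 windows at nuclei centroids. The optical-density transform and all function names here are illustrative assumptions, not code from the ConvPath release (which also resizes 160×160 crops for 40X slides, omitted here):

```python
import numpy as np

# Row-wise stain vectors from the deconvolution matrix quoted in the text.
STAIN_MATRIX = np.array([[0.550, 0.758, 0.351],
                         [0.398, 0.634, 0.600],
                         [0.754, 0.077, 0.652]])

def rgb_to_stain_space(rgb):
    """Deconvolve an RGB image (H, W, 3, uint8) into per-stain concentrations
    via the standard optical-density transform (an assumption here)."""
    od = -np.log10((rgb.astype(float) + 1.0) / 256.0)   # optical density
    flat = od.reshape(-1, 3) @ np.linalg.inv(STAIN_MATRIX)
    return flat.reshape(od.shape)

def extract_patches(img, centroids, size=80):
    """Crop size x size windows centered at (row, col) nuclei centroids,
    skipping centroids too close to the image border."""
    half = size // 2
    h, w = img.shape[:2]
    return [img[r - half:r + half, c - half:c + half]
            for r, c in centroids
            if half <= r <= h - half and half <= c <= w - half]
```

The hematoxylin channel fed to the opening/closing step would then be `rgb_to_stain_space(img)[..., 0]`.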
Deep learning algorithm in the ConvPath software

ConvPath incorporates a CNN [12][13][14] to recognize the major cell types, including tumor cells, stromal cells and lymphocytes, in the center of pathology image patches (Fig. 3a, Supplemental Table 2). The input to the CNN was an 80×80 image patch normalized to the range [−0.5, 0.5] with 3 channels corresponding to the red (R), green (G), and blue (B) channels. The output layer of the CNN was a softmax layer with 3 categories: tumor cell, stromal cell, and lymphocyte. For one image patch, a probability for each of the 3 categories was predicted by the CNN; the category with the highest probability was assigned as the predicted class for the image patch. The CNN was trained using a batch size of 10, a momentum of 0.9, a weight decay of 0.0001, an initial learning rate of 0.01, which shrinks by a factor of 0.99995 in each step, and 20,000 training steps. The image patches were rotated and flipped to augment the sample size. A drop connect probability of 0.5 was used in all convolutional layer parameters. The NLST and TCGA datasets were combined and used as the training set for the CNN (Fig. 3b&c, Supplemental Table 3), and the SPORE dataset was used as the external validation set. The image patches in the training and validation sets were labeled by the pathologist as ground truth.

Tumor micro-environment feature extraction

Based on the prediction results of the CNN, ConvPath converted the pathology image into a "spatial map" of tumor cells, stromal cells and lymphocytes. From this spatial map, we can define the tumor cell regions, stromal cell regions and lymphocyte regions within each ROI, and characterize the distribution and interactions among these regions. For example, a stromal cell region is a small area within the tumor tissue that consists of mostly stromal cells. Specifically, ConvPath used kernel smoothers to define regions of tumor cells, stromal cells and lymphocytes separately within the ROI (Fig. 4b).
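The CNN's input normalization, three-way softmax decision rule, and per-step learning-rate decay described earlier in this section reduce to a few formulas. The following plain-NumPy sketch (illustrative names, no actual convolution layers) mirrors those conventions:

```python
import numpy as np

CLASSES = ["tumor", "stromal", "lymphocyte"]

def preprocess(patch):
    """Scale an 80x80x3 uint8 patch to the [-0.5, 0.5] range used as CNN input."""
    return patch.astype(np.float32) / 255.0 - 0.5

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict_class(logits):
    """Assign the category with the highest softmax probability."""
    return CLASSES[int(np.argmax(softmax(logits)))]

def learning_rate(step, lr0=0.01, decay=0.99995):
    """Exponentially decayed per-step learning rate from the training recipe."""
    return lr0 * decay ** step
```

Note that after the full 20,000 training steps the schedule has decayed the learning rate by a factor of roughly e, i.e. to about 0.0037.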
For instance, to define the tumor cell region, ConvPath extracted the coordinates of the centers of all image patches and labeled them as 1 if they had been recognized as tumor cells in the previous step, and 0 if not. For each point on the image, ConvPath then calculated the probability of being a tumor cell region by weighting all its neighbors with the standard normal density kernel K(z/h), where z was defined as the distance between the point and the center of each image patch, and h, the bandwidth, was defined as 2 times the estimated cell diameter. A region with probability larger than 0.5 was defined as a tumor cell region. The same approach was used to define stromal cell regions and lymphocyte cell regions. Next, ConvPath calculated 2 features for each region (Supplemental Table 4): the perimeter divided by the square root of the region area, and the region size divided by the image size, for the 3 kinds of cell regions separately.

Statistical analysis

R (version 3.2.4) [15] and the R packages survival (version 2.38-3), glmnet (version 2.0-5), and clinfun (version 1.0.13) were used for statistical analysis. Survival time was defined as the period from diagnosis to death or last contact for the NLST and TCGA datasets, and from diagnosis to recurrence or last contact in the CHCAMS dataset. The prognostic model was trained on the NLST patients using a Cox regression model with an elastic net penalty, to predict a risk score for each sampling region. The final risk score of each patient was determined by averaging risk scores across the 10 sampling regions of this patient. The performance of this prognostic model was evaluated on the TCGA and CHCAMS datasets by dichotomizing the patients by the median predicted risk score of each dataset. In the validation study, the maximum follow-up time was set to six years, since patient survival after six years may not directly relate to cancer-specific events. Kaplan-Meier (K-M) plots and log-rank tests were used to compare survival outcomes. In addition, a multivariate Cox proportional hazard model was used to test whether the prognostic risk scores were statistically significant after adjusting for clinical variables, including age, gender, tobacco history, and stage. A Jonckheere-Terpstra (J-T) k-sample test [16] was used to test whether higher risk scores were correlated with theoretically more severe ADC subtypes. The results were considered significant if the two-sided test (except for the J-T test, which is a one-sided test for trend) p value was ≤ 0.05.

Data availability

Pathology images and clinical data in the NLST and TCGA datasets that support the findings of this study are available online in the NLST (https://biometry.nci.nih.gov/cdas/nlst/) and The Cancer Genome Atlas Lung Adenocarcinoma (TCGA-LUAD, https://wiki.cancerimagingarchive.net/display/Public/TCGA-LUAD) repositories. Data in the SPORE and CHCAMS datasets that support the findings of this study are available from the UT Lung SPORE Tissue bank and the National Cancer Center/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (CHCAMS), China, separately, but restrictions apply to the availability of these data.

Results

ConvPath classifies lung adenocarcinoma cell types with high accuracy

11,988 tumor, stromal, and lymphocyte image patches centered at cell nuclei centroids were extracted from 29 slides in the TCGA and NLST datasets (Fig. 2, Supplemental Table 3) and used to train the CNN model (Fig. 3a). Example image patches are shown in Supplemental Figure 1. The overall classification accuracies of the CNN model on training images were 99.3% for lymphocytes, 87.9% for stromal cells, and 91.6% for tumor cells, respectively (Fig. 3b). The independent cross-study classification rates in the SPORE dataset were 97.8% for lymphocytes, 86.5% for stromal cells, and 85.9% for tumor cells (Fig. 3c).
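The kernel-smoothing rule from the Methods, which turns per-patch 0/1 labels into cell-type region probabilities with a standard normal kernel K(z/h) and bandwidth h set to twice the estimated cell diameter, along with the two per-region features, can be sketched in NumPy. Function names are illustrative, not from the ConvPath source:

```python
import numpy as np

def region_probability(points, centers, labels, h):
    """Kernel-smoothed probability that each query point lies in a given
    cell-type region: 0/1 labels of patch centers are averaged with weights
    from a standard normal kernel K(z/h) of the distance z; the kernel's
    normalizing constant cancels in the weighted average."""
    pts = np.asarray(points, float)    # (m, 2) query coordinates
    ctr = np.asarray(centers, float)   # (n, 2) patch-center coordinates
    lab = np.asarray(labels, float)    # (n,) 1 = this cell type, 0 = not
    z = np.linalg.norm(pts[:, None, :] - ctr[None, :, :], axis=-1)
    w = np.exp(-0.5 * (z / h) ** 2)
    return (w * lab).sum(axis=1) / w.sum(axis=1)

def region_features(perimeter, area, image_area):
    """The two per-region features: perimeter / sqrt(area), area / image area."""
    return perimeter / np.sqrt(area), area / image_area
```

Points where `region_probability` exceeds 0.5 form the cell-type region, matching the threshold stated in the Methods.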
Tumor micro-environment features from predicted sampling regions correlate with overall survival

ConvPath was then used to generate cell type predictions for 10 random sampling regions within the ROI on each slide. Based on nuclei centroid locations together with accurate cell type predictions (Fig. 4a, Supplemental Figure 2), we investigated whether the spatial distributions of tumor cells, stromal cells, and lymphocytes correlated with the survival outcome of lung ADC patients. In each predicted sampling region, tumor, stromal, and lymphocyte cell regions were detected using a kernel smoothing algorithm (Fig. 4b, Methods section). For regions of each cell type, simple parameters such as perimeter and size were measured. To ensure comparability across image slides captured at different magnifications, the parameters were normalized by the area of the sampling region. In univariate Cox analysis, 4 of the 6 extracted features significantly correlated with survival outcome in the NLST dataset (Supplemental Table 4). Interestingly, both the perimeter and area of the stroma region were good prognostic factors, suggesting a protective effect of stromal cells in lung ADC patients (Supplemental Figure 3&4).

Development and validation of an image feature-based prognostic model

Utilizing the region features of each cell type extracted from the pathology images in the NLST dataset, we developed a prognostic model to predict patient survival outcome (coefficients of this model are shown in Supplemental Table 4). The model was then independently validated in the TCGA and CHCAMS datasets. The TCGA and CHCAMS patients were dichotomized according to the median predicted risk scores in each dataset. In both datasets, the patients in the predicted high-risk group had significantly worse survival outcomes than those in the predicted low-risk group (Fig. 5a&b, log rank test, p = 0.0047 for the TCGA dataset, p = 0.030 for the CHCAMS dataset).
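The per-patient aggregation and median split used in this validation are simple enough to state directly. A minimal sketch with illustrative names (the Cox model itself was fit with glmnet in R and is not reproduced here):

```python
import numpy as np

def patient_risk(region_scores):
    """Final risk score of one patient: the average of the Cox-model scores
    predicted for that patient's 10 random sampling regions."""
    return float(np.mean(region_scores))

def dichotomize(risk_scores):
    """Split a cohort into predicted high-/low-risk groups at the median risk
    score, as done for the TCGA and CHCAMS validation sets."""
    med = np.median(risk_scores)
    return np.where(np.asarray(risk_scores) > med, "high", "low")
```

The two groups returned by `dichotomize` would then be compared with Kaplan-Meier curves and a log-rank test, as in the paper.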
To evaluate whether the image features extracted by ConvPath were independent of clinical variables, multivariate Cox proportional hazard models were used to adjust the predicted risk scores for available clinical variables, including gender, age, stage and smoking status (Table 1). After adjustment, the hazard ratios between high- and low-risk groups remained significant (p = 0.0021 for the TCGA dataset, p = 0.016 for the CHCAMS dataset), indicating that the risk group defined by ConvPath-extracted image features was an independent prognostic factor, in addition to other clinical variables.

Predicted risk scores correlate with severity of ADC subtypes

The 2015 WHO classification of lung cancer further divides invasive lung ADC into several subtypes, including acinar, lepidic, micropapillary, papillary, solid, and mucinous ADC [1]. The correlation of the predicted risk scores with the predominant histology subtypes identified by our pathologist for the CHCAMS dataset, according to the 2015 WHO classification guidelines, was tested (Fig. 5c). Higher risk scores correlated with more aggressive ADC subtypes, such as solid predominant ADC and invasive mucinous ADC (p = 0.0039). Noticeably, despite such correlation, the image-derived risk score was independent of the ADC subtypes in multivariate survival analysis (Supplemental Table 5).

The ConvPath software and web server

To facilitate practical application of this pathological image analysis pipeline by pathologists and bioinformaticians, the image segmentation, deep learning, and feature extraction algorithms were incorporated into the ConvPath software. The ConvPath software is publicly accessible from the web server created for this study, which is at https://qbrc.swmed.edu/projects/cnn/ (Supplemental Figure 6).

Discussion

Since 2011, computer algorithms have been developed to analyze tissue pathology images for cancer diagnosis [17][18][19][20][21], grading [22][23][24][25][26] and prognosis [27][28][29][30][31][32].
Recently, deep learning-based algorithms have made remarkable achievements in pathology image analysis [33][34][35][36]. Several deep learning models have been proposed for lung cancer H&E-stained pathology images. For example, a CNN model was developed to classify image patches of 300×300 pixel size as malignant or non-malignant in lung cancer pathology images, and achieved an overall classification accuracy of 89.8% in an independent testing set [35]. This model could help pathologists quickly detect and locate tumor regions in tissue pathology images. In addition to detecting tumor regions, Coudray et al. developed a CNN model to distinguish different lung cancer subtypes [37]. To classify different cell types, several classic machine learning-based models and CNN models have also been developed. QuPath enabled semi-automatic detection of different types of objects (e.g., cell nuclei) through classic machine learning methods [38]. Sirinukunwattana et al. utilized a CNN to classify nuclei into epithelial, inflammatory, fibroblast, and miscellaneous nuclei in colon cancer histology images [39]. Furthermore, several deep learning methods have been developed to characterize the tumor micro-environment, since the tumor micro-environment plays an important role in tumor progression and response to treatment. For example, a CNN model has been developed to distinguish lymphocytes from necrosis or other tissues in multiple cancer types [40]. In another study, Yi et al. developed a Fully Convolutional Neural Network (FCN) [41] to segment micro blood vessels from lung ADC pathology images. An image segmentation CNN model was developed to classify each pixel in lung ADC pathology images as nucleus centroid, nucleus boundary, or non-nuclei [42].
Based on the results of this model, morphological, textural, and graphical features of cell nuclei were extracted and used to develop a prediction model for tumor recurrence in lung ADC patients. The major cell types in malignant lung tissue include tumor cells, stromal cells, and lymphocytes. Stromal cells are connective tissue cells such as fibroblasts and pericytes, and their interaction with tumor cells plays an important role in cancer progression [43][44][45] and metastasis inhibition [46]. Tumor-infiltrating lymphocytes (TILs) are white blood cells that have migrated into a tumor. They are a mix of different cell types, with T cells being the most abundant population. Tumor-infiltrating lymphocytes have been associated with patient prognosis in multiple tumor types [47][48][49][50]. The spatial distributions of different types of cells can reveal a cancer cell's growth pattern, its relationships with the tumor micro-environment, and the immune response of the body, all of which represent key "hallmarks of cancer". For example, the crosstalk between cancer cells and stromal cells is needed for invasive growth and metastasis [3,4], and the spatial heterogeneity of TILs is associated with the tumor molecular profile and patient prognosis [40,51]. However, as there are more than 10,000 cells in each sampling region (Supplemental Figure 2), it is extremely labor-intensive and error-prone for a pathologist to manually recognize and localize every individual cell in a pathology slide. Automatically classifying different types of cells is therefore a major technical challenge in studying the tumor micro-environment. In this study, we developed a pathological image analysis and cell classification pipeline that performs nuclei segmentation, CNN-based cell type prediction, and feature extraction (Fig. 1). This pipeline successfully visualizes the spatial distributions of tumor, stromal, and lymphocyte cells in the ROI of lung ADC pathology images.
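As a hypothetical example of a spatial feature derivable from such a map of classified centroids, the sketch below computes the fraction of lymphocyte centroids lying within a fixed radius of any tumor centroid. The coordinates, labels, and radius are made up for illustration and are not features from the ConvPath feature set.

```python
import math

def frac_lymph_near_tumor(cells, radius=20.0):
    """cells: list of (x, y, label) centroids; returns the fraction of
    lymphocytes whose nearest tumor centroid is within `radius` pixels."""
    tumors = [(x, y) for x, y, lab in cells if lab == "tumor"]
    lymphs = [(x, y) for x, y, lab in cells if lab == "lymphocyte"]
    if not lymphs:
        return 0.0
    near = sum(1 for lx, ly in lymphs
               if any(math.hypot(lx - tx, ly - ty) <= radius
                      for tx, ty in tumors))
    return near / len(lymphs)

cells = [(10, 10, "tumor"), (15, 12, "lymphocyte"),
         (200, 200, "lymphocyte"), (40, 40, "stroma")]
print(frac_lymph_near_tumor(cells))  # 0.5
```

Features of this kind quantify the tumor-lymphocyte interaction discussed above directly from the predicted cell-type map.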
It can potentially serve as a prognostic method independent of other clinical variables. The patient prognostic model based on the extracted image features was trained on the NLST dataset and independently validated on the TCGA and CHCAMS datasets, which indicates the generalizability of this analysis pipeline to other lung ADC patients. The accurate classification of cell types in pathology images was validated in an independent data cohort: although the quality of H&E staining varies across cohorts and there are inherent inter-patient differences, ConvPath still achieved 90.1% overall accuracy on the SPORE dataset (Fig. 3c). The ConvPath pipeline developed for ADC can be directly applied to squamous cell carcinoma, another subtype of NSCLC; satisfactory results are shown in Supplemental Figure 7. The robustness of ConvPath benefits from the level set-based segmentation algorithm in the nuclei segmentation step. This segmentation algorithm is invariant to the location of the initial contour and can handle the high variability across H&E pathology images. Moreover, nuclei centroid extraction based on the distance transform can separate most of the connected nuclei that are not properly processed by the commonly used CellProfiler software [30,52]. The robustness of prediction also benefits from the powerful structure of the CNN [53]. The relationships between the extracted tumor micro-environment-related image features and patient prognosis were evaluated in this study (Supplemental Table 4). In univariate analysis, higher stromal cell abundance correlated with better prognosis (Supplemental Figure 4), which is consistent with a recent report on lung ADC patients [46]. However, disparate roles of stromal cells in tumor progression have been reported, including stimulation of tumor proliferation through growth signals and limitation of the metastatic spread of tumor cells [44,45,54].
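The distance-transform idea mentioned above for separating connected nuclei can be sketched with a two-pass Manhattan distance transform on a binary mask; local maxima of the transform then serve as candidate nucleus centroids. This is a simplified stand-in for the actual level set-based pipeline, with an invented toy mask.

```python
def manhattan_dt(mask):
    """Two-pass Manhattan distance transform of a binary mask:
    distance of each foreground pixel to the nearest background pixel."""
    h, w = len(mask), len(mask[0])
    inf = h + w
    d = [[0 if mask[i][j] == 0 else inf for j in range(w)] for i in range(h)]
    for i in range(h):                  # forward pass (top-left to bottom-right)
        for j in range(w):
            if i > 0:
                d[i][j] = min(d[i][j], d[i - 1][j] + 1)
            if j > 0:
                d[i][j] = min(d[i][j], d[i][j - 1] + 1)
    for i in range(h - 1, -1, -1):      # backward pass (bottom-right to top-left)
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i][j] = min(d[i][j], d[i + 1][j] + 1)
            if j < w - 1:
                d[i][j] = min(d[i][j], d[i][j + 1] + 1)
    return d

# toy mask: a 3x5 foreground blob surrounded by background
mask = [[0] * 7 for _ in range(5)]
for i in range(1, 4):
    for j in range(1, 6):
        mask[i][j] = 1
dt = manhattan_dt(mask)
print(dt[2][2], dt[2][3])  # prints: 2 2
```

Interior pixels carry larger distances than boundary pixels, which is why peaks of the transform mark centroid candidates even when two nuclei touch.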
Combined analysis of the cell spatial distribution detected in this study and the functionality of stromal cells, which cannot be evaluated through H&E staining, will help elucidate whether these disparate roles arise from different activation states of the crosstalk between tumor and stroma. In contrast, higher lymphocyte abundance, reflected by region size rather than perimeter, correlated with worse prognosis (Supplemental Table 4, Supplemental Figure 5). Although the presence of both tumor- and stroma-infiltrating lymphocytes has been reported to correlate with tumor cell apoptosis and better patient survival in non-small cell lung cancer [47,50,55], the tumor-suppressive or tumor-promoting properties of lymphocytes depend on their spatial distribution in the tumor micro-environment [56]. Moreover, in this study the "size of lymphocyte cell region/image size" feature (Supplemental Table 4) refers to regions consisting mainly of aggregated lymphocytes. The size of the lymphocyte cell region may therefore not correlate directly, or may even correlate negatively, with tumor- and stroma-infiltrating lymphocytes, which are individual lymphocytes located in the tumor and stromal cell-enriched regions. As reported in other studies, the spatial organization of lymphocytes, as well as their interactions with cancer cells, may play a more important role in patient prognosis. Quantifying the distribution of lymphocytes and their interaction with tumor or stromal cells can potentially provide a way to evaluate immune response status and serve as a biomarker for immunotherapy response. The analysis pipeline developed in this study converts a pathology image into a "spatial map" of tumor cells, stromal cells and lymphocytes. This could greatly facilitate and empower comprehensive analysis of the spatial organization of cells [57][58][59], as well as their roles in tumor progression and metastasis.
In this study, we developed a computational tool to automatically segment and classify different types of cell nuclei. This tool could potentially assist pathologists in clinical practice. First, it can help pathologists quickly pinpoint tumor cells. It is time-consuming and difficult for pathologists to locate very small tumor regions in tissue images, so this could greatly reduce the time pathologists need to spend on each image. Second, this tool could help pathologists and clinicians predict patient prognosis, and therefore tailor the treatment plan of individual patients using readily available tissue images. Furthermore, this tool could be used to quantify cell-cell interactions and the distributions of different types of cells, especially the spatial distribution of lymphocytes and their interaction with the tumor region, which could potentially provide information on patient response to immunotherapy. The computation time of ConvPath could be reduced in several ways: 1) by applying our model only to the tumor region of interest (ROI), which could be either annotated by a pathologist or detected by our tumor detection algorithm; depending on the tissue resected, this step will reduce the processing time by tenfold; and 2) by using parallel processing with multiple threads. In summary, by leveraging other existing computational methods and hardware infrastructures, the whole-slide processing time can be reduced to less than 1 h. There are several limitations of the ConvPath pathology image analysis pipeline. First, the sampling region selection and subsequent steps rely on ROI labeling, which is currently done by pathologists.
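The thread-based parallelization in point 2) can be sketched with Python's standard thread pool; `classify_patch` below is a trivial placeholder we invented to stand in for the per-patch CNN call.

```python
from concurrent.futures import ThreadPoolExecutor

def classify_patch(patch_id):
    """Stand-in for the per-patch CNN prediction (here: a trivial rule)."""
    return patch_id, "tumor" if patch_id % 2 == 0 else "stroma"

patch_ids = list(range(8))
# map() preserves input order, so per-patch results line up with patch_ids
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(classify_patch, patch_ids))
print(results[:2])  # [(0, 'tumor'), (1, 'stroma')]
```

Because each patch is classified independently, the workload is embarrassingly parallel; with GPU-bound inference, batching patches per worker gives a similar speedup.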
The fully automated tumor region detection model [35] could potentially be used to locate the tumor region first, after which the ConvPath pipeline would be applied only in the detected tumor region; this would largely reduce the computation time needed to run the ConvPath pipeline across a whole-slide image by ignoring the non-malignant regions. Second, only three major cell types are considered in the ConvPath CNN algorithm; thus, the CNN model is sensitive to out-of-focus cell types such as macrophages and epithelial cells. In addition, different subtypes of lymphocytes, such as CD4+ and CD8+ T cells, are not distinguishable by our algorithm [47,60]. More comprehensive labeling and immunohistochemical staining will help solve this problem. Third, a more comprehensive analysis of the spatial distribution of cells is not included in this research [61,62]. Analyzing spatial patterns, such as cell clustering and inter-cell interactions, will help us understand the mechanisms of tumor progression and the immune response to tumor cells.

Funding sources

The funders had no role in study design, data collection, data analysis, interpretation, or writing of the manuscript.

Author contributions

G.X. and T.W. supervised the project. S.W., T.W., L.Y. and G.X. conceived the method. S.W., T.W., L.Y. and G.X. designed and performed the analyses and interpreted the results. L.Y., Y.Y., J.F., I.W., Y.M. and Y.X. collected and provided the data. S.W., T.W., L.Y., F.Y., X.L., Y.Y. and A.G. curated the data. C.L., S.W., S.L., and B.Y. developed the web application with advice from G.X., T.W. and Y.X. L.Y., A.G., J.F., and I.W. provided critical input. S.W. and T.W. drafted the article. All co-authors have read and edited the manuscript.

Declaration of Competing Interest

The other authors declare that they have no competing interests.

Fig. 1. Flow chart of ConvPath-aided pathological image analysis.
CHCAMS, National Cancer Center/Cancer Hospital of Chinese Academy of Medical Sciences, China; CI, confidence interval; HR, hazard ratio; TCGA, The Cancer Genome Atlas.

Fig. 2. Image preprocessing step of the ConvPath software. (a) Selection of regions of interest (ROIs) in whole pathological imaging slides. (b) Image segmentation pipeline to extract cell-centered image patches from selected ROIs.

Fig. 3. Cell type recognition step of the ConvPath software. (a) Schema and structure of the convolutional neural network (CNN) to recognize the types of cells in the centers of image patches. (b) Confusion matrix of internal testing results of the CNN on the NLST and TCGA training image slides. Prediction accuracies are calculated based on 3996 image patches for each cell type. (c) Confusion matrix of independent testing results of the CNN on image patches of the SPORE dataset. Prediction accuracies are calculated based on 8245 lymphocyte, 2211 stroma, and 6836 tumor patches.

Fig. 4. Feature extraction step of the ConvPath software. (a) A zoomed-in part of a sampling region (Supplemental Figure 3) in which cell nuclei centroids are labeled with predicted cell types. Green, stroma; cyan, lymphocyte; yellow, tumor. (b) Cell type region detection using a kernel smoothing algorithm for the sampling region shown in Supplemental Figure 3. Areas and perimeters are evaluated for regions of tumor, stroma, and lymphocyte.

Fig. 5. Application of the prognostic model to independent datasets. (a, b) Validation of the prognostic model on the TCGA overall survival data (a, log-rank test, p = 0.0047) and the CHCAMS recurrence data (b, log-rank test, p = 0.030). (c) Boxplot of the distribution of predicted risk scores in the 5 histological subtypes of lung adenocarcinoma for the CHCAMS dataset patients. Jonckheere-Terpstra k-sample test, p = 0.0039. The boxes and whiskers show the lower (Q1) and upper (Q3) quartiles and the median for each histological subtype.
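The kernel-smoothing region detection of Fig. 4b can be sketched as follows, assuming a simple boxcar kernel with majority vote over nearby centroids (the actual software may use a different kernel); region area is then estimated by counting grid points per label. All names and values here are illustrative.

```python
from collections import Counter

def smooth_labels(cells, grid, bandwidth):
    """Assign each grid point the majority label among cell centroids
    within `bandwidth` (boxcar kernel); None if no centroid is close."""
    regions = {}
    for gx, gy in grid:
        votes = Counter(lab for x, y, lab in cells
                        if abs(x - gx) <= bandwidth and abs(y - gy) <= bandwidth)
        regions[(gx, gy)] = votes.most_common(1)[0][0] if votes else None
    return regions

cells = [(1, 1, "tumor"), (2, 1, "tumor"), (9, 9, "lymphocyte")]
grid = [(1, 1), (9, 9), (5, 5)]
regions = smooth_labels(cells, grid, bandwidth=2)
print(regions)
# region area ~ number of grid points carrying that label
```

Smoothing converts noisy point-wise predictions into contiguous tumor, stroma, and lymphocyte regions whose areas and perimeters become the extracted features.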
This work was supported by the National Institutes of Health [1R01GM115473, 5R01CA152301, 5P30CA142543, 5P50CA070907, 5P30CA016672 and 1R01CA172211] and the Cancer Prevention and Research Institute of Texas [RP120732].

Table 1. Multivariate analysis of the predicted risk scores in the CHCAMS and TCGA datasets, adjusted by clinical variables.

TCGA dataset (n = 346)     HR    95% CI      p value
High risk vs. low risk     2.19  1.33–3.60   0.0021
Age (per year)             1.03  1.01–1.06   0.014
Male vs. female            0.69  1.45–1.16   0.16
Smoker vs. non-smoker      0.88  0.53–1.47   0.62
Stage
  Stage I                  ref   –           –
  Stage II                 2.69  1.45–5.00   0.0017
  Stage III                5.04  2.69–9.43   <0.001
  Stage IV                 6.06  2.49–14.73  <0.001

CHCAMS dataset (n = 88)    HR    95% CI      p value
High risk vs. low risk     2.21  1.16–4.21   0.016
Age (per year)             1.02  0.99–1.06   0.202
Male vs. female            1.85  0.69–4.91   0.22
Smoker vs. non-smoker      0.76  0.28–2.04   0.585

CHCAMS, National Cancer Center/Cancer Hospital of Chinese Academy of Medical Sciences, China; CI, confidence interval; HR, hazard ratio; TCGA, The Cancer Genome Atlas.

Acknowledgements

We thank Jessie Norris for helping us to edit this manuscript.

Supplementary materials

Supplementary material associated with this article can be found in the online version at doi:10.1016/j.ebiom.2019.10.033.

References

[1] Travis WD, Brambilla E, Nicholson AG, et al. The 2015 World Health Organization classification of lung tumors: impact of genetic, clinical and radiologic advances since the 2004 classification. J Thorac Oncol 2015;10(9):1243-60.
[2] Hanahan D, Weinberg RA. Hallmarks of cancer: the next generation. Cell 2011;144(5):646-74.
[3] Egeblad M, Nakasone ES, Werb Z. Tumors as organs: complex tissues that interface with the entire organism. Dev Cell 2010;18(6):884-901.
[4] Qian B-Z, Pollard JW. Macrophage diversity enhances tumor progression and metastasis. Cell 2010;141(1):39-51.
[5] Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM 2017;60(6):84-90.
[6] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521(7553):436-44.
[7] Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw 2015;61:85-117.
[8] Ruifrok AC, Johnston DA. Quantification of histochemical staining by color deconvolution. Anal Quant Cytol Histol 2001;23(4):291-9.
[9] Veta M, van Diest PJ, Kornegoor R, Huisman A, Viergever MA, Pluim JP. Automatic nuclei segmentation in H&E stained breast cancer histopathology images. PLoS ONE 2013;8(7):e70221.
[10] Zhang K, Zhang L, Song H, Zhou W. Active contours with selective local or global segmentation: a new formulation and level set method. Image Vis Comput 2010;28(4):668-76.
[11] Yi F, Huang J, Yang L, Xie Y, Xiao G. Automatic extraction of cell nuclei from H&E-stained histopathological images. J Med Imaging (Bellingham) 2017;4(2):027502.
[12] Shin HC, Roth HR, Gao M, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging 2016;35(5):1285-98.
[13] Dosovitskiy A, Fischer P, Springenberg JT, Riedmiller M, Brox T. Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE Trans Pattern Anal Mach Intell 2016;38(9):1734-47.
[14] Neftci EO, Pedroni BU, Joshi S, Al-Shedivat M, Cauwenberghs G. Stochastic synapses enable efficient brain-inspired learning machines. Front Neurosci 2016;10:241.
[15] R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2016.
[16] Jonckheere AR. A distribution-free k-sample test against ordered alternatives. Biometrika 1954;41(1/2):133-45.
[17] Williams BJ, Hanby A, Millican-Slater R, Nijhawan A, Verghese E, Treanor D. Digital pathology for the primary diagnosis of breast histopathological specimens: an innovative validation and concordance study on digital pathology validation and training. Histopathology 2018;72(4):662-71.
[18] Bauer TW, Slaw RJ, McKenney JK, Patil DT. Validation of whole slide imaging for frozen section diagnosis in surgical pathology. J Pathol Inform 2015;6:49.
[19] Snead DR, Tsang YW, Meskiri A, et al. Validation of digital pathology imaging for primary histopathological diagnosis. Histopathology 2016;68(7):1063-72.
[20] Buck TP, Dilorio R, Havrilla L, O'Neill DG. Validation of a whole slide imaging system for primary diagnosis in surgical pathology: a community hospital experience. J Pathol Inform 2014;5(1):43.
[21] Ordi J, Castillo P, Saco A, et al. Validation of whole slide imaging in the primary diagnosis of gynaecological pathology in a university hospital. J Clin Pathol 2015;68(1):33-9.
[22] Paul A, Mukherjee DP. Mitosis detection for invasive breast cancer grading in histopathological images. IEEE Trans Image Process 2015;24(11):4041-54.
[23] Rathore S, Hussain M, Aksam Iftikhar M, Jalil A. Novel structural descriptors for automated colon cancer detection and grading. Comput Methods Programs Biomed 2015;121(2):92-108.
[24] Nguyen K, Sarkar A, Jain AK. Prostate cancer grading: use of graph cut and spatial arrangement of nuclei. IEEE Trans Med Imaging 2014;33(12):2254-70.
[25] Waliszewski P, Wagenlehner F, Gattenlohner S, Weidner W. [Fractal geometry in the objective grading of prostate carcinoma]. Der Urologe Ausg A 2014;53(8):1186-94.
[26] Atupelage C, Nagahashi H, Yamaguchi M, Abe T, Hashiguchi A, Sakamoto M. Computational grading of hepatocellular carcinoma using multifractal feature description. Comput Med Imaging Graph 2013;37(1):61-71.
[27] Beck AH, Sangoi AR, Leung S, et al. Systematic analysis of breast cancer morphology uncovers stromal features associated with survival. Sci Transl Med 2011;3(108):108ra13.
[28] Yuan Y, Failmezger H, Rueda OM, et al. Quantitative image analysis of cellular heterogeneity in breast tumors complements genomic profiling. Sci Transl Med 2012;4(157):157ra43.
[29] Luo X, Zang X, Yang L, et al. Comprehensive computational pathological image analysis predicts lung cancer prognosis. J Thorac Oncol 2017;12(3):501-9.
[30] Yu KH, Zhang C, Berry GJ, et al. Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nat Commun 2016;7:12474.
[31] Coudray N, Ocampo PS, Sakellaropoulos T, et al. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat Med 2018;24(10):1559-67.
[32] Kather JN, Pearson AT, Halama N, et al. Deep learning can predict microsatellite instability directly from histology in gastrointestinal cancer. Nat Med 2019.
[33] Liu Y, Gadepalli K, Norouzi M, et al. Detecting cancer metastases on gigapixel pathology images. arXiv preprint arXiv:1703.02442; 2017.
[34] Wang D, Khosla A, Gargeya R, Irshad H, Beck AH. Deep learning for identifying metastatic breast cancer. arXiv preprint arXiv:1606.05718; 2016.
[35] Wang SD, Chen A, Yang L, et al. Comprehensive analysis of lung cancer pathology images to discover tumor shape and boundary features that predict survival outcome. Sci Rep 2018;8.
[36] Ehteshami Bejnordi B, Veta M, Johannes van Diest P, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 2017;318(22):2199-210.
[37] Coudray N, Ocampo PS, Sakellaropoulos T, et al. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat Med 2018;24(10):1559-67.
[38] Bankhead P, Loughrey MB, Fernandez JA, et al. QuPath: open source software for digital pathology image analysis. Sci Rep 2017;7(1):16878.
[39] Sirinukunwattana K, Ahmed Raza SE, Yee-Wah T, Snead DR, Cree IA, Rajpoot NM. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans Med Imaging 2016;35(5):1196-206.
[40] Saltz J, Gupta R, Hou L, et al. Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images.
Cell Rep 2018;23(1):181-93.e7.
[41] Yi FL, Yang L, Wang SD, et al. Microvessel prediction in H&E stained pathology images using fully convolutional neural networks. BMC Bioinformatics 2018;19.
[42] Wang XX, Janowczyk A, Zhou Y, et al. Prediction of recurrence in early stage non-small cell lung cancer using computer extracted nuclear features from digital H&E images. Sci Rep 2017;7.
[43] Nakamura H, Ichikawa T, Nakasone S, et al. Abundant tumor promoting stromal cells in lung adenocarcinoma with hypoxic regions. Lung Cancer 2018;115:56-63.
[44] Bremnes RM, Donnem T, Al-Saad S, et al. The role of tumor stroma in cancer progression and prognosis: emphasis on carcinoma-associated fibroblasts and non-small cell lung cancer. J Thorac Oncol 2011;6(1):209-17.
[45] Pietras K, Ostman A. Hallmarks of cancer: interactions with the tumor stroma. Exp Cell Res 2010;316(8):1324-31.
[46] Ichikawa T, Aokage K, Sugano M, et al. The ratio of cancer cells to stroma within the invasive area is a histologic prognostic parameter of lung adenocarcinoma. Lung Cancer 2018.
[47] Gooden MJ, de Bock GH, Leffers N, Daemen T, Nijman HW. The prognostic influence of tumour-infiltrating lymphocytes in cancer: a systematic review with meta-analysis. Br J Cancer 2011;105(1):93-103.
[48] Miyashita M, Sasano H, Tamaki K, et al. Prognostic significance of tumor-infiltrating CD8+ and FOXP3+ lymphocytes in residual tumors and alterations in these parameters after neoadjuvant chemotherapy in triple-negative breast cancer: a retrospective multicenter study. Breast Cancer Res 2015;17:124.
[49] Huh JW, Lee JH, Kim HR. Prognostic significance of tumor-infiltrating lymphocytes for patients with colorectal cancer. Arch Surg 2012;147(4):366-72.
[50] Brambilla E, Le Teuff G, Marguet S, et al. Prognostic effect of tumor lymphocytic infiltration in resectable non-small-cell lung cancer. J Clin Oncol 2016;34(11):1223-30.
[51] Mani NL, Schalper KA, Hatzis C, et al. Quantitative assessment of the spatial heterogeneity of tumor-infiltrating lymphocytes in breast cancer. Breast Cancer Res 2016;18(1):78.
[52] Carpenter AE, Jones TR, Lamprecht MR, et al. CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biol 2006;7(10):R100.
[53] Wang S, Yang DM, Rong R, Zhan X, Xiao G. Pathology image analysis using segmentation deep learning algorithms. Am J Pathol 2019, in press.
[54] Xian X, Hakansson J, Stahlberg A, et al. Pericytes limit tumor cell metastasis. J Clin Invest 2006;116(3):642-51.
[55] Al-Shibli KI, Donnem T, Al-Saad S, Persson M, Bremnes RM, Busund LL-T. Prognostic effect of epithelial and stromal lymphocyte infiltration in non-small cell lung cancer. Clin Cancer Res 2008;14(16):5220-7.
[56] Bremnes RM, Al-Shibli K, Donnem T, et al. The role of tumor-infiltrating immune cells and chronic inflammation at the tumor site on cancer development, progression, and prognosis: emphasis on non-small cell lung cancer. J Thorac Oncol 2011;6(4):824-33.
[57] Li Q, Wang X, Liang F, Xiao G. A Bayesian mark interaction model for analysis of tumor pathology images.
Annals of Applied Statistics 2019.
[58] Li Q, Wang X, Liang F, et al. A Bayesian hidden Potts mixture model for analyzing lung cancer pathology images. Biostatistics 2018.
[59] Li Q, Yi F, Wang T, Xiao G, Liang F. Lung cancer pathological image analysis using a hidden Potts model. Cancer Inform 2017;16:1176935117711910.
[60] Wakabayashi O, Yamazaki K, Oizumi S, et al. CD4+ T cells in cancer stroma, not CD8+ T cells in cancer cell nests, are associated with favorable prognosis in human non-small cell lung cancers. Cancer Sci 2003;94(11):1003-9.
[61] Rampias T, Favicchio R, Stebbing J, Giamas G. Targeting tumor-stroma crosstalk: the example of the NT157 inhibitor. Oncogene 2016;35(20):2562-4.
[62] Castino GF, Cortese N, Capretti G, et al. Spatial distribution of B cells predicts prognosis in human pancreatic adenocarcinoma. Oncoimmunology 2016;5(4):e1085147.
[]
[ "Multi-Agent Low-Dimensional Linear Bandits", "Multi-Agent Low-Dimensional Linear Bandits" ]
[ "Ronshee Chawla ", "Abishek Sankararaman ", "Sanjay Shakkottai " ]
[]
[]
We study a multi-agent stochastic linear bandit with side information, parameterized by an unknown vector θ * ∈ R d . The side information consists of a finite collection of low-dimensional subspaces, one of which contains θ * . In our setting, agents can collaborate to reduce regret by sending recommendations across a communication graph connecting them. We present a novel decentralized algorithm, where agents communicate subspace indices with each other and each agent plays a projected variant of LinUCB on the corresponding (low-dimensional) subspace. By distributing the search for the optimal subspace across users and learning of the unknown vector by each agent in the corresponding low-dimensional subspace, we show that the per-agent finite-time regret is much smaller than the case when agents do not communicate. We finally complement these results through simulations.Asymptotically matching an oracle's regret rate in large systems: Despite playing from a time-varying set of subspaces, every agent incurs a regret of at most. This scaling does not depend 2 on the gossip matrix G and we show that these communication constraints only affect the constant term in regret. Note that an oracle that knew the right subspace will only incur regret for finding θ * in that subspace, while avoiding any regret due to subspace search. We use this fact at the end of Section 4 to informally argue that even if an agent gets the information about the correct subspace whenever it communicates with other agents, it cannot do better than Ω(m √ T ) regret under our model of information sharing. 
Consequently, we show in Corollary 3 that for large K and N , SubGoss achieves near-optimal performance, demonstrating that it uses the side information effectively.Finite-time gains due to faster search of subspaces with collaboration:We quantify the extent to which collaboration helps by analyzing the ratio of regret upper bound achieved by SubGoss without collaborations to that achieved with collaboration. We observe that the benefits occur from the ability of multiple agents to do a faster search for the right subspace containing θ * as compared to a single agent. In high dimensional settings (when d is large and m is a constant) with large number of subspaces and agents (K = N = O(d)), we show in Corollary 3 (and the remarks following it), that by time T = Ω(d), the collaborative gain is of the order of Ω d log d . The key reason for the gain lies in the ability of multiple agents to distribute the search for the right subspace among them, enabling all agents to identify the subspace faster, compared to a single agent without collaboration. Finally, these results are corroborated through simulations(Fig. 1).Related Work:Our work focuses on collaborative multi-agent bandits, where agents jointly accomplish the shared objective of minimizing cumulative regret[25,12,9,19,24,25,13,27,28]. Our work focuses on a setting where agents only share recommendations of actions (e.g., to minimize network traffic due to collaboration, while optimizing for per-agent regret), and do not share the samples themselves[25,13,27]. In each of these studies, agents play from a small subset of arms at all times and exchange the arm IDs of what they consider the best arm in their playing set through pairwise gossip-style communications, which is further used to update their playing set. Another approach that focuses on reducing network traffic, while simultaneously optimizing for total cumulative regret (sum over time and users) is based on the follow-the-leader approach [28],
10.1109/tac.2022.3179521
[ "https://arxiv.org/pdf/2007.01442v4.pdf" ]
220,347,228
2007.01442
3147102ebe7c2310a56eecd5dbd74f107df22065
Multi-Agent Low-Dimensional Linear Bandits

Ronshee Chawla, Abishek Sankararaman, Sanjay Shakkottai

We study a multi-agent stochastic linear bandit with side information, parameterized by an unknown vector θ* ∈ R^d. The side information consists of a finite collection of low-dimensional subspaces, one of which contains θ*. In our setting, agents can collaborate to reduce regret by sending recommendations across a communication graph connecting them. We present a novel decentralized algorithm, where agents communicate subspace indices with each other and each agent plays a projected variant of LinUCB on the corresponding (low-dimensional) subspace. By distributing the search for the optimal subspace across users, and by each agent learning the unknown vector in the corresponding low-dimensional subspace, we show that the per-agent finite-time regret is much smaller than in the case when agents do not communicate. We finally complement these results through simulations. Asymptotically matching an oracle's regret rate in large systems: Despite playing from a time-varying set of subspaces, every agent incurs a regret of at most. This scaling does not depend on the gossip matrix G, and we show that these communication constraints only affect the constant term in the regret. Note that an oracle that knew the right subspace would only incur regret for finding θ* in that subspace, while avoiding any regret due to subspace search. We use this fact at the end of Section 4 to informally argue that even if an agent received the information about the correct subspace whenever it communicates with other agents, it cannot do better than Ω(m√T) regret under our model of information sharing.
Consequently, we show in Corollary 3 that for large K and N, SubGoss achieves near-optimal performance, demonstrating that it uses the side information effectively. Finite-time gains due to faster search of subspaces with collaboration: We quantify the extent to which collaboration helps by analyzing the ratio of the regret upper bound achieved by SubGoss without collaboration to that achieved with collaboration. We observe that the benefits come from the ability of multiple agents to search for the right subspace containing θ* faster than a single agent can. In high-dimensional settings (when d is large and m is a constant) with a large number of subspaces and agents (K = N = O(d)), we show in Corollary 3 (and the remarks following it) that by time T = Ω(d), the collaborative gain is of the order of Ω(d/log d). The key reason for the gain lies in the ability of multiple agents to distribute the search for the right subspace among them, enabling all agents to identify the subspace faster compared to a single agent without collaboration. Finally, these results are corroborated through simulations (Fig. 1). Related Work: Our work focuses on collaborative multi-agent bandits, where agents jointly accomplish the shared objective of minimizing cumulative regret [25, 12, 9, 19, 24, 13, 27, 28]. Our work focuses on a setting where agents only share recommendations of actions (e.g., to minimize network traffic due to collaboration, while optimizing for per-agent regret) and do not share the samples themselves [25, 13, 27]. In each of these studies, agents play from a small subset of arms at all times and exchange the arm IDs of what they consider the best arm in their playing set through pairwise gossip-style communications, which are further used to update their playing set.
Another approach that focuses on reducing network traffic, while simultaneously optimizing for total cumulative regret (sum over time and users), is based on the follow-the-leader approach [28].

Introduction

The Multi-Armed Bandit (MAB) model features a single decision maker making sequential decisions under uncertainty. It has found a wide range of applications: advertising [11], information retrieval [30], and operation of data-centers [18], to name a few; see also the books [22, 8]. As the scale of applications increases, several decision makers (a.k.a. agents) are involved in making repeated decisions, as opposed to just a single agent. For example, in internet advertising, multiple servers are typically deployed to handle the large volume of traffic [9]. Multi-agent MAB models have emerged as a framework for designing algorithms that account for this large scale. In recent times, there has been a lot of interest in the study of multi-agent unstructured bandits [6, 7, 13, 24]. However, from a practical perspective, the linear bandit framework has been shown to be more appropriate than unstructured bandits in many instances (e.g., recommendations [23], clinical studies [5]). The linear bandit framework allows for a continuum of arms with a shared reward structure, thereby modeling many complex online learning scenarios [15, 1]. Despite its applicability, the study of multi-agent linear bandits is limited. The key technical challenge arises from 'information leakage': the reward obtained by playing an arm gives information on the reward obtained by all other arms. In a multi-agent scenario, this is further exacerbated, making the design of collaborative algorithms non-trivial. We take a step in this direction by considering a collaborative multi-agent low-dimensional linear bandit problem and propose a novel decentralized algorithm. Agents in our model have side information in the form of subspaces.
In our algorithm, agents collaborate by sharing these subspaces, as opposed to the linear rewards themselves. Our main result shows that, even with minimal communication, the regret of every agent is much lower than in the case of no collaboration. Model Overview: Our problem consists of a single instance of a stochastic linear bandit with unknown parameter θ* ∈ R^d, played concurrently by N agents. The common side information available to the agents is a collection of K disjoint m-dimensional subspaces, only one of which contains θ*. However, the agents are not aware of which subspace contains θ*. At each time t, each agent i ∈ [N] chooses a subspace in [K]. Subsequently, it plays an action vector a_t^{(i)} from the action set A_t ⊂ R^d while satisfying the constraints imposed by the chosen subspace, and receives a reward ⟨a_t^{(i)}, θ*⟩ + η_t^{(i)}, where η_t^{(i)} is zero-mean sub-Gaussian noise. Thus, the above problem can be visualized as a two-tier bandit problem, described as follows: the first tier corresponds to the K arms of an unstructured bandit; in the second tier, each arm corresponds to solving the stochastic linear bandit problem (parametrized by the unknown θ*) over one of the K known subspaces. The rewards obtained by the agents depend only on their own actions and are independent of the actions of other agents. The agents in our model are connected through a communication graph over which they can exchange messages to collaborate. Agents are constrained to communicate, by exchanging messages, a fixed number of times over any given time span. We seek decentralized algorithms for the agents, i.e., the choice of action vector, communication choices and messages depend only on the observed past history (of action vectors, rewards, and messages) of that agent. Motivating Example: We motivate our model in the context of personalized news recommendation systems.
Suppose that a user u can be modeled by an (unknown) vector θ*_u ∈ R^d, which lies in one of K possible subspaces. These subspaces reflect information from historical data of other users whose feature vectors have been learned (for example, users that have been in the system for a long time) and can be categorized into a collection of low-dimensional subspaces. Thus, when any new user enters the system, the system needs to (i) identify the subspace that the user's vector lies in, and (ii) determine the corresponding θ*_u for that user. At any point of time, each user is handled by a single server (agent). However, in large-scale applications, a collection of servers is deployed to handle the large number of users. Even though a user's queries are routed to different servers over time, these servers can collaborate by exchanging messages to speed up learning. Elaborating on the news recommendation example above, the subspaces could correspond to political leanings of the user (e.g., social liberal, fiscal conservative, libertarian, etc.). In this model, all users with the same political leanings would share the same subspace; however, their personal vectors θ*_u would differ (to capture fine-grained individual differences). The two-tier bandit thus models a coarse dimensionality reduction through the subspace choice and a finer characterization through θ*_u in a specific low-dimensional subspace. The above discussion reflects the two-tier bandit from a single user's perspective; the system will run many parallel instances of this, one for each user. Our model abstracts this setup, and our algorithm demonstrates that agents (servers) can indeed benefit from collaboration with minimal communication. Our main contributions are as follows: 1.
The SubGoss Algorithm: We propose SubGoss (Algorithm 1), which proceeds in phases, such that agents in any phase (a) explore the subspaces repeatedly to identify the correct subspace containing θ*, followed by (b) playing Projected LinUCB on that subspace, and (c) communicating that subspace whenever requested. (Throughout, [N] denotes the set {1, ..., N}.) Our algorithm constrains agents to search for θ* over only a small set (of cardinality ≤ K/N + 2) of subspaces per agent. Agents use pairwise communications to recommend subspaces (not samples), i.e., agents communicate the ID of the estimated best subspace. This set of subspaces is updated through recommendations: agents accept new recommendations and drop subspace(s) unlikely to contain θ*, ensuring that the total number of subspaces an agent considers remains small at all times. Agents can communicate O(log T) times over a span of time T. Nevertheless, the best subspace spreads to all the agents through communications, and thus all agents eventually identify the correct subspace. In the follow-the-leader approach [28], a leader among the agents is elected and subsequently becomes the sole player exploring the arms, while other agents act as its followers. However, all of the above works are tailored to the case of finite-armed unstructured MABs and cannot be applied to a linear bandit setup such as ours. Nevertheless, we adopt some of the broader principles from [25, 13] regarding the use of the gossiping paradigm for communications to spread the best subspace into our algorithm design. The stochastic linear bandit framework and the study of the LinUCB algorithm were initiated by [15, 1]. From a practical perspective, the linear bandit framework has been shown to be effective for various applications: for example, [3, 23] apply this framework in the context of internet advertising and [5, 26] in the context of clinical trials. Furthermore, a projected version of LinUCB on low-dimensional subspaces has been recently studied in [21].
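The recommendation-driven update of each agent's small set of subspaces can be sketched as follows. This is a minimal illustration of the update rule detailed later in the algorithm description (each agent retains a "sticky" set of subspaces that are never dropped); the variable names and the toy numbers are my own, not the paper's.

```python
def update_active_set(active, sticky, rec, norms, cap):
    """Active-set update on receiving a subspace recommendation `rec`:
    keep the set unchanged if rec is already present; add rec if below
    capacity (cap = K/N + 2); otherwise retain the sticky set, the best
    non-sticky subspace by estimate norm, and rec."""
    if rec in active:
        return set(active)
    if len(active) < cap:
        return set(active) | {rec}
    best_non_sticky = max(active - sticky, key=lambda k: norms[k])
    return set(sticky) | {best_non_sticky, rec}

sticky = {0, 1}                        # this agent's sticky subspaces (never dropped)
active = {0, 1, 5, 7}                  # current active set, at capacity cap = 4
norms = {0: 0.1, 1: 0.2, 5: 0.9, 7: 0.3}   # ||theta_hat_k||_2 from explore samples

assert update_active_set(active, sticky, 5, norms, cap=4) == {0, 1, 5, 7}
assert update_active_set(active, sticky, 3, norms, cap=4) == {0, 1, 3, 5}
```

In the last case the recommendation 3 replaces subspace 7, the non-sticky subspace with the smaller estimate norm, so the cardinality bound K/N + 2 is preserved.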
To the best of our knowledge, our model has not been studied before, even in a single-agent setting. Our model can be viewed as a generalization of the well-studied model of sparse linear bandits [5], [10, 17, 2]. The sparse linear bandit problem assumes that the unknown vector θ* is s-sparse, for some known sparsity level s < d. In other words, θ* is assumed to lie in one of the (d choose s) subspaces, where each of these subspaces corresponds to a particular set of s coordinates, i.e., the subspaces are axis-aligned. Our model is a generalization where θ* lies in one of any K given arbitrary disjoint subspaces. The two main algorithmic ideas in sparse bandits are to either use heavy-tailed priors for sampling action vectors and the associated posterior distributions that result in sparse estimates [17, 2], or to use a LASSO-type regularizer in the estimator [5]. We cannot use the techniques from sparse linear bandits in our model because, even though the unknown θ* lies in one of the low-dimensional subspaces, all of its d coordinates can have non-zero values. Consequently, the linear bandit suffers from the problem of "information sharing": the reward obtained by playing an action vector in one subspace reveals information about the rewards of action vectors in other subspaces. Hence, algorithmic ideas from sparse bandits are not directly applicable in our setting. The study of the multi-agent linear bandit framework has attracted a lot of attention lately [29]. Multi-agent linear bandits have been studied in the context of clustering [20], differentially-private federated learning [16], and safety-critical distributed learning [4]. However, all of these works involve agents sharing samples with each other in the absence of side information, unlike our setting, where agents have side information in the form of subspaces and communicate only subspace IDs with each other.
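The sparse special case described above can be made concrete: for sparsity level s, the side information consists of the (d choose s) axis-aligned subspaces, one per choice of s coordinates. The enumeration below is my own illustration, not code from the paper:

```python
from itertools import combinations

import numpy as np

def axis_aligned_subspaces(d, s):
    """Sparse linear bandits as a special case of the subspace model:
    one d x s orthonormal matrix per choice of s coordinates,
    i.e. C(d, s) axis-aligned subspaces."""
    I = np.eye(d)
    return [I[:, list(S)] for S in combinations(range(d), s)]

subspaces = axis_aligned_subspaces(d=4, s=2)
assert len(subspaces) == 6                        # C(4, 2) = 6
assert subspaces[0].shape == (4, 2)
# Each U is orthonormal: U^T U = I_s
assert all(np.allclose(U.T @ U, np.eye(2)) for U in subspaces)
```

Note that these axis-aligned subspaces overlap (they share coordinate axes), whereas the paper's model assumes K disjoint subspaces, which is why the generalization is not a mere relabeling.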
Problem Setup

Our problem setup consists of a single instance of a stochastic linear bandit (parametrized by the unknown θ*), concurrently played by N agents. All agents play from the same set of action vectors {A_t}_{t∈N} at any time t, where A_t ⊂ R^d. The side information available to all the agents is a collection of K disjoint subspaces in R^d of dimension m < d. These subspaces are denoted by the d × m orthonormal matrices {U_i}_{i=1}^K, where span(U_i) defines an m-dimensional subspace in R^d. One of these subspaces contains θ*, but agents are unaware of which one. Without loss of generality, we assume that ||θ*||_2 ≤ 1 and that K is an integral multiple of N. Let P_k = U_k U_k^T denote the projection matrix of the subspace span(U_k) for all k ∈ [K]. We also assume that the set of action vectors {A_t}_{t∈N} contains the orthonormal basis vectors of all the K subspaces (which are the columns of U_k for all k ∈ [K]) for all t ∈ N. At any time t, an agent i chooses a subspace in [K]. Subsequently, it plays an action vector a_t^{(i)} ∈ A_t while satisfying the constraints imposed by the chosen subspace, and the reward obtained is given by r_t^{(i)} := ⟨a_t^{(i)}, θ*⟩ + η_t^{(i)}. Here, η_t^{(i)} is zero-mean sub-Gaussian noise, conditional on the actions and rewards accumulated only by agent i, i.e., for all z ∈ R, E[exp(z η_t^{(i)}) | F_{t−1}^{(i)}] ≤ exp(z²/2) a.s., where F_{t−1}^{(i)} = σ(a_1^{(i)}, r_1^{(i)}, ..., a_{t−1}^{(i)}, r_{t−1}^{(i)}, a_t^{(i)}). The noise is independent across agents. Thus, the above setup can be abstracted as a two-tier bandit problem, where: (a) the first tier corresponds to the K arms of an unstructured bandit, and (b) in the second tier, each arm corresponds to solving the stochastic linear bandit problem over one of the K subspaces. Collaboration among Agents: Our model builds on the gossip-based communication constraints for multi-agent finite-armed unstructured bandits in [25, 13]. The agents collaborate by exchanging messages over a communication network.
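The subspace side information and reward model of this section can be sketched numerically. This is a minimal illustration under my own simplifying assumptions (Gaussian noise, and disjointness enforced by taking mutually orthogonal slices of one orthonormal basis, which the model does not require):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, K = 8, 2, 4

# K disjoint m-dimensional subspaces as orthogonal slices of a basis of R^d
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
U = [Q[:, k * m:(k + 1) * m] for k in range(K)]   # d x m orthonormal matrices
P = [Uk @ Uk.T for Uk in U]                       # projection matrices P_k = U_k U_k^T

# theta* lies in span(U_1) (index 0 here), normalized so ||theta*||_2 <= 1
theta_star = U[0] @ rng.standard_normal(m)
theta_star /= 2 * np.linalg.norm(theta_star)

def reward(a, sigma=1.0):
    """Noisy linear reward r = <a, theta*> + eta (Gaussian, hence sub-Gaussian)."""
    return a @ theta_star + sigma * rng.standard_normal()

# Sanity checks: theta* is invariant under P_1 and annihilated by the others
assert np.allclose(P[0] @ theta_star, theta_star)
assert np.linalg.norm(P[1] @ theta_star) < 1e-8
```

The key structural fact used repeatedly later is visible here: projecting θ* onto the wrong subspace loses norm, while projecting onto the right one preserves it.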
This network is represented through an N × N gossip matrix G, with the rows of this matrix being probability distributions over [N]. At each time step, after playing an action vector and obtaining a reward, agents can additionally choose to communicate with each other. Agent i, if it chooses to communicate, will do so with another agent j ~ G(i, ·), chosen independently of everything else. However, for any time horizon T, the total number of times an agent can communicate is O(log T). Each time an agent chooses to communicate, it can exchange at most log₂ K + 1 bits. Therefore, every agent communicates O(log K · log t) bits over a time horizon of t, for all t. Decentralized Algorithm: Each agent's decisions (arm plays and communication decisions) in the algorithm depend only on its own history of plays and observations, along with the recommendation messages that it has received from others. Performance Metric: Each agent plays action vectors in order to minimize its individual cumulative regret. At any time t, the instantaneous regret for an agent i is given by w_t^{(i)} = ⟨θ*, a_t*⟩ − ⟨θ*, a_t^{(i)}⟩, where a_t* = argmax_{a∈A_t} ⟨θ*, a⟩. The expected cumulative regret for any agent i ∈ [N] is given by E[R_T^{(i)}] := E[∑_{t=1}^T w_t^{(i)}], where the expectation is with respect to the σ-field generated by the action vectors and rewards of agent i up to and including time T.

SubGoss Algorithm

Key Ideas and Intuition: Our setting considers that the unknown θ* lies in one of a large number of (low-dimensional) subspaces. In our approach, agents at any time instant identify a small active set of subspaces (cardinality ≤ K/N + 2) and play actions only within this set of subspaces (note, however, that this set is time-varying). At each point of time, an agent first identifies, among its current active set of subspaces, the one most likely to contain θ*. It subsequently plays a projected version of LinUCB on this identified subspace.
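The performance metric and the gossip-based communication primitive defined above can be illustrated with a short sketch (my own toy example; the action set and θ* are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def instantaneous_regret(theta_star, actions, a_played):
    """w_t = <theta*, a_t*> - <theta*, a_t>, with a_t* the best action in A_t."""
    best = max(a @ theta_star for a in actions)
    return best - a_played @ theta_star

def gossip_neighbor(G, i):
    """Sample the agent that agent i contacts: j ~ G(i, .)."""
    return rng.choice(len(G), p=G[i])

# Complete-graph gossip matrix on N agents: each row uniform over the others
N = 4
G = (np.ones((N, N)) - np.eye(N)) / (N - 1)

theta_star = np.array([1.0, 0.0])
A_t = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
assert instantaneous_regret(theta_star, A_t, A_t[0]) == 0.0   # optimal action
assert instantaneous_regret(theta_star, A_t, A_t[1]) == 1.0   # suboptimal action
assert gossip_neighbor(G, 0) != 0                             # never self-contacts
```

The cumulative regret of an agent is then just the sum of these instantaneous terms over t = 1, ..., T.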
The communication medium is used for recommendations; whenever an agent is asked for information, it sends as message the subspace ID that it thinks most likely contains θ*, which is then used to update the active set of the receiving agent. Thus, an agent's algorithm has two time-interleaved parts: (a) updating active sets through collaboration, which is akin to a distributed best-arm identification problem, and (b) determining the optimal θ* from within its active set of subspaces, an estimation problem in low dimensions, similar to the classical linear bandit. The SubGoss algorithm is organized in phases, with the active subspaces for each agent fixed during a phase. Within a phase, all agents solve two tasks: (i) identify the subspace among their active subspaces most likely to contain θ*, and (ii) within this subspace, play actions optimally to minimize regret. The first task is accomplished by the agents through pure exploration. In pure exploration, agents play the orthonormal basis vectors of all the subspaces in their respective active sets in a round-robin fashion. Agents keep the regret incurred during pure exploration small by considering a small active set of subspaces (of cardinality ≤ K/N + 2) at all times. Otherwise, agents play action vectors within their best estimated subspace containing θ* to minimize regret. This step is achieved by playing a projected version of the LinUCB algorithm, and it only incurs regret in the dimension of the subspace (once the true subspace is correctly identified), as opposed to the ambient dimension, thereby keeping regret low. Due to communications, the correct subspace spreads to all agents, while each agent plays from a small set of active subspaces at all times (thus reducing the regret due to exploration).

Description

The SubGoss algorithm builds on some of the ideas developed for a (non-contextual) collaborative setting for unstructured bandits in [13]. We fix an agent i ∈ [N] for ease of exposition.
SubGoss proceeds in phases, where phase j ∈ N spans time slots ∑_{l=1}^{j−1} b^{l−1} + 1 through ∑_{l=1}^{j} b^{l−1}, both inclusive, where b > 1 is given as an input. During each phase j, agent i only plays from an active set S_j^{(i)} of subspaces.

Initialization: At the beginning of the algorithm, every agent is assigned a sticky set of subspaces, by partitioning the subspaces equally across agents:

S̄^{(i)} = {(i − 1)K/N + 1, · · · , iK/N}.   (1)

We set the initial active set S_1^{(i)} = S̄^{(i)}.

Action Vectors Chosen in a Phase: We play the following two subroutines in every phase j ∈ N, in the order described below:

1. Explore: In this subroutine, for every k ∈ S_j^{(i)}, we play the orthonormal basis vectors of the subspace span(U_k) (which are the columns of U_k) in a round-robin fashion for 8m⌈b^{(j−1)/2}⌉ times. Let n̄_{k,j}^{(i)} denote the number of times subspace span(U_k) has been explored by agent i up to and including phase j. After executing the explore subroutine in a phase, agent i calculates the least squares estimate θ̂_{k,j}^{(i)} for every k ∈ S_j^{(i)}, using only the explore samples of the subspace span(U_k) up to and including phase j. Mathematically,

θ̂_{k,j}^{(i)} = argmin_{θ∈R^d} ||(A_{k, n̄_{k,j}^{(i)}}^{(i)})^T θ − r_{k, n̄_{k,j}^{(i)}}^{(i)}||²,

where A_{k, n̄_{k,j}^{(i)}}^{(i)} is a d × n̄_{k,j}^{(i)} matrix whose columns are the explore action vectors of the subspace span(U_k) played up to and including phase j, and r_{k, n̄_{k,j}^{(i)}}^{(i)} is a column vector of the corresponding rewards. It is worth noticing that θ̂_{k,j}^{(i)} is an estimate of the vector P_k θ* (details in the proof of Lemma 4), which is the projection of the unknown vector θ* onto the subspace span(U_k). We will describe in the proof sketch (Section 4.2) why this observation is crucial to finding the subspace containing θ*.

2. Projected LinUCB: Let O_j^{(i)} = argmax_{k∈S_j^{(i)}} ||θ̂_{k,j}^{(i)}||₂. For the remainder of phase j, agent i chooses the action vector according to Projected LinUCB [21], played on the subspace span(U_{O_j^{(i)}}).
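The explore-then-least-squares step above can be illustrated numerically: regressing the explore rewards on the played basis vectors recovers (approximately) P_k θ*, so the estimate norm flags the correct subspace. This sketch assumes Gaussian noise, a fixed exploration budget, and mutually orthogonal subspaces for simplicity; none of these names come from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, K, n_explore = 8, 2, 4, 400

Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
U = [Q[:, k * m:(k + 1) * m] for k in range(K)]
theta_star = U[0] @ np.array([0.6, 0.3])          # lives in span(U_1), index 0 here

def explore_estimate(Uk):
    """Play the orthonormal basis of span(U_k) round-robin and fit least squares.
    The minimum-norm solution lies in span(U_k) and estimates P_k theta*."""
    reps = n_explore // Uk.shape[1] + 1
    A = np.tile(Uk, (1, reps))[:, :n_explore]     # d x n matrix of explore actions
    r = A.T @ theta_star + 0.1 * rng.standard_normal(n_explore)
    theta_hat, *_ = np.linalg.lstsq(A.T, r, rcond=None)
    return theta_hat

norms = [np.linalg.norm(explore_estimate(Uk)) for Uk in U]
# The correct subspace attains the largest ||theta_hat_k||_2
assert int(np.argmax(norms)) == 0
```

For the wrong subspaces, ||P_k θ*||₂ is small (here zero, by orthogonality), so their estimate norms concentrate near zero while the correct subspace's norm concentrates near ||θ*||₂.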
We set k = O_j^{(i)} to reduce clutter while describing Projected LinUCB. For all ∑_{l=1}^{j−1} b^{l−1} + 8m|S_j^{(i)}|⌈b^{(j−1)/2}⌉ < t ≤ ∑_{l=1}^{j} b^{l−1}, where t denotes the corresponding time instants after the end of the explore subroutine in phase j, let n_{k,t}^{(i)} denote the number of times agent i has played Projected LinUCB on the subspace span(U_k) up to time t. The action vector chosen is given according to the following equations [21]:

a_t^{(i)} ∈ argmax_{a∈A_t} max_{θ∈C_{k,t}^{(i)}} ⟨θ, P_k a⟩,

where

C_{k,t}^{(i)} = {θ ∈ R^d : ||θ̂_{k,t−1}^{(i)} − θ||_{V̄_{k,t}(λ)^{(i)}} ≤ β_{t,δ}},
β_{t,δ} = √λ + √(2 log(1/δ) + m log(1 + n_{k,t}^{(i)}/(λm))),
V̄_{k,t}(λ)^{(i)} = P_k (A_{k,t−1}^{(i)} (A_{k,t−1}^{(i)})^T + λ I_d) P_k,

and

θ̂_{k,t−1}^{(i)} = argmin_{θ∈R^d} ||(P_k A_{k,t−1}^{(i)})^T θ − r_{k,t−1}^{(i)}||₂² + λ||P_k θ||₂².

Here, A_{k,t−1}^{(i)} is a d × n_{k,t}^{(i)} matrix whose columns are the Projected LinUCB action vectors played only on the subspace span(U_k) up to time t, and r_{k,t−1}^{(i)} is a column vector of the corresponding rewards.

Communications and the Active Subspaces for the Next Phase: After phase j is over, agent i asks for a subspace recommendation from an agent J ~ G(i, ·), chosen independently. Denote by Ô_j^{(i)} ∈ [K] this recommendation. Agent i, if asked for a recommendation at the end of phase j, recommends the subspace ID O_j^{(i)}, i.e., based only on the explore samples. The next active set is constructed as follows: (i) if Ô_j^{(i)} ∈ S_j^{(i)}, the active set remains unchanged; (ii) if Ô_j^{(i)} ∉ S_j^{(i)} and |S_j^{(i)}| < K/N + 2, then S_{j+1}^{(i)} := S_j^{(i)} ∪ {Ô_j^{(i)}}; and (iii) if Ô_j^{(i)} ∉ S_j^{(i)} and |S_j^{(i)}| = K/N + 2, then S_{j+1}^{(i)} := S̄^{(i)} ∪ {B_j^{(i)}} ∪ {Ô_j^{(i)}}, where B_j^{(i)} = argmax_{k∈S_j^{(i)} \ S̄^{(i)}} ||θ̂_{k,j}^{(i)}||₂. Observe that S̄^{(i)} ⊆ S_j^{(i)} for all j ∈ N, and thus S̄^{(i)} is called sticky. Moreover, the update step, along with the initialization S_1^{(i)} = S̄^{(i)}, also ensures that |S_j^{(i)}| ≤ K/N + 2 for all phases j ∈ N. Please see Algorithm 1 for the pseudo-code of the SubGoss Algorithm. Remarks: 1.
Until phase τ₀ (defined in Theorem 1), the duration of the explore subroutine exceeds b^{j−1}. In order to make less noisy subspace recommendations until phase τ₀, the exploration is equally distributed across all the subspaces in S_j^{(i)}. 2. The confidence set C_{k,t}^{(i)} is formally described in Theorem 7 in the appendix. The confidence set is an ellipsoid in the subspace on which Projected LinUCB is played. It is constructed such that: (a) it contains θ* with high probability, and (b) it shrinks in size as the correct sequence of action vectors is played over time.

Algorithm 1 SubGoss Algorithm (at Agent i)
1: Input: K disjoint m-dimensional subspaces {U_l}_{l=1}^K, b > 1, regularization parameter λ > 0, δ ∈ (0, 1).
2: Initialization: S̄^{(i)}, S_1^{(i)} (Equation (1)), j ← 1.
3: while phase j ≥ 1 do
4:   Explore: For each k ∈ S_j^{(i)}, play the orthonormal basis vectors of the subspace ID k in a round-robin fashion for 8m⌈b^{(j−1)/2}⌉ times.
5:   Calculate the least squares estimate θ̂_{k,j}^{(i)} for each k ∈ S_j^{(i)} after running the Explore, using only its explore samples collected thus far.
6:   At the end of phase j, sample an agent ag ~ G(i, ·) from the gossip matrix for receiving a subspace recommendation.
     O_j^{(i)} ← argmax_{k∈S_j^{(i)}} ||θ̂_{k,j}^{(i)}||₂.
9:   Get the subspace recommendation Ô_j^{(i)} ← argmax_{k∈S_j^{(ag)}} ||θ̂_{k,j}^{(ag)}||₂.
10:  Active set update for the next phase:
11:  if Ô_j^{(i)} ∈ S_j^{(i)} then
12:    S_{j+1}^{(i)} ← S_j^{(i)}.
13:  else
14:    if |S_j^{(i)}| < K/N + 2 then
15:      S_{j+1}^{(i)} ← S_j^{(i)} ∪ {Ô_j^{(i)}}.
16:    else if |S_j^{(i)}| = K/N + 2 then
17:      B_j^{(i)} ← argmax_{k∈S_j^{(i)} \ S̄^{(i)}} ||θ̂_{k,j}^{(i)}||₂.
18:      S_{j+1}^{(i)} ← S̄^{(i)} ∪ {B_j^{(i)}} ∪ {Ô_j^{(i)}}.
19:    end if
20:  end if
21:  j ← j + 1.
22: end while

3. Choice of a_t^{(i)} and its computational complexity while playing Projected LinUCB: Analogous to the upper confidence bound (UCB) for classical K-armed bandits, an agent playing Projected LinUCB calculates an upper bound on the reward for every a ∈ A_t and plays the action vector that maximizes this upper bound.
This can be observed for the case when the action set A_t is finite, as follows: for a fixed a ∈ A_t,

⟨θ, P_k a⟩ = ⟨θ − θ̂_{k,t−1}^{(i)}, P_k a⟩ + ⟨θ̂_{k,t−1}^{(i)}, P_k a⟩
           ≤ ||θ − θ̂_{k,t−1}^{(i)}||_{V̄_{k,t}(λ)^{(i)}} · ||P_k a||_{(V̄_{k,t}(λ)^{(i)})†} + ⟨θ̂_{k,t−1}^{(i)}, P_k a⟩
           ≤ β_{t,δ} ||P_k a||_{(V̄_{k,t}(λ)^{(i)})†} + ⟨θ̂_{k,t−1}^{(i)}, P_k a⟩,

where the first inequality is obtained by applying Hölder's inequality and the second inequality follows from the definition of C_{k,t}^{(i)}. Hence,

a_t^{(i)} = argmax_{a∈A_t} ⟨θ̂_{k,t−1}^{(i)}, P_k a⟩ + β_{t,δ} ||P_k a||_{(V̄_{k,t}(λ)^{(i)})†}.   (2)

The first term is the empirical estimate of the reward of the action a, and the second term corresponds to the deviation around that estimate, similar to the UCB value in K-armed bandits. The computational complexity of determining a_t^{(i)} depends on the computational complexity of calculating (V̄_{k,t}(λ)^{(i)})†, θ̂_{k,t−1}^{(i)}, and the inner products in equation (2).
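A hedged numerical sketch of the UCB index in equation (2) follows. The history of actions and the noise model are made up for illustration, and the ridge estimate is computed in subspace coordinates (a convenient way to realize the subspace-restricted ridge objective, not necessarily the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(4)
d, m, lam, delta = 6, 2, 1.0, 0.01

# One m-dimensional subspace span(U_k) with theta* inside it
Uk = np.linalg.qr(rng.standard_normal((d, d)))[0][:, :m]
Pk = Uk @ Uk.T
theta_star = Uk @ np.array([0.5, -0.2])

# History: n past actions (columns of A_hist), already projected, plus rewards
n = 200
A_hist = Pk @ rng.standard_normal((d, n))
r = A_hist.T @ theta_star + 0.05 * rng.standard_normal(n)

# V_k,t(lambda) = P_k (A A^T + lam I) P_k and its pseudo-inverse (rank m)
V_bar = Pk @ (A_hist @ A_hist.T + lam * np.eye(d)) @ Pk
V_dag = np.linalg.pinv(V_bar)

# Subspace-restricted ridge estimate, solved in U_k coordinates
B = Uk.T @ A_hist
theta_hat = Uk @ np.linalg.solve(B @ B.T + lam * np.eye(m), B @ r)

beta = np.sqrt(lam) + np.sqrt(2 * np.log(1 / delta) + m * np.log(1 + n / (lam * m)))

def ucb_index(a):
    """Eq. (2): <theta_hat, P_k a> + beta_{t,delta} * ||P_k a||_{V_bar^dagger}."""
    pa = Pk @ a
    return theta_hat @ pa + beta * np.sqrt(max(pa @ V_dag @ pa, 0.0))

actions = [rng.standard_normal(d) for _ in range(5)]
a_t = max(actions, key=ucb_index)      # play the index-maximizing action
assert np.linalg.norm(theta_hat - theta_star) < 0.1
```

Because V̄ has rank m, the exploration bonus ||P_k a||_{V̄†} is measured through the pseudo-inverse, so only the component of a inside the subspace contributes.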
Then, the expected cumulative regret of any agent i ∈ [N ], after time T ∈ N is bounded by: E[R (i) T ] ≤ 8mT β 2 T log 1 + T mλ + 2 Projected LinUCB Regret + 2g(b) b 2τ 0 + 48b 3 log b . m 4 N ∆ 6 + bE[b 2τ (G) spr ] Constant Cost of Pairwise Communications + 16m K N + 2 log b (h b,T ) + 16m K N + 2 h b,T − 1 √ b − 1 Cost of subspace exploration ,(3) where β T = √ λ + 2 log T + m log 1 + T −1 λm , g(b) = 1 b−1 + 1 log b , h b,T = b(1 + (T − 1)(b − 1)), and τ 0 = min j ∈ N : ∀j ≥ j, b j −1 ≥ 8m K N + 2 b j −1 2 , Remarks: 1. Proposition 5 shows that τ 0 ≤ 2 log b 16m K N + 2 + 1 and thus, the term b 2τ 0 in the constant cost of pairwise communications scales as O m. K , because it has to search through all the K subspaces to find the subspace containing θ * . We express this result formally in Theorem 6, which is given in Appendix C. 3. Setting δ = 1 T in Theorem 1 requires the knowledge of time horizon in SubGoss Algorithm to achieve the corresponding regret guarantee. However, this is not a problem, as a fixed value of the confidence parameter δ ∈ (0, 1) achieves the same regret scaling as in Theorem 1 with high probability, which can be proved in a similar manner. Thus, the insights that can be obtained from our results are unaffected by the knowledge of time horizon. 4. Subspace recommendation quality vs. network spread -Observe that b > 1 is an input to the algorithm, where agents communicate for the l th time after playing b l−1 number of times since the last communication. Thus, increasing b will decrease the total number of communications between agents. Theorem 1 shows that, there exists an optimal b * > 1, such that b * = arg min b>1 E[R (i) T ]. This can be seen by observing that as b decreases towards 1, the time between two communication instants reduces. However, each communication is based on fewer samples and thus, subspace recommendations are noisy. 
On the other hand, as b becomes large, each recommendation is based on a large number of samples and is thus less noisy; the number of communications, however, is much lower, leading to a long time for the best subspace to spread. The optimal b* trades off these two competing effects.

Impact of Network Structure on Regret

We can obtain the dependence of the regret bound on network-related parameters by expressing the term E[b^{2τ^{(G)}_spr}] in terms of the conductance φ of the gossip matrix (graph) G. In order to do so, we use a result from [13], Corollary 17, which we reproduce here: for a d-regular graph with adjacency matrix A_G, conductance φ, and gossip matrix G = d^{−1}A_G,

E[b^{2τ^{(G)}_spr}] ≤ b^{2C log N / φ}   for all b ≤ exp(φ/C),

where C is a universal constant. Using the above result, we now consider an illustrative example in which the agents are connected by a complete graph, i.e., G(i, j) = 1/(N − 1) for j ≠ i and 0 otherwise. In this case, it is easy to see that for all N and all b ≤ exp(N/(2(N − 1)C)) (where C is a universal constant), E[b^{2τ^{(G)}_spr}] ≤ αN^{2(log₂ b + log b)} for some constant α > 0 independent of N (see Corollary 16 of [13], where we substitute the conductance φ = N/(2(N − 1)) of the complete graph). This is because, for the complete graph, τ^{(G)}_spr ≤ log₂ N + log N with high probability. Corollary 2 quantifies the impact of the underlying network on the regret scaling.

Corollary 2. Suppose the agents are connected by a complete graph and b = min{ exp((1/2)·log 2/(1 + log 2)), exp(N/(2(N − 1)C)) }, so that log₂ b + log b ≤ 1/2. With the same assumptions on λ and δ as in Theorem 1, the regret of any agent i ∈ [N] after playing the SubGoss Algorithm for T time steps satisfies

E[R^{(i)}_T] ≤ O(m√T log T)   [Projected LinUCB regret]
+ O(⌈K/N⌉·m√T)   [cost of subspace exploration]
+ O(N)   [spreading term E[b^{2τ^{(G)}_spr}]]
+ O((m⌈K/N⌉)⁴) + O(m⁴N/Δ⁶),   (4)

where C > 0 is a universal constant. In (4), the O(·) notation only hides input constants and universal constants.
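As a sanity check on the spreading-time argument above, the pull-based rumor process is straightforward to simulate. The sketch below (our own, not from the paper; complete graph, uniform contacts) empirically illustrates the τ^{(G)}_spr ≲ log₂N + log N behaviour quoted in the text.

```python
import math
import random

def spreading_time(n, seed=0):
    """Pull-based rumor spreading on the complete graph: each round,
    every uninformed agent contacts a uniformly random other agent
    and learns the rumor if that contact already knows it."""
    rng = random.Random(seed)
    informed = [False] * n
    informed[0] = True
    t = 0
    while not all(informed):
        t += 1
        snapshot = informed[:]              # all calls happen simultaneously
        for i in range(n):
            if not informed[i]:
                j = rng.randrange(n - 1)
                j = j if j < i else j + 1   # uniform contact != i
                if snapshot[j]:
                    informed[i] = True
    return t

n = 256
times = [spreading_time(n, seed=s) for s in range(20)]
bound = math.log2(n) + math.log(n)  # high-probability bound quoted in the text
print(max(times), bound)
```

On a complete graph the spreading time stays close to the log₂N + log N bound across runs, which is what makes the E[b^{2τ_spr}] term a constant-order cost in Corollary 2.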
It is evident from (4) that the network structure does not affect the time scaling of the regret.

Proof Sketch

We provide the proof of Theorem 1 in Appendix B; here we summarize its salient ideas. Similar to the phenomenon in unstructured bandits [13], we prove in Proposition 1 that in our linear bandit setup, after a freezing phase τ, all agents have the correct subspace containing θ*. Consequently, for all phases j > τ, all agents play Projected LinUCB [21] on the correct subspace in the exploit time instants and recommend it at the end of the phase. Therefore, the set of subspaces every agent holds does not change after phase τ, and the regret after phase τ can be decomposed into regret due to pure exploration and regret due to Projected LinUCB (Proposition 2). The technical novelty of our proof lies in bounding the regret up to phase τ, i.e., E[τ + (b^τ − 1)/(b − 1)] (Proposition 3), and in particular showing it to be finite. This follows from two key observations arising from pure exploration in the explore time steps. First, for any agent i ∈ [N] and subspace k ∈ S^{(i)}_j, the estimate θ̂^{(i)}_{k,j} of θ*, formed using only its explore samples up to and including phase j, concentrates to P_kθ* in ℓ₂ norm (Lemma 4). Therefore, for any subspace ID k, ‖θ̂^{(i)}_{k,j}‖₂ concentrates to ‖P_kθ*‖₂ and eventually the correct subspace span(U_1) achieves the largest value of ‖θ̂^{(i)}_{k,j}‖₂ among all subspaces, if present in the active set. Subsequently, we use this fact in Lemma 5 to show that if an agent has the correct subspace containing θ* in a phase, the probability that it is not recommended, and hence dropped, at the end of the phase is small. Combining these two observations, we establish that after a random phase, denoted by τ_stab in Appendix B and satisfying E[τ_stab] < ∞, agents never recommend incorrectly at the end of a phase and thus play Projected LinUCB on the correct subspace in the exploit time instants of a phase. To conclude, after the random phase τ_stab, the spreading time can be coupled with that of standard rumor spreading [14], since once an agent learns the correct subspace, it never drops it.
This final part is similar to the argument for unstructured bandits in [13], giving us the desired bound on E[τ + (b^τ − 1)/(b − 1)].

Remark (Freezing time): The freezing phase τ is a quantity that only shows up in the analysis; it is not part of the algorithm. In fact, the algorithm needs all agents to explore and communicate in all phases indefinitely, because τ is a sample-path dependent quantity. Indeed, any bandit algorithm that achieves sub-linear cumulative regret in the stochastic setting inherently has such a freezing time with finite expectation (including the classical single-agent K-armed bandit); beyond this time, the best arm is identified with high probability. This can be shown by noting that sub-linear regret implies that the probability that the best arm is not played at time t decreases to 0 as t goes to infinity. The finite freezing time then follows from Hoeffding's inequality and the Borel-Cantelli lemma. However, this is not useful in the algorithm and serves only as a proof technique. Formally, the random time τ is not a stopping time and cannot be determined in an online fashion. Moreover, despite the existence of such a freezing time, the lower bounds for regret increase with the time horizon, showing that infinite exploration is necessary.

Remark (Technical differences w.r.t. [13]): The algorithms in [13] and the SubGoss Algorithm appear similar because of the correspondence between the subspaces in our setup and the arms in a K-armed bandit. However, this correspondence is superficial: unlike an arm, a subspace represents a continuum of actions rather than a single action. In order to quantify the reward corresponding to a subspace, one has to form an estimate of θ* in that subspace by playing a sequence of action vectors spanning that subspace.
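To make the phased structure concrete, the sketch below (our own illustration; the explore-slot count follows the reduced constant used later in the experiments section) computes, for a horizon T, the phase lengths ⌈b^{j−1}⌉, the split into explore and exploit slots, and the resulting number of pairwise communications, which grows only logarithmically in T.

```python
import math

def phase_schedule(T, b=2.0, m=5):
    """Phase j has length ceil(b^(j-1)); the first slots are explore
    slots (here ceil(m * b^((j-2)/2)), the reduced count used in the
    experiments) and the rest are exploit (Projected LinUCB) slots.
    One pairwise communication happens at the end of each phase."""
    phases, t, j = [], 0, 0
    while t < T:
        j += 1
        length = math.ceil(b ** (j - 1))
        explore = min(length, math.ceil(m * b ** ((j - 2) / 2)))
        phases.append((length, explore, length - explore))
        t += length
    return phases

T = 10_000
sched = phase_schedule(T)
print(len(sched))  # number of phases = number of communications, O(log_b T)
```

Early phases are all explore; as j grows, the exploit part dominates each phase, which is what keeps the exploration cost at a lower order in T while the number of communications stays logarithmic.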
Furthermore, any given phase in SubGoss bears a superficial resemblance to the explore-then-commit (ETC) algorithm: the explore part of the phase identifies the subspace to commit to in the exploit part, analogous to best arm identification in the standard K-armed bandit. However, the ETC algorithm requires knowledge of a lower bound on the arm mean gaps as an input. In our model, this translates to agents needing knowledge of the distances between subspaces (denoted by Δ_k for all k ≠ 1), which in turn requires knowledge of θ*. We circumvent this through a phased approach with exponentially increasing phase lengths, where each phase has an explore part and a commit part. The exponentially increasing phase lengths ensure that: (i) the probability of picking a subspace not containing θ* decreases with every phase; (ii) the increasing duration of Projected LinUCB within a phase, as the phases progress, keeps the cumulative regret small; and (iii) no knowledge of the gap between subspaces is needed. The consequence of not knowing θ* is that agents need to continually explore in the explore part of every phase, as opposed to exploring only once in the beginning.

Remark (Discussion on a lower bound): We provide a brief discussion of the fundamental limits of our model (in terms of cumulative regret) to evaluate the effectiveness of the SubGoss Algorithm. We conjecture that a regret of Ω(m√T) is unavoidable. This claim can be argued as follows: in our model, an agent can exchange at most log₂ K + 1 bits each time it chooses to communicate. Consider the scenario in which, whenever an agent pulls a subspace recommendation from another agent based on the gossip matrix G (a subspace ID in {1, ..., K} can be perfectly described by log₂ K + 1 bits), it always receives the ID of the correct subspace containing θ*. In that case, the agent does not incur any regret in finding the correct subspace.
However, given that no sharing of samples (action vectors and corresponding rewards) is possible between the agents, an agent will still have to search for θ* in the correct subspace by itself. From [22], Chapter 24, we know that finding θ* in R^d without any side information results in Ω(d√T) regret. Given that the subspaces are m-dimensional, we can replace d with m in the previous statement and conclude that finding θ* in the correct subspace incurs a regret of Ω(m√T). However, formalizing this argument requires surmounting some technical challenges. First, we need to precisely define the space of allowed communication policies without prescribing the content of the messages, for example, those that communicate at most a fixed number of bits at each time instant and a total number of bits that scales as the logarithm of the time horizon. Once this is done, we need to establish that no communication policy under this constraint can encode knowledge of the true underlying θ* to small enough precision, and show that communicating information other than subspace indices does not yield a regret reduction. While the preceding paragraph provides a plausible intuition for the Ω(m√T) lower bound, a detailed argument is left to future work.

Benefit of Collaboration in High Dimensions

In this section, we illustrate how collaboration aids in reducing regret for each agent in the high-dimensional setting. We quantify this by computing the ratio of the regret upper bound achieved by the SubGoss Algorithm without collaboration to that achieved with collaboration for any agent, denoted by r_C(T). The high-dimensional setting corresponds to large d, with m a constant and K and N scaling linearly in d (a system with a large number of agents).

Corollary 3. Consider a high-dimensional system where N agents are connected by a complete graph with K = N = d/m, where d is a multiple of m and m is a constant. Assume that d ≥ 3m. With the choice of b as in Corollary 2, (a) for any agent i ∈ [N], the regret with collaboration scales as O(m√T log T); (b) r_C(T) = r_S(T)/r_M(T), where

r_S(T) = √(8mTβ²_T log(1 + T/(mλ))) + 2 + 2g(b)·( b(16d)² + (8b²/log b)·(m²/Δ²) ) + 16d·⌈log_b(h_{b,T})⌉ + 16d·(√(h_{b,T}) − 1)/(√b − 1),

r_M(T) = √(8mTβ²_T log(1 + T/(mλ))) + 2 + 2g(b)·( b²(48m)⁴ + (48b³/log b)·(m³d/Δ⁶) + αd/m ) + 48m·⌈log_b(h_{b,T})⌉ + 48m·(√(h_{b,T}) − 1)/(√b − 1).

Two remarks are now in order.

1. Matching an oracle's regret rate, asymptotically: Corollary 3 shows the power of collaboration in a large multi-agent system, as the regret scaling for any agent i ∈ [N] matches that of a genie that is already aware of the subspace containing θ* and can play Projected LinUCB on that subspace [21]. This demonstrates that the cost of the subspace search can be amortized across agents and only contributes a lower-order term to the regret, despite agents communicating infrequently (a total of O(log T) pairwise communications by every agent) and exchanging a limited number of bits in each communication (no sample sharing). Furthermore, the discussion in Section 4 implies that, in the absence of sample sharing between agents, an agent will incur Ω(m√T) regret for finding θ* in the correct subspace. Thus, the SubGoss Algorithm is near-optimal even in high-dimensional settings with a large number of subspaces and agents.

2. Finite-time gains due to faster search of subspaces with collaboration: The observations following Corollary 3 show that even a single agent running SubGoss without communications is able to utilize the side information and incur lower regret (O(m√T log T + d√T)) compared to an agent running OFUL [1] without any side information. However, the time taken by the single agent running SubGoss to reap the benefits of the subspace side information is very large in high-dimensional settings (T = Ω(e^d)). In contrast, the ability of a multi-agent system to learn the right subspace faster is what leads to the large collaborative gain of r_C(T) = Ω(d/log d) by time T = Ω(d). These gains are also observed empirically in Figure 1.
These gains are more pronounced and persist for long durations in settings with large d, which is typical of many modern applications.

Numerical Results

We evaluate the SubGoss Algorithm empirically in synthetic simulations. We show the cumulative regret (averaged across all agents) over 30 random runs of the algorithm, with 95% confidence intervals. We compare its performance against two benchmarks: the SubGoss Algorithm with no collaboration (i.e., a single agent playing SubGoss) and a single agent playing the OFUL (classical LinUCB) algorithm of [1]. In this section, the number of times a subspace span(U_k) (where k ∈ S^{(i)}_j) is explored during the Explore subroutine in phase j is set to ⌈m·b^{(j−2)/2}⌉, as the constants in the SubGoss Algorithm (Algorithm 1) arise from somewhat loose tail bounds. In our experiments, the agents are connected through a complete graph. Each m-dimensional subspace is the orthonormal matrix obtained from the SVD of a random d × m matrix with i.i.d. standard normal entries. The action set A consists of 5d i.i.d. Gaussian vectors on the surface of the unit ℓ₂ ball, along with the orthonormal basis vectors of each of the K subspaces. The vector θ* is the projection of a standard Gaussian vector onto subspace 1 (the true subspace). We set b = 2 and λ = 1 in the simulations. Fig. 1 evaluates the performance of the SubGoss Algorithm for different values of the problem parameters (d, m, K, N).

Insights from numerical results: The simulations confirm several insights predicted by our theory. First, we see that SubGoss yields lower regret than OFUL in the single-agent case, demonstrating that SubGoss can effectively leverage the side information provided through the subspaces. Second, we observe the collaboration gains, where any agent in the multi-agent setting incurs far smaller regret compared to a single agent without collaboration.
Finally, we also observe that as the number of agents increases, the regret of every agent decreases. These collaborative gains follow because each agent has to search through a smaller set of subspaces to find the true subspace.

Conclusions and Open Problems

We studied a multi-agent linear bandit problem with side information (in the form of disjoint m-dimensional subspaces), where only one of the subspaces contains the unknown parameter θ* ∈ R^d, but agents are unaware of which subspace contains it. We proposed a novel decentralized algorithm in which agents collaborate by sending recommendations through pairwise gossip communications across a communication graph connecting them, so as to minimize their individual cumulative regret. We demonstrated that distributing the search for the subspace containing θ* across the agents, and learning the unknown vector in the corresponding low-dimensional subspace, results in a much smaller per-agent regret compared to the case when agents do not communicate. However, the paper leaves open some important questions. The paper assumed that all agents have exact knowledge of the subspaces. In several practical applications, however, the subspaces are estimated from historical data and as such are at best known noisily. Developing algorithms that can reap the benefits of collaboration while being robust to such mis-specification is an interesting direction for future work. Another open problem is to establish lower bounds on regret under our model of information sharing. This is non-trivial to formulate, since the communication budget needs to be accounted for alongside the regret. To the best of our knowledge, lower bounds involving both communication and regret minimization have not been established even for the simpler unstructured bandit case.
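For reproducibility, the synthetic setup of the numerical section (random m-dimensional subspaces from the SVD of Gaussian matrices, θ* projected onto the first subspace, unit-norm actions) can be generated as follows; the function and variable names are our own.

```python
import numpy as np

def make_instance(d=20, m=4, K=5, seed=0):
    rng = np.random.default_rng(seed)
    # Each subspace: orthonormal basis from the SVD of a random d x m Gaussian matrix.
    subspaces = []
    for _ in range(K):
        G = rng.standard_normal((d, m))
        U, _, _ = np.linalg.svd(G, full_matrices=False)
        subspaces.append(U)                        # d x m, orthonormal columns
    # theta*: a standard Gaussian vector projected onto subspace 1 (the true subspace).
    U1 = subspaces[0]
    theta_star = U1 @ (U1.T @ rng.standard_normal(d))
    # Action set: 5d i.i.d. Gaussian vectors normalized onto the unit l2 sphere,
    # plus the orthonormal basis vectors of every subspace.
    A = rng.standard_normal((5 * d, d))
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    actions = np.vstack([A] + [U.T for U in subspaces])
    return subspaces, theta_star, actions

subspaces, theta_star, actions = make_instance()
```

With the defaults above the action set has 5d + Km rows, all of norm at most one, matching the boundedness assumption ‖a‖₂ ≤ 1 used in the analysis.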
Appendix A Technical Assumption for Theorem 1

Building on the communication constraints considered in [13], we make the following mild assumption [13]: the gossip matrix G is irreducible, i.e., for any i ≠ j ∈ [N], there exist 2 ≤ l ≤ N and k_1, ..., k_l ∈ [N], with k_1 = i and k_l = j, such that the product G(k_1, k_2) ··· G(k_{l−1}, k_l) > 0. In words, the communication graph among the agents is connected [13]. This assumption is needed because, if the communication graph among the agents is not connected, the setup becomes degenerate, as there exists at least one pair of agents that cannot exchange information. However, the practical insights that can be obtained from our results are not affected by this assumption.

Appendix B Proof of Theorem 1

In this section and subsequent sections, we assume agents know a parameter S such that ‖θ*‖₂ ≤ S. In the main paper, we set S = 1 for ease of exposition. Before going through the proof, we first set some definitions and notation.

B.1 Definitions and Notations

We adapt the proof ideas developed in [13] for the unstructured bandit case. For any phase j and agent i ∈ [N], define χ^{(i)}_j = 1{1 ∈ S^{(i)}_j, O^{(i)}_j ≠ 1}, which indicates whether agent i, if it has the subspace span(U_1), does not recommend it at the end of the phase. Similar to [13], we provide the definitions of certain random times that aid the analysis:

τ^{(i)}_stab = inf{ j ≥ τ₀ : ∀j′ ≥ j, χ^{(i)}_{j′} = 0 },   τ_stab = max_{i∈[N]} τ^{(i)}_stab,
τ^{(i)}_spr = inf{ j ≥ τ_stab : 1 ∈ S^{(i)}_j } − τ_stab,   τ_spr = max_{i∈[N]} τ^{(i)}_spr,
τ = τ_stab + τ_spr.

Here, τ^{(i)}_stab is the earliest phase such that, in every phase following it in which agent i has the subspace span(U_1), it recommends span(U_1). The number of phases it takes after τ_stab for agent i to have the subspace span(U_1) in its playing set is denoted by τ^{(i)}_spr. The following proposition shows that the system is frozen after phase τ, i.e.
after phase τ, the sets of subspaces of all agents remain fixed.

Proposition 1. For all agents i ∈ [N], we have almost surely: S^{(i)}_j = S^{(i)}_τ for all j ≥ τ, and O^{(i)}_l = 1 for all l ≥ τ.

Proof. For any agent i ∈ [N] and any phase j ≥ τ, we have

χ^{(i)}_j = 0,   (5)

as τ ≥ τ^{(i)}_stab. Moreover, as τ ≥ τ_stab + τ^{(i)}_spr, we know that

1 ∈ S^{(i)}_j.   (6)

Equations (5) and (6) imply that O^{(i)}_j = 1 for all j ≥ τ, which proves the claim.

B.2 Intermediate Results

Before stating and proving the intermediate results, we highlight the key pieces needed to prove Theorem 1. We already showed in Proposition 1 that after the freezing phase τ, all agents have the correct subspace containing θ* and recommend it henceforth. Thus, the expected cumulative regret can be decomposed into two parts: the regret up to phase τ and the regret after phase τ. The expected cumulative regret incurred up to phase τ is a constant independent of the time horizon (Proposition 3). This is a consequence of the following important observations resulting from pure exploration in the explore time steps:

• For any agent i ∈ [N] and subspace k ∈ S^{(i)}_j, the estimate θ̂^{(i)}_{k,j} concentrates to P_kθ* in ℓ₂ norm (Lemma 4).

• Subsequently, we show that the probability that an agent does not recommend, and thus drops, the correct subspace containing θ* at the end of a phase is small (Lemma 5).

The above observations imply that after a (random) phase, denoted by τ_stab ≤ τ, agents always recommend (and never drop) the correct subspace. After phase τ_stab, we stochastically dominate (in Proposition 4) the spreading time of the correct subspace by that of a standard rumor spreading process [14]. Hence, the expected cumulative regret up to phase τ is bounded via the total number of time steps taken to reach phase τ_stab and the additional number of phases taken to spread the correct subspace.
Post phase τ, the active set of subspaces maintained by the agents remains unchanged (as deduced in Proposition 1), and thus the regret can be decomposed into the sum of the regret due to pure exploration and the regret due to Projected LinUCB. The regret due to Projected LinUCB is adapted from the analysis of a similar algorithm in [21]. The following intermediate results make the intuition behind the proof of Theorem 1 precise.

Proposition 2. The regret of any agent i ∈ [N] after playing for T steps is bounded by

E[R^{(i)}_T] ≤ 2S·E[τ + (b^τ − 1)/(b − 1)] + E[R_proj,T] + 16mS(⌈K/N⌉ + 2)·⌈log_b(h_{b,T})⌉ + 16mS(⌈K/N⌉ + 2)·(√(h_{b,T}) − 1)/(√b − 1),

where h_{b,T} is defined in Theorem 1.

Proof. We first show that the instantaneous regret satisfies w^{(i)}_t ≤ 2S for all i ∈ [N]. Indeed, for any a ∈ R^d with ‖a‖₂ ≤ 1, |⟨θ*, a⟩| ≤ ‖θ*‖₂·‖a‖₂ ≤ S by the Cauchy-Schwarz inequality; therefore, w^{(i)}_t ≤ 2S for all t. Let l ∈ N be such that the SubGoss Algorithm has been played for t steps by the end of phase l; then t and l are related as follows:

t = Σ_{p=1}^{l} ⌈b^{p−1}⌉.   (7)

Therefore, (b^l − 1)/(b − 1) ≤ t ≤ l + (b^l − 1)/(b − 1). Assume that the SubGoss Algorithm is played for T steps such that step T occurs in some phase E, i.e., (b^{E−1} − 1)/(b − 1) + 1 ≤ T ≤ E + (b^E − 1)/(b − 1); it follows that E ≤ ⌈log_b(b(1 + (T − 1)(b − 1)))⌉ = ⌈log_b(h_{b,T})⌉. Let e_j = Σ_{l=1}^{j} ⌈b^{l−1}⌉ denote the number of time steps played by the end of phase j, and let Reg^{(i)}_j denote the regret incurred by agent i in phase j, i.e., Reg^{(i)}_j = Σ_{s=1}^{⌈b^{j−1}⌉} w^{(i)}_{e_{j−1}+s}. From the definition of the regret R^{(i)}_T,

R^{(i)}_T = Σ_{t=1}^{T} w^{(i)}_t ≤ Σ_{j=1}^{E} Reg^{(i)}_j = Σ_{j=1}^{τ} Reg^{(i)}_j + Σ_{j=τ+1}^{E} Reg^{(i)}_j.   (8)

We now bound each of the terms in (8) separately.
The first term τ j=1 Reg (i) j can be bounded as follows: τ j=1 Reg (i) j = τ j=1 b j−1 s=1 w (i) e j−1 +s ≤ 2S τ j=1 b j−1 s=1 1 = 2S τ j=1 b j−1 ≤ 2S τ + b τ − 1 b − 1 ,(9) where the second step follows from w t ≤ 2S for all t ∈ N and the last step follows from the fact that x ≤ x + 1 for all x ∈ R. We bound the second term E j=τ +1 Reg (i) j in the following steps: let d (i) j = 8m|S (i) τ | b j−1 2 and R proj,T denote the regret incurred by playing Projected LinUCB on the subspace containing θ * after the freezing phase τ , i.e., R proj, T = E j=τ +1 b j−1 d (i) j +1 w (i) e j−1 +s . Then, we have E j=τ +1 Reg (i) j = E j=τ +1 b j−1 s=1 w (i) e j−1 +s = E j=τ +1 d (i) j s=1 w (i) e j−1 +s + E j=τ +1 b j−1 d (i) j +1 w (i) e j−1 +s (a) ≤ 2S E j=1 8m|S (i) τ | b j−1 2 s=1 1 + R proj,T (b) ≤ 16mS K N + 2 log b (h b,T ) + 16mS K N + 2 h b,T − 1 √ b − 1 + R proj,T .(10) Recall that for any agent i, SubGoss Algorithm explores in the first d (i) j = 8m|S (i) τ | b j−1 2 time slots of phase j by playing the orthonormal basis vectors of each of the subspaces in the playing set in a round robin fashion. Therefore, in step (a), we bound the total number of explore steps from phase j > τ (first term) by the bound on total number of explore steps from t = 1 to T . In the remaining time slots of phases j ∈ N, agents play Projected LinUCB in the subspace span(U O (i) j ) and O (i) j = 1 for all j > τ . Thus, the second term in step (a) is bounded above by the regret incurred by playing Projected LinUCB in the subspace span(U 1 ) for T time steps (as the number of times an agent will play Projected LinUCB is less than T ). Step (b) follows from the discussion that if time step T occurs in some phase E then E ≤ log b (h b,T ), |S (i) j | ≤ K N + 2 for all j ∈ N, and x ≤ x + 1 for all x ∈ R. Substituting (9) and (10) in (8), followed by taking expectation on both sides completes the proof of Proposition 2. 
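The least-squares structure exploited in the concentration arguments that follow can be checked numerically: playing the orthonormal basis of span(U_k) in round robin and regressing the rewards yields an estimate equal to P_kθ* plus subspace-projected averaged noise, so the error shrinks as the number of explore samples grows. A minimal sketch of this check (our own; unit Gaussian noise):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 12, 3
# Orthonormal basis of a random m-dimensional subspace span(U_k).
U, _, _ = np.linalg.svd(rng.standard_normal((d, m)), full_matrices=False)
theta_star = rng.standard_normal(d)
P_theta = U @ (U.T @ theta_star)  # P_k theta*

def explore_estimate(n):
    """Play the m basis vectors in round robin for n total slots and
    solve least squares; the minimum-norm solution lies in span(U_k)
    and equals P_k theta* plus averaged projected noise."""
    A = np.tile(U.T, (n // m + 1, 1))[:n]         # n x d design, rows = basis vectors
    r = A @ theta_star + rng.standard_normal(n)   # rewards with unit Gaussian noise
    theta_hat, *_ = np.linalg.lstsq(A, r, rcond=None)
    return theta_hat

err_small = np.linalg.norm(explore_estimate(30) - P_theta)
err_large = np.linalg.norm(explore_estimate(30_000) - P_theta)
print(err_small, err_large)
```

The error at 30,000 samples is far below the error at 30 samples, illustrating the ℓ₂ concentration of θ̂^{(i)}_{k,j} to P_kθ* that Lemma 4 quantifies.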
The following lemma bounds the probability that for any subspace k ∈ S (i) j , θ (i) k,j deviates from P k θ * in l 2 norm after the explore time slots in phase j, which will eventually help us obtain a bound on probability of picking the wrong subspace. Lemma 4. For any agent i ∈ [N ], phase j ≥ τ 0 , and k ∈ S (i) j , we have P θ (i) k,j − P k θ * || 2 > ≤ 2m exp − 4 2 m b j−1 2 . where > 0. Proof. We have for any k ∈ S (i) j , θ (i) k,j = arg min θ∈R d ( A (i) k, n (i) k,j ) T θ − r (i) k, n (i) k,j 2 = arg min θ∈R d (P k A (i) k, n (i) k,j ) T θ − r (i) k, n (i) k,j 2 , where the last step follows from the fact that during the explore time slots, orthonormal basis vectors for each of the subspaces in S (i) j are played in a round robin fashion. By squaring the objective function in the last step, taking the gradient and setting it to all zeroes vector, we get θ (i) k,j = (P k A (i) k, n (i) k,j A (i) T k, n (i) k,j P k ) † (P k A (i) k, n (i) k,j A (i) T k, n (i) k,j P k )θ * + (P k A (i) k, n (i) k,j A (i) T k, n (i) k,j P k ) † P k A (i) k, n (i) k,j η (i) k, n (i) k,j ,(11) where M † denotes the Moore-Penrose pseudoinverse of the matrix M . By substituting P k = U k U T k , we get P k A (i) k, n (i) k,j A (i) T k, n (i) k,j P k = U k Σ (i) k, n (i) k,j U T k , where Σ (i) k, n (i) k,j = (U T k A (i) k, n (i) k,j )(U T k A (i) k, n (i) k,j ) T . Notice that Σ (i) k, n (i) k,j is a symmetric, full-rank m × m matrix, as A (i) k, n (i) k,j is a matrix whose columns are the orthonormal basis vectors of the subspace span(U k ) in a round robin fashion and n (i) k,j > m. Therefore, (P k A (i) k, n (i) k,j A (i) T k, n (i) k,j P k ) † = U k ( Σ (i) k, n (i) k,j ) −1 U T k and thus, (P k A (i) k, n (i) k,j A (i) T k, n (i) k,j P k ) † (P k A (i) k, n (i) k,j A (i) T k, n (i) k,j P k ) = U k U T k = P k . 
Moreover, as A (i) k, n (i) k,j = [U k · · · U k ] d× n (i) k,j , U T k A (i) k, n (i) k,j = [I m · · · I m ] m× n (i) k,j (where I m denotes the m × m identity matrix) and thus, Σ (i) k, n (i) k,j = n (i) k,j m I m . Substituting everything above in (11), we get θ (i) k,j = P k θ * + U k v (i) n (i) k,j , where v (i) n (i) k,j is a m × 1 vector whose entries are v , and u k,n denotes the n th column of U k . Hence, θ (i) k,j − P k θ * 2 2 = v (i) T n (i) k,j U T k U k v (i) n (i) k,j = v (i) n (i) k,j 2 2 , where the above equality follows from the fact that U k is an orthonormal matrix. From the assumption that the additive noise is conditionally 1-subgaussian, we know that P |v (i) n (i) k,j ,n | > γ ≤ 2e − γ 2 n (i) k,j 2m(12) for all γ > 0. If |v (i) k,j − P k θ * 2 ≤ . Hence, P θ (i) k,j − P k θ * 2 > ≤ P ∃n ∈ [m] : |v (i) n (i) k,j ,n | > √ m = P m n=1 |v (i) n (i) k,j ,n | > √ m (a) ≤ m n=1 P |v (i) n (i) k,j ,n | > √ m (b) ≤ 2m exp   − 2 n (i) k,j 2m 2   (c) ≤ 2m exp − 2 2m 2 .8m b j−1 2 (d) ≤ 2m exp − 4 2 m b j−1 2 . Step (a) is a direct application of union bound. Step (b) uses the result from (12). In step (c), we use the fact that any subspace k ∈ S (i) j is explored for at least 8m b j−1 2 times up to and including phase j. Step (d) follows from the inequality x ≥ x for all x ∈ R, thus concluding the proof. We will now obtain a bound on probability for choosing a wrong subspace. Since θ * ∈ span(U 1 ), any subspace chosen other than span(U 1 ) will result in an error. Mathematically, it can be expressed as O (i) j = 1, which implies that at least one of the events below is true: θ (i) O (i) j ,j − P O (i) j θ * 2 > ∆ 2 , θ (i) 1,j − P 1 θ * 2 > ∆ 2 .(13) The above implication follows from the contrapositive argument by showing that that the negation of both the events in (13) must be simultaneously true so that O (i) j = 1 holds. 
The contrapositive argument can be proved as follows: observe that for any k( = 1) ∈ S (i) j , θ (i) k,j − P k θ * 2 ≤ ∆ 2 ∩ θ (i) 1,j − P 1 θ * 2 ≤ ∆ 2 implies P k θ * 2 − ∆ 2 ≤ θ (i) k,j 2 ≤ P k θ * 2 + ∆ 2 and P 1 θ * 2 − ∆ 2 ≤ θ (i) 1,j 2 ≤ P 1 θ * 2 + ∆ 2 , which follows from the triangle inequality. Now, suppose that θ implies that P k θ * 2 − ∆ 2 ≥ P 1 θ * 2 + ∆ 2 , which after rearranging the terms results in the following inequality: P 1 θ * 2 − P k θ * 2 ≤ −∆. Notice that the left hand side of this inequality is strictly positive, as P 1 θ * = θ * . This is a contradiction, as a strictly positive number cannot be less than a strictly negative number, as ∆ > 0. Therefore, our initial assertion that θ (i) k,j 2 ≥ θ (i) 1,j 2 is incorrect and the claim in (13) follows. The above discussion results in the following lemma: Lemma 5. For any agent i ∈ [N ] and phase j ≥ τ 0 , we have P 1 ∈ S (i) j , O (i) j = 1 ≤ 4m exp − ∆ 2 m b j−1 2 . Proof. We have P 1 ∈ S (i) j , O (i) j = 1 = P 1 ∈ S (i) j , θ (i) O (i) j ,j 2 ≥ θ (i) k,j 2 for all k ∈ S (i) j ≤ P 1 ∈ S (i) j , θ (i) O (i) j ,j 2 ≥ θ (i) 1,j 2 (a) ≤ P θ (i) O (i) j ,j − P O (i) j θ * 2 > ∆ 2 ∪ θ (i) 1,j − P 1 θ * 2 > ∆ 2 (b) ≤ P θ (i) O (i) j ,j − P O (i) j θ * 2 > ∆ 2 + P θ (i) 1,j − P 1 θ * 2 > ∆ 2 (c) ≤ 4m exp − ∆ 2 m b j−1 2 , where step (a) follows from the fact that { O (i) j = 1} implies at least one of the events in (13) must be true, step (b) follows from union bound, and step (c) follows from Lemma 4. This concludes the proof of Lemma 5. Proposition 3. The freezing time τ + b τ −1 b−1 is bounded by E τ + b τ − 1 b − 1 ≤ g(b) b 2τ 0 + 48b 3 log b . m 4 N ∆ 6 + bE[b 2 τspr ] , where τ 0 and g(b) are defined in Theorem 1. Proof. We follow similar steps as in the proof for Proposition 3 in [13] for establishing the above result. 
As τ is a non-negative random variable, E[b τ ] ≤ E[ b τ ] = t≥1 P( b τ ≥ t) ≤ 1 + t≥2 P(b τ + 1 ≥ t) ≤ 1 + t≥2 P (τ ≥ log b (t − 1) ) = 1 + t≥1 P ( τ stab + τ spr ≥ log b t ) ≤ 1 + t≥1 P τ stab ≥ 1 2 log b t + t≥1 P τ spr ≥ 1 2 log b t ≤ 1 + t≥1 P τ stab ≥ 1 2 log b t + t≥1 P τ spr ≥ 1 2 log b t − 1 2 = 1 + t≥1 P τ stab ≥ 1 2 log b t + t≥1 P b 2 τspr+1 ≥ t ≤ 1 + b 2τ 0 + t≥ b 2τ 0 +1 P τ stab ≥ 1 2 log b t + bE[b 2 τspr ]. Since the spreading time with the standard rumor model dominates τ spr , we use this to bound the term E[b 2 τspr ] in Proposition 4 after the proof of Proposition 3. The summation in the last step is bounded by using Lemma 5, as follows: for some fixed x ≥ τ 0 , we have P( τ stab ≥ x) = P N i=1 ( τ (i) stab ≥ x) ≤ N i=1 P( τ (i) stab ≥ x), = N i=1 P ∞ l=x (χ (i) l = 1) ≤ N i=1 l≥x P χ (i) l = 1 = N i=1 l≥x P 1 ∈ S (i) j , O (i) j = 1 (a) ≤ N i=1 l≥x 4m exp − ∆ 2 m b For bounding E[τ ], notice that τ ≤ b τ −1 log b , where we use the fact that for all b > 1, b x − x log b − 1 ≥ 0 for all x ≥ 0. Thus, E[τ ] ≤ 1 log b E[b τ − 1] . Substituting the bound for E[b τ − 1] obtained above completes the proof of Proposition 3. Proposition 4. The random variable τ spr is stochastically dominated by τ (G) spr . Proof. The proof follows in a similar way as the proof for Proposition 4 in [13]. Proposition 5. τ 0 defined in Theorem 1 is bounded by τ 0 ≤ 2 log b 16m K N + 2 + 1. Proof. From Theorem 1, τ 0 = min j ∈ N : ∀j ≥ j, b j −1 ≥ 8m K N + 2 b j −1 2 . As 8m K N + 2 b j −1 2 ≤ 8m K N + 2 b j −1 2 +8m K N + 2 ≤ 16m K N + 2 b j −1 2 , the minimum value of j that satisfies b j−1 ≥ 16m K N + 2 b j−1 2 is an upper bound on τ 0 . Rearranging the terms results in j ≥ 2 log b 16m K N + 2 + 1 and thus, τ 0 ≤ 2 log b 16m K N + 2 + 1. B.3 Proof of Theorem 1 From In step (a), the first sum is trivially bounded by 8mT β 2 T log 1 + T mλ and the second sum uses the definition of R proj,T with w t ≤ 2S for all t ∈ N. Step (b) uses Theorem 8 with δ = 1 T . 
Substituting the results of Propositions 3 and 4, along with (15) into Proposition 2 concludes the proof of Theorem 1. Appendix C Regret Upper Bound for Single Agent Running Sub-Goss Algorithm Without Communications Theorem 6. With the same assumptions as in Theorem 1, when a single agent runs SubGoss Algorithm in case of no communication, the regret after any time T ∈ N is bounded by E[R T ] ≤ 8mT β 2 T log 1 + T mλ + 2S Projected LinUCB Regret + 2Sg(b) b(16mK) 2 + 8b 2 log b . m 2 ∆ 2 Constant Cost of Right Subspace Search + 16mKS log b (h b,T ) + 16mKS h b,T − 1 √ b − 1 Cost of subspace exploration .(16) Here, β T , g(b), and h b,T are the same as in Theorem 1. Proof. Before we prove Theorem 6, we set some notation. Let O j = arg max k∈[K] θ . Following the proof of Proposition 5, it can be shown thatτ 0 ≤ 2 log b (16mK) + 1. We also define a random phase τ freeze as follows: τ freeze = inf{j ≥τ 0 : ∀j ≥ j, O j = 1}. Thus, τ freeze is the earliest phase after which the single agent will play the projected LinUCB from the subspace span(U 1 ) in the exploit time slots of a phase. Notice that the random phase τ freeze plays the same role as τ stab in the multi-agent case. This suggests that the regret analysis in this case must follow the same chain of argument as for the multi-agent case. Following the same steps as for Proposition 2, the bound on the regret of the single agent after T time steps is given by E[R T ] ≤ 2S E τ freeze + b τ freeze − 1 b − 1 + E[R proj,T ] + 16mKS log b (h b,T ) + 16mKS h b,T − 1 √ b − 1 .(17) We have already shown in (15) that E[R proj,T ] ≤ 8mT β 2 T log 1 + T mλ + 2S. We will now bound E[τ freeze ] and E[b τ freeze ] to complete the proof. We first bound E[b τ freeze ]. 
From the definition of expectation for non-negative random variables,
$$
\mathbb{E}[b^{\tau_{\mathrm{freeze}}}]\le\mathbb{E}\big[b^{\lceil\tau_{\mathrm{freeze}}\rceil}\big]=\sum_{t=1}^{\infty}\mathbb{P}\big(b^{\lceil\tau_{\mathrm{freeze}}\rceil}\ge t\big)\le1+\sum_{t=2}^{\infty}\mathbb{P}\big(b^{\tau_{\mathrm{freeze}}+1}\ge t\big)\le1+\sum_{t=1}^{\infty}\mathbb{P}\big(\tau_{\mathrm{freeze}}\ge\lfloor\log_b t\rfloor\big)\le1+b^{\widehat\tau_0}+\sum_{t=b^{\widehat\tau_0}+1}^{\infty}\mathbb{P}\big(\tau_{\mathrm{freeze}}\ge\lfloor\log_b t\rfloor\big).
$$
We bound the summation in the last term as follows:
$$
\sum_{t=b^{\widehat\tau_0}+1}^{\infty}\mathbb{P}\big(\tau_{\mathrm{freeze}}\ge\lfloor\log_b t\rfloor\big)\stackrel{(a)}{=}\sum_{t=b^{\widehat\tau_0}+1}^{\infty}\mathbb{P}\Big(\bigcup_{j\ge\lfloor\log_b t\rfloor}\big\{\widehat O_j\ne1\big\}\Big)\le\sum_{t=b^{\widehat\tau_0}+1}^{\infty}\ \sum_{j\ge\lfloor\log_b t\rfloor}\mathbb{P}\big(\widehat O_j\ne1\big)\stackrel{(b)}{\le}\sum_{t=b^{\widehat\tau_0}+1}^{\infty}\ \sum_{j\ge\lfloor\log_b t\rfloor}4m\exp\Big(-\frac{\Delta^2}{m}\,b^{\frac{j-1}{2}}\Big)
$$
$$
\stackrel{(c)}{\le}4m\sum_{j\ge\widehat\tau_0}\ \sum_{t=b^{\widehat\tau_0}+1}^{b^{j+1}}\exp\Big(-\frac{\Delta^2}{m}\,b^{\frac{j-1}{2}}\Big)\le4m\sum_{j\ge\widehat\tau_0}b^{j+1}\exp\Big(-\frac{\Delta^2}{m}\,b^{\frac{j-1}{2}}\Big)\le4m\int_{1}^{\infty}b^{x+1}\exp\Big(-\frac{\Delta^2}{m}\,b^{\frac{x-1}{2}}\Big)dx
$$
$$
\stackrel{(d)}{=}\frac{8mb^2}{\log b}\int_{1}^{\infty}u\exp\Big(-\frac{\Delta^2}{m}\,u\Big)du\le\frac{8mb^2}{\log b}\int_{0}^{\infty}u\exp\Big(-\frac{\Delta^2}{m}\,u\Big)du\le\frac{8b^2}{\log b}\cdot\frac{m^2}{\Delta^2}.
$$
We use the definition of $\tau_{\mathrm{freeze}}$ in step (a). In step (b), we substitute the bound from Lemma 5 on the probability of choosing a subspace other than $\mathrm{span}(U_1)$. We interchange the order of summation in step (c). In step (d), we perform a change of variables with $x=2\log_b u+1$. Therefore,
$$
\mathbb{E}\big[b^{\tau_{\mathrm{freeze}}}-1\big]\le b\,(16mK)^2+\frac{8b^2}{\log b}\cdot\frac{m^2}{\Delta^2}.
$$
For bounding $\mathbb{E}[\tau_{\mathrm{freeze}}]$, notice that $\tau_{\mathrm{freeze}}\le\frac{b^{\tau_{\mathrm{freeze}}}-1}{\log b}$, where we use the fact that for all $b>1$, $b^x-x\log b-1\ge0$ for all $x\ge0$. Thus, $\mathbb{E}[\tau_{\mathrm{freeze}}]\le\frac{1}{\log b}\,\mathbb{E}[b^{\tau_{\mathrm{freeze}}}-1]$. Substituting the bound for $\mathbb{E}[b^{\tau_{\mathrm{freeze}}}-1]$ obtained above completes the proof of Theorem 6.

D.2 Regret Analysis

Before bounding the regret, let us set some notation. We have $a_t\in\arg\max_{a\in\mathcal A_t}\max_{\theta\in\mathcal C^{(i)}_t}\langle\theta,P_1a\rangle$. Let $a^*_t=\arg\max_{a\in\mathcal A_t}\langle\theta^*,a\rangle$ and $\tilde\theta_t\in\mathcal C^{(i)}_t$ be such that $\tilde\theta_t=\arg\max_{\theta\in\mathcal C^{(i)}_t}\langle\theta,P_1a_t\rangle$. The following theorem characterizes the regret after every agent has the right subspace.

Theorem 8. With probability at least $1-\delta$, the regret incurred after playing Projected LinUCB on the subspace containing $\theta^*$ for $T$ steps satisfies
$$
R_{\mathrm{proj},T}\le\sqrt{8mT\beta_{T,\delta}^2\log\Big(1+\frac{T}{m\lambda}\Big)}.
$$
Proof. The proof of Theorem 8 is contingent on the following lemma. A similar lemma appears as Lemma 19.4 in [22].

Lemma 9.
Let $a_1,\dots,a_T$ be the sequence of action vectors played up to and including time $T$. Then
$$
\sum_{t=1}^{T}\min\Big(1,\ \|a_t\|^2_{V_t(\lambda)^{\dagger}}\Big)\le2\log\frac{\det(\Sigma_{T+1})}{\det(\Sigma_1)}.
$$
With Lemma 9 at hand,
$$
R_{\mathrm{proj},T}=\mathbf{1}^{\mathsf T}R\stackrel{(a)}{\le}\sqrt T\,\sqrt{\sum_{t=1}^{T}\big(w^{(i)}_t\big)^2}\stackrel{(b)}{\le}\sqrt T\,\sqrt{4\beta^2_{T,\delta}\sum_{t=1}^{T}\min\Big(1,\|A_t\|^2_{V_t(\lambda)^{\dagger}}\Big)}\stackrel{(c)}{\le}\sqrt{8T\beta^2_{T,\delta}\log\frac{\det(\Sigma_{T+1})}{\det(\Sigma_1)}}\stackrel{(d)}{\le}\sqrt{8mT\beta^2_{T,\delta}\log\Big(1+\frac{T}{m\lambda}\Big)}.
$$
Step (a) follows from the Cauchy-Schwarz inequality. In step (b), we use (21). Step (c) is obtained by an application of Lemma 9. Step (d) results from the facts that $\det(\Sigma_1)=\det(\Lambda)=\lambda^m$ and $\det(\Sigma_t)\le\big(\lambda+\frac{t-1}{m}\big)^m$ (which follows from Lemma 11 in [21]), thus completing the proof.

Each agent's active set in phase $j$ satisfies $|S^{(i)}_j|\le(K/N)+2$. Agents communicate at the end of the phase to update their active set; since the phase length is $b^{j-1}$, this satisfies the communication constraint of $O(\log T)$ communications for any time horizon $T$. For the remainder of phase $j$, each agent plays Projected LinUCB [21] on the subspace with ID $\widehat O^{(i)}_j$, using only its Projected LinUCB samples collected thus far; the construction and analysis of the confidence set $\mathcal C^{(i)}_t$ used while playing Projected LinUCB is given in Appendix D. In the case of no communication, when a single agent runs Algorithm 1 (without requiring the communication graph $G$), it incurs a higher regret due to subspace exploration, which scales as $O(Km\sqrt T)$ instead of $O((K/N)m\sqrt T)$ as in the multi-agent case; in particular, the right-subspace search cost is shown to be finite, which follows from two key observations arising from pure exploration in the explore time steps, the first of which concerns any agent $i\in[N]$ and subspace ID $k\in S^{(i)}_j$.

Corollary 3. Consider a high-dimensional system where $N$ agents are connected by a complete graph with $K=N=\frac dm$, where $d$ is a multiple of $m$ and $m$ is a constant. Assume that $d\ge3m$. With the choice of $b$ as in Corollary 2, (a) for any agent $i\in[N]$, the regret with collaboration scales as

Proof.
The proof follows by substituting $K=N=\frac dm$ in Theorems 1 and 6, along with the bound for the spreading time $\mathbb{E}\big[b^{2\tau^{(G)}_{\mathrm{spr}}}\big]$ from Corollary 2.

The following observations can be deduced from point (b) of Corollary 3: (i) when $T=\Theta(d^{1+\gamma})$ for all $\gamma\ge0$, $r_C(T)=\Omega\big(\frac{d}{\log d}\big)$; (ii) when $T=\Theta(e^{d^\beta})$ for all $\beta\in(0,1)$, $r_C(T)=\Omega(d^{1-\beta})$; (iii) when $T=\Omega(e^d)$, $r_C(T)=\Omega(1)$.

Figure 1: Illustrating the benefit of collaboration. $(d,m,K)$ are $(24,2,12)$, $(48,3,16)$, and $(60,4,15)$, respectively.

For each agent $i$ and subspace ID $k$, a counter tracks the number of times agent $i$ explores the subspace $\mathrm{span}(U_k)$ up to and including phase $j$; $\widehat O^{(i)}_j$ is the ID of the subspace in which agent $i\in[N]$ plays Projected LinUCB in the exploit time slots of phase $j$ and subsequently recommends at the end of phase $j$. The event $\chi^{(i)}_j=1$ is true for all phases $j\ge\tau$ and all agents $i\in[N]$, as these were arbitrarily chosen. Furthermore, the update step of the algorithm, along with the above reasoning, tells us that none of the agents will change their subspaces after any phase $j\ge\tau$, as the agents already have the correct subspace in their respective playing sets. Proposition 1 also tells us that for all phases $j\ge\tau$, in the exploit time slots, all agents play Projected LinUCB from the subspace $\mathrm{span}(U_1)$, because the algorithm picks the subspace $\mathrm{span}(U_{\widehat O^{(i)}_j})$ in the exploit time slots of phase $j$ and $\widehat O^{(i)}_j=1$ for all $j\ge\tau$. Here, $a_{k,p}$ denotes the $p$-th column of the matrix $A$ for $p\in[m]$.

Using Theorem 8 with $\delta=\frac1T$, $\mathbb{E}[R_{\mathrm{proj},T}]$ is bounded as follows:
$$
\mathbb{E}[R_{\mathrm{proj},T}]=\mathbb{E}\Big[R_{\mathrm{proj},T}\,\mathbf 1\Big\{R_{\mathrm{proj},T}\le\sqrt{8mT\beta^2_{T,\delta}\log\big(1+\tfrac{T}{m\lambda}\big)}\Big\}\Big]+\mathbb{E}\Big[R_{\mathrm{proj},T}\,\mathbf 1\Big\{R_{\mathrm{proj},T}>\sqrt{8mT\beta^2_{T,\delta}\log\big(1+\tfrac{T}{m\lambda}\big)}\Big\}\Big],
$$
where the first term is bounded by Theorem 8 and the second by $R_{\mathrm{proj},T}\le2ST$ together with the failure probability $\delta=\frac1T$, which yields (15).

Moreover, $\widehat O_j$ denotes the ID of the subspace in which the single agent plays Projected LinUCB in the exploit time slots of phase $j$, and $\widehat\tau_0=\min\big\{j\in\mathbb N:\forall j'\ge j,\ b^{j'-1}\ge8mK\big\lceil b^{\frac{j'-1}{2}}\big\rceil\big\}$. We require $G$ to be connected.
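The elliptical potential bound of Lemma 9 admits a simple one-dimensional sanity check. With scalar "actions" $a_t$ and $V_t=\lambda+\sum_{s\le t}a_s^2$ playing the role of $\Sigma_t$, the same telescoping argument gives $\sum_t\min(1,a_t^2/V_{t-1})\le2\log(V_T/\lambda)$. The following toy illustration (not the paper's proof; the action range and seed are arbitrary) verifies it numerically:

```python
import math
import random

random.seed(0)
lam = 1.0          # regularization, the scalar analogue of Sigma_1 = lambda * I
T = 500
V = lam
total = 0.0
for _ in range(T):
    a = random.uniform(-2.0, 2.0)   # bounded scalar action
    total += min(1.0, a * a / V)    # instantaneous elliptical potential term
    V += a * a                      # scalar analogue of the rank-one update of V_t
# scalar analogue of  sum_t min(1, ||a_t||^2) <= 2 log(det Sigma_{T+1} / det Sigma_1)
assert total <= 2 * math.log(V / lam)
```

The key point, visible already in one dimension, is that the left-hand side can grow at most logarithmically in the accumulated design mass, which is what turns the per-step confidence widths into a $\sqrt T$ regret bound.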
See Appendix A. (Connectivity here is a standard graph-theoretic notion, which has nothing to do with the ambient dimension $d$ in $\mathbb R^d$.)

Acknowledgements. This work was partially supported by ONR Grant N00014-19-1-2566, NSF Grant SATC 1704778, ARO grant W911NF-17-1-0359, the NSA SoS Lablet H98230-18-D-0007, and the WNCG Industrial Affiliates Program.

Here, step (b) follows by re-writing the range of summation, and the sum $\sum_{l\ge\tau_0}b^{2l+1}\exp\big(-\frac{\Delta^2}{m}b^{\frac{l-1}{2}}\big)$ is bounded by performing the change of variables $x=2\log_b u+1$ in step (c).

Appendix D. Analysis of Projected LinUCB

We now analyze Projected LinUCB as a separate black box, where every agent plays from the correct subspace containing $\theta^*$ for $T$ steps. This holds because every agent plays Projected LinUCB on the subspace $\mathrm{span}(U_1)$ during the exploit time slots after the freezing phase $\tau$, using only the Projected LinUCB actions and rewards for $\mathrm{span}(U_1)$, which of course happens for fewer than $T$ steps. As this analysis is valid for all agents, we drop the superscript $(i)$ from all the pertinent variables.

D.1 Confidence Set Construction and Analysis

The construction of the confidence set is done in a similar way as in [21, 22].

Theorem 7. Let $\delta\in(0,1)$ and $\beta_{t,\delta}=S\sqrt\lambda+\sqrt{2\log\frac1\delta+m\log\big(1+\frac{t-1}{\lambda m}\big)}$. Then, with probability at least $1-\delta$, $\theta^*\in\mathcal C_t$ for all $t\in\mathbb N$.

The proof of Theorem 7 is adapted from the proof of Theorem 8 in [21], which considers a different setting: there, the subspace in which $\theta^*$ lies is unknown and needs to be estimated, and the error in estimating the correct subspace appears in the construction of the projected confidence set in Theorem 8 of [21]. In our setting, however, agents are aware of the true projection matrices of the subspaces and thus do not need to account for a subspace estimation error while constructing confidence sets. This necessitates the different definition of the confidence set given in (18). Thus, when agents play Projected LinUCB on the correct subspace, they do not have to pay the overhead of recovering the actual subspace from the perturbed action vectors.
Hence, Theorem 7 is proved by substituting the estimated projection matrix $\widehat P_t$ with the true projection matrix $P_1$ in the proof of Theorem 8 in [21], for all $t\in\mathbb N$.

Proof (of Lemma 9). The proof is identical to the proof of Lemma 19.4 in [22], except that we use the recursive update of $\det(\Sigma_t)$ instead of $\det(V_{t-1})$.

We now have the required ingredients to complete the proof of Theorem 8. Using the fact that $\theta^*\in\mathcal C^{(i)}_t$ and the algorithm definitions, the chain of inequalities (20) is true. Thus, for all $t\in\mathbb N$, (21) holds, which is shown in a similar way as the bound on $r_t$ in [22, Theorem 19.2]; however, we additionally use the facts that $\theta^*=P_1\theta^*$ and $P_1^2=P_1$. While proving Proposition 1, we showed that $w^{(i)}_t\le2S$ for all $t\in\mathbb N$. Combining this with (20) and (21), the cumulative regret incurred by playing Projected LinUCB for $T$ steps can be bounded as follows: let $\mathbf 1$ denote an all-ones column vector of size $T$, and let $R$ be a column vector containing the elements $w_1,\dots,w_T$. Then $R_{\mathrm{proj},T}=\mathbf 1^{\mathsf T}R$, and the chain of inequalities displayed after Lemma 9 completes the proof.

References

[1] Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. In Advances in Neural Information Processing Systems, pages 2312-2320, 2011.
[2] Yasin Abbasi-Yadkori, David Pal, and Csaba Szepesvari. Online-to-confidence-set conversions and application to sparse stochastic bandits. In Artificial Intelligence and Statistics, pages 1-9. PMLR, 2012.
[3] Alekh Agarwal, Sarah Bird, Markus Cozowicz, Luong Hoang, John Langford, Stephen Lee, Jiaji Li, Dan Melamed, Gal Oshri, Oswaldo Ribas, et al. Making contextual decisions with low technical debt. arXiv preprint arXiv:1606.03966, 2016.
[4] Sanae Amani and Christos Thrampoulidis. Decentralized multi-agent linear bandits with safety constraints. arXiv preprint arXiv:2012.00314, 2020.
[5] Hamsa Bastani and Mohsen Bayati. Online decision making with high-dimensional covariates. Operations Research, 68(1):276-294, 2020.
[6] Ilai Bistritz and Amir Leshem. Distributed multi-player bandits - a game of thrones approach. In Advances in Neural Information Processing Systems, pages 7222-7232, 2018.
[7] Etienne Boursier and Vianney Perchet. Sic-mmab: synchronisation involves communication in multiplayer multi-armed bandits. In Advances in Neural Information Processing Systems, pages 12048-12057, 2019.
[8] Sébastien Bubeck, Nicolo Cesa-Bianchi, et al. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1-122, 2012.
[9] Swapna Buccapatnam, Jian Tan, and Li Zhang. Information sharing in distributed stochastic bandits. In 2015 IEEE Conference on Computer Communications (INFOCOM), pages 2605-2613. IEEE, 2015.
[10] Alexandra Carpentier and Rémi Munos. Bandit theory meets compressed sensing for high dimensional stochastic linear bandit. In Artificial Intelligence and Statistics, pages 190-198. PMLR, 2012.
[11] Deepayan Chakrabarti, Ravi Kumar, Filip Radlinski, and Eli Upfal. Mortal multi-armed bandits. In Advances in Neural Information Processing Systems, pages 273-280, 2009.
[12] Mithun Chakraborty, Kai Yee Phoebe Chua, Sanmay Das, and Brendan Juba. Coordinated versus decentralized exploration in multi-agent multi-armed bandits. In IJCAI, pages 164-170, 2017.
[13] Ronshee Chawla, Abishek Sankararaman, Ayalvadi Ganesh, and Sanjay Shakkottai. The gossiping insert-eliminate algorithm for multi-agent bandits. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pages 3471-3481, 2020.
[14] Flavio Chierichetti, Silvio Lattanzi, and Alessandro Panconesi. Almost tight bounds for rumour spreading with conductance. In Proceedings of the Forty-Second ACM Symposium on Theory of Computing, pages 399-408, 2010.
[15] Varsha Dani, Thomas P. Hayes, and Sham M. Kakade. Stochastic linear optimization under bandit feedback. In COLT, 2008.
[16] Abhimanyu Dubey and Alex Pentland. Differentially-private federated linear bandits. arXiv preprint arXiv:2010.11425, 2020.
[17] Sébastien Gerchinovitz. Sparsity regret bounds for individual sequences in online linear regression. In Proceedings of the 24th Annual Conference on Learning Theory, pages 377-396. JMLR Workshop and Conference Proceedings, 2011.
[18] Eshcar Hillel, Zohar S. Karnin, Tomer Koren, Ronny Lempel, and Oren Somekh. Distributed exploration in multi-armed bandits. In Advances in Neural Information Processing Systems, pages 854-862, 2013.
[19] Ravi Kumar Kolla, Krishna Jagannathan, and Aditya Gopalan. Collaborative learning of stochastic bandits over a social network. IEEE/ACM Transactions on Networking, 26(4):1782-1795, 2018.
[20] Nathan Korda, Balazs Szorenyi, and Shuai Li. Distributed clustering of linear bandits in peer to peer networks. In Proceedings of The 33rd International Conference on Machine Learning, pages 1301-1309. PMLR, 2016.
[21] Sahin Lale, Kamyar Azizzadenesheli, Anima Anandkumar, and Babak Hassibi. Stochastic linear bandits with hidden low rank structure. CoRR, abs/1901.09490, 2019.
[22] Tor Lattimore and Csaba Szepesvári. Bandit Algorithms. Cambridge University Press, 2020.
[23] Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pages 661-670. ACM, 2010.
[24] David Martínez-Rubio, Varun Kanade, and Patrick Rebeschini. Decentralized cooperative stochastic bandits. In Advances in Neural Information Processing Systems, pages 4531-4542, 2019.
[25] Abishek Sankararaman, Ayalvadi Ganesh, and Sanjay Shakkottai. Social learning in multi agent multi armed bandits. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 3(3):1-35, 2019.
[26] Ambuj Tewari and Susan A. Murphy. From ads to interventions: Contextual bandits in mobile health. In Mobile Health, pages 495-517. Springer, 2017.
[27] Daniel Vial, Sanjay Shakkottai, and R. Srikant. Robust multi-agent multi-armed bandits. arXiv preprint arXiv:2007.03812, 2020.
[28] Po-An Wang, Alexandre Proutiere, Kaito Ariu, Yassir Jedra, and Alessio Russo. Optimal algorithms for multiplayer multi-armed bandits. In International Conference on Artificial Intelligence and Statistics, pages 4120-4129. PMLR, 2020.
[29] Yuanhao Wang, Jiachen Hu, Xiaoyu Chen, and Liwei Wang. Distributed bandit learning: Near-optimal regret with efficient communication. In International Conference on Learning Representations, 2020.
[30] Yisong Yue and Thorsten Joachims. Interactively optimizing information retrieval systems as a dueling bandits problem. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1201-1208, 2009.
Homogenization and low Mach number limit of compressible Navier-Stokes equations in critically perforated domains

Peter Bella and Florian Oschmann

24 November 2022. arXiv:2104.05578, doi:10.1007/s00021-022-00707-1.

Abstract. In this note, we consider the homogenization of the compressible Navier-Stokes equations in a periodically perforated domain in $\mathbb R^3$. Assuming that the particle size scales like $\varepsilon^3$, where $\varepsilon>0$ is their mutual distance, and that the Mach number decreases fast enough, we show that in the limit $\varepsilon\to0$, the velocity and density converge to a solution of the incompressible Navier-Stokes equations with Brinkman term. We strongly follow the methods of Höfer, Kowalczyk and Schwarzacher [https://doi.org/10.1142/S0218202521500391], where they proved convergence to Darcy's law for the particle size scaling like $\varepsilon^\alpha$ with $\alpha\in(1,3)$.
Introduction

We consider a bounded smooth domain $D\subset\mathbb R^3$ which for $\varepsilon>0$ is perforated by tiny obstacles of size $\varepsilon^3$, and show that solutions to the compressible Navier-Stokes equations in this domain converge as $\varepsilon\to0$ to a solution of the incompressible Navier-Stokes equations with Brinkman term. To the best of our knowledge, this is the first result on the homogenization of compressible fluids for a critically sized perforation.

There is a vast literature concerning the homogenization of fluid flows in perforated domains; we will just cite a few works. For incompressible fluids, Allaire found in [2] and [3] that, concerning the ratio of particle size to particle distance, there are mainly three regimes of particle sizes $\varepsilon^\alpha$, where $\alpha\ge1$. Heuristically, if the particles are large, the velocity will slow down and finally stop. This phenomenon occurs if (in three dimensions) $\alpha\in[1,3)$ and gives rise to Darcy's law.
When the particles are very small, i.e., $\alpha>3$, they should not affect the fluid, so that in the limit the fluid motion is still governed by the Stokes or Navier-Stokes equations. The third regime is the so-called critical case $\alpha=3$, where the particles are large enough to exert some friction on the fluid, but not large enough to stop the flow. For incompressible fluids, the non-critical cases $\alpha\in(1,3)$ and $\alpha>3$ were considered in [3], while [2] dealt with the critical case $\alpha=3$. The case $\alpha=1$ was treated in [1]. In all the aforementioned literature, the proofs were given by means of suitable oscillating test functions, first introduced by Tartar in [17] and later adopted by Cioranescu and Murat in [5] for the Poisson equation. In the critical case, the additional friction term is the main part of Brinkman's law. Cioranescu and Murat considered in [5] the Poisson equation in a perforated domain, where they found in the limit "a strange term coming from nowhere". This Brinkman term comes purely from the presence of the holes in the domain $D_\varepsilon$. It physically represents the energy of boundary layers around each obstacle, as its columns are proportional to the drag force around a single particle [2, Proposition 2.1.4 and Remark 2.1.5]. The assumptions on the distribution of the holes can also be generalized. For the critical case, Giunti, Höfer, and Velázquez considered in [11] the homogenization of the Poisson equation in a randomly perforated domain and showed that the "strange term" also occurs in their setting. Hillairet considered in [12] the Stokes equations with random obstacles satisfying a hard sphere condition. This condition was removed by Giunti and Höfer [10], who showed that for incompressible fluids and randomly distributed holes with random radii, the randomness does not affect the convergence to Brinkman's law. More recently, for large particles, Giunti showed in [9] a similar convergence result to Darcy's law.
Unlike for incompressible fluids, the homogenization theory for compressible fluids is rather sparse. Masmoudi considered in [15] the case $\alpha=1$ of large particles, giving rise to Darcy's law. For large particles with $\alpha\in(1,3)$, Darcy's law was just recently treated in [13] in a low Mach number limit. The case of small particles ($\alpha>3$) was treated in [6, 7, 14] under different growth conditions on the pressure. Random perforations in the spirit of [10] for small particles were considered by the authors in [4], where in the limit the equations remain unchanged, as in the periodic case. We want to emphasize that the methods presented here are strongly related to those of [13]; as a matter of fact, their techniques used in the case of large holes also apply in our case of holes having critical size.

Notation: Throughout the whole paper, we denote the Frobenius scalar product of two matrices $A,B\in\mathbb R^{3\times3}$ by $A:B:=\sum_{1\le i,j\le3}A_{ij}B_{ij}$. Further, we use the standard notation for Lebesgue and Sobolev spaces, which we write even for vector-valued functions as in the scalar case, e.g., $L^p(D)$ instead of $L^p(D;\mathbb R^3)$. Moreover, $C>0$ denotes a constant which is independent of $\varepsilon$ and might change its value whenever it occurs.

Organization of the paper: The paper is organized as follows. In Section 2, we give a precise definition of the perforated domain $D_\varepsilon$ and state our main results for the steady Navier-Stokes equations. In Section 3, we introduce oscillating test functions, which are crucial for showing convergence of the velocity, density, and pressure. Section 4 is devoted to the Bogovskii operator as an inverse of the divergence, which is used to obtain uniform bounds independent of $\varepsilon$. In Section 5, we show how to pass to the limit $\varepsilon\to0$ and obtain the limiting equations.

Setting and main results

Consider a bounded domain $D\subset\mathbb R^3$ with smooth boundary. Let $\varepsilon>0$ and cover $D$ with a regular mesh of size $2\varepsilon$.
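As a toy illustration of this mesh construction (assuming, purely for illustration, the unit cube $D=(0,1)^3$; the counts below are not from the paper), one can enumerate the cells of size $2\varepsilon$ fully contained in $D$ and check that their number grows like $\varepsilon^{-3}$:

```python
import math

def count_cells(eps):
    # cells P_i = x_i + (-eps, eps)^3 with centers x_i in (2*eps*Z)^3
    # that are entirely contained in D = (0, 1)^3
    per_axis = sum(
        1
        for k in range(0, math.ceil(1 / eps) + 1)
        if 2 * eps * k - eps >= 0 and 2 * eps * k + eps <= 1
    )
    return per_axis ** 3

for eps in (0.1, 0.05, 0.01):
    # the number of retained cells is at most C |D| eps^{-3}, here with C = |D| = 1
    assert count_cells(eps) <= eps ** -3
```

Since each retained cell carries exactly one hole, this is the elementary counting behind the bound on the number of holes stated in the setting below of roughly $\varepsilon^{-3}$ holes of size $\varepsilon^3$, the critical balance between hole capacity and cell volume.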
Let $x^\varepsilon_i\in(2\varepsilon\mathbb Z)^3$ be the center of the cell with index $i$ and $P^\varepsilon_i:=x^\varepsilon_i+(-\varepsilon,\varepsilon)^3$. Further, let $T\Subset B_1(0)$ be a compact and simply connected set with smooth boundary and set $T^\varepsilon_i:=x^\varepsilon_i+\varepsilon^3T$. We now define the perforated domain as
$$
D_\varepsilon:=D\setminus\bigcup_{i\in K_\varepsilon}T^\varepsilon_i,\qquad K_\varepsilon:=\{i:P^\varepsilon_i\subset D\}. \qquad(1)
$$
By the periodic distribution of the holes, the number of holes inside $D_\varepsilon$ satisfies $|K_\varepsilon|\le C|D|\varepsilon^{-3}$ for some $C>0$ independent of $\varepsilon$. In $D_\varepsilon$, we consider the steady compressible Navier-Stokes equations
$$
\begin{cases}
\operatorname{div}(\varrho_\varepsilon u_\varepsilon\otimes u_\varepsilon)-\operatorname{div}S(\nabla u_\varepsilon)+\dfrac{1}{\varepsilon^\beta}\nabla\varrho_\varepsilon^\gamma=\varrho_\varepsilon f+g & \text{in }D_\varepsilon,\\
\operatorname{div}(\varrho_\varepsilon u_\varepsilon)=0 & \text{in }D_\varepsilon,\\
u_\varepsilon=0 & \text{on }\partial D_\varepsilon,
\end{cases} \qquad(2)
$$
where $\varrho_\varepsilon,u_\varepsilon$ are the fluid's density and velocity, respectively, and $S(\nabla u_\varepsilon)$ is the Newtonian viscous stress tensor of the form
$$
S(\nabla u)=\mu\Big(\nabla u+\nabla^T u-\frac23\operatorname{div}(u)\,\mathbb I\Big)+\eta\operatorname{div}(u)\,\mathbb I \qquad(3)
$$
with viscosity coefficients $\mu>0$, $\eta\ge0$. Further, we assume that $\gamma\ge3$, $\beta>3(\gamma+1)$, and that $f,g\in L^\infty(D)$ are given. Since the equations (2) are invariant under adding a constant to the pressure term $\varepsilon^{-\beta}\varrho_\varepsilon^\gamma$, we define
$$
p_\varepsilon:=\varepsilon^{-\beta}\big(\varrho_\varepsilon^\gamma-\langle\varrho_\varepsilon^\gamma\rangle_\varepsilon\big), \qquad(4)
$$
where $\langle\cdot\rangle_\varepsilon$ denotes the mean value over $D_\varepsilon$, given by $\langle f\rangle_\varepsilon=\frac{1}{|D_\varepsilon|}\int_{D_\varepsilon}f\,dx$. We will show convergence of the velocity $u_\varepsilon$ and the pressure $p_\varepsilon$ to limiting functions $u$ and $p$, respectively, such that the couple $(u,p)$ solves the incompressible steady Navier-Stokes-Brinkman equations
$$
\begin{cases}
\operatorname{div}(\varrho_0 u\otimes u)-\mu\Delta u+\nabla p+\mu Mu=\varrho_0f+g & \text{in }D,\\
\operatorname{div}(u)=0 & \text{in }D,\\
u=0 & \text{on }\partial D,
\end{cases}
$$
where the resistance matrix $M$ is introduced in the next section, and the constant $\varrho_0$ is the strong limit of $\varrho_\varepsilon$ in $L^{2\gamma}(D)$, which is determined by the mass constraint on $\varrho_\varepsilon$ as formulated in Definition 2.1 below. Before stating our main result, we introduce the standard concept of finite energy weak solutions to (2).

Definition 2.1. Let $D_\varepsilon$ be as in (1) and let $\gamma\ge3$, $m>0$ be fixed. We say a couple $(\varrho_\varepsilon,u_\varepsilon)$ is a finite energy weak solution to system (2) if $\varrho_\varepsilon\in L^{2\gamma}(D_\varepsilon)$, $u_\varepsilon\in W^{1,2}_0(D_\varepsilon)$, $\varrho_\varepsilon\ge0$ a.e.
in $D_\varepsilon$, $\int_{D_\varepsilon}\varrho_\varepsilon\,dx=m$,
$$
\int_{D_\varepsilon}\varrho_\varepsilon u_\varepsilon\cdot\nabla\psi\,dx=0,
$$
$$
\int_{D_\varepsilon}\Big(p_\varepsilon\operatorname{div}\varphi+(\varrho_\varepsilon u_\varepsilon\otimes u_\varepsilon):\nabla\varphi-S(\nabla u_\varepsilon):\nabla\varphi+(\varrho_\varepsilon f+g)\cdot\varphi\Big)\,dx=0
$$
for all test functions $\psi\in C_c^\infty(D_\varepsilon)$ and all test functions $\varphi\in C_c^\infty(D_\varepsilon;\mathbb R^3)$, where $p_\varepsilon$ is given in (4), and the energy inequality
$$
\int_{D_\varepsilon}S(\nabla u_\varepsilon):\nabla u_\varepsilon\,dx\le\int_{D_\varepsilon}(\varrho_\varepsilon f+g)\cdot u_\varepsilon\,dx \qquad(5)
$$
holds.

Remark 2.2. Existence of finite energy weak solutions to system (2) is known for all values $\gamma>3/2$; see, for instance, [16, Theorem 4.3]. However, we need the assumption $\gamma\ge3$ to bound the convective term $\operatorname{div}(\varrho_\varepsilon u_\varepsilon\otimes u_\varepsilon)$ in a proper way, see Section 4.

Let us denote the zero extension of a function $f$ with $D_\varepsilon$ as its domain of definition by $\tilde f$, that is, $\tilde f=f$ in $D_\varepsilon$ and $\tilde f=0$ in $\mathbb R^3\setminus D_\varepsilon$. Our main result for the stationary Navier-Stokes equations now reads as follows:

Theorem 2.3. Let $D\subset\mathbb R^3$ be a bounded domain with smooth boundary, $0<\varepsilon<1$, $D_\varepsilon$ be as in (1), $\gamma\ge3$, $m>0$, and $f,g\in L^\infty(D)$. Let $\beta>3(\gamma+1)$ and let $(\varrho_\varepsilon,u_\varepsilon)$ be a sequence of finite energy weak solutions to problem (2). Then, with $p_\varepsilon$ defined in (4), we can extract subsequences (not relabeled) such that
$$
\tilde\varrho_\varepsilon\to\varrho_0\ \text{strongly in }L^{2\gamma}(D),\qquad \tilde p_\varepsilon\rightharpoonup p\ \text{weakly in }L^2(D),\qquad \tilde u_\varepsilon\rightharpoonup u\ \text{weakly in }W^{1,2}_0(D),
$$
where $\varrho_0=m/|D|$ is constant and $(p,u)\in L^2(D)\times W^{1,2}_0(D)$ with $\int_D p=0$ is a weak solution to the steady incompressible Navier-Stokes-Brinkman equations
$$
\begin{cases}
\operatorname{div}(\varrho_0u\otimes u)+\nabla p-\mu\Delta u+\mu Mu=\varrho_0f+g & \text{in }D,\\
\operatorname{div}(u)=0 & \text{in }D,\\
u=0 & \text{on }\partial D,
\end{cases} \qquad(6)
$$
where $M$ will be defined in (11).

Remark 2.4. It is well known that the solution to system (6) is unique if $f$ and $g$ are "sufficiently small", see, e.g., [18, Chapter II, Theorem 1.3]. This smallness assumption can be dropped in the case of the Stokes equations, i.e., without the convective term $\operatorname{div}(\varrho_0u\otimes u)$.

The cell problem and oscillating test functions

In this section, we introduce oscillating test functions and define the resistance matrix $M$, following the original work of Allaire [2].
We repeat here the definition of these functions as well as the estimates given in [13]. Consider for a single particle $T$ the solution $(q_k,w_k)$ to the cell problem
$$
\begin{cases}
\nabla q_k-\Delta w_k=0 & \text{in }\mathbb R^3\setminus T,\\
\operatorname{div}(w_k)=0 & \text{in }\mathbb R^3\setminus T,\\
w_k=0 & \text{on }\partial T,\\
w_k=e_k & \text{at infinity},
\end{cases} \qquad(7)
$$
where $e_k$ is the $k$-th unit vector of the canonical basis of $\mathbb R^3$. Note that the solution exists and is unique, see, e.g., [8, Chapter V]. Let us further recall the definition of the oscillating test functions as made in [2] (see also [13]): we set $w^\varepsilon_k=e_k$, $q^\varepsilon_k=0$ in $P^\varepsilon_i\cap D$ for each $P^\varepsilon_i$ with $P^\varepsilon_i\cap\partial D\ne\emptyset$. Now, writing $B^r_i:=B_r(x^\varepsilon_i)$, we split each cell $P^\varepsilon_i$ entirely included in $D$ into the following four parts:
$$
P^\varepsilon_i=T^\varepsilon_i\cup C^\varepsilon_i\cup D^\varepsilon_i\cup K^\varepsilon_i,
$$
where $C^\varepsilon_i$ is the open ball centered at $x^\varepsilon_i$ with radius $\varepsilon/2$, perforated by the hole $T^\varepsilon_i$; $D^\varepsilon_i=B^\varepsilon_i\setminus B^{\varepsilon/2}_i$ is the ball with radius $\varepsilon$ perforated by the ball with radius $\varepsilon/2$; and $K^\varepsilon_i=P^\varepsilon_i\setminus B^\varepsilon_i$ are the remaining corners, see Figure 1.

Figure 1. Splitting of the cell $P^\varepsilon_i$ into the hole $T^\varepsilon_i$, the perforated ball $C^\varepsilon_i$, the annulus $D^\varepsilon_i$, and the corners $K^\varepsilon_i$ (length scales $\varepsilon^3$, $\varepsilon$, $2\varepsilon$).

In these parts, we define
$$
w^\varepsilon_k=e_k,\quad q^\varepsilon_k=0 \qquad \text{in }K^\varepsilon_i,
$$
$$
\nabla q^\varepsilon_k-\Delta w^\varepsilon_k=0,\quad \operatorname{div}(w^\varepsilon_k)=0 \qquad \text{in }D^\varepsilon_i,
$$
$$
w^\varepsilon_k(x)=w_k\Big(\frac{x}{\varepsilon^3}\Big),\quad q^\varepsilon_k(x)=\frac{1}{\varepsilon^3}\,q_k\Big(\frac{x}{\varepsilon^3}\Big) \qquad \text{in }C^\varepsilon_i,
$$
$$
w^\varepsilon_k=0,\quad q^\varepsilon_k=0 \qquad \text{in }T^\varepsilon_i,
$$
where we impose matching Dirichlet boundary conditions and $(q_k,w_k)$ is the solution to the cell problem (7). As shown in [13, Lemma 3.5], we have for the functions $(q^\varepsilon_k,w^\varepsilon_k)$ and all $p>\frac32$ the estimates
$$
\|\nabla w^\varepsilon_k\|_{L^p(D)}+\|q^\varepsilon_k\|_{L^p(D)}\le C\varepsilon^{3(\frac2p-1)}, \qquad(8)
$$
$$
\|\nabla q^\varepsilon_k\|_{L^p(\cup_iC^\varepsilon_i)}\le C\varepsilon^{6(\frac1p-1)}, \qquad(9)
$$
$$
\|\nabla w^\varepsilon_k\|_{L^2(\cup_iB^\varepsilon_i\setminus B^{\varepsilon/4}_i)}+\|q^\varepsilon_k\|_{L^2(\cup_iB^\varepsilon_i\setminus B^{\varepsilon/4}_i)}\le C\varepsilon, \qquad(10)
$$
where the constant $C>0$ does not depend on $\varepsilon$.
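The scaling in (8) can be recovered by a change of variables. The short computation below is a sketch; it uses the standard decay $|\nabla w_k(y)|\lesssim|y|^{-2}$ of the exterior Stokes solution, which is not stated above:

```latex
\|\nabla w^\varepsilon_k\|_{L^p(\cup_i C^\varepsilon_i)}^p
  = \sum_{i\in K_\varepsilon}\int_{C^\varepsilon_i}
      \varepsilon^{-3p}\,\big|(\nabla w_k)(x/\varepsilon^3)\big|^p\,dx
  = |K_\varepsilon|\,\varepsilon^{9-3p}
      \int_{B_{1/(2\varepsilon^2)}(0)\setminus T}|\nabla w_k(y)|^p\,dy
  \le C\,\varepsilon^{-3}\cdot\varepsilon^{9-3p}
  = C\,\varepsilon^{6-3p},
```

where the $y$-integral is bounded uniformly in $\varepsilon$ for $p>\frac32$, because $|\nabla w_k(y)|^p\lesssim|y|^{-2p}$ is integrable at infinity precisely when $2p>3$. Taking $p$-th roots gives $\|\nabla w^\varepsilon_k\|_{L^p}\le C\varepsilon^{(6-3p)/p}=C\varepsilon^{3(\frac2p-1)}$, matching (8).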
Moreover, we have the following Theorem due to Allaire: The functions (q ε k , w ε k ) fulfill: (H1) q ε k ∈ L 2 (D), w ε k ∈ W 1,2 (D); (H2) div w ε k = 0 in D and w ε k = 0 on the holes T ε i ; (H3) w ε k ⇀ e k in W 1,2 (D), q ε k ⇀ 0 in L 2 (D)/ R; (H4) For any ν ε , ν ∈ W 1,2 (D) with ν ε = 0 on the holes T ε i and ν ε ⇀ ν, and any ϕ ∈ D(D), we have ∇q ε k − ∆w ε k , ϕν ε W −1,2 (D),W 1,2 0 (D) → M e k , ϕν W −1,2 (D),W 1,2 0 (D) , where the resistance matrix M ∈ W −1,∞ (D) is defined by its entries M ik via M ik , ϕ D ′ (D),D(D) = lim ε→0ˆD ϕ∇w ε i : ∇w ε k dx (11) for any test function ϕ ∈ D(D). Further, for any p ≥ 1, w ε k − e k L p (D) → 0. Bogovskiȋ's operator and uniform bounds for the steady Navier-Stokes equations As in [6], we have the following result for the inverse of the divergence operator: B ε : f ∈ L q (D ε ) :ˆD ε f dx = 0 → W 1,q 0 (D ε ) such that for any f ∈ L q (D ε ) with´D ε f dx = 0, div B ε (f ) = f in D ε , B ε (f ) W 1,q 0 (Dε) ≤ C 1 + ε 3 2 q −1 f L q (Dε) , where the constant C > 0 does not depend on ε. We will use this result to bound the pressure p ε by the density ̺ ε . Since the main ideas how to get uniform bounds on u ε , ̺ ε , and p ε are given in [13], we just sketch the proof in our case. First, by Korn's inequality and (5), we find µ ∇u ε 2 L 2 (Dε) ≤ ̺ ε L 6 5 (Dε) u ε L 6 (Dε) f L ∞ (D) + g L ∞ (D) u ε L 1 (Dε) . Together with Sobolev embedding, we obtain u ε L 6 (Dε) ≤ C ∇u ε L 2 (Dε) , which yields u ε L 6 (Dε) + ∇u ε L 2 (Dε) ≤ C( ̺ ε L + 1).(12) To get uniform bounds on the velocity, we first have to estimate the density. To this end, let B ε be as in Theorem 4.1. Testing the first equation in (2) with B ε (p ε ) ∈ W 1,2 0 (D ε ) yields p ε 2 L 2 (Dε) =ˆD ε p ε div B ε (p ε ) dx =ˆD ε S(∇u ε ) : ∇B ε (p ε ) − (̺ ε u ε ⊗ u ε ) : ∇B ε (p ε ) − (̺ ε f + g) · B ε (p ε ) dx. 
Recalling ̺ ε ∈ L 2γ (D ε ) and γ ≥ 3, this leads to p ε 2 L 2 (Dε) ≤ C( ∇u ε L 2 (Dε) + ̺ ε L 6 (Dε) u ε 2 L 6 (Dε) ) ∇B ε (p ε ) L 2 (Dε) + C f L ∞ (Dε) ̺ ε L 2γ (Dε) + g L ∞ (Dε) B ε (p ε ) L 2 (Dε) (12) ≤ C( ̺ ε L 6 5 (Dε) + 1 + ̺ ε L 6 (Dε) ( ̺ ε 2 L 6 5 (Dε) + 1)) ∇B ε (p ε ) L 2 (Dε) + C( ̺ ε L 2γ (Dε) + 1) B ε (p ε ) L 2 (Dε) ≤ C( ̺ ε L 2γ (Dε) + ̺ ε L 6 (Dε) ̺ ε 2 L 6 5 (Dε) + 1) B ε (p ε ) W 1,2 0 (Dε) ≤ C( ̺ ε L 2γ (Dε) + ̺ ε 3 L 2γ (Dε) + 1) B ε (p ε ) W 1,2 0 (Dε) ≤ C( ̺ ε L 2γ (Dε) + ̺ ε 3 L 2γ (Dε) + 1) p ε L 2 (Dε) , that is, p ε L 2 (Dε) ≤ C( ̺ ε L 2γ (Dε) + ̺ ε 3 L 2γ (Dε) + 1).(13) Further, we have ̺ ε ε = 1 |D ε |ˆD ε ̺ ε dx = m |D ε | and 1 ε β ̺ γ ε − ̺ ε γ ε L 2 (Dε) ≤ C ε β ̺ γ ε − ̺ γ ε ε L 2 (Dε) (4) = C p ε L 2 (Dε) , see [13, Section 3.3 and inequality (4.7)]. This yields 1 ε β ̺ γ ε − ̺ ε γ ε L 2 (Dε) ≤ C p ε L 2 (Dε) ≤ C( ̺ ε L 2γ (Dε) + ̺ ε 3 L 2γ (Dε) + 1) ≤ C ̺ γ ε − ̺ ε γ ε 1 γ L 2 (Dε) + m |D ε | 1−1/(2γ) + ̺ γ ε − ̺ ε γ ε 3 γ L 2 (Dε) + m 3 |D ε | 3−3/(2γ) + 1 . Together with ab 1 p ≤ b + a p ′ ∀a, b > 0, 1 p + 1 p ′ = 1, which is a consequence of Young's inequality, we obtain, using γ ≥ 3 and the fact that we may assume ε ≤ 1 small enough, 1 ε β ̺ γ ε − ̺ ε γ ε L 2 (Dε) ≤ 1 4ε β ̺ γ ε − ̺ ε γ ε L 2 (Dε) + C + 1 4ε β ̺ γ ε − ̺ ε γ ε L 2 (Dε) + C ′ = 1 2ε β ̺ γ ε − ̺ ε γ ε L 2 (Dε) + C. Using that |̺ ε − ̺ ε ε | γ ≤ |̺ γ ε − ̺ ε γ ε |, which is a consequence of the triangle inequality for the metric d(a, b) = |a − b| 1 γ for γ ≥ 1, we conclude 1 ε β ̺ ε − ̺ ε ε γ L 2γ (Dε) ≤ 1 ε β ̺ γ ε − ̺ ε γ ε L 2 (Dε) ≤ C, which further gives rise to ̺ ε L 2γ (Dε) ≤ ̺ ε − ̺ ε ε L 2γ (Dε) + C ̺ ε ε ≤ C. In view of (12) and (13), we finally establish u ε W 1,2 0 (Dε) ≤ C, ̺ ε L 2γ (Dε) ≤ C, p ε L 2 (Dε) ≤ C, ̺ ε − ̺ ε ε L 2γ (Dε) ≤ Cε β γ(14) for some constant C > 0 independent of ε. Convergence proof for the steady case The proof of convergence we give here is essentially the same as in [13]. 
We thus just sketch the steps done there while highlighting the differences. Proof of Theorem 2.3. Step 1: Recall that, for a function f defined on D ε , we denote byf its zero prolongation to R 3 . By the uniform estimates (14), we can extract subsequences (not relabeled) such thatũ ε ⇀ u weakly in W 1,2 0 (D), p ε ⇀ p weakly in L 2 (D), ̺ ε → ̺ 0 strongly in L 2γ (D), where ̺ 0 = m/|D| > 0 is constant. The strong convergence of the density is obtained by ̺ ε − ̺ 0 L 2γ (D) ≤ ̺ 0 L 2γ (D\Dε) + ̺ ε − ̺ ε ε L 2γ (Dε) + ̺ ε ε − ̺ 0 L 2γ (Dε) ≤ ̺ 0 |D \ D ε | 1 2γ + Cε β γ + m|D ε | 1 2γ 1 |D ε | − 1 |D| → 0, since |D ε | → |D|. Due to Rellich's theorem, we further havẽ u ε → u strongly in L q (D) for all 1 ≤ q < 6. Step 2: We begin by proving that the limiting velocity u is solenoidal. To this end, let ϕ ∈ D(R 3 ). By the second equation of (2), we have 0 =ˆR 3̺ εũε · ∇ϕ dx → ̺ 0ˆD u · ∇ϕ dx. This together with the compactness of the trace operator yields div u = 0 in D, u = 0 on ∂D. (15) Step 3: To prove convergence of the momentum equation, let ϕ ∈ D(D) and use ϕw ε k as test function in the first equation of (2). This yieldŝ D S(∇ũ ε ) : ∇(ϕw ε k )dx =ˆD(̺ εũε ⊗ũ ε ) : ∇(ϕw ε k )dx +ˆDp ε div(ϕw ε k )dx +ˆD(̺ ε f + g) · (ϕw ε k )dx. Using the definition of S in (3) and the fact that div(w ε k ) = 0 by (H2) of Theorem 3.1, we rewrite the left hand side aŝ D S(∇ũ ε ) : ∇(ϕw ε k ) dx = µˆD ∇ũ ε : ∇(ϕw ε k ) dx + µ 3 + η ˆD div(ũ ε ) div(ϕw ε k ) dx = µˆD ∇w ε k : ∇(ϕũ ε ) + ∇ũ ε : (w ε k ⊗ ∇ϕ) − ∇w ε k : (ũ ε ⊗ ∇ϕ) dx + µ 3 + η ˆD div(ũ ε )w ε k · ∇ϕ dx and add the term −´D q ε k div(ϕũ ε ) dx to both sides to obtain µˆD∇w ε k : ∇(ϕũ ε ) − q ε k div(ϕũ ε ) dx I 1 + µˆD∇ũ ε : (w ε k ⊗ ∇ϕ) − ∇w ε k : (ũ ε ⊗ ∇ϕ) dx I 2 + µ 3 + η ˆD div(ũ ε )w ε k · ∇ϕ dx I 3 =ˆD(̺ εũε ⊗ũ ε ) : ∇(ϕw ε k ) dx I 4 +ˆDp ε w ε k · ∇ϕ + (̺ ε f + g) · (ϕw ε k ) dx I 5 −ˆD q ε k div(ϕũ ε ) dx I 6 . 
Since ν ε :=ũ ε and ν := u fulfill hypothesis (H4) of Theorem 3.1, we have I 1 → µ M e k , ϕu , where ·, · denotes the dual product of W −1,2 (D) and W 1,2 0 (D). Further, byũ ε → u strongly in L 2 (D) and ∇w ε k ⇀ 0 by hypothesis (H3), I 2 → µˆD ∇u : (e k ⊗ ∇ϕ) dx. Because of w ε k → e k strongly in L 2 (D) and (15), we deduce I 3 → 0, I 5 →ˆD p e k · ∇ϕ + (̺ 0 f + g) · (ϕe k ) dx. Step 4: To show convergence of I 4 , we proceed as follows. First, since u ε = 0 on ∂D ε andũ ε ⇀ u in W 1,2 (D), we have ∇u ε = ∇ũ ε ⇀ ∇u in L 2 (D). Second, as shown above for γ ≥ 3,̺ ε → ̺ 0 strongly in L 2γ (D) andũ ε → u strongly in L q (D) for any 1 ≤ q < 6, in particular in L 4 (D). Together with the strong convergence of w ε k in any L p (D) (see Theorem 3.1), in particular in L 12 (D), we get ̺ εũε ⊗ w ε k → ̺ 0 u ⊗ e k strongly in L 2 (D). This together with div(̺ ε u ε ) = 0 yields I 4 =ˆD ε (̺ ε u ε ⊗ u ε ) : ∇(ϕw ε k ) dx = −ˆD ε ̺ ε u ε · ∇u ε · ϕw ε k dx = −ˆD ε ϕ∇u ε : (̺ ε u ε ⊗ w ε k ) dx = −ˆD ϕ∇ũ ε : (̺ εũε ⊗ w ε k ) dx → −ˆD ϕ∇u : (̺ 0 u ⊗ e k ) dx =ˆD(̺ 0 u ⊗ u) : ∇(ϕe k ) dx. In the case γ > 3, one can also proceed by seeing that ̺ εũε ⊗ũ ε → ̺ 0 u ⊗ u strongly in L 2 (D), where we used thatũ ε → u strongly in L q (D) for q = 4γ/(γ − 1) < 6. Step 5: It remains to show convergence of I 6 . First, recall B r i = B r (x ε i ). We follow the idea of [13] and introduce a further splitting of the integral: Let ψ ∈ C ∞ c (B 1/2 (0)) be a cut-off function with ψ = 1 on B 1/4 (0), define for x ∈ B ε/2 i the function ψ i ε (x) := ψ((x − x ε i )/ε), and extend ψ i ε by zero to the whole of D. Set finally ψ ε (x) := i:P ε i ⊂D ψ i ε (x), where P ε i is the cell of size 2ε with center x ε i ∈ (2ε Z) 3 . Then we have ψ ε ∈ C ∞ c ( i B ε/2 i ) and ψ ε = 1 in i B ε/4 i , |∇ψ ε | ≤ Cε −1 .(16) With this at hand, we write ̺ ε ε · I 6 = ̺ ε εˆD ε q ε k ψ ε div(ϕu ε ) dx + ̺ ε εˆD ε q ε k (1 − ψ ε )ϕ div(u ε ) dx + ̺ ε εˆD ε q ε k (1 − ψ ε )u ε · ∇ϕ dx =: I 1 + I 2 + I 3 . 
Observe that since supp ψ ε ⊂ ∪ i B ε/2 i , the term I 1 covers the behavior of q ε k "near" the holes, whereas I 2 and I 3 cover the behavior "far away". Since q ε k and ψ ε are (2ε)-periodic functions and q ε k ψ ε ∈ L 2 (D), we have q ε k ψ ε ⇀ 0 in L 2 (D)/ R. This together withũ ε → u strongly in L 2 (D) yields |I 3 | → 0. For I 2 , we use the definition of q ε k and (10) to find |I 2 | ≤ CˆD \∪ i B ε/4 i |q ε k | | div(u ε )| dx (14) ≤ C q ε k L 2 (D\∪ i B ε/4 i ) = C q ε k L 2 (∪ i B ε i \B ε/4 i ) ≤ Cε → 0. To prove I 1 → 0, we write, using div(̺ ε u ε ) = 0, I 1 =ˆD ε ∇(q ε k ψ ε ϕ) · (̺ ε u ε ) dx −ˆD ε ∇(q ε k ψ ε ϕ) · ( ̺ ε ε u ε ) dx + ̺ ε εˆD ε q ε k ψ ε u ε · ∇ϕ dx =ˆD ε ∇(q ε k ψ ε ϕ)(̺ ε − ̺ ε ε ) · u ε dx + o(1). Here, we used again the periodicity of q ε k and ψ ε to conclude q ε k ψ ε ⇀ 0 in L 2 (D)/ R. This and the strong convergence ofũ ε to u in L 2 (D) shows that the last term vanishes in the limit ε → 0. For the remaining integral, we find, recalling supp ψ ε ⊂ ∪ i B ε/2 i and C ε i = B ε/2 i \ T ε i , |I 1 | ≤ ∇(q ε k ψ ε ϕ) L 2γ γ−1 (∪ i C ε i ) ̺ ε − ̺ ε ε L 2γ (Dε) u ε L 2 (Dε) + o(1) ≤ Cε β γ ∇(q ε k ψ ε ϕ) L 2γ γ−1 (∪ i C ε i ) + o(1). Since |∇ψ ε | ≤ Cε −1 , we have |∇(q ε k ψ ε ϕ)| ≤ C |∇q ε k | + 1 ε |q ε k | , thus |I 1 | ≤ Cε β γ ∇q ε k L 2γ γ−1 (∪ i C ε i ) + 1 ε q ε k L 2γ γ−1 (∪ i C ε i ) + o(1). Together with (8) and (9) for p = 2γ/(γ − 1) > 3/2, we establish |I 1 | ≤ Cε β γ ε −3− 3 γ + ε −1− 3 γ + o(1) ≤ Cε −3+ β−3 γ + o(1) → 0, provided β > 3 (γ + 1). To summarize, we have in the limit ε → 0 for all functions ϕ ∈ D(D) µ M e k , ϕu − µ ∆u, ϕe k = − div(̺ 0 u ⊗ u), ϕe k + ̺ 0 f + g − ∇p, ϕe k . Since M is symmetric, this is ∇p + ̺ 0 u · ∇u − µ∆u + µM u = ̺ 0 f + g in D ′ (D), which is the first equation of (6). This finishes the proof. Remark 3 . 2 . 
This definition of M yields that the matrix is symmetric and positive definite in the sense that, for all test functions ϕ_i ∈ D(D) and Φ = (ϕ_i)_{1≤i≤3}, ⟨MΦ, Φ⟩_{D′(D),D(D)} ≥ 0. In particular, there exists at least one solution to system (6).

Acknowledgement. The authors were partially supported by the German Science Foundation DFG in the context of the Emmy Noether Junior Research Group BE 5922/1-1.

References

[1] Grégoire Allaire, Homogenization of the Stokes flow in a connected porous medium, Asymptotic Anal. 2 (1989), no. 3, 203–222.
[2] Grégoire Allaire, Homogenization of the Navier-Stokes equations in open sets perforated with tiny holes. I. Abstract framework, a volume distribution of holes, Arch. Rational Mech. Anal. 113 (1990), no. 3, 209–259.
[3] Grégoire Allaire, Homogenization of the Navier-Stokes equations in open sets perforated with tiny holes. II. Noncritical sizes of the holes for a volume distribution and a surface distribution of holes, Arch. Rational Mech. Anal. 113 (1990), no. 3, 261–298.
[4] Peter Bella and Florian Oschmann, Inverse of divergence and homogenization of compressible Navier-Stokes equations in randomly perforated domains, arXiv preprint arXiv:2103.04323 (2021).
[5] Doïna Cioranescu and François Murat, Un terme étrange venu d'ailleurs. I, Nonlinear partial differential equations and their applications. Collège de France Seminar, Vol. III, Res. Notes in Math., vol. 70, Pitman, Boston, Mass.-London, 1982, pp. 154–178, 425–426.
[6] Lars Diening, Eduard Feireisl, and Yong Lu, The inverse of the divergence operator on perforated domains with applications to homogenization problems for the compressible Navier-Stokes system, ESAIM Control Optim. Calc. Var. 23 (2017), no. 3, 851–868.
[7] Eduard Feireisl and Yong Lu, Homogenization of stationary Navier-Stokes equations in domains with tiny holes, J. Math. Fluid Mech. 17 (2015), no. 2, 381–392.
[8] Giovanni Paolo Galdi, An introduction to the mathematical theory of the Navier-Stokes equations: Steady-state problems, second ed., Springer Monographs in Mathematics, Springer, New York, 2011.
[9] Arianna Giunti, Derivation of Darcy's law in randomly punctured domains, arXiv preprint arXiv:2101.01046 (2021).
[10] Arianna Giunti and Richard Matthias Höfer, Homogenisation for the Stokes equations in randomly perforated domains under almost minimal assumptions on the size of the holes, Ann. Inst. H. Poincaré Anal. Non Linéaire 36 (2019), no. 7, 1829–1868.
[11] Arianna Giunti, Richard Matthias Höfer, and Juan J. L. Velázquez, Homogenization for the Poisson equation in randomly perforated domains under minimal assumptions on the size of the holes, Comm. Partial Differential Equations 43 (2018), no. 9, 1377–1412.
[12] Matthieu Hillairet, On the homogenization of the Stokes problem in a perforated domain, Arch. Ration. Mech. Anal. 230 (2018), no. 3, 1179–1228.
[13] Richard Matthias Höfer, Karina Kowalczyk, and Sebastian Schwarzacher, Darcy's law as low Mach and homogenization limit of a compressible fluid in perforated domains, Math. Models Methods Appl. Sci. 31 (2021), no. 9, 1787–1819.
[14] Yong Lu and Sebastian Schwarzacher, Homogenization of the compressible Navier-Stokes equations in domains with very tiny holes, J. Differential Equations 265 (2018), no. 4, 1371–1406.
[15] Nader Masmoudi, Homogenization of the compressible Navier-Stokes equations in a porous medium, ESAIM Control Optim. Calc. Var. 8 (2002), 885–906.
[16] Antonín Novotný and Ivan Straškraba, Introduction to the Mathematical Theory of Compressible Flow, Oxford University Press, Oxford, 2004.
[17] Luc Tartar, Incompressible fluid flow in a porous medium: convergence of the homogenization process, appendix of Non-Homogeneous Media and Vibration Theory, 1980.
[18] Roger Temam, Navier-Stokes Equations: Theory and Numerical Analysis, North-Holland Publishing Company, Amsterdam, 1977.
StartupBR: Higher Education's Influence on Social Networks and Entrepreneurship in Brazil

Michelle Reddy (Graduate School of Education, Stanford University), Júlio C. Nardelli (Federal University of Technology - Paraná), Yuri L. Pereira (Federal University of Minas Gerais), Marisa Vasconcelos (IBM Research), Thiago H. Silva (Federal University of Technology - Paraná), Leonardo B. Oliveira (Federal University of Minas Gerais; Computer Science Department, Stanford University), Mark Horowitz (Computer Science Department, Stanford University)

DOI: 10.1007/s13278-022-01011-6 · arXiv: 1904.12026

Abstract. Developing and middle-income countries increasingly emphasize higher education and entrepreneurship in their long-term development strategy. Our work focuses on the influence of higher education institutions (HEIs) on startup ecosystems in Brazil, an emerging economy. First, we describe regional variability in entrepreneurial network characteristics. Then we examine the influence of elite HEIs in economic hubs on entrepreneur networks. Second, we investigate the influence of the academic trajectories of startup founders, including their courses of study and HEIs of origin, on the fundraising capacity of startups. Given the growing capability of social media databases such as Crunchbase and LinkedIn to provide startup and individual-level data, we draw on computational methods to mine data for social network analysis. We find that HEI quality and the maturity of the ecosystem influence startup success. Our network analysis illustrates that elite HEIs have powerful influences on local entrepreneur ecosystems. Surprisingly, while the most nationally prestigious HEIs in the South and Southeast have the longest geographical reach, their network influence still remains local.
Keywords: Higher Education · Entrepreneurship · Social Networks
Introduction

Entrepreneurship and higher education are increasingly viewed as drivers of long-term sustainable economic development. Despite this strong policy focus, studies of entrepreneur networks focus exclusively on high-income countries. For resource-rich and emerging economies like Brazil, transitioning to a more sustainable, knowledge-based economy, higher education and entrepreneurship are particularly important. At the same time, many resource-rich and emerging economies have high levels of inequality. Brazil is characterized by spatial inequalities, most notably among regions, evident in the stark contrast between the Brazilian North and Northeast and the economic hub of the South and Southeast, despite improvements in recent years [39]. Educational inequality in particular is well-documented, particularly in Brazil [38] and across emerging economies [3], and globally in terms of higher education access [25][26][27]. If entrepreneurship is heralded as the pathway towards sustainable development, to what extent do Higher Education Institutions (HEIs) influence entrepreneur networks? Will elite HEIs perpetuate existing inequalities, particularly across regions, by having more influence on startup ecosystems?

Using social network analysis and mining public Web data of Brazilian entrepreneurs, we hypothesize that entrepreneur networks in regionally disadvantaged areas, such as the Brazilian Northeast, are closely linked with networks from top universities in the wealthier Southeast. We also conceive that the nature of networks will vary by region, given their varying levels of development, as some regions have more access to capital and others to natural resources. In addition, we test our assumption that, at the regional level, elite HEIs will influence startups through the social networks formed through HEIs.
Notably, we discuss how elite HEIs, according to national educational quality rankings, drive the success of a regional entrepreneurial ecosystem. Overall, we investigate the nature of these networks within Brazil and examine how universities contribute to Brazil's regional entrepreneur networks. Specifically, we aim to address the following questions: i) To what extent do HEIs influence entrepreneurial networks, within and outside their region? ii) How do entrepreneur networks vary by region in Brazil? iii) Are entrepreneur networks mostly embedded in elite HEI networks? As networks provide entrepreneurs with information, capital, and services, we examine entrepreneur networks in Latin America's largest startup ecosystem, Brazil, in this study [22,35]. We chose Brazil because there is limited, if any, empirical analysis of the conditions fostering high-tech regions in middle-income countries. In particular, Brazil is a suitable case because while specific regions are middle-income, others, such as the Northeast, have GDPs similar to low-income countries. Given these stark contrasts, we look at HEIs and their influence on regional entrepreneur networks. Through our Brazil analysis, we explore HEI influence on high-tech ecosystems in both a middle and low-income context. We use computational methods to mine public data from an online database regarding entrepreneurs in Brazil and triangulate with information publicly available in a social media network, as well as with official open data from Brazil's Ministry of Education. First, we download data regarding our target Brazilian startup ecosystems from Crunchbase [8] database. Second, we collect relevant data from LinkedIn to enrich our initial data on startup ecosystems. Finally, we add information about the General Index of Courses, which is an official indicator of quality concerning HEIs in Brazil. 
Note that the use of computational methods here is especially important since Brazil is a continental country and conventional data collection methods like questionnaires and interviews do not scale well. First, we characterize Brazil's entrepreneur network at the national level. Then, we create a framework for investigating the influence of HEIs, in par-ticular, elite HEIs, degree programs, and educational quality, on entrepreneur networks. Overall, our study contributes to education, entrepreneurship, and development research. Related Work The entrepreneurial university is a global phenomenon, due to the internal development of the university [11] and as the transition to a knowledge-based economy became a goal for sustainable economic development [36]. Entrepreneurial activities enhance national and regional economic growth as well as university finances [11]. Just as Brazil made strides in terms of startup growth in the past decade, it has exponentially increased access to higher education. Yet, inequalities still remain. Other resource-rich countries, such as Qatar [18], Malaysia, and Saudi Arabia [20], increasingly invest in higher education to move towards a knowledge-based economy and away from natural resource dependency. The influence of universities on sciences and technology-based industries is well-documented (see for example [37]). University-industry linkages include the movement of university graduates into commercial firms and faculty entrepreneurship, faculty involvement on advisory boards, industry gifts supporting university research and student training, among others [34]. There is a tendency for the research and development efforts of organizations to spillover into the innovation efforts of other organizations [17], which can occur across industries but is particularly acute within regions, and amplified when key participants are research organizations [10,32]. 
In particular, HEIs, and their relationship with industry, may be more favorable in certain regions than in others [34], especially if more elite universities are clustered in economically wealthy regions. The technological revolution enabled new entrepreneurial initiatives worldwide, creating an enabling environment for business without the startup costs of the larger firms that dominated the economic landscape of the mid-twentieth century in developed countries [7]. While technology is vital in the rise of entrepreneurship worldwide, as Banerji and Reimber [4] note, the importance of social networks on entrepreneurship is intuitive. In particular, potential funding agencies predict startup success by examining the social networks of founders [4], and networks provide information and opportunities [6,21], and legitimacy [19]. The role of social ties in entrepreneur networks has also been observed by numerous studies. In particular, Zimmer and Aldrich [43] note the importance of social networks on all three aspects of entrepreneurial success: launching a startup, turnover, and sustainability. These findings hold across several cultural contexts, for instance, in China [5], as well as for ethnic minorities in the United States [23]. Therefore, our study results are potentially useful for other cultural contexts, and in particular, middle-income and developing countries. Data and Methods Overview We explore three datasets in this study, namely: Crunchbase. Crunchbase is a global database updated daily that contains information about companies, funders, and staff [9,12]. As a partially crowdsourced database, Crunchbase is increasingly used for academic and commercial purposes [9]. We acquired a commercial license enabling unlimited access in addition to advanced search functions in Crunchbase. We procured all available data of Brazilian companies up to August 26, 2018. 
For 3,375 companies throughout Brazil, we include company name; LinkedIn profile URL; founding date; company type (or category); the total investment received; and headquarters location. As Crunchbase uniquely links other data sources such as Twitter [40,41] and LinkedIn [9,30], we linked Crunchbase with LinkedIn to examine characteristics of startup founders, their universities, and their social networks.

LinkedIn. LinkedIn, as a popular social network of professional contacts, provided the educational information of the company founders. We collected the profiles of employees that held titles such as CEO, owner, and founder via the LinkedIn profile URL obtained through Crunchbase. In the end, this yielded 1,177 profiles, and the main data collected were: degree type/level (e.g., Bachelor, Master, or Ph.D.); degree area (e.g., Sociology or Computer Science); graduation year; and the name of the alma mater. Multiple degrees for the same profile were common. We collected all information available on the LinkedIn profiles.

IGC. The General Index of Courses (IGC [14,31]) is the official quality indicator for HEIs in Brazil. Annually, the National Institute of Educational Studies and Research (INEP [15,31]) performs the Census of Higher Education (CENSUP [16]), which is used to calculate the IGC, a metric of HEI quality. We used the IGC to classify HEIs as elite or non-elite institutions.

Data Pre-Processing. For data pre-processing, we first obtained the geolocation of company and HEI addresses. We used the Google Geocode API to yield a formatted address and geographic coordinates (i.e., latitude and longitude) for each one. We also standardized the name field. As Crunchbase has over 1,400 different categories for companies, we matched Crunchbase categories to the categories used by the Brazilian Association of Startups (Abstartups) [1].
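The geocoding part of this pre-processing step can be sketched as follows. The helper name and the sample payload below are our own illustrations (they are not part of the original pipeline); the payload merely mimics the JSON shape returned by Google's geocoding service, with the `formatted_address` and `geometry.location` fields mentioned above.

```python
def parse_geocode_result(response: dict):
    """Extract (formatted_address, (lat, lng)) from a Google-Geocoding-style
    JSON response; return None when the lookup yielded no result."""
    if response.get("status") != "OK" or not response.get("results"):
        return None
    top = response["results"][0]
    loc = top["geometry"]["location"]
    return top["formatted_address"], (loc["lat"], loc["lng"])

# Illustrative payload mimicking the API's response shape.
sample = {
    "status": "OK",
    "results": [{
        "formatted_address": "Av. Paulista, 1000 - São Paulo, SP, Brazil",
        "geometry": {"location": {"lat": -23.5614, "lng": -46.6559}},
    }],
}
address, coords = parse_geocode_result(sample)
```

Each company and HEI address would be passed through such a helper once, with results cached so repeated addresses are not re-queried.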
Since LinkedIn users report their educational background by open response, we standardized the names of HEIs using the IGC list from the INEP website [13] and matched IGC and LinkedIn HEI names phonetically, with manual coding when necessary.

Here, we describe our methodological approach to identifying startup ecosystems. Most studies of entrepreneur networks are rich in interview and survey data [4]; see, for example, Zimmer and Aldrich (1987) [43], Bates (1987) [5], and Light (1984) [23]. Recent access to databases such as LinkedIn and Crunchbase facilitates more generalizable results, given the ability to generate a larger sample size [4]. Thus, we draw on the LinkedIn and Crunchbase databases and use social network analysis to test our main research questions.

We considered as startups those companies that are at most 15 years old. From the 3,375 companies we extracted from Crunchbase, we selected only 1,957 (57.98%). Next, we grouped the startups by city and considered those cities with at least ten startups as ecosystems. Then, we examined only startups associated with our ecosystems. As a result, we obtained 21 ecosystems covering 1,547 startups (45.83%) of our initial set. We then collected founders' data from LinkedIn, yielding 146 HEIs and 648 academic degrees of founders. (Table 4 summarizes our dataset numbers.) Figure 1 shows the geographical distribution of the ecosystems present in our dataset.

To address the gap in the number of startups among Brazilian ecosystems, we divided the ecosystems into mature and emerging ecosystems. Figure 2 illustrates the difference between the two groups. Ecosystems with 74 startups or more are considered mature. According to Crunchbase, the largest ecosystems (Table 1) are in Brazilian state capitals such as São Paulo (SP), Rio de Janeiro (RJ), Belo Horizonte (MG), Porto Alegre (RS), Curitiba (PR), and Florianópolis (SC). Together, they comprise 79.82% of startups and 97.05% of total fundraising.
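The city-grouping rule just described can be written down in a few lines. The thresholds (at least ten startups to count as an ecosystem, 74 or more for maturity) come from the text; the function name and the sample city names are illustrative, not taken from the paper's data.

```python
from collections import Counter

def classify_ecosystems(startup_cities, min_size=10, mature_threshold=74):
    """Group startups by headquarters city; keep cities with at least
    `min_size` startups and label each as 'mature' or 'emerging'."""
    counts = Counter(startup_cities)
    return {
        city: ("mature" if n >= mature_threshold else "emerging")
        for city, n in counts.items()
        if n >= min_size
    }

# Illustrative input: one city name per startup.
cities = ["São Paulo"] * 80 + ["Fortaleza"] * 12 + ["Ouro Preto"] * 3
ecosystems = classify_ecosystems(cities)
# → {'São Paulo': 'mature', 'Fortaleza': 'emerging'}
```

Cities below the minimum size (here, the three-startup city) are simply dropped, which is why the 21 retained ecosystems cover only a fraction of the initial company set.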
All of the largest ecosystems are located in the South or Southeast, the economic hub of Brazil. The emerging ecosystems, on the other hand, encompass 15 cities (Table 2). Emerging ecosystem locations are more diverse in terms of region and city size, ranging from regional capitals like Brasilia (DF), Fortaleza (CE), and Goiânia (GO) to smaller cities like Uberlândia (MG), Joinville (SC), and São José dos Campos (SP).

Results

Section 4.1 explores the network relationship between startups and HEIs, and the academic trajectory of startup founders. Section 4.2 analyzes the relationship between startups and HEIs. Finally, we investigate how educational quality influences the success of an ecosystem in Section 4.3.

Network Characterization

Here we analyze founders in terms of HEI, major, and degree nature (type and level). Figure 3 shows the degrees held by company founders before and after company creation. The most common degree is the Bachelor's degree, followed by the MBA and then other master's degrees. We find that founders obtain most of their Bachelor's degrees before startup creation (Figure 3, left). Moreover, after startup creation (Figure 3, right), the demand for other courses increased by 50% (Master), 23% (MBA), 156% (Extension 7 ), and 647% (Ph.D.). This suggests that, after launching a startup, some founders may look for new educational opportunities that may add value to their business. Figure 4 presents the Bachelor's degree courses taken by the founders before startup creation. Most degrees come from STEM (Science, Technology, Engineering, and Mathematics) fields (≈59%) and the social sciences (≈39%). Computer Science is the most popular course of study among startup founders, and many other courses are related to Computer Science (e.g., Computer Engineering). Nearly half of the startups are in IT or Telecom, perhaps drawing on the computer science background of many founders (Figure 5).
Next, we examine whether founders of the same startup have similar academic trajectories. For each founder, we consider an academic trajectory vector whose i-th position represents the number of degrees concluded at HEI i. We then measure the academic trajectory similarity between each pair of founders of the same startup using cosine similarity, and average those values by startup. Figure 6 shows the cumulative distribution function (CDF) of this average similarity coefficient. Note that approximately 55% of the startups have non-zero cosine similarity, which means that their founders had at least one common HEI in their academic trajectory. By further investigating the data, we found that, among these startups, 83% have contemporary founders (i.e., founders who studied at the same HEI during the same period). Many founders may have met while at university, through acquaintances, or through other university affiliations, such as being in the same social network even after university.

Relationship Between HEIs and Startups

In this section, we analyze the network relationship between HEIs and startups. Using social network analysis, we compare the described ecosystems in terms of academic trajectories, connectivity, and spatial distribution. Finally, Section 4.3 analyzes the success of ecosystems as a function of HEI quality rankings.

Network Approach

We use an undirected bipartite graph G = (U, V, E), where nodes v_i ∈ V are startups, nodes u_j ∈ U are HEIs, and an edge e_{i,j} = (v_i, u_j) connects v_i and u_j if a founder of startup v_i is an alum of HEI u_j. For our analysis, we consider two networks of this kind: (i) Undergrad, comprising only Bachelor's degrees of founders; and (ii) All-Degrees, including any founder degree (Appendix B, Figure 10 and Figure 11, respectively). Both networks also include the HEIs that issued the degrees. Table 3 shows the top ten HEIs according to the networks' degree, closeness, and betweenness centralities [28].
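The trajectory-similarity computation described above can be sketched in pure Python, assuming the vector layout given in the text (the i-th entry counts degrees completed at HEI i); the toy founder vectors are ours, for illustration only.

```python
from itertools import combinations
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two academic trajectory vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def startup_similarity(founders):
    """Average pairwise cosine similarity over a startup's founders."""
    pairs = list(combinations(founders, 2))
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

# Toy example: 3 HEIs, 3 founders; founders 1 and 2 share HEI 0,
# founder 3 holds two degrees from HEI 2.
founders = [(1, 0, 0), (1, 1, 0), (0, 0, 2)]
sim = startup_similarity(founders)
```

A startup gets a non-zero average exactly when at least one founder pair shares an HEI, which is the criterion behind the "approximately 55%" figure above.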
HEIs Centrality

Degree centrality reflects the importance of a node through its number of connections. Notably, in our network, HEIs are only linked to startups, so the degree centrality expresses the direct influence of an HEI on startup formation. We found that the University of São Paulo (USP) is the most central node in both the Undergrad and All-Degrees networks (Table 3). In addition, an international HEI is top-ranked in All-Degrees: Stanford University. Upon closer examination, we find that these founders took extension courses at Stanford.

Broadly, closeness centrality captures the distance from a node to all other nodes in the network. Here, the closeness centrality suggests that more elite HEIs reach (or influence) the network faster. In terms of undergraduate degrees among founders, Universidade Estadual Paulista (UNESP), though not top-ranked according to degree centrality, appears in the 2nd position in terms of closeness centrality, likely because UNESP is present in 24 cities. Additionally, among the top-ranked HEIs in terms of founder undergraduate degrees, there is AIEC/FAAB [2], an HEI that offers online courses nationwide. Finally, FGV/SP, Stanford, and IBMEC are the most central HEIs in All-Degrees. This is likely due to their online course delivery and the high ranking of their business programs.

Betweenness centrality measures how often a node lies on the shortest path between two other nodes in the network. In our study, this metric unveils HEIs that connect distinct social circles and thus foster entrepreneurship. Here, Universidade Federal de Santa Catarina (UFSC) is the most central in Undergrad, and the Federal University of the State of Rio de Janeiro (UNIRIO) in All-Degrees (Table 3).

Finally, the top-ranked HEIs by centrality are, in general, elite (IGC ≥ 4) HEIs. (There are two exceptions whose IGC = 3, though: AIEC and FDMC.) Also, 95 of the 146 HEIs are located in major cities in the South or Southeast of Brazil, the economic hub of the country.
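The degree and closeness centralities discussed above can be computed directly on the bipartite affiliation graph. Below is a stdlib-only sketch on toy data (the edges are ours, not the paper's); betweenness, which requires enumerating all shortest paths, is omitted for brevity, and a library such as networkx provides all three measures.

```python
from collections import deque

# Tiny bipartite affiliation graph: an edge (startup, HEI) means a
# founder of the startup graduated from the HEI (toy data).
edges = [("s1", "USP"), ("s2", "USP"), ("s2", "UFRGS"),
         ("s3", "UFRGS"), ("s3", "UFSC"), ("s4", "USP")]

adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

# Degree centrality: for an HEI, the number of startups it is directly
# linked to, i.e., its direct influence on startup formation.
degree = {v: len(nbrs) for v, nbrs in adj.items()}

def closeness(v):
    """Closeness centrality of v under a common convention:
    (n - 1) / (sum of shortest-path distances to reachable nodes)."""
    dist = {v: 0}
    q = deque([v])
    while q:                       # breadth-first search from v
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

top_hei = max(["USP", "UFRGS", "UFSC"], key=degree.get)
```

On this toy graph, USP has the highest degree among HEIs and a closeness of 0.5.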
HEIs Spatial Degree Centrality

We draw on definitions by Lima and Musolesi [24] for our spatial degree analysis below. Each node i, i ∈ V or i ∈ U, in our affiliation network G = (U, V, E) is assigned a set of neighbour nodes, i.e., the neighbours of node i are the nodes j reachable from i through an edge e_{i,j} ∈ E. All of them are represented by points on Earth, P_i = {p_0^(i), p_1^(i), ..., p_|j|^(i)}, expressed through latitude and longitude.

For the spatial degree analysis, we first define a spatial neighborhood S as a circular region specified by its center and radius. Given a node i, its spatial coordinates (latitude and longitude) represent the center of a spatial neighborhood S_i with a certain radius. The intersection P_i ∩ S_i contains all the points, i.e., nodes representing HEIs and startups, falling inside the region S_i that are neighbors of i in G. In this way, we can compute the spatial degree centrality C of node i with spatial neighborhood S as:

C_{i,S} = |P_i ∩ S_i|    (1)

In this study, we are interested in the average spatial degree centrality C for HEIs. Thus, for our network G = (U, V, E) this metric is expressed as:

C_{U,S} = (1/|U|) Σ_{u∈U} C_{u,S},    (2)

where the set U represents the HEIs.

Figure 7 shows the spatial degree considering different non-overlapping spatial ranges, meaning the ranges are a circle and expanding annular rings around each HEI. This analysis takes into account a network composed of startups whose founders obtained any degree from any HEI in the 15 years before the startup creation. Most of the connections are short distance, up to 250 km, suggesting that the influence of HEIs is mostly local. However, we find that elite HEIs, such as PUC/SP, UNICAMP, and IBMEC, have the longest spatial ranges; therefore, their influence is more likely to extend beyond their local ecosystem and into other regions.
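Equation 1 amounts to counting a node's graph neighbours that fall inside a given spatial range. The sketch below uses the haversine distance to count neighbours within a disc or annular ring around an HEI; the coordinates are approximate toy values of ours, not the paper's data.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def spatial_degree(center, neighbours, r_min, r_max):
    """C_{i,S} of Eq. 1: neighbours of node i falling in the annulus
    r_min <= d < r_max around i (r_min = 0 gives a plain disc)."""
    return sum(r_min <= haversine_km(center, p) < r_max
               for p in neighbours)

# Toy example: an HEI in Sao Paulo linked to three startups.
hei = (-23.55, -46.63)                 # Sao Paulo
startups = [(-23.56, -46.64),          # Sao Paulo (local, ~1.5 km)
            (-22.91, -43.17),          # Rio de Janeiro (~360 km)
            (-8.05, -34.90)]           # Recife (~2,100 km)

local = spatial_degree(hei, startups, 0, 250)   # short-range links
```

Averaging `spatial_degree` over all HEIs for each ring gives Eq. 2 and the per-range profile plotted in Figure 7.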
In addition, we calculate the similarity of connections in the network with respect to the nodes' state, using the assortativity coefficient [29]. In general, the coefficient lies between −1 and 1; the network has perfectly assortative mixing patterns when the assortativity coefficient is 1. The assortativity coefficient by state is 0.72 for the same network studied in the spatial analysis. This means that the majority of connections happen between nodes from the same state, corroborating what is observed in the spatial degree centrality analysis. Thus, HEIs, in general, have more influence within their own region.

Elite HEI Alumni and Enhanced Startup Fundraising Capabilities

We also examine how education quality drives the success of an ecosystem. We assume that HEIs with higher educational quality rankings are perceived as more elite. We calculated the Pearson correlation between the HEIs' IGC and the HEIs' degree centrality, and constructed a scatter plot in Figure 8. The Pearson correlation is moderate, around 0.56 (p-value < 0.001). Figure 8 also plots the linear regression, with a 95% confidence interval of the best-fit line. These findings suggest that elite HEIs (with a high IGC rank) have more startup connections, and overall support our hypothesis that elite HEIs have more influence on startup ecosystems.

We also analyzed the fundraising capability (κ) of startups. Equation 3 describes how κ is calculated for a given startup i:

κ_i = F_{i,t_o} / (L_i × E_{i,t_o}),    (3)

where κ_i is the fundraising capability of startup i up to time t_o (now), F_{i,t_o} the total funds raised over its life cycle (from creation up to t_o), L_i the startup age in months, and E_{i,t_o} the current number of employees. This equation was also used by Perotti and Yu [33]. Figure 9 shows the cumulative distribution function of κ for startups whose founders are elite HEI alumni.
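Equation 3 is straightforward to evaluate per startup: total funds raised, normalised by age in months times current headcount. A minimal sketch with toy numbers (not from our data):

```python
def fundraising_capability(fund_raised, age_months, employees):
    """kappa of Eq. 3: funds raised up to now, divided by
    (startup age in months) x (current number of employees)."""
    return fund_raised / (age_months * employees)

# Toy example: $1.2M raised, 24 months old, 10 employees.
kappa = fundraising_capability(1_200_000, 24, 10)
```

Computing κ for every startup and sorting the values yields the CDF shown in Figure 9.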
A founder, or a group of founders from the same company, is considered a product of an elite HEI if the average IGC of all HEIs she/he attended is greater than or equal to four. There is a positive correlation between mature ecosystems and fundraising capability. In addition, startups whose founders are elite HEI alumni tend to have a higher κ. Finally, the combination of a mature ecosystem and elite HEI affiliation correlates with better fundraising capability, again supporting our hypothesis that elite HEIs have more influence on regional startup ecosystems.

Discussion

In this study, we investigate how HEIs contribute to Brazil's regional entrepreneur networks, and the nature of these networks. Our original hypothesis suggested that entrepreneur networks in regionally disadvantaged areas, such as the North and Northeast of Brazil, would be closely linked with networks from elite HEIs in the wealthier South and Southeast. Though we find that elite HEIs, such as PUC/SP, UNICAMP, and IBMEC, have the longest spatial ranges, their network influence is still mostly local and, for instance, does not extend to the North and Northeast. Overall, most of the connections between HEIs and startups are local. The most elite HEI within a region tends to have the most influence within the regional network. We also found a strong presence of Stanford University in our networks, likely reflecting the global influence of this university and Silicon Valley in business education, technology, entrepreneurship, and innovation.
In terms of variability among regional entrepreneur ecosystems, we found that the nature of networks varied by region, given their varied levels of development. While IT and Telecom are the most common sectors across regions, there is more variability in startup sectors in the wealthier Southeast than in other areas. In addition, the most economically disadvantaged area has a strong presence of health startups, likely due to historical underdevelopment. This is possibly an avenue for future research in terms of the potential of startups aimed at alleviating gaps in social service provision in low-income contexts.

Overall, we find support for our hypothesis that regional elite HEIs (the HEIs with the highest educational quality rankings in the region) influence regional startup ecosystems and the fundraising capability of founders. We also found that most startup founders were contemporaries while at university, meaning that they overlapped during their course of study, or met through social networks during or after their studies. We found that a majority of startup founders studied computer science, likely reflective of the strong presence of IT and telecom startups in Brazil.

Regarding the limitations of our dataset, though Crunchbase is the most comprehensive data source for startups, it does not include all startups; a lower percentage of startups are registered in Crunchbase in developing and emerging economies. Second, as we relied on data entered by individual users on LinkedIn to furnish information on founders, there may be entry mistakes or misinformation. However, we believe that this is rare, given that founders are incentivized to invest in a proper online profile. Third, founder characteristics may be biased towards those who are LinkedIn users, and therefore might not be representative. Overall, we used the newest data sources, and we are not aware of any existing studies targeting middle-income countries or low-income regions.
Conclusion

In this study, we take a unique conceptual approach in examining the influence of HEIs on entrepreneur networks in Brazil, a middle-income country with low-income regions. We employ an innovative methodological approach by mining publicly available data from Crunchbase, LinkedIn, and the official index of higher education institution quality to construct and examine the social networks of startup founders. We find that most of the founders were contemporaries at the same HEI and that entrepreneurs frequently seek additional training after startup creation. We observe that the most influential nodes in the network are elite HEIs, though their influence usually remains within a localized geographical range. The most nationally prestigious HEIs in the South and Southeast have the longest spatial range into other regions, yet remain fairly local and do not extend into the economically disadvantaged North and Northeast. In addition, we find that HEI quality and the maturity of the ecosystem influence startup success. While all of our mature startup ecosystems are in the wealthier South and Southeast, we see some regional movement in the top emerging ecosystems. Our findings, therefore, inform research in emerging, developing, and developed countries aiming to stimulate higher education and entrepreneurship, particularly in a context of regional inequality. We find support for the notion that contemporaries at HEIs, and in particular elite HEIs, have powerful influences on entrepreneur social networks. Our findings contribute to education, entrepreneurship, and development research more globally than studies exclusively focused on high-income countries, by examining entrepreneur networks in a middle-income country, Brazil, that also has low-income regions.

A Dataset Overview

Figure 1: Map of Brazilian Ecosystems. The redder and larger the circle, the greater the number of startups. The center of the circle indicates the location of the ecosystem.
Figure 2: Brazilian Ecosystems by City. Mature ecosystems are in the cities where the number of startups is greater than the national mean (green line), in contrast to emerging ecosystems.
Figure 3: Founders' degrees.
Figure 4: Popular majors pre-startup creation.
Figure 5 and Figure 6: Startup CDF of cosine similarity over HEI.
Figure 7: Spatial degree analysis for different spatial neighborhoods S.
Figure 8: Scatterplot of node degree centrality and IGC. Pearson correlation is 0.56 (p-value < 0.001).
Figure 9: CDF for fundraising capability (κ) of startups.
Figure 10: Undergrad network. Node colors represent the Brazilian state where they are located.
Figure 11: All-Degree network. Node colors represent the Brazilian state where they are located.
Figure 12: South.
Figure 13: Southeast.
Figure 14: Central-West.
Figure 15: Northeast.

Table 1: Mature startup ecosystems.

Ecosystem        Region     size(a)  Fundraising
São Paulo        Southeast  458      $2.6B
Rio de Janeiro   Southeast  323      $283.6M
Belo Horizonte   Southeast  151      $33.8M
Porto Alegre     South      114      $11.1M
Curitiba         South       95      $67.3M
Florianópolis    South       94      $43.5M

(a) size stands for the number of startups.

Table 2: Top emerging ecosystems.

Ecosystem     Region        size  Fundraising
Brasilia      Central-West  39    $3.7M
Recife        Northeast     38    $8.5M
Campinas      Southeast     36    $13.4M
Fortaleza     Northeast     28    $105.4K
SJ Campos     Southeast     25    $4.1M
Goiânia       Central-West  22    no info.
Barueri       Southeast     21    $10.9M
Joinville     South         21    $41.3M
Uberlândia    Southeast     15    $2.6M
João Pessoa   Northeast     14    $194.8K

Table 3: Top 10 HEIs per degree, closeness, and betweenness centrality.

         Degree                  Closeness                 Betweenness
Ranking  Undergrad  All-Degrees  Undergrad   All-Degrees   Undergrad  All-Degrees
01       USP        USP          FGV/SP      FGV/SP        UFSC       UNIRIO
02       UFRGS      PUC/SP       UNESP       STANFORD      FGV/SP     FGV/SP
03       UFRJ       UFRGS        USP         IBMEC         UNESP      PUC/SP
04       PUC/SP     FGV/SP       UFRJ        UFMG          USP        STANFORD
05       UFSC       UFRJ         UAM         UFRJ          UAM        USP
06       UFMG       UFSC         MACKENZIE   PUC/SP        PUC/RS     UFRJ
07       PUC/RS     UFMG         UFMG        USP           UFRJ       UFRGS
08       PUC/PR     IBMEC        FDMC        INSPER        PUC/SP     UFSC
09       PUC/MG     STANFORD     AIEC/FAAB   MACKENZIE     UFRGS      UFMG
10       FGV/SP     PUC/PR       PUC/SP      UAM           UFMG       IBMEC

Table 4: Dataset overview.

Startup creation period                 2004 to 2018
Number of startups                      1,547
Number of founders                      454
Number of HEIs                          146
Number of degrees obtained by founders  648

B Illustration of the Networks Studied

6 Stanford and USP are absent from the IGC rank. Yet, due to their academic excellence [42], we regarded them as elite HEIs.
7 In Brazil, extension courses are certified programs that do not require a Bachelor's degree, like continuing studies in the U.S.

E Summary of All Ecosystems Studied

References

1. Abstartups: O momento da startup Brasileira e o futuro do ecossistema de inovação. Abstartups and Accenture (2018), Available online at: http://abstartups.com.br/PDF/radiografia-startups-brasileiras.pdf
2. AIEC: Faculdade AIEC, https://www.aiec.br/, Online; accessed 14-April-2019
3. Balestra, C., Llena-Nozal, A., Murtin, F., Tosetto, E., Arnaud, B.: Inequalities in emerging economies (Dec 2018), https://doi.org/10.1787/6c0db7fb-en
4. Banerji, D., Reimer, T.: Startup founders and their LinkedIn connections: Are well-connected entrepreneurs more successful? Computers in Human Behavior 90, 46-52 (Jan 2019), https://doi.org/10.1016/j.chb.2018.08.033
5. Bates, T.: Financing small business creation: The case of Chinese and Korean immigrant entrepreneurs. Journal of Business Venturing 12(2), 109-124 (1997), https://doi.org/10.1016/S0883-9026(96)00054-7
6. Burt, R.S.: The network structure of social capital. Research in Organizational Behavior 22, 345-423 (2000), https://doi.org/10.1016/S0191-3085(00)22009-1
7. Crunchbase: Main site, https://www.crunchbase.com/, Online; accessed 14-April-2019
8. Dalle, J.M., den Besten, M., Menon, C.: Using Crunchbase for economic and managerial research (Nov 2017), https://doi.org/10.1787/6c418d60-en
9. Dasgupta, P., David, P.: Toward a new economics of science. Research Policy 23, 487-521 (1994), https://doi.org/10.1016/0048-7333(94)01002-1
10. Etzkowitz, H., Webster, A., Gebhardt, C., Terra, B.R.C.: The future of the university and the university of the future: evolution of ivory tower to entrepreneurial paradigm.
Research Policy 29(2), 313-330 (2000), https://doi.org/10.1016/S0048-7333(99)00069-4
11. Eugene, L.Y., Yuan, S.D.: Where's the Money? The Social Behavior of Investors in Facebook's Small World. In: 2012 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 158-162 (Aug 2012), https://doi.org/10.1109/ASONAM.2012.36
13. INEP: Censo da Educação Superior, http://inep.gov.br/educacao-superior, Online; accessed 14-April-2019
14. INEP: Indice geral de cursos (IGC), http://inep.gov.br/en/indice-geral-de-cursos-igc-, Online; accessed 14-April-2019
15. INEP: Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira, http://portal.inep.gov.br/, Online; accessed 14-April-2019
16. INEP: Sistema do censo da educação superior (CENSUP), http://sistemascensosuperior.inep.gov.br/censosuperior_2018, Online; accessed 14-April-2019
17. Jaffe, A.B.: Technological opportunity and spillovers of R&D: Evidence from firms' patents, profits, and market value. American Economic Review 76(5), 984-1001 (Dec 1986), https://doi.org/10.3386/w1815
18. Julia, G., Julia, B., Fietkiewicz, K., Stock, W.: Transitioning Towards a Knowledge Society: Qatar as a Case Study.
Springer (Jan 2018), https://doi.org/10.1007/978-3-319-71195-9
19. Klyver, K., Hindle, K., Meyer, D.: Influence of social network structure on entrepreneurship participation-A study of 20 national cultures, pp. 331-347. Springer International Publishing, US (Sep 2008), https://doi.org/10.1007/s11365-007-0053-0
20. Kumar, K.B., van Welsum, D.: Knowledge-based economies and basing economies on knowledge: Skills a missing link in GCC countries. RAND Corporation (2013), Available online at: https://www.rand.org/pubs/research_reports/RR188.html
21. Larson, A.: Partner networks: Leveraging external ties to improve entrepreneurial performance. Journal of Business Venturing 6(3), 173-188 (May 1991), https://doi.org/10.1016/0883-9026(91)90008-2
22. Lechner, C., Dowling, M.: Firm networks: external relationships as sources for the growth and competitiveness of entrepreneurial firms. Entrepreneurship & Regional Development 15(1), 1-26 (2003), https://doi.org/10.1080/08985620210159220
23. Light, I.: Immigrant and ethnic enterprise in North America. Ethnic and Racial Studies 7(2), 195-216 (Sep 1984), https://doi.org/10.1080/01419870.1984.9993441
24. Lima, A., Musolesi, M.: Spatial dissemination metrics for location-based social networks. In: Proceedings of the 2012 ACM Conference on Ubiquitous Computing, pp. 972-979 (Sep 2012), https://doi.org/10.1145/2370216.2370429
25. McCowan, T.: Expansion without equity: An analysis of current policy on access to higher education in Brazil. Higher Education 53, 579-598 (May 2007), https://doi.org/10.1007/s10734-005-0097-4
26. Msigwa, F.M.: Widening participation in higher education: a social justice analysis of student loans in Tanzania. Higher Education 72(4), 541-556 (Oct 2016), https://doi.org/10.1007/s10734-016-0037-5
27. United Nations Educational, Scientific and Cultural Organization (UNESCO): Education 2030: Towards inclusive and equitable quality education and lifelong learning for all. UNESCO (2016), Available online at: https://unesdoc.unesco.org/ark:/48223/pf0000245656
28. Newman, M.: Networks: An introduction. Oxford University Press, Inc., New York, NY, USA (2010), https://doi.org/10.1093/acprof:oso/9780199206650.001.0001
29. Newman, M.E.: Mixing patterns in networks.
Physical Review E 67(2), 026126 (Feb 2003), https://doi.org/10.1103/PhysRevE.67.026126
30. Nuscheler, D.: Regularly change a running system! An analysis of stage-specific criteria for attracting venture capital and changing the likelihood for getting funded (2016), Available online at: http://ifabs.org/assets/stores/1206/userfiles/3IFABS%20Best%20Poster%20Award%20-%20Daniela%20Nuscheler,%20TU%20Dortmund%20University,%20DE.pdf
31. OECD: Rethinking quality assurance for higher education in Brazil (2018), https://doi.org/10.1787/9789264309050-en
32. Owen-Smith, J., Powell, W.: Knowledge networks as channels and conduits: The effects of spillovers in the Boston biotechnology community. Organization Science 15, 5-21 (Feb 2004), https://doi.org/10.1287/orsc.1030.0054
33. Perotti, V., Yu, Y.: Startup Tribes: Social Network Ties that Support Success in New Firms. AMCIS 2015 Proceedings (2015), Available online at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.852.4582&rep=rep1&type=pdf
34. Porter, K., Whittington, K.B., Powell, W.W.: The institutional embeddedness of high-tech regions: relational foundations of the Boston biotechnology community.
Clusters, networks, and innovation 261, 296 (2005), Available online at: https://web.stanford.edu/group/song/papers/Porter_etal.pdf
35. Renzulli, L.A., Aldrich, H.: Who can you turn to? Tie activation within core business discussion networks. Social Forces 84(1), 323-341 (2005), https://doi.org/10.1353/sof.2005.0122
36. Labra, R., Rock, J.A., Álvarez, I.: Identifying the key factors of growth in natural resource-driven countries. A look from the knowledge-based economy. Ensayos sobre Política Económica 34(79), 78-89 (Apr 2016), https://doi.org/10.1016/j.espe.2015.12.001
37. Rosenberg, N., Nelson, R.: American universities and technical advance in industry. Research Policy 23(3), 323-348 (May 1994), https://doi.org/10.1016/0048-7333(94)90042-6
38. Sanderson, T.: Education remains the catalyst for Brazil's staggering inequality (Nov 2017), https://brazilian.report/society/2017/11/06/education-brazil-staggering-inequality/
39. da Silva, S.A.: Regional inequalities in Brazil: Divergent readings on their origin and public policy design. EchoGeo (2017), Available online at: https://journals.openedition.org/echogeo/15060
40. Tata, A., Laureiro Martinez, D., Brusoni, S.: Don't look back? The effect of attention to time and self on startup funding. Academy of Management Proceedings 2016(1), 13926 (Jan 2016), https://doi.org/10.5465/ambpp.2016.13926abstract
41. Tata, A., Martinez, D.L., Garcia, D., Oesch, A., Brusoni, S.: The psycholinguistics of entrepreneurship. Journal of Business Venturing Insights 7, 38-44 (2017), https://doi.org/10.1016/j.jbvi.2017.02.001
43. Zimmer, C., Aldrich, H.: Resource mobilization through ethnic networks: Kinship and friendship ties of shopkeepers in England. Sociological Perspectives 30(4), 422-445 (Oct 1987), https://doi.org/10.2307/1389212
SENSOR-TOPOLOGY BASED SIMPLICIAL COMPLEX RECONSTRUCTION FROM MOBILE LASER SCANNING

Stéphane Guinard ([email protected]), Bruno Vallet ([email protected])
Université Paris-Est, LASTIG MATIS, IGN, ENSG, 73 avenue de Paris, 94160 Saint-Mandé, France

Commission II, WG II/4

arXiv:1802.07487 (https://arxiv.org/pdf/1802.07487v2.pdf)

KEY WORDS: Simplicial complexes, 3D reconstruction, point clouds, Mobile Laser Scanning, sensor topology

ABSTRACT

We propose a new method for the reconstruction of simplicial complexes (combining points, edges and triangles) from 3D point clouds acquired by Mobile Laser Scanning (MLS). Our main goal is to produce a reconstruction of a scene that is adapted to the local geometry of its objects. Our method uses the inherent topology of the MLS sensor to define a spatial adjacency relationship between points. We then investigate each possible connection between adjacent points and filter the connections by searching for collinear structures in the scene, or structures perpendicular to the laser beams. Next, we create triangles for each triplet of mutually connected edges. Last, we improve this method with a regularization based on the co-planarity of triangles and the collinearity of the remaining edges. We compare our results to a naive simplicial complex reconstruction based on edge length.

INTRODUCTION

LiDAR scanning technologies have become a widespread and direct means of acquiring a precise sampling of the geometry of scenes of interest. However, unlike images, LiDAR point clouds do not always have a natural topology (such as the 4- or 8-neighborhoods of images) allowing the continuous nature of the acquired scenes to be recovered from the individual samples. This is why a large amount of research work has been dedicated to recovering a continuous surface from a cloud of point samples, which is a central problem in geometry processing.
Surface reconstruction generally aims at reconstructing triangulated surface meshes from point clouds, as meshes are the most common numerical representation for surfaces in 3D and are thus well adapted for further processing. Surface mesh reconstruction has numerous applications in various domains:

• Visualization: a surface mesh is much better adapted to visualization than a point cloud, as the visible surface is interpolated between points, allowing for a continuous representation of the real surface and enabling the estimation of occlusions, hence rendering only the visible parts of the scene.

• Estimation of differential quantities such as surface normals and curvatures.

• Texturing: a surface mesh can be textured (images applied on it), allowing for photo-realistic rendering. In particular, when multiple images of the acquired scene exist, texturing allows them all to be fused and blended into a single 3D representation.

• Shape and object detection and reconstruction: these high-level processes benefit from surface reconstruction since it solves the basic geometric ambiguity (which points are connected by a real surface in the real scene?).

In practice, existing surface reconstruction algorithms often consider that their input is a set of (x, y, z) coordinates, possibly with normals. However, most LiDAR scanning technologies provide more than that: the sensors have a logic of acquisition that provides a sensor topology (Xiao et al., 2013; Vallet et al., 2015). For instance, planar scanners acquire points along a line that advances with the platform (plane, car, ...) they are mounted on. Thus each point can be naturally connected to the one before and after it along the line, and to its equivalents in the previous and next lines (see Figure 1). Fixed LiDARs scan in spherical (θ, φ) coordinates, which also imply a natural connection of each point to the previous and next along these two angles.
Some scanner manufacturers exploit this topology by proposing visualization and processing tools in 2.5D (depth images in (θ, φ)) rather than 3D. Moreover, LiDAR scanning provides a meaningful piece of information, namely the position of the LiDAR sensor for each point, resulting in a ray along which we are sure that space is empty. This information can also disambiguate surface reconstruction, as illustrated in Figure 3. This is why we decided to investigate the use of the sensor topology inherent to MLS to perform a 3D reconstruction of a point cloud. Secondly, the geometry processing community has mainly focused on the reconstruction of rather smooth objects, possibly with sharp edges, but with a sampling density sufficient to consider that the object is a 2-manifold, which means that it is locally 2-dimensional. These methods therefore do not extend well to real scenes where such a guarantee is hardly possible. In particular, scans including poles, power lines, wires, ... almost never allow triangles to be created on these structures because their widths (a few mm to a few cm) are much smaller than the scanning resolution. Scans of highly detailed structures (such as tree foliage, for instance) even have a 0-dimensional nature: individual points should not be connected to any of their neighbors. Applying the Nyquist-Shannon theorem to the range in sensor space tells us that if the geometric frequency (the frequency of the range signal in sensor space) is higher than half the sampling frequency (the frequency of the samples in sensor space), then some (geometric) signal will be lost, which happens in the cases stated above. Because of this, we should aim at reconstructing triangles only when the Shannon condition is met in both dimensions, edges when the geometric frequency is too high in one dimension, and points when the geometric frequency is too high in both dimensions.
Triangles, edges and points are called simplices, which are characterized by their dimension d (0 = vertices, 1 = edges, 2 = triangles). If we add the constraint that edges can only meet at a vertex and triangles can only meet at an edge or vertex, the resulting mathematical object is called a simplicial complex, as illustrated in Figure 2. The aim of this paper is to propose a method to reconstruct such simplicial complexes from a LiDAR scan.

STATE OF THE ART

3D surface mesh reconstruction from point clouds has been a major issue in geometry processing for the last decades. 3D reconstruction can be performed from oriented (Kazhdan and Hoppe, 2013) or unoriented point sets (Alliez et al., 2007). The data itself can come from various sources: Terrestrial Laser Scanning (Pu and Vosselman, 2009), Aerial Laser Scanning (Dorninger and Pfeifer, 2008) or Mobile Laser Scanning. We refer the reader to Berger et al. (2014) for a general review of surface reconstruction methodologies, and focus our state of the art on surface reconstruction from Mobile Laser Scanning (MLS) and on simplicial complex reconstruction, which are the two specificities of our approach. MLS has been used in the past years mostly for the modeling of outdoor environments, usually urban scenes. Becker and Haala (2009) propose an automatically generated grammar for the reconstruction of buildings, whereas Rutzinger et al. (2010) focus more specifically on tree shape reconstruction. MLS has also been useful for specific indoor environments: Zlot and Bosse (2014) used an MLS in an underground mine to obtain a 3D model of the tunnels. The utility of simplicial complexes for the reconstruction of 3D point clouds was expressed by Popović and Hoppe (1997) as a generalization of 3D mesh simplification.
Simplicial complexes are also used to simplify defect-laden point sets as a way to be robust to noise and outliers, using optimal transport (De Goes et al., 2011; Digne et al., 2014) or alpha-shapes (Bernardini and Bajaj, 1997). As explained in the introduction, the aim of this paper is to propose a reconstruction method that combines two advantages:

1. Reconstruction of a simplicial complex instead of a surface mesh, adapting the local dimension to that of the local structure.

2. Exploitation of the sensor topology, both to solve ambiguities and to speed up computations.

The two objectives are tackled at once by proposing a new criterion to define which simplices from the sensor topology should belong to the reconstructed simplicial complex.

METHODOLOGY

As explained above, the sensor topology yields in general a regular mesh structure with a 6-neighborhood that can be used to perform a surface mesh reconstruction. This reconstruction is however very poor, as all depth discontinuities will be meshed, so very elongated triangles will be constructed between objects and their background. This section investigates criteria to remove these triangles, while possibly keeping some of their edges. As all input points are kept, the resulting reconstruction combines points, edges and triangles based on these points, which is called a simplicial complex in mathematics.

Objectives

Our main objective is to determine which adjacent points (in sensor topology) should be connected to form edges and triangles. We consider that we may be facing a discontinuity when the depth difference between two neighboring echoes is high. This depth difference is computed from the sensor viewpoint, which implies that a large depth difference may correspond to two cases: either the echoes fell on two different objects with a notable depth difference, or they fell on a grazing surface (nearly parallel to the laser beam direction), as shown in Figure 4. Figures 4a and 4b show the cases where two neighboring echoes have a huge depth difference: they can either fall on two different objects or on the same object, and we have no hint to distinguish these two cases. Figure 4c shows the case where three or more echoes are approximately aligned, with a huge depth difference; in this case we want to reconstruct edges between these echoes because they may correspond to a grazing surface. The core idea of our filtering is that the only hint we can rely on to distinguish between these two cases is alignment: if at least three echoes with large depth differences are aligned (4c), we are probably in the grazing surface case rather than on separate objects. To perform our reconstruction, we consider each echo as an independent point. First, we define a neighborhood relationship between echoes in the sensor topology. Then, we create edges based on the echoes and add triangles based on the edges. Last, we regularize the computed simplicial complex according to the local geometric consistency of the retrieved simplices.

Neighborhood in sensor topology

The sensors used to capture point clouds often have an inherent topology. Mobile Laser Scanners sample a regular grid in (θ, t), where θ is the rotation angle of the laser beam and t the instant of acquisition. Because the vehicle moves at a varying speed (to adapt to the traffic and respect the circulation rules) and may rotate, the sampling is however not uniform in space. In general, the number Np of pulses for a 2π rotation in θ is not an integer, so a pulse Pi has six neighbors Pi−1, Pi+1, Pi−n, Pi−n−1, Pi+n, Pi+n+1, where n = ⌊Np⌋ is the integer part of the number of pulses per line, as illustrated in Figure 5a. However, this topology concerns emitted pulses, not recorded echoes.
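As an illustration only (the paper gives no code), the index arithmetic of this six-neighborhood can be sketched as follows; the function name and the out-of-range handling are our own assumptions:

```python
def pulse_neighbors(i, n, n_total):
    """Six sensor-topology neighbors of pulse P_i on the hexagonal grid:
    previous/next pulse along the scan line, and the two closest pulses
    on the previous and next lines, where n = floor(Np) is the integer
    part of the number of pulses per rotation.  Indices outside the
    acquired range [0, n_total) are discarded (an assumption: the paper
    does not specify boundary handling)."""
    candidates = [i - 1, i + 1, i - n, i - n - 1, i + n, i + n + 1]
    return [j for j in candidates if 0 <= j < n_total]
```

For instance, with n = 1000 pulses per line, pulse 5000 would be connected to pulses 4999, 5001, 4000, 3999, 6000 and 6001.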
One pulse might have 0 echoes (no target hit) or up to 8, as most modern scanners can record multiple echoes for one pulse if the laser beam intersected several targets, which is very frequent in vegetation or on transparent objects, for instance. We chose to tackle this issue by connecting an echo to each echo of its pulse's neighbors, as illustrated in Figure 5b, because we should keep all possible edge hypotheses before filtering them.

Edge filtering

For each pair of connected echoes in sensor topology (as defined above), we need a criterion to decide whether we should keep it in the reconstructed simplicial complex. We propose the following:

• C0 regularity: we want to prevent forming edges between echoes when their euclidean distance is too high.

• C1 regularity: we want to favor edges when two collinear edges share an echo.

In order to be independent from the sampling density, we propose to express the regularities in an angular manner. Moreover, the sensor topology has a hexagonal structure, and we propose to treat each line in the 3 directions of the structure independently. For the remainder of this article, and because a single pulse can have multiple echoes, we write the echoes of a pulse p as E^e_p, where e ∈ 1...Np, with Np the number of echoes of p. We then express the regularities as follows.

• C0 regularity, for an edge (E^{e1}_p, E^{e2}_{p+1}) between two echoes of two neighboring pulses:

    C0(p, e1, e2) = 1 − e_p(e1, e2) · l_p,

where e_p(e1, e2) is the normalized vector from E^{e1}_p to E^{e2}_{p+1} and l_p is the direction of the laser beam of pulse p (cf. Figure 6). C0 is close to 0 for surfaces orthogonal to the LiDAR ray and close to 1 for grazing surfaces, almost parallel to the ray.

• C1 regularity, for an edge (E^{e1}_p, E^{e2}_{p+1}) between two echoes of two neighboring pulses:

    C1(p, e1, e2) = min_{e=1..N_{p−1}} |1 − e_{p−1}(e, e1) · e_p(e1, e2)| · min_{e=1..N_{p+2}} |1 − e_p(e1, e2) · e_{p+1}(e2, e)|,

where each minimum is given the value 1 if the corresponding pulse is empty. C1 is close to 0 if the edge is aligned with at least one of its neighboring edges, and close to 1 if it is orthogonal to all neighboring edges.

Figure 6. Illustration of the computed regularities C0 and C1. The black dots represent the echoes associated with the considered pulses. The blue and red ones correspond respectively to the preceding and following adjacent echoes. The solid arrows show the adjacent echoes used for the C0 and C1 computation. The black one corresponds to the liaison most orthogonal to the sensor beams. The blue and red ones are selected because the angles between these vectors and the black one are the closest possible to π. The black dashed lines represent the laser beams.

From these regularities, we propose a simple filtering based on the computed angles. Figure 6 illustrates the computation of the C0 and C1 regularities. Considering two adjacent echoes E^{e1}_p and E^{e2}_{p+1}, the C0 regularity is computed from the cosine of the angle between the laser beam direction at p and e_p(e1, e2). We want to favor low values of C0, as they correspond to echoes with a low depth difference. On the other side, to compute the C1 regularity, we have to browse the echoes of the preceding and following pulses along the 3 directions of our structure. For the preceding and following pulses, we select the echo which minimizes |1 − e_{p−1}(e, e1) · e_p(e1, e2)| (respectively |1 − e_{p+1}(e2, e) · e_p(e1, e2)|). This gives us information about the tendency of the considered edge to be collinear with at least one of its adjacent edges; we favor the most collinear cases. Given two adjacent echoes, we consider that if the C0 regularity is high enough, we can ensure the real existence of the edge, and do not have to compute the C1 regularity. We denote this threshold αm.
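To make the two regularities concrete, here is a minimal numerical sketch (not the authors' implementation): `c0` and `c1` transcribe the formulas above, and `keep_edge` encodes one reading of the αm rule stated here combined with the λ-criterion given in the filtering step; since the text is ambiguous about the direction of the αm test, treat that decision boundary as an assumption.

```python
import numpy as np

def _unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def c0(edge_vec, beam_dir):
    """C0(p, e1, e2) = 1 - e_p(e1, e2) . l_p (both vectors normalized)."""
    return 1.0 - float(np.dot(_unit(edge_vec), _unit(beam_dir)))

def c1(edge_vec, prev_edges, next_edges):
    """C1: product of the best |1 - cos| alignments with the candidate
    edges on the preceding and following pulses; an empty pulse
    contributes a factor of 1, as in the text."""
    e = _unit(edge_vec)
    def best(cands):
        return min((abs(1.0 - float(np.dot(_unit(c), e))) for c in cands),
                   default=1.0)
    return best(prev_edges) * best(next_edges)

def keep_edge(c0_val, c1_val, alpha_m, lam):
    """Edge decision (our reading): a C0 passing the alpha_m threshold is
    accepted outright; otherwise C1 must satisfy
    C1 < lam * alpha_m * C0 / (alpha_m - C0)."""
    if c0_val >= alpha_m:
        return True
    return c1_val < lam * alpha_m * c0_val / (alpha_m - c0_val)
```

For example, an edge collinear with one of its predecessor edges gets C1 = 0 and can thus survive the filtering even when its C0 is below αm.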
For all the other cases, we compute the C1 regularity and filter the edges according to C0 and αm:

    C1 < λ · αm · C0 / (αm − C0),

where λ sets how much C1 regularity can compensate for a C0 discontinuity. A high value of λ allows more edges to be kept. This criterion is illustrated in Figure 7: the red line represents the αm threshold, and the blue line corresponds to the limit case between removing and keeping the edges depending on C0 and C1. We also want to filter edges that would remain single in the cloud, or connected to only one other edge but with a different direction. Indeed, this often occurs in noisy areas where an edge can pass the regularity criterion "by chance", but it is very improbable that this happens for two neighboring edges. That is why we propose an additional criterion to favor a reconstruction that leaves points instead of isolated or unaligned edges. Let e be an edge and {e1, ..., en} its adjacent edges. If we find an edge ei ∈ {e1, ..., en} such that

    1 − e · ei < ε,

where ε is the tolerance on edge reconstruction, then e is neither alone nor unaligned and we keep it in the simplicial complex.

Triangle filtering

Once we have obtained a set of edges in our point cloud, a simple approach to filter triangles is to keep only the triangles (from the sensor topology) whose three edges have survived the edge filtering described above. Even if this method is an easy way to retrieve most triangles of the scene, it prevents recovering triangles in areas where edges are close to the threshold, in which case triangles will often have some edges just below and some just above the threshold, so most triangles will be filtered out. In order to regularize the computed triangulation, we want to favor triangles that are coplanar with some of their adjacent triangles, in the same way we favored edges aligned with at least one neighboring edge. This is motivated by the fact that we want to ensure spatial regularity in our scene.
Moreover, we found the cases where a triangle is left alone very unlikely. We also want to remove all the triangles that may be formed by edges in noisy parts of the cloud, as we cannot ensure their existence in the real scene. As triangles are 2D objects, we want to define a 2D C1 regularity by separating the C1 regularity along two directions. Unfortunately, a triangle has 3 neighbors. We solve the problem by filtering pairs of adjacent triangles (which we will call wedges), which have four adjacent wedges in 2 separate directions, as illustrated in Figure 8. The filtering we propose keeps the wedges that are C1 regular with neighboring wedges in the two directions, where the C1 regularity between a wedge W, whose normal is W_N, and an adjacent wedge Wi, whose normal is W_Ni, is defined by the criterion

    1 − |W_N · W_Ni| < ω,

where ω is the tolerance on the coplanarity of the two wedges. This means, in Figure 8, that if the red wedge is only C1 regular with the wedges 1 and 3, it will be discarded, while it will be kept if it is regular with only 1 and 2. The rationale behind this choice is the same as for the edges: being irregular with both neighbors in one direction means that we are on a depth discontinuity in that direction that cannot be distinguished from a grazing surface, while regularity with at least one neighbor in both directions means that the wedge is part of a (potentially grazing) planar surface.

RESULTS

We implemented the pipeline presented above, first with only the edge filtering and a simple triangle reconstruction keeping each triplet of surviving edges that effectively forms a triangle; we then added the triangle filtering part. We compared our results with a naive filtering on edge length, where the triangles of the simplicial complex correspond to all triplets of edges forming a triangle. For all the following tests, we used data from the Stereopolis vehicle (Paparoditis et al., 2012).
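The wedge regularity test described in the methodology can be sketched as follows. This is an illustration only, assuming wedge normals are derived from their triangles by a cross product; none of the helper names come from the paper:

```python
import numpy as np

def triangle_normal(a, b, c):
    """Unit normal of the triangle (a, b, c)."""
    n = np.cross(np.asarray(b, float) - np.asarray(a, float),
                 np.asarray(c, float) - np.asarray(a, float))
    return n / np.linalg.norm(n)

def wedges_regular(normal_w, normal_wi, omega):
    """Wedge C1 regularity: 1 - |W_N . W_Ni| < omega, i.e. the two
    wedges are coplanar up to the tolerance omega; the absolute value
    makes the test insensitive to normal orientation."""
    return 1.0 - abs(float(np.dot(normal_w, normal_wi))) < omega
```

For example, two triangles lying in the z = 0 plane pass the test for any small ω, while a horizontal and a vertical triangle do not.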
The scenes were acquired in an urban environment (Paris) and are mostly composed of roads, facades, trees and street furniture. All the simplicial complexes presented in this section are displayed as follows:

• triangles in red,

• edges that are not part of any triangle in green,

• points that do not belong to any triangle or edge in black.

Note that, following its mathematical definition, the endpoints of an edge of a simplicial complex also belong to the complex, and similarly for the edges of a triangle, but we do not display them for clarity. The parameter search phase was conducted in two experiments. In the first set of experiments, the impact of the parameters αm and λ was studied. The influence of the remaining parameters ω and ε was tested in a second batch of experiments. Because all our criteria depend on trigonometric functions (the dot product of normalized vectors is the cosine of their angle), all these parameters are chosen in [0, 1]. Last, we compare both methods with the naive filtering on edge length.

Parametrization of αm and λ

We first studied the influence of αm. A high value discards a lot of edges and prevents the formation of triangles, whereas a low value preserves too many edges on real discontinuities. The results are presented in Figure 9. The left example shows that, on the one hand, low values of αm allow the formation of edges between the bottom of the traffic sign and the road. On the other hand, high values of αm make the reconstruction of triangles harder, even on the road. The second parameter of this method, λ, corresponds to the fact that we want to preserve grazing surfaces where edges are long (important depth discontinuity) but collinear. Figure 10 illustrates the tuning of the λ parameter. On the one hand, for high values of λ (right), edges between window bars and walls or the insides of buildings are created. On the other hand, when λ is too low (left), only the best edges are retrieved.
In these cases, the number of remaining edges is low (hundreds of edges for millions of points), and lowering λ removes edges that may be useful for the human interpretation of the reconstruction.

Parametrization of ω and ε

For this set of experiments, αm and λ were fixed to 0.05 and 10^−4 respectively. The influence of the last two parameters is especially visible on noisy areas and grazing surfaces, where the level of detail of the scene is close to the acquisition density. We first studied the effect of the parameter ω on triangles. For high values of ω, we expect that a lot of triangles will be retrieved, especially in the grazing surface case, where our algorithm sometimes struggles to retrieve edges in the 3 directions (but performs well on two of them). The results are shown in Figure 11 and illustrate our problem in the grazing surface case. The figure on the left is the baseline computed previously. As expected, the number of triangles increases for high values of ω, and the road is cleaner than without the triangle filtering part. The main drawback is the propensity to leave a few triangles in noisy areas like tree foliage. The second parameter, ε, is a regularization term on the edges. Low values of ε decrease the number of edges, thus leaving many points not linked to others. Results are presented in Figure 12. As previously, the figure on the left shows the output of the first filtering step. The figures corresponding to the lowest values of ε match our predictions: the number of edges keeps decreasing whereas the number of points increases. A side effect of this method can be seen in Figure 12b, as there is nearly no edge left in the tree's foliage.

Comparison of the three methods

In this section, we compare our two methods to a naive filtering based on edge lengths alone. The naive filtering is based on a 0.5 meter threshold. For both methods, αm and λ are fixed to 0.05 and 10^−4 respectively.
Furthermore, for the last method, ω and ε are fixed to 10^−3 and 5·10^−3 respectively. A video of the results is available at (Guinard, n.d.). Figure 13 presents a reconstruction in a complex urban scene. Unlike the naive filtering, our methods are able to retrieve thin objects such as poles or window bars without merging them with the closest objects. The top right image of this figure is an extract from Google Street View to help the interpretation.

Figure 10. Influence of λ: (a) λ = 10^−6, (b) λ = 10^−4, (c) λ = 10^−2, (d) λ = 0.1. αm is fixed to 0.05. The scene represents a facade in the grazing surface case.

Figure 14 focuses more on specific areas of the scan. The first row shows the naive filtering method, whereas the second and third rows present respectively the edge filtering method and its extension with the triangle filtering. We remark that the naive filtering method struggles to retrieve the limits between objects (such as between poles and road, or people and buildings). The main advantage of the triangle filtering over the edge filtering that can be seen here is that it helps to reduce the noise that occurs in complex areas such as tree foliage or grazing surfaces. We assume that in complex areas we cannot ensure the existence of connections between some points, and favor a reconstruction that remains careful in such areas. This is why the last method, which is less noisy than the edge filtering, is considered a more appropriate baseline for further developments, even if it discards some edges or triangles that had been well retrieved by the edge filtering method on grazing surfaces.

CONCLUSIONS AND PERSPECTIVES

This article presented a method for the simplicial complex reconstruction of point clouds from MLS, based on the inherent structure of the MLS sensor. We proposed a filtering of the edges possibly linking adjacent echoes, by searching for collinear edges in the cloud, or edges perpendicular to the laser beams.
We also presented an improvement of this method as a second filtering step, this time looking for coplanar triangles. This last method produces simplicial complexes with fewer holes than our first approach, and respects the noisy areas of the cloud (such as tree foliage) by discarding simplices whose existence cannot be ensured. The main drawback of our methods is their high locality: we work only by considering points' neighbors and simplices' adjacent simplices. Using knowledge of the neighbors at different scales, or even of the whole cloud, could help us to regularize the reconstruction according to more global structures of the cloud. Further developments may also consider a hole-filling process, to compensate for the absence of a few missing simplices in a large structure (road, building). Last, setting up a generalization method, as in Popović and Hoppe (1997), would be interesting to simplify the resulting simplicial complexes on large and regular structures, in order to reduce the memory footprint of the simplicial complexes while maintaining a high accuracy.

Figure 11. Parametrization of ω: (a) no triangle filtering, (b) ω = 10^−5, (c) ω = 10^−4, (d) ω = 10^−3. ε is fixed to 5·10^−3. The scene represents a road in the grazing surface case.

Figure 1. Echo intensity of a MLS displayed in sensor topology: the vertical axis is the angle θ, the horizontal axis is the line number, equivalent to time as the scanner acquires a constant number of lines per second. The horizontal resolution depends on the vehicle speed (the left part is constant because the vehicle is stopped).

Figure 2. A simplicial complex consists of simplices of dimension 0 (points), 1 (edges) and 2 (triangles) (source: Wikipedia).

Figure 3. Left: a 2D point cloud (green) and possible reconstructions (blue). Right: knowing the LiDAR rays allows solving the ambiguity.

Figure 4. Illustration of the two cases of important depth difference: (a) separation case, (b) ambiguous case, (c) non-separation case.
The arrows represent the laser beams.

Figure 5. Definition of the neighborhood in sensor space. In each figure, the point considered is colored in red, and a connection is denoted by a red arrow.

Figure 7. Filtering of edges knowing C0 and C1. The red line corresponds to the αm threshold on C0. The hatched area corresponds to the edges that we keep.

Figure 8. Representation of a wedge (red) and its adjacent wedges. The limit between the triangles of each wedge is represented with a dotted line. The dashed lines stand for the directions of our structure. Wedges adjacent to the red one are numbered from 1 to 4.

Figure 9. Influence of αm. λ is fixed to 10^−4.

Figure 12. Parametrization of ε. ω is fixed to 10^−3. The scene represents a tree with its foliage.

Figure 13. Results on a complete urban scene, with road, facades, poles and pedestrians: (a) image of the scene from Google (n.d.), (b) naive method, (c) edge filtering, (d) triangle filtering.

Figure 14. Comparison of the three methods: the naive filtering, the edge filtering and its extension with the triangle filtering. From left to right, the scenes represent: a window, a model in a showcase, a tree and barriers on a pavement.

ACKNOWLEDGMENTS

The authors would like to acknowledge the DGA for their financial support of this work.

REFERENCES

Alliez, P., Cohen-Steiner, D., Tong, Y. and Desbrun, M., 2007. Voronoi-based variational reconstruction of unoriented point sets. In: Symposium on Geometry Processing, Vol. 7, pp. 39-48.

Becker, S. and Haala, N., 2009. Grammar supported facade reconstruction from mobile lidar mapping. In: ISPRS Workshop, CMRT09 - City Models, Roads and Traffic, Vol. 38, p. 13.
Berger, M., Tagliasacchi, A., Seversky, L., Alliez, P., Levine, J., Sharf, A. and Silva, C., 2014. State of the art in surface reconstruction from point clouds. In: EUROGRAPHICS Star Reports, Vol. 1, number 1, pp. 161-185.

Bernardini, F. and Bajaj, C. L., 1997. Sampling and reconstructing manifolds using alpha-shapes. In: Proc. 9th Canad. Conf. Comput. Geom., Citeseer.

De Goes, F., Cohen-Steiner, D., Alliez, P. and Desbrun, M., 2011. An optimal transport approach to robust reconstruction and simplification of 2D shapes. In: Computer Graphics Forum, Vol. 30, number 5, Wiley Online Library, pp. 1593-1602.

Digne, J., Cohen-Steiner, D., Alliez, P., De Goes, F. and Desbrun, M., 2014. Feature-preserving surface reconstruction and simplification from defect-laden point sets. Journal of Mathematical Imaging and Vision 48(2), pp. 369-382.

Dorninger, P. and Pfeifer, N., 2008. A comprehensive automated 3D approach for building extraction, reconstruction, and regularization from airborne laser scanning point clouds. Sensors 8(11), pp. 7323-7343.
Google, n.d. View of the Mabillon street next to the corner of Lobineau street (Paris). https://goo.gl/maps/FQTvA7oaiNm [Accessed: 2018-01-09].
Guinard, S., n.d. Sensor-topology based simplicial complex reconstruction from mobile laser scanning. https://youtu.be/zJn3YF8eer4 [Accessed: 2018-04-06].
Kazhdan, M. and Hoppe, H., 2013. Screened Poisson surface reconstruction. ACM Transactions on Graphics (TOG) 32(3), p. 29.
Paparoditis, N., Papelard, J.-P., Cannelle, B., Devaux, A., Soheilian, B., David, N. and Houzay, E., 2012. Stereopolis II: A multi-purpose and multi-sensor 3D mobile mapping system for street visualisation and 3D metrology. Revue française de photogrammétrie et de télédétection 200(1), pp. 69-79.
Popović, J. and Hoppe, H., 1997. Progressive simplicial complexes. In: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press/Addison-Wesley Publishing Co., pp. 217-224.
Pu, S. and Vosselman, G., 2009. Knowledge based reconstruction of building models from terrestrial laser scanning data. ISPRS Journal of Photogrammetry and Remote Sensing 64(6), pp. 575-584.
Rutzinger, M., Pratihast, A., Oude Elberink, S. and Vosselman, G., 2010. Detection and modelling of 3D trees from mobile laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 38, pp. 520-525.
Vallet, B., Brédif, M., Serna, A., Marcotegui, B. and Paparoditis, N., 2015. TerraMobilita/iQmulus urban point cloud analysis benchmark. Computers & Graphics 49, pp. 126-133.
Xiao, W., Vallet, B. and Paparoditis, N., 2013. Change detection in 3D point clouds acquired by a mobile mapping system. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences 1(2), pp. 331-336.
Zlot, R. and Bosse, M., 2014. Efficient large-scale 3D mobile mapping and surface reconstruction of an underground mine. In: Field and Service Robotics, Springer, pp. 479-493.
title: DIRAC MONOPOLES IN THE ERNST-SCHWARZSCHILD SPACETIME
author: A. A. Bytsenko; Yu. P. Goncharov
authoraffiliation: Departamento de Fisica, Universidade Estadual de Londrina, Caixa Postal 6001, Londrina-Parana, Brazil; Experimental Physics Department, Theoretical Group, State Polytechnical University, Sankt-Petersburg 195251, Russia
abstract: It is discussed that the Ernst-Schwarzschild metric describing a nonrotating black hole in the external magnetic field admits the solutions of the Dirac monopole types for the corresponding Maxwell equations. The given solutions are obtained in explicit form and a possible influence of the conforming Dirac monopoles on Hawking radiation is also outlined.
doi: 10.1142/s0217751x0301560x
pdfurls: https://arxiv.org/pdf/hep-th/0305030v2.pdf
corpusid: 2696288
arxivid: hep-th/0305030
pdfsha: 3474163b139d52e52ebad0310562dc3db757fa23
DIRAC MONOPOLES IN THE ERNST-SCHWARZSCHILD SPACETIME

A. A. Bytsenko, Departamento de Fisica, Universidade Estadual de Londrina, Caixa Postal 6001, Londrina-Parana, Brazil
Yu. P. Goncharov, Experimental Physics Department, Theoretical Group, State Polytechnical University, Sankt-Petersburg 195251, Russia

May 2003

It is discussed that the Ernst-Schwarzschild metric, describing a nonrotating black hole in an external magnetic field, admits solutions of the Dirac monopole type for the corresponding Maxwell equations. The solutions are obtained in explicit form, and a possible influence of the corresponding Dirac monopoles on Hawking radiation is also outlined.

Introduction

In astrophysics the physics of black holes immersed in an external magnetic field has long been studied (see, e.g., the review in Ref. 1). In view of this, a whole class of solutions of the Einstein-Maxwell equations was found to model a black hole in an external electromagnetic field. Referring to Refs. 3 for more details, we would like to note here that an isolated black hole might possess internal magnetic fields of the Dirac monopole type. The latter configurations should be connected with nontrivial topological properties of black holes and could have an essential influence on quantum processes near black holes, for instance on Hawking radiation. A number of examples of such configurations may be found in Refs. 3 and references therein. Physically, the existence of those configurations should be due to the natural presence of magnetic U(N)-monopoles (with N ≥ 1) on black holes, even though the total (internal) magnetic charge (abelian or nonabelian) of the black hole remains equal to zero. One can consider that the monopoles reside in black holes as quantum objects without influencing the black hole metric.
They could reside in the form of a monopole gas in which the process of permanent creation and annihilation of virtual monopole-antimonopole pairs occurs, so that the summed internal magnetic charge (i.e., the one related to topological properties) is equal to zero while the external one (not connected with topological properties) may differ from zero (e.g., on the Reissner-Nordström or, more generally, Kerr-Newman black holes with magnetic charges). While it exists, a virtual monopole-antimonopole pair can interact with a particle and thereby increase the Hawking radiation (see Refs. 3 and references therein). There arises the question of whether Dirac-like monopole configurations exist on black holes immersed in an external magnetic field. Within the given note we show that the answer is affirmative by the example of the Ernst-Schwarzschild spacetime [4] describing a Schwarzschild black hole in an asymptotically homogeneous magnetic field. The metric of the spacetime manifold in question is

$$ds^2 = g_{\mu\nu}\,dx^\mu \otimes dx^\nu \equiv \Lambda^2\left(a\,dt^2 - a^{-1}dr^2 - r^2 d\vartheta^2\right) - \frac{r^2 \sin^2\vartheta\, d\varphi^2}{\Lambda^2} \qquad (1)$$

with $a = 1 - 2M/r$, $\Lambda = 1 + \tfrac{1}{4}B^2 r^2 \sin^2\vartheta$, $|g| = |\det(g_{\mu\nu})| = (\Lambda^2 r^2 \sin\vartheta)^2$, and $0 \le r < \infty$, $0 \le \vartheta < \pi$, $0 \le \varphi < 2\pi$. The surface t = const, r = const is an ellipsoid with topology S². Throughout the paper we employ the system of units with ħ = c = G = 1, unless explicitly stated.

Dirac Monopole Type Solutions

To write down the Maxwell equations in the spacetime with metric (1) we need to know the action of the Hodge star operator * on 2-forms $F = F_{\mu\nu}\, dx^\mu \wedge dx^\nu$, which is defined for any k-dimensional (pseudo)riemannian manifold B provided with a (pseudo)riemannian metric $g_{\mu\nu}$ by the relation (see, e.g., Refs. 5)

$$F \wedge *F = \left(g^{\mu\alpha} g^{\nu\beta} - g^{\mu\beta} g^{\nu\alpha}\right) F_{\mu\nu} F_{\alpha\beta}\, \sqrt{|g|}\; dx^1 \wedge dx^2 \cdots \wedge dx^k \qquad (2)$$

in local coordinates $x^\mu$.
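As a quick numerical sanity check of metric (1), the determinant identity $|g| = (\Lambda^2 r^2 \sin\vartheta)^2$ quoted above, together with the duality property ** = -1 expected for a lorentzian-signature metric, can be verified directly from the diagonal components. The sample values of M, B, r and ϑ below are arbitrary illustrations, not taken from the paper:

```python
import math

def check_metric(M=1.0, B=0.3, r=5.0, th=1.1):
    a = 1.0 - 2.0*M/r
    Lam = 1.0 + 0.25*(B*r*math.sin(th))**2
    # diagonal covariant components of metric (1) in (t, r, theta, phi)
    g = [Lam**2*a, -Lam**2/a, -Lam**2*r**2, -(r*math.sin(th))**2/Lam**2]
    det_g = g[0]*g[1]*g[2]*g[3]
    # |g| = (Lam^2 r^2 sin th)^2 with lorentzian signature (det g < 0)
    assert abs(-det_g - (Lam**2*r**2*math.sin(th))**2) < 1e-9*abs(det_g)
    # Hodge duality on a dual pair of basis 2-forms: the coefficient of
    # *(dt^dr), sqrt|g| g^tt g^rr, times that of *(dth^dph),
    # sqrt|g| g^thth g^phph, must equal -1 (i.e. ** = -1)
    sg = math.sqrt(-det_g)
    c1 = sg*(1.0/g[0])*(1.0/g[1])
    c2 = sg*(1.0/g[2])*(1.0/g[3])
    assert abs(c1*c2 + 1.0) < 1e-9
    return True

assert check_metric()
assert check_metric(M=0.5, B=1.2, r=3.0, th=0.7)
```

For a diagonal metric the inverse components are simply the reciprocals of the covariant ones, which is all the check needs.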
In the case of the metric (1) this yields for the basis elements

$$*(dt \wedge dr) = \sqrt{|g|}\, g^{tt} g^{rr}\, d\vartheta \wedge d\varphi = -\frac{r^2 \sin\vartheta}{\Lambda^2}\, d\vartheta \wedge d\varphi,$$
$$*(dt \wedge d\vartheta) = -\sqrt{|g|}\, g^{tt} g^{\vartheta\vartheta}\, dr \wedge d\varphi = \frac{\sin\vartheta}{a\Lambda^2}\, dr \wedge d\varphi,$$
$$*(dt \wedge d\varphi) = \sqrt{|g|}\, g^{tt} g^{\varphi\varphi}\, dr \wedge d\vartheta = -\frac{\Lambda^2}{a \sin\vartheta}\, dr \wedge d\vartheta,$$
$$*(dr \wedge d\vartheta) = \sqrt{|g|}\, g^{rr} g^{\vartheta\vartheta}\, dt \wedge d\varphi = \frac{a \sin\vartheta}{\Lambda^2}\, dt \wedge d\varphi,$$
$$*(dr \wedge d\varphi) = -\sqrt{|g|}\, g^{rr} g^{\varphi\varphi}\, dt \wedge d\vartheta = -\frac{a\Lambda^2}{\sin\vartheta}\, dt \wedge d\vartheta,$$
$$*(d\vartheta \wedge d\varphi) = \sqrt{|g|}\, g^{\vartheta\vartheta} g^{\varphi\varphi}\, dt \wedge dr = \frac{\Lambda^2}{r^2 \sin\vartheta}\, dt \wedge dr, \qquad (3)$$

so that *² = ** = -1, as should be the case for manifolds with lorentzian signature [5]. The Maxwell equations are

$$dF = 0, \qquad (4)$$
$$d*F = 0 \qquad (5)$$

for the electromagnetic vector potential $A = A_\mu\, dx^\mu$, $F = dA$, with the exterior differential $d = \partial_t\, dt + \partial_r\, dr + \partial_\vartheta\, d\vartheta + \partial_\varphi\, d\varphi$ in coordinates t, r, ϑ, ϕ. It is clear that (4) is identically satisfied (the Bianchi identity), so it is necessary to solve only Eq. (5). Let us search for A in the form $A = A_\varphi(r, \vartheta)\, d\varphi$, i.e., putting the components $A_t = A_r = A_\vartheta = 0$. This entails $F = dA = \partial_r A_\varphi\, dr \wedge d\varphi + \partial_\vartheta A_\varphi\, d\vartheta \wedge d\varphi$ and, with the help of (3),

$$*F = -\frac{a\Lambda^2}{\sin\vartheta}\, \partial_r A_\varphi\, dt \wedge d\vartheta + \frac{\Lambda^2}{r^2 \sin\vartheta}\, \partial_\vartheta A_\varphi\, dt \wedge dr. \qquad (6)$$

Then Eq. (5) takes the form

$$\frac{\partial}{\partial r}\left(\sqrt{|g|}\, g^{rr} g^{\varphi\varphi}\, \frac{\partial A_\varphi}{\partial r}\right) + \frac{\partial}{\partial\vartheta}\left(\sqrt{|g|}\, g^{\vartheta\vartheta} g^{\varphi\varphi}\, \frac{\partial A_\varphi}{\partial\vartheta}\right) = \frac{\partial}{\partial r}\left(\frac{a\Lambda^2}{\sin\vartheta}\, \frac{\partial A_\varphi}{\partial r}\right) + \frac{\partial}{\partial\vartheta}\left(\frac{\Lambda^2}{r^2 \sin\vartheta}\, \frac{\partial A_\varphi}{\partial\vartheta}\right) = 0. \qquad (7)$$

Now we employ the ansatz $A_\varphi = -\alpha f(\vartheta)/\Lambda$ with some constant α; inserting it into (7) yields the equation for the function f(ϑ):

$$\sin\vartheta\, \frac{d^2 f}{d\vartheta^2} - \cos\vartheta\, \frac{df}{d\vartheta} = 0. \qquad (8)$$

The solution of (8) necessary to us is $f(\vartheta) = \cos\vartheta$, so

$$A = -\frac{\alpha \cos\vartheta}{\Lambda}\, d\varphi. \qquad (9)$$

To fix the constant α, let us require the fulfilment of the Dirac charge quantization condition

$$\int_{S^2} F = \int_{S^2} \partial_\vartheta A_\varphi\, d\vartheta \wedge d\varphi = 4\pi q = 4\pi\, \frac{n}{e} \qquad (10)$$

with magnetic charge q = n/e, n ∈ ℤ (the set of integers), where we integrate over any surface t = const, r = const with topology S² and e is the elementary electric charge. Direct evaluation gives

$$\int_{S^2} F = 2\pi\alpha \int_0^\pi \frac{\sin\vartheta}{\Lambda}\left(1 + \frac{B^2 r^2 \cos^2\vartheta}{2\Lambda}\right) d\vartheta = 4\pi\alpha, \qquad (11)$$

so that α = n/e.
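The two key ingredients of this derivation can be checked numerically: that f(ϑ) = cosϑ solves (8), and that the angular integral in (11) equals 2 for every value of b = Br, so that the flux is 4πα independently of the field strength and of the surface radius. A minimal sketch (the midpoint rule and the sample b values are illustrative):

```python
import math

def f_ode_residual(th):
    # f(theta) = cos(theta): residual of Eq. (8), sin(th) f'' - cos(th) f'
    f1 = -math.sin(th)   # f'
    f2 = -math.cos(th)   # f''
    return math.sin(th)*f2 - math.cos(th)*f1

assert all(abs(f_ode_residual(0.1 + 0.3*k)) < 1e-12 for k in range(10))

def flux_integral(b, n=100000):
    # int_0^pi (sin th / Lam)(1 + b^2 cos^2 th / (2 Lam)) dth,
    # with Lam = 1 + b^2 sin^2 th / 4 and b = B*r, via the midpoint rule.
    h = math.pi/n
    s = 0.0
    for k in range(n):
        th = (k + 0.5)*h
        Lam = 1.0 + 0.25*(b*math.sin(th))**2
        s += (math.sin(th)/Lam)*(1.0 + (b*math.cos(th))**2/(2.0*Lam))*h
    return s

# equals 2 for any b, consistent with Eq. (11) giving 4*pi*alpha
for b in (0.0, 0.5, 2.0, 10.0):
    assert abs(flux_integral(b) - 2.0) < 1e-6
```

The b-independence follows because the integrand is exactly the ϑ-derivative of -cosϑ/Λ, whose endpoint values are always ∓1.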
Also, it is not complicated to check that the Gauss theorem holds true:

$$\int_{S^2} *F = 0. \qquad (12)$$

One can notice that relation (9) passes into the corresponding one for the case of the pure Schwarzschild metric, i.e., at B = 0 [3]. Finally, it is easy to check that the given solutions satisfy the Lorentz gauge condition, which can be written in the form div(A) = 0, where the divergence of the 1-form $A = A_\mu\, dx^\mu$ is defined by the relation

$$\operatorname{div}(A) = \frac{1}{\sqrt{|g|}}\, \partial_\mu\left(\sqrt{|g|}\, g^{\mu\nu} A_\nu\right).$$

Concluding Remarks

The mathematical reason for the existence of the solutions obtained is the following. It should be noted that the standard spacetime topology on which the metric (1) with arbitrary a = a(r) can be realized in a natural way is of bh-form. As was discussed in Refs. 3, such a topology admits a countable number of complex line bundles, while each complex line bundle E can be characterized by its Chern number n ∈ ℤ. The solutions obtained are just connections in the mentioned bundles: Dirac monopoles. But it should be emphasized that the total (internal) magnetic charge $Q_m$ of the system (black hole + external magnetic field), which should be considered as the one summed over all the monopoles, remains equal to zero because

$$Q_m = \frac{1}{e} \sum_{n \in \mathbb{Z}} n = 0, \qquad (13)$$

so the external observer does not see any magnetic charge of the system, though the monopoles are present in the sense described above. On the other hand, the nontrivial topological properties of spacetimes may play an essential role in studying the quantum geometry of fields on them (see, e.g., our reviews [6]). Therefore, physically, the results obtained could mean that in the given spacetime there exist topologically inequivalent configurations (TICs) for various fields. Each TIC corresponds to its Chern number n ∈ ℤ. The TIC with n = 0 can be called untwisted, while the rest of the TICs with n ≠ 0 should be referred to as twisted.
For example, TICs of a complex scalar field φ with mass $\mu_0$ should obey the equation

$$|g|^{-1/2}\left(\partial_\mu - ieA_\mu\right)\left[g^{\mu\nu}\, |g|^{1/2}\left(\partial_\nu - ieA_\nu\right)\varphi\right] = -\mu_0^2\, \varphi, \qquad (14)$$

where along with the external electromagnetic field $A = -B^2 r^2 \sin^2\vartheta/(2\Lambda)\, d\varphi$ we should include the addendum corresponding to (9), so that the full A of (14) takes the form

$$A = -\left[\frac{B^2 r^2 \sin^2\vartheta}{2\Lambda} + \frac{n \cos\vartheta}{e\Lambda}\right] d\varphi. \qquad (15)$$

This analogously holds true for a spinor field. Under these circumstances one can speak about the Hawking radiation process for any TIC of complex scalar or spinor fields, and one may try to obtain the luminosity L(n) with respect to the Hawking radiation for the TIC with Chern number n. We can interpret L(n) with n ≠ 0 as an additional contribution to the Hawking radiation due to the additional charged particles leaving the black hole because of the interaction with monopoles, and the corresponding radiation can be called the monopole Hawking radiation [7]. In this situation, to obtain the luminosity L of the black hole in question with respect to the Hawking radiation for all configurations, one should sum over all n, i.e.,

$$L = \sum_{n \in \mathbb{Z}} L(n). \qquad (16)$$

As a result, we can expect a marked increase of the Hawking radiation from the black holes under consideration. The above program has to a large extent been realized for Schwarzschild black holes [3, 8], and there is interest in seeing how the results obtained before would change in the presence of an external magnetic field. But to get an exact value of this increase one should apply numerical methods. In the case of the pure Schwarzschild black hole, for example, it was found that the contribution due to monopoles can be of order 11% of the total pion-kaon luminosity [3], while it is of order 22% for the electron-positron case [8]. It would therefore be interesting to evaluate a similar increase in the case of the Ernst-Schwarzschild metric.
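Equation (16) just assembles the total luminosity from the per-TIC contributions. As a purely schematic illustration of this bookkeeping (the decay model L(n) = e^(-|n|) below is invented for the sketch and has no physical content; actual L(n) values must come from the numerical methods mentioned above), one can truncate the sum at |n| ≤ n_max and ask what fraction of L comes from the twisted (n ≠ 0) sectors:

```python
import math

def total_luminosity(L_of_n, nmax):
    # truncated version of Eq. (16): L = sum over n in Z of L(n)
    return sum(L_of_n(n) for n in range(-nmax, nmax + 1))

L_of_n = lambda n: math.exp(-abs(n))      # hypothetical per-TIC luminosity model
L_tot = total_luminosity(L_of_n, 50)
twisted_fraction = 1.0 - L_of_n(0)/L_tot  # share due to n != 0 (monopole) sectors

assert 0.0 < twisted_fraction < 1.0
# closed form of the truncated geometric sum, (1 + q)/(1 - q) with q = 1/e
assert abs(L_tot - (1.0 + math.exp(-1.0))/(1.0 - math.exp(-1.0))) < 1e-12
```

With a physically computed L(n), the same loop would give the monopole contribution directly, in the spirit of the 11% and 22% figures quoted above.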
Acknowledgements

The work of the authors was supported in part by the Russian Foundation for Basic Research (grant No. 01-02-17157).

References

1. A. N. Aliev and D. V. Gal'tsov, Uspehi Fiz. Nauk 157, 129 (1989).
2. D. V. Gal'tsov, Particles and Fields in the Vicinity of Black Holes (Moscow University Press, Moscow, 1986).
3. Yu. P. Goncharov and N. E. Firsova, Int. J. Mod. Phys. D 5, 419 (1996); Nucl. Phys. B 486, 371 (1997); Phys. Lett. B 478, 439 (2000).
4. F. J. Ernst, J. Math. Phys. 17, 54 (1976).
5. A. L. Besse, Einstein Manifolds (Springer-Verlag, Berlin, 1987); M. M. Postnikov, Riemannian Geometry (Factorial, Moscow, 1998).
6. Yu. P. Goncharov, Int. J. Mod. Phys. A 9, 1 (1994); A. A. Bytsenko, G. Cognola, L. Vanzo and S. Zerbini, Phys. Rep. 266, 1 (1996).
7. Yu. P. Goncharov, Pis'ma v ZhETF 69, 619 (1999); Phys. Lett. B 458, 29 (1999).
8. Yu. P. Goncharov and N. E. Firsova, Mod. Phys. Lett. A 16, 2399 (2001).
title: Effect of non-parallel mean flow on the acoustic spectrum of heated supersonic jets: explanation of 'jet quietening'
author: M. E. Goldstein; A. Sescu; M. Z. Afsar
venue: J. Fluid Mech.
abstract: Noise measurements of heated axisymmetric jets at fixed supersonic acoustic Mach number indicate that the acoustic spectrum reduces when the temperature ratio increases. The 'spectral quietening' effect has been observed both experimentally and computationally using Large Eddy Simulations (LES). It was explained by Afsar et al. (M. Z. Afsar, M. E. Goldstein & A. M. Fagan, AIAA J., Vol. 49, p. 2522, 2011) through the cancellation introduced by the enthalpy flux/momentum flux coupling term using the generalized acoustic analogy formulation. But the parallel flow assumption is known to give inaccurate predictions at high jet speeds. In this paper we therefore extend the non-parallel flow asymptotic theory of Goldstein et al. (J. Fluid Mech., Vol. 695, p. 199, 2012) to determine the vector Green's function of the adjoint linearized Euler equations (ALEE) in the analogy. Using a steady Reynolds Averaged Navier Stokes (RANS) calculation for the jet mean flow, we find that the coupling term propagator is positive-definite and asymptotically sub-dominant at low frequencies corresponding to the peak jet noise when non-parallel flow effects are taken into account and self-consistent approximations for the turbulence structure are made. The validity of the non-parallel flow-based acoustic analogy model is assessed at various observation angles by computing the overall sound pressure level (OASPL), and we use this to suggest a more rational explanation of the quietening effect. In general, our noise predictions are in very good agreement with acoustic data beyond the peak frequency.
doi: 10.1063/1.5117231
pdfurls: https://export.arxiv.org/pdf/1909.10373v1.pdf
corpusid: 202719131
arxivid: 1909.10373
pdfsha: 7814a1c93b51451007c4bb976c65d46e069c0032
under consideration for publication in Physics of Fluids

Effect of non-parallel mean flow on the acoustic spectrum of heated supersonic jets: explanation of 'jet quietening'

(Dated: 21 January 2022)

Noise measurements of heated axisymmetric jets at fixed supersonic acoustic Mach number indicate that the acoustic spectrum reduces when the temperature ratio increases. The 'spectral quietening' effect has been observed both experimentally and computationally using Large Eddy Simulations (LES). It was explained by Afsar et al. (M. Z. Afsar, M. E. Goldstein & A. M. Fagan, AIAA J., Vol. 49, p. 2522, 2011) through the cancellation introduced by the enthalpy flux/momentum flux coupling term using the generalized acoustic analogy formulation. But the parallel flow assumption is known to give inaccurate predictions at high jet speeds. In this paper we therefore extend the non-parallel flow asymptotic theory of Goldstein et al. (M. E. Goldstein, A. Sescu & M. Z. Afsar, J. Fluid Mech., Vol. 695, p. 199, 2012) to determine the vector Green's function of the adjoint linearized Euler equations (ALEE) in the analogy. Using a steady Reynolds Averaged Navier Stokes (RANS) calculation for the jet mean flow, we find that the coupling term propagator is positive-definite and asymptotically sub-dominant at low frequencies corresponding to the peak jet noise when non-parallel flow effects are taken into account and self-consistent approximations for the turbulence structure are made. The validity of the non-parallel flow-based acoustic analogy model is assessed at various observation angles by computing the overall sound pressure level (OASPL), and we use this to suggest a more rational explanation of the quietening effect. In general, our noise predictions are in very good agreement with acoustic data beyond the peak frequency.
a) Electronic mail: [email protected]; https://pureportal.strath.ac.uk/en/persons/mohammed-afsar. I. INTRODUCTION The peak sound of a high speed jet radiates at low frequencies (typically at Strouhal numbers, St ∼ 0.2) such that the Overall Sound Pressure Level (OASPL) is being greatest at a small polar observation angle, θ , with respect to the jet centre-line, usually at θ = 30 •1,2 . Although these trends are essentially the same for all acoustic Mach numbers in isothermal flows up to mild supersonic conditions, where the convection speed of the turbulence remains subsonic such that broad-band shock associated noise (see Mayo et al. 3 The discovery of quieter supersonic jets by heating was first made in Hoch et al. 6 and later, more systematically reported by Tanna 1 , for a round jet flow at Ma = 1.47. sound radiated by Mach waves is expected to enter at high frequencies at these speeds, hence the Shea/Stuber results 15,16 will be qualitatively applicable to the mild supersonic conditions under our consideration in which sound amplification due to turbulence/mean flow interaction (i.e. jet quietening effects) remain a low frequency phenomenon. The aim of this paper is to develop a model for low frequency jet noise that gives a consistent physical explanation for the observed quietening of heated supersonic jets in addition to providing a 'low-order' prediction scheme. We use the acoustic analogy of Ref. 18 19 ). The generalized analogy, owing to its first principle derivation, has laid the foundation for systematic analysis of jet noise to be performed without concern if the source terms (that are assumed known) or the propagator are defined consistently; these were the main issues relating to the Lighthill 20 and Lilley 21 formulations. 
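The two quantities quoted above can be made concrete with a short sketch: the Strouhal number St = f D/U_J (so the peak at St ≈ 0.2 fixes a dimensional peak frequency once a nozzle diameter D and exit velocity U_J are chosen), and the OASPL, which is the energy-basis sum of the band spectrum levels. The nozzle values below are illustrative assumptions, not the paper's set points:

```python
import math

def peak_frequency(St, D, UJ):
    # dimensional frequency from the Strouhal number, St = f D / UJ
    return St*UJ/D

f_peak = peak_frequency(0.2, 0.05, 500.0)   # e.g. 5 cm nozzle, 500 m/s jet
assert abs(f_peak - 2000.0) < 1e-9          # peak near 2 kHz for these values

def oaspl(spl_bands_db):
    # overall level of a band spectrum: sum the bands on an energy basis
    return 10.0*math.log10(sum(10.0**(s/10.0) for s in spl_bands_db))

# two equal 100 dB bands combine to 100 + 10*log10(2) ~ 103 dB overall
assert abs(oaspl([100.0, 100.0]) - (100.0 + 10.0*math.log10(2.0))) < 1e-9
```

The same band summation is what is meant later when the model's OASPL is compared against acoustic data at each observation angle.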
Goldstein 18 showed that the acoustic spectrum per unit volume can be expressed as a convolution product of a propagator and the auto-covariance tensor of a generalized (rank-four) fluctuating stress tensor, a stationary random function that reduces to the fluctuating Reynolds stress in the absence of any temperature unsteadiness. Although the auto-covariance tensor possesses 144 components, not all of these are independent. Symmetry between tensor suffixes shows that it reduces (without approximation) to 63 independent components, which is still too large for practical use. Thankfully, however, rational approximations such as axisymmetric turbulence further reduce this to a manageable number of 11, only a few of which dominate the peak radiated sound. The propagator tensor, on the other hand, is determined by the vector Green's function of the ALEE at O(1) frequencies, which is itself a function of the mean flow field for an, in general, arbitrarily spreading jet.

Noise predictions using Ref. 18 were successfully implemented by Goldstein & Leib 19 , Karabasov et al. 22 and Leib & Goldstein 23 for various isothermal axisymmetric jets at a range of Ma. The Reynolds stress auto-covariance tensor in these models was approximated to be kinematically axisymmetric, and the functional form of the components that entered it was based on experiments and/or LES 11,12,22 . More specifically, the functional form was assumed to be proportional to the local RANS turbulent kinetic energy (TKE). The propagators were also determined using the RANS mean flow. The above approaches still remain the current state of the art for low-order prediction models based on the acoustic analogy (see Ref. 24).
Note that while these models rely on accurate RANS mean flow solutions (which are themselves based on turbulence models), the centerline mean flow distribution is usually accurately predicted compared to particle-image velocimetry (PIV) data, with closest agreement being at the end of the jet potential core (see Karabasov et al. 22 ). Afsar 25 , among others, showed that the acoustic efficiency of the coupling term is raised to a dipole at low frequencies even though, intrinsically, it appears as a quadrupole when multiplied by the appropriate Reynolds stress auto-covariance component in the acoustic spectrum formula 26 .

Despite the predictive success of the models described above, there still remain several basic research problems involving the generalized acoustic analogy, such as: (a) how to model the components of the auto-covariance tensor that enter the prediction model, and (b) what the appropriate base flow is (i.e. uniform, parallel, or weakly/strongly non-parallel) with which to determine the solution to the ALEE, and therefore the propagator tensor. We shall answer both of these questions in this paper for the technologically relevant axisymmetric jet flow.

Previous attempts at explaining the quietening of heated supersonic axisymmetric jets using the generalized analogy approach showed that cancellation introduced by the enthalpy flux/momentum flux auto-covariance (coupling) term could potentially reduce the magnitude of the acoustic spectrum (see Afsar, Goldstein & Fagan 27 , hereafter referred to as AGF). However, the AGF results were based on low frequency asymptotic estimates of the propagator in a parallel (non-spreading) mean flow. But even in the absence of jet heating, a parallel (i.e. a uni-directional transversely sheared) mean flow does not capture the correct amplification in sound at these frequencies for high subsonic jets (Karabasov et al. 28 ); hence this approach cannot be expected to provide a consistent model for the reduction in sound with heating at supersonic jet speeds.

In isothermal flows, Goldstein et al. 29 (hereafter referred to as GSA) found that non-parallel flow effects can be properly accounted for in the propagator solution at low frequencies when the time-dependent Green's function evolves temporally at the same order as its spatial development. Further extensions of the GSA theory represent an important area of research, since the peak jet noise is observed at low frequencies (typically at Strouhal numbers, St, based on jet exit conditions, of St ∼ 0.2) and small θ (Karabasov et al. 22 ). Reduced order models of this phenomenon will therefore support noise reduction efforts while remaining computationally inexpensive compared to the full numerical solution of the ALEE. The above distinguished scaling allows non-parallelism to enter the small-θ/low-St region of parameter space, affecting the lowest order solution to the propagator everywhere in the jet (GSA; cf. Karabasov et al. 28 ). At the scaled frequency Ω = ω/ε = O(1), the 'inner' region of the ALEE is governed by a single hyperbolic partial differential equation (PDE) when the streamwise mean flow is taken as one of the independent variables 29 . We show, however, that this transformation can be introduced prior to any asymptotic expansion of the ALEE. The inner equation then, essentially, follows at once after relatively straightforward dominant balance considerations are made.

The rest of the paper is organized as follows. In §.II we briefly summarize the generalized acoustic analogy for heated jets at moderate supersonic acoustic Mach numbers. In §.III we expand the propagator tensor in a heated flow by showing that the same distinguished asymptotic limit of the ALEE derived in GSA must be applicable when TR > 1.
The same inner equation is then valid in heated jets as for the isothermal case 29 , but with the Favre-averaged mean square speed of sound determined by the Crocco-Busemann rather than the Crocco relation (which applies only at unity TR). In §.IV we use the propagator expansion to obtain an approximate formula for the peak acoustic spectrum in heated jets; this formula is used in §.V to understand what contribution the temperature-associated noise source terms play in the quietening of low frequency sound. We consider two axisymmetric jets at the same supersonic Ma based on the Bridges 9 data set that were derived from the Tanna 1 set points. Previous authors have confirmed experimentally 1,9 and computationally 5 that these jets possess a definite amount of spectral quietening. The conditions are summarized in Table I. The mean flow used in the study was obtained by a RANS calculation using FLUENT. Our results are presented in §.V and discussed in §.VI. We find that a jet noise model based on an extended version of the GSA asymptotic theory gives accurate (i.e. within 1-2 dB) sound predictions for the peak noise of the heated and isothermal supersonic jets under consideration. Non-parallel flow effects are seen to play a crucial role in correctly determining the sign and magnitude of the coupling and enthalpy flux terms within the acoustic spectrum.

II. ACOUSTIC SPECTRUM FORMULA IN HEATED FLOWS

A. Basic formalism of generalized analogy

Consider a high speed axisymmetric air jet possessing a spatially evolving (i.e. non-parallel) mean flow with arbitrary temperature ratio convecting a localized region of turbulence. It is necessary to briefly summarize the acoustic spectrum formula derived in AGF to appropriately set the context of the asymptotic analysis of the ALEE developed in §.III.
Thus, let the (dimensional) pressure p, density ρ, enthalpy h, and speed of sound c satisfy the ideal gas law equation of state p = ρc²/γ, where γ denotes the specific heat ratio, such that h = c²/(γ-1). The acoustic spectrum at the observation point x is given by the Fourier transform

$$I(\mathbf{x}, \omega) \equiv \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega\tau}\, \overline{p'(\mathbf{x}, t)\, p'(\mathbf{x}, t+\tau)}\, d\tau \qquad (1)$$

of the far-field pressure auto-covariance $\overline{p'(\mathbf{x}, t)\, p'(\mathbf{x}, t+\tau)}$. The acoustic spectrum at $\mathbf{x} = (x_1, \mathbf{x}_T) = (x_1, x_2, x_3)$, due to a unit volume of turbulence at $\mathbf{y} = (y_1, \mathbf{y}_T) = (y_1, y_2, y_3)$, is given by

$$I(\mathbf{x}; \omega) = \int_{V_\infty(\mathbf{y})} I(\mathbf{x}, \mathbf{y}; \omega)\, d\mathbf{y}, \qquad (2)$$

where $V_\infty(\mathbf{y})$ is the entire source region, $p'(\mathbf{y}, \tau) \equiv p(\mathbf{y}, \tau) - \bar p(\mathbf{y})$, and over-bars denote time averages defined as

$$\overline{\bullet}(\mathbf{x}) \equiv \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} \bullet(\mathbf{x}, t)\, dt, \qquad (3)$$

where • in (3) is a place holder for any fluid mechanical variable. Goldstein & Leib 19 showed that the integrand in (2) can be determined by the exact integral solution

$$\frac{I(\mathbf{x}, \mathbf{y}; \omega)}{(2\pi)^2} = \Gamma_{\lambda, j}(\mathbf{y}|\mathbf{x}; \omega) \int_{V_\infty(\boldsymbol{\eta})} \Gamma^*_{\mu, l}(\mathbf{y}+\boldsymbol{\eta}|\mathbf{x}; \omega)\, \mathscr{H}_{\lambda j \mu l}(\mathbf{y}, \boldsymbol{\eta}; \omega)\, d\boldsymbol{\eta}. \qquad (4)$$

The asterisks in (4) denote complex conjugation. The propagator,

$$\Gamma_{\lambda, j}(\mathbf{y}|\mathbf{x}; \omega) \equiv \Lambda_{\lambda\sigma, j}(\mathbf{y})\, G_\sigma(\mathbf{y}|\mathbf{x}; \omega) := \left(\delta_{\lambda\sigma} \frac{\partial}{\partial y_j} - (\gamma-1)\, \delta_{4\sigma} \frac{\partial \tilde v_\lambda}{\partial y_j}\right) G_\sigma(\mathbf{y}|\mathbf{x}; \omega), \qquad (5)$$

involves an inner tensor product, over the suffix σ, of the operator $\Lambda_{\lambda\sigma, j}(\mathbf{y})$, which spans (4 × 4 × 3) dimensions corresponding to the suffixes (λ, σ, j) (the comma before j indicating that this suffix belongs to a derivative), with the first four components of the Fourier transform

$$G_\sigma(\mathbf{y}|\mathbf{x}; \omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega(t-\tau)}\, g^a_{\sigma 4}(\mathbf{y}, t-\tau|\mathbf{x})\, d(t-\tau) \qquad (6)$$

of the five-dimensional adjoint vector Green's function $g^a_{\sigma 4}(\mathbf{y}, \tau|\mathbf{x}, t)$ (with suffix σ = 1, 2, 3, 4, 5) on the left hand sides of the five ALEE given by Eqs. (4.8)-(4.10) of G & L 19 , subject to the strict causality condition for the adjoint pressure-like Green's function: $g^a_{44}(\mathbf{y}, t-\tau|\mathbf{x}) = 0$ for t < τ when |x| → ∞.
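Definition (1) is a Wiener-Khinchin construction: the spectrum is the Fourier transform of the time-averaged far-field pressure auto-covariance. A discrete sketch with a synthetic stationary 'pressure' record (the tone frequency, noise level and sampling values are illustrative) recovers a spectrum peaked at the imposed tone:

```python
import numpy as np

fs, T, f0 = 200.0, 40.0, 7.0                     # sample rate, record length, tone
t = np.arange(0.0, T, 1.0/fs)
rng = np.random.default_rng(0)
p = np.cos(2.0*np.pi*f0*t) + 0.1*rng.standard_normal(t.size)  # stand-in for p'(x, t)
p -= p.mean()

# biased estimate of C(tau) = time average of p'(t) p'(t + tau);
# np.correlate 'full' returns lags -(N-1)..(N-1) with zero lag at the centre
C = np.correlate(p, p, mode='full') / p.size
I = np.abs(np.fft.rfft(np.fft.ifftshift(C)))     # Eq. (1), up to normalization
freqs = np.fft.rfftfreq(C.size, d=1.0/fs)
f_peak = freqs[np.argmax(I)]
assert abs(f_peak - f0) < 0.1                    # spectrum peaks at the imposed tone
```

The ifftshift step moves the zero-lag sample to index 0 so the FFT sees the auto-covariance in the ordering it expects.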
As frequently commented in previous papers 18,19,23 , (4) and (5) above are completely general and apply to any localized turbulent flow, even in the presence of fixed solid surfaces whose boundaries are given by level curves S(y) = const., as long as $g^a_{\sigma 4}(\mathbf{y}, \tau|\mathbf{x}, t)$ is assumed to satisfy appropriate surface rigidity conditions $\hat n_\sigma\, g^a_{\sigma 4}(\mathbf{y}, \tau|\mathbf{x}, t) = 0$, where $\hat n_\sigma = \{\hat n_i, 0, 0\} = \{\hat n_1, \hat n_2, \hat n_3, 0, 0\}$ denotes the unit normal to S(y). The unit tensor $\delta_{\lambda\sigma}$ that appears in both terms on the second line of (5) above is the symmetric four-dimensional Kronecker delta function, and a tilde refers to the Favre average $\tilde\bullet = \overline{\rho \bullet}/\bar\rho$, so that the four-dimensional mean velocity vector in (5) is $\tilde v_\lambda = \{\tilde v_i, 0\}$, i = (1, 2, 3). The 5th component of $G_\sigma(\mathbf{y}|\mathbf{x}; \omega)$, the Fourier transform of the adjoint Green's function for the continuity equation in the generalized analogy (defined by Eq. 2.9a in Goldstein 18 ), does not enter the propagator formula given by (5); it does, however, affect its solution through the linearized adjoint equations:

$$-D_0 G_i + G_j \frac{\partial \tilde v_j}{\partial y_i} - \tilde c^2 \frac{\partial G_4}{\partial y_i} + (\gamma-1)\, \tilde X_i\, G_4 - \frac{\partial G_5}{\partial y_i} = 0, \qquad (7a)$$

$$-D_0 G_4 - \frac{\partial G_i}{\partial y_i} + (\gamma-1)\, G_4\, \frac{\partial \tilde v_i}{\partial y_i} = \frac{\delta(\mathbf{x}-\mathbf{y})}{2\pi}, \qquad (7b)$$

$$-D_0 G_5 + \tilde X_i\, G_i = 0, \qquad (7c)$$

where $D_0 \equiv i\omega + \tilde{\mathbf{v}}(\mathbf{y}) \cdot \nabla$ is the convective derivative, in which ∇ is the three-dimensional gradient operator and i = (1, 2, 3). Reciprocity (see pp. 878-886 of Morse and Feshbach 30 and Eq. 4.7 in G & L) of the space-time Green's function demands that $g^a_{\sigma 4}(\mathbf{y}, \tau|\mathbf{x}, t) = g_{4\sigma}(\mathbf{x}, t|\mathbf{y}, \tau)$. The y independent variable in (7) corresponds to the actual physical source point, while x, the observation point, is taken as a parameter in the solution and located in the far field, |x| → ∞. The coefficients that multiply the derivatives acting on $G_\sigma(\mathbf{y}|\mathbf{x}; \omega)$ in the system of equations (7) depend on the mean flow field through $\tilde v_i = (\tilde v_1, \tilde v_2, \tilde v_3)$, the Favre-averaged speed of sound squared $\tilde c^2(\mathbf{y}) \equiv \gamma \bar p / \bar\rho$, and $\tilde{\mathbf{X}}(\mathbf{y}) = (\tilde{\mathbf{v}} \cdot \nabla)\tilde{\mathbf{v}}$, the mean flow advection vector.
The scripted tensor, H λ jµl (y, η; ω), in the acoustic spectrum formula, (4), is related to the Fourier transform H λ jµl (y, η; ω) = 1 2π ∞ −∞ e iωτ R λ jµl (y, η; τ) dτ,(8) of the generalized auto-covariance tensor, R λ jµl (y, η; τ) ≡ lim T →∞ 1 2T T −T e λ j (y, τ)e µl (y + η, τ + τ 0 ) dτ 0 ,(9) of the stationary random function, e λ j (y, τ) = [ρv λ v j − ρv λ v j ](y, τ), by the linear transformation H λ jµl (y, η; ω) := ε λ jσ m H σ mγn (y, η; ω)ε µlγn . The four-dimensional vector, v λ , denotes the perturbation, v λ (y, τ) ≡ v λ (y, τ) −ṽ λ (y), in which v λ = v i for i = (1, 2, 3). GSA worked out an asymptotic approximation of the ALEE, given by the system in (7), for a slowly diverging jet flow at temporal frequencies of the order of the small jet spread rate, that is, at ω = O(ε). This theory applied to a particular class of flows where Crocco's relation 34 is valid. We now show that this asymptotic expansion procedure can be naturally extended to heated jets and, interestingly, results in the same inner equation for the appropriate Green's function variable as in isothermal flow but now with the speed of sound determined via the temperature-dependent Crocco-Busemann formula. A. Transformation of (7) at O(1) spread rate We non-dimensionalize the dependent and independent variables in the ALEE (7), in preparation for the asymptotic analysis of §.III B. The independent variables (y, τ) are normalized on nozzle-exit scales; the velocity, pressure and density are then normalized by U J , ρ J U 2 J and ρ J (nozzle exit density) respectively. Taking (e 1 , e r , e φ ) as an orthogonal triad of basis vectors in a cylindrical co-ordinate space shows that the first three components of the vector G ≡ G σ = (G i , G 4 , G 5 ), determined by (7a), can be expressed as a linear function of that basis by G j = (G i e i )e j = G 1 δ j1 + G r δ jr + G φ δ jφ where G i = (G 1 , G r , G φ ) are the respective components of G i along the (e 1 , e r , e φ ) directions.
The mean flow field (commensurate with an axisymmetric jet) has components, v = (U,V r ) where, at this point, we leave the jet spread rate to be otherwise arbitrary at ε = O(1). As GSA did, we take U to be one of the independent variables of choice; i.e. under the one-toone mapping (y 1 , r) → (y 1 ,U) where r ≡ |y T | = y 2 2 + y 2 3 . The co-ordinate surfaces U(y 1 , r) = const. and y 1 = const. are such that ∇U.∇y 1 = 0 at any fixed radial location, r, in the field space. Since the gradient operator shows that e 1 ≡ ∇y 1 and ∇U ≡ e 1 ∂U/∂ y 1 + e r ∂U/∂ r, the definition of the partial derivative requires that ∇U.∇y 1 = ∂U/∂ y 1 = 0 in the transformed co-ordinate system. Using the fact that G σ (y 1 , r, φ |x; ω) is implicitly related toG σ =G σ (y 1 ,U, φ |x; ω) via: G σ (y 1 ,U(y 1 , r), φ |x; ω) = G σ (y 1 , r, φ |x; ω),(10) the orthogonality condition and the chain rule in (y 1 ,U) co-ordinates similarly shows that the mean flow advection vector, X i = (X 1 , X r ) in (7a-c), takes the slightly more general form X 1 (y 1 ,U) = V r ∂U ∂ r &X r (y 1 ,U) = U ∂ ∂ y 1 +V r ∂ ∂ r V r .(11) compared to Eq.(5.15) in GSA since (11) is now applicable at ε = O(1). The operator D 0 , when acting onG σ (y 1 ,U(y 1 , r), φ |x; ω) in (7a-c), can also be transformed as follows, D 0 G σ (y 1 , r) = iω +U ∂ ∂ y 1 +V r ∂ ∂ r G σ ≡ D 0 +X 1 ∂ ∂U G σ (y 1 ,U),(12) where we have suppressed the remaining arguments in G σ andD 0 ≡ iω +U∂ /∂ y 1 . Since ∂U/∂ r = (∂ r/∂U) −1 and the chain rule shows that ∂ /∂U = (∂ r/∂U)∂ /∂ r, the i = r component of (7b) can be transformed tõ G 1 (y 1 ,U) = c 2 ∂G 4 ∂U + ∂G 5 ∂U +S r (y 1 ,U)(13) whereS r , one component of the vectorS i = (S 1 ,S r ,S 5 ), represents an O(1) 'left-over' term that acts to couple the various components of G σ = (G i , G 4 , G 5 ) in the ALEE, (7). 
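The mapping (y 1 , r) → (y 1 ,U) relies on U being monotone in r at fixed y 1 , so that r(U) can be recovered by inversion, and derivatives then transform through the chain rule ∂/∂U = (∂r/∂U)∂/∂r. A minimal numerical sketch, with an assumed Gaussian-type profile U(r) = exp(−r²) standing in for the jet profile at one station:

```python
import numpy as np

r = np.linspace(0.0, 4.0, 2001)
U_of_r = np.exp(-r**2)             # toy monotone-decreasing profile at fixed y1

# invert r -> U on a uniform U-grid (U decreases with r, so flip for np.interp)
U = np.linspace(0.05, 0.95, 181)
r_of_U = np.interp(U, U_of_r[::-1], r[::-1])

# chain rule factors: dU/dr evaluated on the U-grid, and dr/dU = 1/(dU/dr)
dUdr = -2.0 * r_of_U * np.exp(-r_of_U**2)

# sanity check: transform g(r) = r^2, differentiate in U, map back to d/dr
g_of_U = r_of_U**2
dg_dU = np.gradient(g_of_U, U)
dg_dr_via_chain = dg_dU * dUdr     # should recover dg/dr = 2r
```

This is the same inversion one would apply at each y 1 station of a RANS field before marching in the (y 1 ,U) plane.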
As we shall see shortly, the retention ofS i , while being an exact consequence of the algebraic manipulation, prevents the solution to the lowest order asymptotic expansion of the Green's function variableν (introduced below) from being governed by a hyperbolic PDE. This form of (13) also generalizes Eq. (5.23) in GSA for an axisymmetric jet in which ε = O(1) whereS r (y 1 ,U) is defined bỹ S r (y 1 ,U) = ∂ r ∂U D 0 − ∂V r ∂ r G r − (γ − 1)X rG4 (y 1 ,U),(14) such that mean flow components (U,V r ) in (11), (12) & (14) are arbitrary (i.e. O(1)) at this point in the analysis. Eqs. (13) and (14) can now be used to generalize Eq.(5.26) in GSA. Inserting the second member of (12) into (7c) and using (13) & (14) shows that (7c) can be transformed tõ D 0ν (y 1 ,U) = c 2 D 0G4 +S 5 (y 1 ,U)(15) for the Green's function variable,ν =ν(y 1 ,U) ≡ c 2G 4 +G 5 when c 2 = f (U) in which f is an arbitrary function at this point but will be specified shortly to eliminate any 'G 4 terms' appearing on the left side of (16). Here,S 5 (y 1 ,U) is given byS 5 =X rGr +X 1Sr . To set about showing that anyG 4 terms on the left hand side of the equation that governs ν(y 1 ,U) vanish depending on the choice of c 2 = f (U), we first integrate (13) by parts to re-write its right hand side in terms ofν(y 1 ,U) and insert the result, (12) & (15) into the i = 1 component of (7a). The latter can then be written in the following form ∂ ∂UD 0ν − 1 c 2 ∂ c 2 ∂UD 0ν +X 1 ∂ 2ν ∂U 2 −X 1 (γ − 1) + ∂ 2 c 2 ∂U 2 G 4 = −S 1 + S 5 c 2 + D 0Sr ,(16) whereS 1 is defined below. The pre-factor multiplyingG 4 in (16) is identically zero when c 2 = f (U) is assumed to satisfy Crocco's relation 29,34 , c 2 (U) = c 2 ∞ − (γ − 1)U 2 /2 (where c ∞ is the speed of sound at infinity). 
This approximation assumes that the mean flow stagnation enthalpy is constant; it is used as a first approximation to the static temperature (enthalpy) field in a compressible laminar boundary layer (White 35 ). For a heated jet, on the other hand, it is more appropriate to use the Crocco-Busemann relation, in which a flow of unity Prandtl number and zero streamwise pressure gradient 38 possesses a static enthalpyh =h(U) = −U 2 /2 +c 1 U +c 2 (White 35 , p.579 & f. and 627-628). Since the jet has zero flow in the outer region, evaluating the constants (c 1 ,c 2 ) gives (cf. Eq. 2.4c in Lesshafft et al. 39 ) c 2 (U) = c 2 ∞ + c 2 ∞ (T R − 1)U + γ − 1 2 U(1 −U)(17) and therefore that ∂ 2 c 2 /∂U 2 = −(γ − 1). Eq. (17) implies that, even for a heated jet, the square-bracketed term on the left side of (16) is identically eliminated. We verify the Crocco-Busemann relation in §.V using the streamwise mean flow, U, obtained from a steady RANS solution of the heated and isothermal supersonic jets in Table I. Earlier work showed that the effect of assuming the Crocco-Busemann relation produced only a slight change in the momentum thickness of the initial shear layer at jet Mach numbers in the range (0.5 − 0.98). Owing to the fact that the square brackets in (16) vanish when using (17) for c 2 , we can integrate by parts in (16) to show that the combined Green's function variable,ν(y 1 ,U), is determined by the following PDE: Lν(y 1 ,U) = F (S), for ε = O(1),(18) where L (y 1 ,U) ≡ c 2 ∂ ∂U 1 c 2D 0 +X 1 ∂ 2 ∂U 2 ,(19) is a hyperbolic PDE operator andS = {S 1 ,S r ,S 5 }, the vector of so-called 'left-over' terms, enters via the functional F (S) defined explicitly by, F (S) = F (S 1 ,S r ,S 5 ) :=S 1 − S 5 c 2 + D 0Sr .(20) The components ofS i that enter F (S) are linearly related to the adjoint Green's function component for the radial momentum equation,G r , and the mean flow component, V r .
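Both Crocco's relation and the Crocco-Busemann form (17) are quadratic in U, so ∂²c²/∂U² = −(γ − 1) in either case and the bracket multiplying G̃ 4 in (16) vanishes identically. A quick numerical confirmation (γ, c ∞ and T R below are illustrative values, not the paper's operating points):

```python
import numpy as np

gamma, c_inf, TR = 1.4, 1.0, 2.5     # assumed illustrative values
U = np.linspace(0.0, 1.0, 2001)

# Crocco-Busemann relation, Eq. (17), and Crocco's relation for comparison
c2_cb = c_inf**2 + c_inf**2 * (TR - 1.0) * U + 0.5 * (gamma - 1.0) * U * (1.0 - U)
c2_crocco = c_inf**2 - 0.5 * (gamma - 1.0) * U**2

# second derivatives by repeated central differencing
d2_cb = np.gradient(np.gradient(c2_cb, U), U)
d2_crocco = np.gradient(np.gradient(c2_crocco, U), U)
```

Since the heating enters (17) only through the term linear in U, it cannot reinstate the bracket: the cancellation is a property of the quadratic coefficient alone.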
The latter entersS i as a coefficient or derivative, for example as in, S 1 (y 1 ,U) = ∂V r ∂ y 1G r (y 1 ,U),(21) or viaX r , in (11), which is present in both theS r component, given by (14), andS 5 defined below (15). To sum up thus far, the final equation we have derived, (18), is simply a direct re-arrangement of Fourier transformed ALEE, (7a-c), where F (S) is defined explicitly in (20). It is valid for an arbitrary axisymmetric jet flow with mean flow components, v = (U,V r ) at O(1) jet spread rates, where the speed of sound is determined by Crocco-Busemann relation, (17) andG σ =G σ (y 1 ,U, φ |x; ω) is the appropriate O(1) frequency adjoint vector Green's function solution to (7) with suffix σ = 1, 2, ...5. The mapping of independent variables (y 1 , r) → (y 1 ,U) can, in principle, be used for any problem governed by a system of equations of the type given by (7). Although we have reduced the total number of independent equations that need to be solved from 5 in (7) down to 4 in (18)- (20), the Green's function problem is just as complex as the original ALEE system. This is because the functional, F (S), depends on the 'leftover terms' through the vector,S i = {S 1 ,S r ,S 5 }, that appears on the right hand side of (18) and which transforms it to a mixed PDE that requires the solution of 4 coupled equations for (ν,G 4 ,G r ,G φ ) using (15), (18) and i = (r, φ ) components of (7a) when (13) is substituted forG 1 . But the vectorS i = (S 1 ,S r ,S 5 ) turns out to be asymptotically sub-dominant (i.e., negligible in comparison to the lowest order solution to Eq. 18) at low frequencies when ω = O(ε) and under an appropriate distinguished scaling 29 for the (r, φ ) components ofG σ (y 1 ,U, φ |x; ω) that balances (7). In the next section we prove that F (S) = o(1) in the limit as ε → 0 and how it necessarily shows that (18) decouples into a homogeneous (i.e. right hand side equal to zero) hyperbolic PDE for the single dependent variable,ν. 
But even though F (S) will be shown to play a largely irrelevant role in the solution at the lowest order asymptotic expansion ofν, our simplified derivation highlights the crucial role the dominant balance ofG r plays in the elimination of theS i = {S 1 ,S r ,S 5 } vector. The Green's function,G φ , is also important to this dominant balance calculation thanks to the i = φ component of (7a) and the adjoint energy equation, (7b), both of which were not explicitly used in deriving (18). As opposed to GSA, our analysis applies in flows where T R ≠ 1. B. Asymptotic expansion at ω = O(ε) At low frequencies the mean flow expands in the inner region as ṽ i ={U(Y ),V r (Y,U)} =      U + εU (1) (Y,U) + O(ε 2 ), i = 1 ε(V r + εV (2) r )(Y,U) + O(ε 3 ), i = r(22) when the lowest order expansion of c 2 is determined by the Crocco-Busemann relation, (17). We have not put superscripts on the lowest order mean flow components, that would otherwise appear as (U (0) ,V (1) r ) respectively; they will be taken as that computed by the RANS solution. Moreover, at this order in ε,ρ(Y,U) =ρ(U),p(Y,U) = const. and the mean flow advection vector, X i (y), that enters algebraically in the {S r ,S 5 } components ofS i , similarly expands as X i ={X 1 ,X r }(Y,U) =      εX 1 (Y,U) + ε 2X (2) 1 (Y,U) + O(ε 3 ), i = 1 ε 2X r (Y,U) + O(ε 3 ), i = r(23) where the leading terms are defined by,X 1 ≡X (1) 1 = V r (∂U/∂ r) andX r ≡X (2) r = (U∂ /∂Y + V r ∂ /∂ r)V r for the streamwise and radial components respectively. Hence, when measured from the jet centerline, the mean flow separates into an inner region, given by (22), and an outer region in which the jet velocity has decayed to the ambient state. Inserting the above distinguished scaling into (12) and using the mean flow expansion (22) & (23) shows that the operator D 0 acting onν(Y,U) is given by (24). Eq. (24), together with the first line of (23), shows that the left side of (18) will be at most O(ε) whenν = O(1). The latter must be the case since the solution forν in the outer region (see Eq. 5.40 of GSA) expands in this manner.
Explicitly, D 0ν (y 1 ,U) = ε iΩ +U ∂ ∂Y +V r ∂ ∂ r ν ≡ ε D 0 +X 1 ∂ ∂U ν(Y,U),(24) whereD 0 ≡ iΩ +U∂ /∂Y at Ω = O(1). The right side of (18) will then be O(ε 2 ) prior to considering the dominant balance ofG r since, by (24) & (26), the 'leftover' terms on the right side of (18) (defined by 21, 14 & the line below 15), when substituted into (20), expand to leading order as follows F (S) → ε 2 ∂V r ∂YG r − 1 c 2 ε εX rGr +X 1Sr − ε D 0 +X 1 ∂ ∂U S r ,(25) where, S r (y 1 ,U) = ∂ r ∂U D 0 − ∂V r ∂ r G r + o(1) ≡ ε ∂ r ∂U D 0 +X 1 ∂ ∂U − ∂V r ∂ r G r (Y,U) + O(ε 2 ),(26) after again using (22). F (S) still drops out of (18) because, whatever asymptotic expansion we take forG (r,φ ) , both of these Green's functions must remain bounded on the jet axis. That is, considering the conditions across the surface r = 0 in the i = φ component of (7a) and using ∇.ṽ ∼ D 0G4 = O(ε) in the adjoint energy equation, (7b), shows thatG (r,φ ) = 0 at lowest order in (7), (18), (20), (25) & (26). GSA use an alternative explanation to restrict the modal expansion of the azimuthal Fourier transform ofν; this, however, follows straightforwardly here because ifG φ = 0 at lowest order, the i = φ component of (7a) recovers the fact that the lowest order (axisymmetric) solutionν to (18) is independent of the azimuthal angle φ . In other words, the Fourier transform ofν(Y,U, φ |X, Φ; Ω) in the difference, (Φ − φ ), is given bŷ ν (n) (Y,U) = 1 2π ∞ −∞ν (Y,U|X, |x T |, Φ − φ ; Ω)e in(Φ−φ ) d(Φ − φ ) ≡ δ (n)ν(Y,U) | (Φ−φ )=0 ,(27) where δ (•) is the Dirac delta function of argument (•) (we have suppressed the repeated variable list in theν-solution). Using (6), the solution,ν(Y,U), given by the scaled Fourier transform (note the error in the pre-factor of Eq. 5.8 in GSA); ν(Y,U)≡ ε 4πc 2 ∞ |x| e iΩX/c ∞ν (Y,U|X, |x T |, 0; Ω) = 1 2πε ∞ −∞ e iΩ(T 0 −T ) ( c 2g 44 +g 54 )(Y,U|X, |x T |, 0;T 0 −T ) d(T 0 −T ),(28) is now determined by (18) when F (S) = o(1) at arbitrary Ω = O(1) frequencies.
Hence, setting the right hand side in (18) equal to zero, shows that the lowest order term in the expansion ν(y 1 , r) =ν(y 1 , r) +ν (1) (y 1 , r) + ... is given by the solution to Lν(Y,U) ≡ c 2 ∂ ∂U 1 c 2D 0ν +X 1 ∂ 2ν ∂U 2 = 0, for ε O(1),(29) by the implicit function theorem whereν(Y,U) ≡ c 2Ḡ 4 +Ḡ 5 is related to the zeroth-order az- x, only through θ . Eq. (29) applies to jet flows with T R > 1 and is identical to Eq. (5.31) in GSA but with c 2 now determined by (17). imuthal modeν (0) (Y,U) through the inverse Fourier transform of (27) in (Φ − φ ) where (X, T 0 ) = ε(x 1 , The hyperbolic structure of (29) shows that it is unnecessary to impose a downstream boundary condition. Fig. 1 in GSA indicates how 'ν-waves' propagate to both left and right from the U = 0 boundary and that no boundary conditions are required on the Y = 0 and Y → ∞ level curves (i.e. no inflow condition is necessary). Henceν(Y,U) is now uniquely determined by the outer boundary conditions (i.e., by matching to the inner limit of the outer solution using Van Dyke's rule 45 ) obtained from the (zero flow wave equation) solution to (29) whenX 1 = 0. That is, ν(Y, 0) = −iΩc 2 ∞ e −iΩY cos θ /c ∞ ,(30)∂ν ∂U (Y, 0) = −iΩc ∞ cos θ e −iΩY cos θ /c ∞ ,(31) apply on the non-characteristic curve, U = 0, where U → 0 corresponds to the outer limit, r → ∞ cannot behave as ln r ∼ ln(ln(1/U)) 1/2 as U → 0, where r 2 ∼ ln(1/U) as U → 0 via Eq. (6.1) in GSA. Therefore, any influence of the nozzle can also be entirely neglected in the solution to (29) for all T R ≥ 1. C. Propagator expansion at Ω = O(1) frequencies Since the propagator, (5), depends onḠ σ (Y, r|x; Ω) and the mean flow expansion (22), its solution must also separate out into the same asymptotic regions as (22) & (23) and depend on scaled variable/parameter (Y, Ω) = O(1). Hence,Γ λ , j (y 1 , r|x; ω) =Γ λ , j (Y, r|x; Ω). 
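Returning to the inner equation (29) with data (30)-(31) on the non-characteristic curve U = 0, the hyperbolic structure permits a simple march in U. The sketch below makes the additional simplifying assumption X̄ 1 = X̄ 1 (U) only, so that ν(Y,U) = e^(−iΩY cos θ/c ∞) ψ(U) separates and (29) reduces to an ODE in U; the toy X̄ 1 profile and all parameter values are our own assumptions, not the RANS-based coefficients used in the paper:

```python
import numpy as np

# illustrative parameters (assumed, not the paper's operating points)
gamma, c_inf, TR = 1.4, 1.0, 2.5
Omega, theta = 1.0, np.radians(30.0)

def c2(U):   # Crocco-Busemann relation, Eq. (17)
    return c_inf**2 + c_inf**2 * (TR - 1.0) * U + 0.5 * (gamma - 1.0) * U * (1.0 - U)

def dc2(U):
    return c_inf**2 * (TR - 1.0) + 0.5 * (gamma - 1.0) * (1.0 - 2.0 * U)

def march(X1, U_max=0.9, n=2000):
    """RK4 march of Eq. (29) under the separable ansatz
    nu(Y,U) = exp(-i Omega Y cos(theta)/c_inf) * psi(U), valid only if
    X1 = X1(U).  With D(U) = i Omega (1 - U cos(theta)/c_inf), Eq. (29)
    becomes  psi'' = -(D' psi + D psi' - (dc2/c2) D psi) / X1."""
    cth = np.cos(theta) / c_inf
    D = lambda U: 1j * Omega * (1.0 - U * cth)
    Dp = -1j * Omega * cth                       # dD/dU
    def rhs(U, y):
        psi, dpsi = y
        ddpsi = -(Dp * psi + D(U) * dpsi - dc2(U) / c2(U) * D(U) * psi) / X1(U)
        return np.array([dpsi, ddpsi])
    # initial data on U = 0 from the matching conditions (30)-(31)
    y = np.array([-1j * Omega * c_inf**2, -1j * Omega * c_inf * np.cos(theta)])
    h = U_max / n
    U_grid = np.linspace(0.0, U_max, n + 1)
    psi = [y[0]]
    for U in U_grid[:-1]:
        k1 = rhs(U, y)
        k2 = rhs(U + h / 2, y + h / 2 * k1)
        k3 = rhs(U + h / 2, y + h / 2 * k2)
        k4 = rhs(U + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        psi.append(y[0])
    return U_grid, np.array(psi)

U_grid, psi = march(lambda U: 0.1 + 0.2 * U * (1.0 - U))  # assumed toy X1(U) > 0
```

In the full non-parallel problem X̄ 1 also depends on Y, so (29) must instead be marched as a PDE; the ODE above only illustrates how the Cauchy data on U = 0 propagate inward.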
Taking the gradient operator, ∇ ≡ e 1 ∂ /∂ y 1 +e r ∂ /∂ r +e φ ∂ /r∂ φ of the lowest order mean flow vector,ṽ(y), in (22) we can easily show that non-symmetric rank-two tensor, ∂ṽ λ /∂ y j , in (5), whereṽ λ ≡ {ṽ i , 0} = {U,V r , 0, 0} at ε O(1) , possesses the following asymptotic expansion, ∂ṽ λ ∂ y j (Y, r) =δ λ 1 δ jr ∂U ∂ r + εδ λ 1 δ jr ∂U ∂Y + δ jr ∂V r ∂ r +ε V r r δ λ φ δ jφ + O(ε 2 ),(32) in (Y, r, φ ) cylindrical co-ordinates using (22) and ∂ e r /∂ φ = e φ & ∂ e φ /∂ φ = −e r . Then, inserting (22) and the lowest order scaled Green's function vector,Ḡ σ (Y, r|x; Ω) =Ḡ 1 δ σ 1 +Ḡ 4 δ σ 4 into (5) shows that the latter propagator expands like, Γ λ , j (Y, r|x; Ω) = δ λ 1 δ jr ∂Ḡ 1 ∂ r − (γ − 1) ∂U ∂ rḠ 4 + +δ λ 4 δ jr ∂Ḡ 4 ∂ r + εδ λ 1 δ j1 ∂Ḡ 1 ∂Y − (γ − 1) ∂U ∂YḠ 4 + εδ λ 4 δ j1 ∂Ḡ 4 ∂Y −ε(γ − 1) δ λ 1 δ jr ∂V r ∂ r + δ λ φ δ jφ V r r Ḡ 4 + O(ε 2 ),(33)in (Y, r) co-ordinates at Ω = O(1) frequencies.Ḡ σ (Y, r|x; Ω) is found in (Y,U) co-ordinates using an equivalent re-scaling as (28). It is transformed back to (Y = εy 1 , r) co-ordinates for integration over y in (2). More specifically, sinceS i = {S 1 ,S r ,S 5 } ≡ 0 at lowest order, the solution toν(Y,U) found by solving (29) allowsḠ 4 to be determined using (15) after re-scaling (24) using (28) and inserting the latter into the left hand side of (15).Ḡ 1 is then determined by substitutingḠ 4 into (13) replacingḠ 5 withḠ 4 andν (see sentence below 29) where, again, both (13) and (15) are interpreted in terms of the scaled Green's function variables via (28) and use is made of the chain rule, ∂Ḡ 1 /∂ r = (∂U/∂ r)∂Ḡ 1 /∂U. IV. APPROXIMATE FORMULA FOR THE PEAK JET NOISE IN HEATED FLOWS A. WKB reduction of (4) The variation of the propagator Γ * µ,l (y + η|x; ω) over η can be approximated by taking advantage of the scale disparity between the mean flow and turbulence relative to the acoustic wavelength, λ acoustic , in the correlation volume V (η) of integral in (2). 
In an asymptotic sense, the ALEE solution that determines Γ * µ,l in (5) will only contribute to integral over O(|η|) distances in (2) when the mean flow length scales that determine the coefficients (and therefore solution structure) of (7) are of the same order as the turbulence correlation lengths in their respective directions. This is because the latter propagator tensor, evaluated at (y + η), multiplies R λ jµl in (2). At minimum, the critical variation in Γ * µ,l occurs at k ∞ 1, thus allowing Γ * µ,l to be represented by a Wentzel-Kramers-Brillioun-Jeffreys (WKBJ) approximation inasmuch as Γ * µ,l (y + η|x; ω) ≈ Γ * µ,l (y|x; ω)e ik. η . Inserting the above into (4) therefore gives an algebraic formula for the acoustic spectrum: I(x, y; ω) (2π) 2 ≈ Γ λ , j (y|x; ω)Γ * µ,l (y|x; ω)Φ * λ jµl (y, k 1 , k T ; ω),(34) where Φ * λ jµl (y, k 1 , k T ; ω) := V ∞ (η) H λ jµl (y, η; ω)e ik.η dη,(35) such that the spectral tensor, Φ * λ jµl , possesses two-pair symmetries, Φ i jkl = Φ jikl = Φ i jlk when (λ , µ) = (i, k) and one-pair symmetry, Φ * 4 jkl = Φ * 4 jlk when (λ , µ) = 4. The robustness of this approximation (Wundrow & Khavaran 46 ) at ω = O(1) allows it to remain valid for long, λ acoustic = O(1/ε), wavelengths (of focus in this paper) when the propagators in (34) are determined at these frequencies and where Γ * µ,l (y + η|x; ω), varies slowly over V (η) relative to λ acoustic . It also implies that the amplitude and phase approximation for B. Generalizing the axisymmetric representation of R λ jµl As mentioned in §.I, the tensor R λ jµl (y, η 1 , η ⊥ ; τ) possesses 144 components (3 × 4 × 3 × 4), however, owing to its two pair symmetry property -inasmuch as R i jkl = R jikl and R i jkl = R i jlk when (λ , µ) = (1, 2, 3) -not all of these are independent. 
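The WKBJ factorization of §.IV A, Γ*(y + η|x; ω) ≈ Γ*(y|x; ω)e^(ik.η), rests on the amplitude varying on the slow O(1/ε) scale while the phase varies on the acoustic scale. A toy one-dimensional check (the amplitude shape, wavenumber and spread rate below are assumptions chosen only to exhibit the scale disparity):

```python
import numpy as np

eps, k = 0.05, 6.0                             # assumed spread rate and wavenumber
A = lambda y: 1.0 / (1.0 + (eps * y)**2)       # amplitude varying on the 1/eps scale
gam = lambda y: A(y) * np.exp(1j * k * y)      # toy stand-in for the propagator

y0 = 2.0                                       # source point
eta = np.linspace(-1.0, 1.0, 201)              # separations spanning V(eta)
exact = gam(y0 + eta)
wkb = gam(y0) * np.exp(1j * k * eta)           # Gamma(y) exp(ik.eta)
frozen = gam(y0) * np.ones_like(eta, dtype=complex)  # no phase correction at all

rel_err_wkb = np.max(np.abs(exact - wkb)) / np.max(np.abs(exact))
rel_err_frozen = np.max(np.abs(exact - frozen)) / np.max(np.abs(exact))
```

Retaining the phase factor but freezing the amplitude leaves only an O(ε|η|) error, whereas freezing the propagator entirely is O(1) wrong over a correlation length.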
AGF (see table 1 on p.2525 of their paper) show that 144 reduces to 63 independent components when these symmetries are taken into account and prior to any kinematic approximation, such as isotropy for example. In this paper, we use an axisymmetric turbulence model that is a much more realistic kinematic representation for jets and which reduces the 63 components to a manageable number. The approximation assumes that the transverse correlation lengths are small compared to that in the streamwise flow direction. This is a well founded assertion in jets (see I(x, y; ω) → ε 2c 2 ∞ |x| 2 × 4|Ḡ 12 | 2 Φ * 1212 + 2Re Γ 41Ḡ * 11 Φ * 4111 + |Γ 41 | 2 Φ * 4141(36) where the tensor G i j is the symmetric part of the propagator tensor (33) when λ = i. The prefactor in (36) is what results after inserting (33) into the equivalent propagator statement of the re-scaling in (28) whenγ λ , j (Y,U; T 0 − T ) appearing on the far right side andΓ λ , j (Y,U) multiplied by appropriate pre-factor on the left in the second member of (28). Substituting this form of the scaled propagator into (34) results in formula (36). The scaled propagators in (36) are also defined by the implicit function theorem statement of the form as in (10), G 12 (y 1 , r, ψ|x; ω) =G 12 (Y (y 1 ),U(y 1 , r)) = ∂G 1 ∂ r − (γ − 1)G 4 ∂U ∂ r ,(37) and, G 11 (y 1 , r, ψ|x; ω) =G 11 (Y (y 1 ),U(y 1 , r)) =ε ∂G 1 ∂Y − (γ − 1)G 4 ∂U ∂Y(38)Γ 41 (y 1 , r, ψ|x; ω) =Γ 41 (Y (y 1 ),U(y 1 , r)) = ε ∂G 1 ∂Y ,(39) when (G 1 ,G 4 ), and thereforeν, are inserted into (28). The second and third terms in square brackets in (36) involving Φ * 4111 and Φ * 4141 components are referred to as the momentum/enthalpy flux coupling and the enthlapy flux terms respectively. As we explained earlier, the absence of tem- . 
But (28) shows that this particular scaling forḠ 1 results in an inconsistent asymptotic balance in (13) since the combined Green's function,ν, and thereforeḠ 4 , must expand like O(1) at lowest order to match on to the outer wave equation solution. We know this by taking the outer limit (U → 0) of (29) using matching conditions, (30) & (31). Hence, if we write (13) in (Y,U; Ω) variables and insert (22), G 1 (Y,U) = ∂ν ∂U − ∂ c 2 ∂UḠ 4 +S r (Y,U),(40) we can see that the only wayḠ 1 = O(1/ε) at lowest order is ifḠ r = O(1/ε 2 ) in (26). This is because the first two terms in (40) are O(1); the numerical results of Fig. 9d prove this to be the case. C. Justification of reduced formula (36) and further approximation to (41) The spectral tensor Φ * λ jµl cannot be measured directly; however, there is extensive data for the physical-space tensor, R λ jµl , in isothermal flows at various subsonic Ma. Karabasov et al. 22 show that the ratio of transverse to streamwise correlation lengths is ≈ 1/5, and a similar conclusion can be found in Table 3 of that reference. The estimated magnitudes of the temperature-associated terms in (36), together with the corresponding components of the normalized auto-covariance tensor, are:

Term | Propagator magnitude | Auto-covariance component
R{Γ 41 (Ḡ * 22 +Ḡ * 33 )Φ * 4122 } | O(ε 2 ) | H 4122 ≈ 0
R{Γ 41Ḡ * 11 Φ * 4111 } | O(ε 2 ) | H 4111 = O(1)
R{(Γ 42Ḡ * 12 +Γ 43Ḡ * 13 )Φ * 4221 } | O(1) | H 4221 ≈ 0
(|Γ 42 | 2 + |Γ 43 | 2 )Φ * 4242 | O(1) | H 4242 ≈ 0
|Γ 41 | 2 Φ * 4141 | O(ε 2 ) | H 4141 = O(1)

Such universality was argued by Semiletov & Karabasov 55 . Their Figs. 2 & 3 illustrate a strong similarity in the normalized correlation curves for various components of R i jkl . Hence, consistent with our model being a low-order representation of the acoustic spectrum, as a first approximation, we allow normalized components, R 4111 = n 2 R 1212 and R 4141 = n 3 R 1212 where (n 2 , n 3 ) are O(1) constants. Therefore, we further approximate the acoustic spectrum formula in (36) to the following formula: I(x, y; ω) ≈ ε 2c 2 ∞ |x| 2 4|Ḡ 12 | 2 + (3 − γ)n 2 Re Γ 41Ḡ * 11 + n 3 |Γ 41 | 2 Φ * 1212 .(41) V. ANALYSIS OF THE ACOUSTIC SPECTRUM OF SUPERSONIC HEATED JETS We investigate (41) below. A.
Fluent RANS simulations of mean flow for jet in Table (I) The mean flow field for the Green's function calculation is found from a steady RANS calculation using FLUENT. In Fig. 1 The centerline profile of U (Fig. 5.1a) shows a small degree of oscillations at 0 ≤ y 1 ≤ 4 due to slightly imperfectly expanded conditions (i.e. consecutive expansions and compressions that were not present in the SP49 case). FLUENT gives a much smoother solution for SP90 with amplitude of the oscillations being approximately 0.5% compared to PIV data (the WIND solution is at less than 2%) in region, y 1 < 2. In Fig. 2 this effect as negligible since the acoustic data itself has an error of the order of about 1 dB 2,9 . In general, the RANS simulations recover the general features of heated turbulent jets. That is, the length of the potential core in Figs. 2a and 2d for SP90 relative to SP49 is reduced by approximately 30% at the most intense region (which lies at y 1 ∼ 7 for SP49 compared to y 1 ∼ 10 for SP90). Similarly, at its maximum, the potential core spreads out faster for SP49 to y 1 ≈ 9 compared with y 1 ∼ 14 for SP90. While the radial mean velocity component, V r , is significantly smaller than the streamwise component, U, the former does affect the magnitude and structure of coefficient,X 1 , defined below (23). The peak value ofX 1 remains focused at the nozzle lip line with negative values along the interface between the potential core and the mixing region above SP49: X 1 (y 1 , r). the shear layer. For SP49, on the other hand,X 1 is more localized and negative compared to SP90 (cf. Figs.2c and 2f respectively). In Fig. 3 we compare the Crocco (Eq. 5.33 in GSA) and Crocco-Busemann relation (17) The mean flow in the calculation is non-parallel inasmuch asX 1 = 0 in the solution to (29). The slope of the upper most level curve in Fig. 2a and Fig. 
2d gives the spread rates: ε ≈ 0.09 for SP90 and 0.12 for SP49; this allows us to transform between slow variable, Y , and the physical variable, y 1 that will be needed in the computation of (2). The non-parallel flow structure of |Ḡ 12 | in Figs. 5a & 5b possesses a single (almost) streamwisealigned lobe which peaks between 2 < y 1 < 8 for SP90 (Fig. 5a) continuing to more-or-less the Afsar 25 ). Therefore, the initial shear layers (of large ∂U/∂ r) dominate the spatial structure of |Ḡ 12 | at subsonic Ma. In supersonic conditions, however, the locally parallel-based |Ḡ 12 | 2 is singular at the critical layer location (y 1 , r) at given observation angle θ = θ c at the peak frequency. This is exemplified by a white region between the edge of the potential core and shear layer in Hence a critical layer exists in bothν(Y,U) and |Ḡ 12 | (the latter having 3 inverse Doppler factors) for SP49 when the non-parallel flow term in (29) is set to zero. Our calculations show that the critical far-field location, θ c , for SP49 of θ c 51 • , starts at a slightly higher value than SP90 but decreases much more rapidly owing to the faster decay of the mean flow with radial location, r, at fixed y 1 consistent with the greater spread rate for the heated jet. The temperature-associated propagator terms in Fig. 6 have the most dramatic change in spatial structure at the peak frequency and observation angle of (St, θ ) = (0.2, 30 • ). In the case of locally parallel flow both ReΓ 41Ḡ * 11 and |Γ 41 | 2 are consistent with parallel flow estimates of AGF. That is, in the sense that the coupling term propagator, ReΓ 41Ḡ * 11 , in Fig. 6b has a negative region along the critical layer edge between the shear layer and potential core. This was predicted in AGF using In non-parallel flow, the propagator structure is quite different however. That is, our calculation in Fig. 
6a reveals that ReΓ 41Ḡ * 11 remains entirely positive-definite for the supersonic heated jet we have considered here. Moreover, the peak in the spatial structure of this term is shifted from the shear layer (as in the locally parallel case of Figs. 6b & 6d) to much further downstream. That is at y 1 ∼ 10 and r < 0.2 for ReΓ 41Ḡ * 11 in Fig. 6a and at a similar location for |Γ 41 | 2 in Fig. 6c, where the coupling term propagator has the greater magnitude compared to the enthalpy flux propagator. As we show later, the auto-covariance component, R 1212 , turns out to be weak in this downstream region. Thus, contrary to what AGF found, our results indicate that there cannot be any cancellation in the acoustic spectrum formula (41) at low frequencies and small observation angles (i.e. for the peak noise) due to the momentum flux/enthalpy flux coupling term because its propagator will always be positive when non-parallel flow effects are taken into account. Our numerical calculations show that the positive-definiteness of the coupling term remains true at higher frequencies and for even larger observation angles as well, but the non-parallel flow asymptotic theory developed in §.III has less direct validity at these locations. C. Spectral tensor component, Φ * 1212 (y, k; ω) Since H 1212 ≡ H 1212 , the spectral tensor component, Φ * 1212 (y, k 1 , k 2 T , ω), is explicitly related to R 1212 using (8), (9), the linear transformation below (9) and the space-time Fourier transform, (35), as follows: Φ * 1212 (y, k; ω) = 1 2π V ∞ (η) ∞ −∞ e i(k.η−ωτ) R 1212 (y, η 1 , η T , τ) dτ dη,(42) where η T = |η T |. We let R 1212 (y, η 1 , η T , τ) be represented by the following functional form R 1212 (y, η 1 , η T , τ) R 1212 (y, 0, 0) = a 0 + a 1 τ ∂ ∂ τ + a 2 η 1 ∂ ∂ η 1 + ... e α−X(η 1 ,η T ,τ)(43) We do not include explicit convective streamwise variable η 1 − U c τ (where U c is the convection velocity) in (43) or mixed higher-order derivatives as Eqs. 
(47) The leading term (a 0 ) in square brackets in (43) gives a cusp for the auto-correlation of R 1212 (y, 0, τ) as τ → 0 and the derivative terms, bounded by pre-factors a 1 , a 2 , allow for anti (i.e. negative)-correlations with increasing τ and streamwise separation, η 1 , respectively. Inspired by Leib & Goldstein 23 we use the separation function, X(η 1 , η T , τ) = α 2 + η 2 1 /l 2 1 + (η 1 −U c τ) 2 /l 2 0 + f (η T ) where an algebraic form of the transverse decay function, f (η T ) ∼ η m T (with integer values of m), is chosen to allow (43) to decay fast enough in η T . A model of this type was found to agree with the structure of the high-order correlation functions in the jet measured by, among others, Harper-Bourne 11,12,60 . The length scales (l 0 , l 1 ) are therefore turbulence correlation lengths to appropriately normalize X(η 1 , η T , τ) in (43). They are taken to be proportional to the local RANS length scales in (C3) with pre-factors (c 0 , c 1 ) that we discuss shortly. α is an O(1) parameter in (43), it is introduced to give a more rounded (α > 0) cusp of the auto-correlation of R 1212 (y, η 1 , η T , τ) (see Ref. 19 For an axisymmetric jet the acoustic spectrum is I(x; ω) = 2π r y 1 I(x, y; ω)r dy 1 dr(44) where I(x, y; ω), given by (41), is equal to the product of propagators defined by (37), (38) & (39) and spectral tensor component, Φ * 1212 (y, k 1 , k 2 T ; ω), is worked out explicitly in Appendix C. Our numerical tests showed that there was very little effect on the acoustic spectrum when taking The contour plots of the turbulent kinetic energy k(y 1 , r) and Φ * 1212 at the peak noise location (St, θ ) ≈ (0.2, 30 • ) show that jet heating causes greater concentration of contour lines as well as a reduction in magnitude. This reduction is about 0.7 between the maxima of Φ * 1212 for SP49 relative to SP90. However we expect that the localization of contour lines in Fig. 
7 with heating shall produce a bigger impact to the acoustic spectrum after multiplication by the propagator terms since Figs. 5 & 6 show that the latter also display a degree of localization and/or redistribution of peak contour lines with jet heating that do not entirely coincide with Φ * 1212 in Fig. 7. Note that, any oscillation in Fig. 7c will have a largely negligible impact on the predictions because the propagator is zero in the region where it occurs (cf. Figs. 5a & 7c). D. Analysis of acoustic predictions Low-order/fast computations using RANS-based jet noise models (Leib & Goldstein 23 , Karabasov et al. 22 , Afsar 25 etc.) take R λ jµl (y, 0; 0) = a λ jµlρ 2 (y)k 2 (y) where the density (ρ) and turbulence kinetic energy (TKE, k) fields are obtained from a local RANS solution in which a λ jµl could also be a function of y but is usually approximated by a single value on the shear layer location r = 0.5 at the end of the potential core (Fig. 4 in Semiletov & Karabasov 55 ). This value of a λ jµl can be found by examining the spatial distribution of R λ jµl (y, 0; ω) measured in either experiment or via appropriate LES calculation 22 . The prediction model (41) requires 8 independent parameters: 6 are required to quantify the turbulence structure in the model of Φ * 1212 (y, k 1 , k 2 T ; ω), (C6): i.e. turbulence length scales (l 1 , l 0 , l ⊥ ) and anti-correlation parameters (a 1 , a 2 ) as well as the amplitude constant a 1212 . Finally, parameters (n 2 , n 3 ) bound the coupling and enthalpy flux terms in the acoustic spectrum formula, (41). But the functional form of R 1212 , (43), depends on (l 1 , l 0 ) through the ratio l 1 /l 0 (see C7). Indeed as shown in C, (l 1 /l 0 , a 1 , a 2 ) and a 1212 can easily be determined by appropriate comparison to turbulence data. Also, the peak radiated sound of SP49 turns out to be insensitive to parameters (n 2 , n 3 ) (see Fig. 9d). 
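The correlation model (43) is straightforward to evaluate once the derivative terms are worked out analytically: with the separation function X as defined above, ∂X/∂τ = −U c (η 1 − U c τ)/(l 0 ² X) and ∂X/∂η 1 = η 1 /(l 1 ² X) + (η 1 − U c τ)/(l 0 ² X). A sketch with assumed O(1) constants (not the calibrated values of Table III), taking f(η T ) = (η T /l ⊥ )² for the transverse decay, shows the rounded cusp at zero separation and the anti-correlation lobe produced by the a 1 term:

```python
import numpy as np

# assumed illustrative constants (the paper calibrates these against LES data)
a0, a1, a2 = 1.0, 0.3, 0.2
alpha, l0, l1, lperp, Uc, m = 1.0, 1.0, 1.0, 0.3, 0.8, 2

def R1212_norm(eta1, etaT, tau):
    """Normalized model (43): [a0 + a1 tau d/dtau + a2 eta1 d/deta1] exp(alpha - X);
    alpha > 0 keeps X >= alpha, so the divisions below are always well defined."""
    X = np.sqrt(alpha**2 + eta1**2 / l1**2 + (eta1 - Uc * tau)**2 / l0**2
                + (etaT / lperp)**m)
    dX_dtau = -Uc * (eta1 - Uc * tau) / (l0**2 * X)
    dX_de1 = eta1 / (l1**2 * X) + (eta1 - Uc * tau) / (l0**2 * X)
    return np.exp(alpha - X) * (a0 - a1 * tau * dX_dtau - a2 * eta1 * dX_de1)

tau = np.linspace(0.0, 8.0, 801)
auto = R1212_norm(0.0, 0.0, tau)   # auto-correlation at zero spatial separation
```

At zero separation the derivative terms vanish and the model returns a 0 exactly; for a 1 > 0 the τ-derivative term drives the auto-correlation negative at large time delay before it decays.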
Essentially, then, there are two free parameters: l 0 or l 1 (determined once the ratio, l 1 /l 0 , is fixed after comparison to correlation function data for R 1212 ) and l ⊥ . By (C3), this requires determining coefficients c 0 or c 1 and c ⊥ when c 1 /c 0 is fixed. Since no turbulence data exists for the SP90 and SP49 jets, we have compared our model (43) to space-time data of R 1212 for two round jets at a fixed M J = 1.5 and varying T R (with one being isothermal; see Brés et al. 56 ). These jets were analyzed in Ref. 61 and, at least for the isothermal case, are expected to exhibit a consistent turbulence structure with SP90. But even if there is a difference in Reynolds number between SP90/SP49 and the Brés et al. 56 jets, Fig. 6b in Karabasov et al. 22 shows that the Reynolds number effect does not introduce an appreciable impact on the streamwise spatial and/or temporal decay of (at least) R 1111 (y, η 1 , η T , τ). This conclusion can be extended to R 1212 also, because the normalized space-time variation of this component of the auto-covariance tensor is similar to R 1111 (see Fig. 1 in Semiletov & Karabasov 64 ). We compared our model for R 1212 , in the form of (C7), to R 1212 (y, η 1 , η T , τ) data in Figs. 11c & 11d extracted from the LES solutions reported in Brés et al. 56 to determine the turbulence length scale ratio c 1 /c 0 and anti-correlation parameters (a 1 , a 2 ). The numerical values for (c 0 , c 1 , c ⊥ , a 1 , a 2 ) are summarized in Table (III). We use the LES data 56 to estimate the value of a 1212 at r = 0.5. Encouragingly, we find that the values of a 1212 ≈ (0.45, 0.5) remain largely constant throughout the jet between 2 < y 1 < 20 at r = 0.5 for both the isothermal jet and heated supersonic cases respectively (see Afsar et al. 61 ). These values are more-or-less consistent with Table 3. In Fig.
8 we show the acoustic predictions against data measured at the NASA Glenn Research Center. Nevertheless, for SP49 the agreement is even better across the frequency spectrum; in particular, the low and high frequency decay remains within 1 dB of data up to St ≈ 0.8. (The prediction does depart from the data near the peak frequency location St ∼ 0.2, which we estimate at being 2 dB at θ = 35 • in Fig. 8f.) We found a value of c ⊥ almost an order of magnitude smaller than that chosen for c 1 (which was found by comparison to turbulence data in Figs. 11c & 11d). [Figs. 8 & 9 captions: predictions use the spectral tensor model determined by (C6) with parameters in Table (III); propagators in (44) are determined by (29), (30), (31), (37), (38) and (39); the momentum flux term |G 12 | 2 , the coupling term and the enthalpy flux terms are individually retained in (41).] From the contours of the coupling term propagator (38) & (39) in Fig. 6, it is clear that the coupling term makes a positive-definite contribution to the acoustic spectrum. The peak value of the radius-weighted acoustic spectrum is two orders of magnitude smaller (Fig. 9b cf. 9a) than the contribution made by the momentum flux term |Ḡ 12 | 2 to (44) at (St, θ ) = (0.2, 30 • ). Since the enthalpy flux term is even smaller (as Fig. 9c indicates) we can legitimately approximate the integrand of (44) by:

$$I(\mathbf{x},\mathbf{y};\omega) \;\to\; \frac{\varepsilon}{c_\infty^2 |\mathbf{x}|^2}\,|\bar{G}_{12}|^2\,\Phi^*_{1212}, \qquad (45)$$

as the lowest order term in the acoustic spectrum that captures the peak sound for a heated jet, since Fig. 9d shows that letting (n 2 , n 3 ) = 0 in (41) gives predictions that are valid up to St ≈ 0.8, which is well beyond the peak frequency. This approximation therefore removes any influence of temperature-associated correlations in the acoustic spectrum formula. Indeed, (45) is equivalent to taking O(ε) to be the error term in (33), i.e.
retaining the momentum flux propagator

$$\Gamma_{\lambda,j}(Y,r|\mathbf{x};\Omega) = \delta_{\lambda 1}\delta_{jr}\left(\frac{\partial \bar{G}_1}{\partial r} - (\gamma-1)\frac{\partial U}{\partial r}\,\bar{G}_4\right) + \delta_{\lambda 4}\delta_{jr}\,\frac{\partial \bar{G}_4}{\partial r} + O(\varepsilon), \qquad (46)$$

in (Y, r; Ω) co-ordinates and inserting the latter into (34) after using (28). It is an interesting artifact that only the second, much smaller, peaks in the coupling and enthalpy flux term propagators (circled in Figs. 6a & 6c) remain large when these terms are weighted with Φ * 1212 in (41). Both of these circled regions are centered on the shear layer of SP49 and, although being significantly smaller than the larger peak region positioned on the jet axis near y 1 ∼ 10 in Figs. 6a & 6c, they still 'produce noise' since Φ * 1212 remains large along the shear layer compared to that at the jet axis location, r = 0, at y 1 ∼ 10 (see Fig. 7d). Fig. 9 shows that the coupling and enthalpy flux terms in (41) have an impact on I(x, y; ω) for SP49 at St > 0.8. The spectrum increases by ≤ 3.5 dB above that predicted by retaining the momentum flux alone (n 2 = n 3 = 0) in (44). But this occurs between 0.8 ≤ St ≤ 1.0 and, therefore, is beyond the low frequency regime. Essentially, then, both temperature-associated terms in (36) remain acoustically silent for the entire low frequency sound regime of the supersonic jets we have considered here. By this result, the sound prediction can be legitimately approximated by (45) for all St < 0.8.

VI. DISCUSSION - APPLICABILITY OF THE ASYMPTOTIC THEORY

Our results confirm that non-parallelism has a pronounced effect on the spatial structure of the propagator, causing both amplification in its value and enlargement of the area in which the peak occurs relative to the locally parallel flow in contour plots of the integrand of the acoustic spectrum (41). For SP90 (Ma, T R = 1.5, 1.0), Fig. 5a shows that this occurs between 2 < y 1 < 10.
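The radius-weighted source-volume integration behind these spectra, Eq. (44), can be sketched numerically. The integrand below is a made-up separable surrogate for the propagator times spectral-tensor product, peaking near the end of the potential core; its peak location and widths are assumptions, not values from the paper.

```python
import numpy as np

# Sketch of the axisymmetric source-volume integration in (44),
#   I(x; omega) = 2*pi * \int\int I(x, y; omega) r dy1 dr,
# on a rectangular (y1, r) grid.  The integrand is an invented surrogate
# peaking near (y1 ~ 8, r ~ 0.5); none of its numbers come from the paper.
def integrand(y1, r):
    return np.exp(-((y1 - 8.0) / 4.0) ** 2) * np.exp(-((r - 0.5) / 0.3) ** 2)

y1 = np.linspace(0.0, 20.0, 201)
r = np.linspace(0.0, 3.0, 151)
Y1, R = np.meshgrid(y1, r, indexing="ij")
weighted = integrand(Y1, R) * R               # radius-weighted integrand
dy1, dr = y1[1] - y1[0], r[1] - r[0]
I_total = 2.0 * np.pi * weighted.sum() * dy1 * dr   # simple Riemann sum
```

The radius weighting suppresses contributions from the jet axis, which is why shear-layer peaks in the integrand dominate the integrated spectrum.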
At the SP49 set point (Ma, T R = 1.5, 2.7), the solution to G σ (y|x; ω) based on a locally parallel mean flow will also fail to capture the correct level of spectral amplification for a similar reason (see Fig. 5b). Moreover, the coupling term will fail to explain the reduction in sound because it will introduce cancellation owing to the odd power of the inverse Doppler factor (see Eqs. 28 & 29 in AGF) when the mean flow is locally parallel. But we have shown that (38) & (39) contribute little in any case (Fig. 9d). This is basically evident from the pre-factors in (38) & (39), which show that both temperature-related terms are O(ε) and therefore their inclusion is not legitimately warranted at the lowest order expansion of the propagator (33) and acoustic spectrum (41). The reason why a heated jet is quieter at fixed acoustic Mach number can be explained by the spatial localization and reduction in magnitude of the Favre-averaged turbulent kinetic energy (k) of SP49 in Fig. 7b compared to SP90 in Fig. 7a. The lower k reduces the amplitude of R 1212 in (43) and, therefore, the acoustic spectrum (41) by a reduction in the spectral tensor component, Φ * 1212 , in (42). Given that the Fluent simulations were run at the same turbulence intensity for both heated and isothermal jets, a physical explanation for this localization of k at T R > 1 is that the heated jet carries lower momentum owing to its reduced density by the equation of state. In a sense, our results are consistent with the measurements of Ecker et al. 65 and Stuber et al. 16 whose experiments indicate that jet heating results in a reduction in convective amplification of noise sources contained within the turbulence. This can only come about through a reduction in the magnitude of the momentum flux term using our acoustic analogy model, (41). Our results also show that the formula for the acoustic spectrum (45) remains accurate across most of the St range in Figs. 8d - 8f & 9d, beyond the peak frequency.
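For OASPL-type comparisons, a predicted narrow-band spectrum is integrated over the band of interest: convert dB to linear power, integrate in St, convert back. The following sketch shows only this dB bookkeeping; the spectral shape (a log-parabola peaking at St = 0.2) is invented for illustration and is not a prediction from (45).

```python
import numpy as np

# OASPL from a narrow-band spectrum given in dB per unit St over the
# peak-sound band 0.01 < St < 0.6.  The 100 dB peak level and the
# log-parabolic shape are arbitrary illustrative choices.
St = np.linspace(0.01, 0.6, 300)
spl_db = 100.0 - 25.0 * np.log10(St / 0.2) ** 2   # invented spectrum shape

psd = 10.0 ** (spl_db / 10.0)                     # linear power per unit St
oaspl = 10.0 * np.log10(np.sum(psd) * (St[1] - St[0]))
```

Because the integration band is narrower than one St unit, the resulting OASPL can legitimately sit below the peak narrow-band level quoted per unit St.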
This is probably because the non-parallel flow-based Green's function solution to (29) prevents the formation of a critical layer at supersonic speeds (present in the locally parallel flow case), which ensures the amplification in propagator term (37) is much greater than it would be in a subsonic flow (see Fig. 21 in GSA). Further tests on the limit of applicability on the asymptotic theory we have developed in this paper can be assessed by considering what happens when we extend the range of prediction of (41) to higher polar angles (44 • ≤ θ ≤ 60 • ) with the turbulence scales kept the same as the peak sound predictions of Fig. 8. We find that (45) over predicts the acoustic data as θ increases. This is not surprising since the asymptotic theory applies only in the prediction region centered at (St, θ ≈ 0.2, 30 • ). The integrated effect of this over frequencies covering the peak sound regime (0.01 < St < 0.6) shown in Fig. 10 displays a similar conclusion. Namely that the peak sound predictions of Fig. 8 give accurate OASPL values compared to the data between 25 • < θ < 35 • , keeping c ⊥ fixed. Thereafter, (45) yields an over prediction for the OASPL. But since the predicted spectral shape remains a good match to the acoustic data (even though the amplitude is over predicted), one way to bring about an appropriate reduction in sound is to reduce the transverse length scale parameter c ⊥ (defined by C3), given that the latter enters as the prefactor c 2 ⊥ in formula (C6). When doing this, the OASPL predictions in Fig. 10b remain accurate for larger θ but this, of course, introduces more empiricism into the model. The terms that we have neglected in deriving (41) involved the auto-covariance components (R 4221 , R 4242 ) (y, η; τ). Gryazev et al 53 showed these terms are negligible for a heated supersonic co-axial jet. While Gryazev et al. 
53 found that all temperature-related correlations in R λ jµl were small, we retained the streamwise components R 4111 and R 4141 , firstly, to compare against AGF's locally parallel flow analysis and, secondly, to assess the sensitivity of the acoustic spectrum to these terms. Note from Table II that the propagator terms associated with R 4221 , R 4242 in the acoustic spectrum can be as large as O(1), which means their outright exclusion must be more carefully assessed. However, we can bound the size of |Γ 42 | in the propagator of these terms compared to Ḡ 12 using (5) and the definition of ν below (29). That is, Γ 42 = ∂Ḡ 4 /∂ r ≈ (1/c 2 ∞ )∂ν/∂ r and, therefore, by the chain rule |Γ 42 | ≈ |(1/c 2 ∞ )(∂ν/∂U)∂U/∂ r|, which for a locally parallel flow will be less directive than Ḡ 12 at low frequencies. This is clear using (7.1) & (7.2) in GSA, which show that |Γ 42 | 2 ∼ cos 2 θ /(1 − U cos θ /c ∞ ) 4 whereas |Ḡ 12 | 2 ∼ cos 4 θ /(1 − U cos θ /c ∞ ) 6 . Thus, while the terms associated with R 4221 , R 4242 might be as large as O(1) in the acoustic spectrum, they are expected to be smaller in magnitude and less directive than the momentum flux term, |Ḡ 12 |, in (41).

VII. CONCLUSIONS

Our main contributions in this paper involved extending the asymptotic theory in Ref. 29 (referred to here as GSA) to heated flows and using it within an acoustic analogy prediction model to explain, among other things, the observed spectral quietening of heated jets at supersonic acoustic Mach numbers, Ma. We found that for an arbitrary axisymmetric jet flow with O(1) spread rate, ε, the adjoint linearized Euler equations (ALEE) in (7) can be transformed by taking (y 1 , U) as the two independent variables of choice. Since the flow is heated, the Favre-averaged speed of sound ( c 2 ) in this case satisfies the Crocco-Busemann relation, (17).
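The locally parallel directivity bound quoted above can be checked with a few lines. For illustration we use a subsonic convection factor U/c ∞ = 0.8 (an assumption made here purely to stay clear of the parallel-flow critical angle that arises at Ma = 1.5); the overall amplitudes are left at unity.

```python
import numpy as np

# Numerical check of the locally parallel directivity comparison:
#   |Gamma_42|^2 ~ cos^2(theta)/(1 - U cos(theta)/c_inf)^4,
#   |G_12|^2    ~ cos^4(theta)/(1 - U cos(theta)/c_inf)^6
# (GSA Eqs. 7.1 & 7.2), so the momentum flux term is the more directive
# of the two at small theta.  U/c_inf = 0.8 is an illustrative assumption.
theta = np.deg2rad(np.linspace(20.0, 60.0, 100))
doppler = 1.0 - 0.8 * np.cos(theta)        # (1 - U cos(theta)/c_inf) > 0

gamma42_sq = np.cos(theta) ** 2 / doppler ** 4
g12_sq = np.cos(theta) ** 4 / doppler ** 6
ratio = g12_sq / gamma42_sq                # = cos^2(theta)/doppler^2
```

The ratio grows rapidly toward small θ, confirming that |Ḡ 12 |² dominates in directivity over |Γ 42 |² at shallow observation angles.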
The transformation results in the mixed partial differential equation given by (18), where a hyperbolic operator L (y 1 ,U) appearing on the left hand side acts on the combined Green's function variable, ν = ν(y 1 ,U) ≡ c 2 G 4 + G 5 . As seen by using (14), (20), (21) and also the line below (15), the right hand side of (18), F (S), couples the ν solution with the other components of the vector Green's function, namely (G 4 , G r , G ψ ). Here, G σ (y|x; ω) is the Fourier transform of the 5-dimensional adjoint vector Green's function speeds indicate that they must. Therefore, g a σ 4 (y, τ|x,t) must evolve through a slow O(1) timescale, T = ετ, in the low frequency regime. Our analysis goes on to show that the richest dominant balance for (G 4 , G r , G ψ ) (given by the Fourier transform of Eqs. 5.5 & 5.6 in GSA) will ensure that F (S) = 0. Interestingly, the inner equation for ν(Y,U) in (29), at r = (y 2 2 + y 2 3 ) 1/2 = O(1) distances from the jet center line, is the same as that found by GSA in isothermal flows (their Eq. 5.31). But (29) now applies to heated jets where T R ≥ 1. We used this extended asymptotic theory for the Green's function to determine the propagator; comparison to data then fixed the turbulence scales in the model function (43) apart from the transverse correlation length scale. Although the latter was hand-tuned to give the correct level in the acoustic predictions in Fig. 8, it was found to be an order of magnitude smaller than the streamwise length scale (see Table III). This is consistent with the axisymmetric turbulence approximation used in the paper and verified experimentally. The acoustic spectrum model (41) was derived for ω = O(ε), but the predictions in Fig. 8 show excellent agreement over a wider frequency range (i.e. at ω ≈ O(1)) than the assumptions used to derive it. The sensitivity analysis in Fig.
9d indicates that both the (enthalpy flux/momentum flux) coupling term and enthalpy flux co-variance in (41) are largely silent at frequencies We find that the inclusion of non-parallel flow effects does not allow propagator associated with the momentum/enthalpy flux coupling term to change sign at small observation angles. That is, the AGF conclusion of sign-change emerges only for a parallel mean flow using the low frequency asymptotic properties of the adjoint Rayleigh equation (i.e. equivalent to lettingX 1 = 0 in 29). As a result of this surprising discovery, we obtain an alternative explanation for the quietening of a supersonic heated jet that is more consistent with experimental data reported in Ecker et al. 65 . Namely that the reduction in sound with heating at fixed supersonic Ma is due to the weakness of the momentum flux term in the acoustic spectrum, (45), as T R increases. This conclusion emerges only when the (true) non-parallel mean flow based Green's function is used to calculate the propagators in (45). ACKNOWLEDGMENTS Computational resources from HPC2, Mississippi State University, are appreciated. MZA would like to thank Strathclyde University for financial support from the Chancellor's Fellowship. We would also like to thank Dr. S. J. Leib (Ohio Aerospace Institute) for providing us with his spectral tensor routines. wise direction. But a tensor of odd suffixes (or parity) such as Φ * is (A2) will include terms such as, Φ * 4 jkl = A 5 ε jkl + A 6 ε jkp ε l pq k q + A 7 ε jkp ε l pq λ q + A 8 ε jpq ε kpq k l + A 9 ε jpq ε kpq λ l + A 10 ε jkq k q k l + ... + A 13 ε jkq λ q λ l + A 14 ε jpq ε jpq k j k k k l + ... + A 21 ε jpq ε jpq λ j λ k λ l + A 22 ε jpq k p k q k k k l + ... + A 30 ε jpq λ p λ q λ k λ l + ... (A3) and so on. However, all but the leading term (ε jkl ) in (A3) are zero because they involve basic invariants that either reduce to those contained in (A1) using properties of Levi-Civita symbol (i.e. 
involve an even number of reflections which correspond to a rotation of the vector configuration in Fig. A.1 of Afsar 49 ) or are zero after the turbulence is assumed to have weak transverse correlation inasmuch as Φ * λ jµl (y, k 1 , k ⊥ ; ω) ≈ Φ * λ jµl (y, k 1 , 0; ω) and k i = δ i1 k 1 and λ i = δ i1 . Thus, A 6+I(p,m,n) = 0 for all permutations in (A2) and expansion (A3) and we find that (A1) reduces to: Φ * 4 jkl (y, k 1 , k 2 T ; ω) = δ j1 δ kl A 1 + δ k1 δ jl A 2 + δ l1 δ jk A 3 + δ j1 δ k1 δ l1 A 4 + ε jkl A 5 (A4) Appendix B: Calculation of the spectral tensor component, (42) Inserting Eq. (43) into (42) and re-writing terms algebraic in (τ, η 1 ) gives: 2πΦ * 1212 (y, k 1 , k 2 T ; ω) R 1212 (y, 0, 0) = V ∞ (η) ∞ −∞ e i(k.η−ωτ) a 0 − a 1 i ∂ ∂ ω ∂ ∂ τ + + a 2 i ∂ ∂ k 1 ∂ ∂ η 1 + ... e α−X(η 1 ,η T ,τ) dτ dη (B1) Introducing non-dimensional variables of integration:η 1 = η i /l i (l i = (l 1 , l 2 , l 3 )) andτ = U c τ/l 0 shows that the spectral function X(η 1 , η T , τ) can be written as X(η 1 , η T , τ) = α 2 +η 2 1 +ξ 2 + f (η T ) whereξ = (η 1 l 1 /l 0 −τ). Integrating each term by parts gives and inserting this latter expression into (B1) gives Φ * 1212 (y, k 1 , k 2 T ; ω) R 1212 (y, 0, 0) = l 0 l 1 l 2 l 3 U c (a 0 − a 1 − a 2 ) − a 1ω ∂ ∂ω − a 2k1 ∂ ∂k 1 + ... Φ(y,k; ω) in which the unit-spectrum of turbulence, Φ(y,k 1 ,k 2 T ; ω, α), is defined by 2πΦ(y,k 1 ,k 2 T ; ω, α) = dη 1 e ik 1η1 × η 2 1 + α 2 + f (|η T |) π(ω 2 + 1) 1/2 K 1 (ω 2 + 1) 1/2 η 2 1 + α 2 + f (|η T |) , χ 3/2 e −χ 1/2 √ f (|η T |)+α 2 ,(B5) where χ ≡ χ(k 1 ,ω) =k 2 1 +ω 2 + 1 = (k 1 − (l 1 /l 0 )ω) 2 +ω 2 + 1. When η T = 0, the space-time structure of R 1212 (y, η 1 , 0, τ) in (43) where X = X(η 1 ,η T ,τ) = η 2 1 + |ξ | 2 . 
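The convected-frame structure underlying (B5) can be checked numerically: writing the leading (a 0 ) term of the model as exp(α − √(α² + η̃ 1 ² + ξ̃ ²)) with ξ̃ = l r η̃ 1 − τ̃, its (η 1 , τ) Fourier transform depends on k̃ 1 only through (k̃ 1 − l r ω̃), as in the definition of χ, so the spectrum should peak at k̃ 1 = l r ω̃. The parameter values (l r , α, ω̃) below are illustrative.

```python
import numpy as np

# Sanity check of the convected-frame structure behind (B5): evaluate the
# (eta1~, tau~) Fourier transform of g = exp(alpha - sqrt(alpha^2 +
# eta1~^2 + xi~^2)) over a scan of k1~ and verify the peak sits at
# k1~ = l_r * omega~.  Parameters are illustrative assumptions.
l_r, alpha, omega = 1.2, 0.5, 1.0

eta = np.linspace(-12.0, 12.0, 241)
xi = np.linspace(-12.0, 12.0, 241)
E, X = np.meshgrid(eta, xi, indexing="ij")
g = np.exp(alpha - np.sqrt(alpha**2 + E**2 + X**2))
dA = (eta[1] - eta[0]) * (xi[1] - xi[0])

# phase k1~*eta1~ - omega~*tau~ = (k1~ - l_r*omega~)*eta1~ + omega~*xi~
# after substituting tau~ = l_r*eta1~ - xi~ (unit Jacobian)
k_grid = np.linspace(-1.0, 3.0, 161)
spec = [float(np.sum(np.cos((k - l_r * omega) * E + omega * X) * g)) * dA
        for k in k_grid]
k_peak = k_grid[int(np.argmax(spec))]
```

Since g is radially symmetric and positive, its transform is real and monotonically decreasing away from the convected wavenumber, which is why the discrete peak lands on k̃ 1 = l r ω̃.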
To simplify its graphical presentation, we use nondimensional variables, η 1 , defined below (B1), and τ = U c τ/l 1 , which allows the space-time dependence in (C7) to be determined through the scaled non-dimensional convected variable, ξ = l r (η 1 − τ), where l r = l 1 /l 0 = c 1 /c 0 by (C3). Note by (C7) that when a 2 = 0, R 1212 (y, τ)/R 1212 (y) takes the simple exponential form ∼ (1 − a 1 τ l r )e −l r τ at η = 0. In Figs. 11c & 11d we set η = 0 for the comparison of (C7) and Eq. (54) and 11b respectively) when using the same turbulence parameters and non-parallel flow-based propagators as in the results of Fig. 8. Without altering c 1 , this difference can be easily remedied by an appropriate increase in c ⊥ to c ⊥ = (0.063, 0.043) for SP90 and SP49 respectively. In Table (IV) we compare the corrected values of c ⊥ to those used in the Fig. 8 predictions. It is clear that using Eq. 54 of Ref. 23 to determine Φ * 1212 results in a small increase of c ⊥ compared to using (C7) in (41). While the increase is modest, our choice of c ⊥ used in Fig. 8 and Kalyan & Karabasov 4 ) is basically negligible, there is an unexplained property of the acoustic spectrum with increasing jet temperature ratio, T R = T J /T ∞ > 1.0. That is, when an axisymmetric air jet at such mild supersonic speeds (i.e. turbulence convection Mach number, M c < 1) is heated at constant acoustic Mach number Ma = U J /c ∞ (where U J is the jet velocity and c ∞ is the speed of sound at infinity), the jet noise reduces at all observation angles (Bodony & Lele 5 ). Fig. 7 in Tanna's experiments showed that the OASPL curve of the heated supersonic jet lies beneath the isothermal, or cold, jet spectra at all observation angles, with the reduction in noise greatest at small θ . Later experiments 7 were conducted to check the accuracy of earlier studies and to extend their applicability to a greater parameter range in Ma & T R. Pertinent studies in heated jet acoustics include Seiner et al.
7 , Lee & Bridges 8 , Bridges 9 , Bhat 10 , Harper-Bourne 11,12 and Tester & Morfey 13 . The Harper-Bourne 11,12 and NASA experiments 2,8,9 have both found similar generic behavior as Tanna 1 and Tanna et al. 14 ; see for example, Fig. 27 and the discussion on p.21 in Harper-Bourne 11 . Similar spectral properties were found by Bodony & Lele 5 who performed Large-Eddy Simulations (LES) calculations of axisymmetric round jets at the Tanna set points. The quietening of the hot jet acoustic spectra was clearly apparent in the OASPL calculations in Fig. 9 (see also & p.249) of Bodony & Lele 5 . More recent work in the acoustics of heated flows has looked at the structure of jet turbulence by measuring the local source convection velocities. For example, Shea et al.'s 15 data correlates the reduction in potential core length of a heated jet with increased jet temperature. Stuber et al.'s 16 measurements quantify the reduction in core length with an appropriate reduction in turbulence convection velocity (see Bridges & Wernet 2 also), which might indicate the weakness of the sound source in heated supersonic flows. Although the Ma and T R of these studies are somewhat beyond our concerns (involving supersonic source convection effects), Liu et al. ( 17 , p.6) indicate that as the basis of the model by extending a previously derived asymptotic theory for the vector Green's function of the adjoint linearized Euler equations (ALEE) operator (Eqs. 4.8-4.10 of Goldstein & Leib and §.3). Using LES to extract the mean flow/turbulence correlations would naturally be more computationally expensive. The Goldstein & Leib 19 (G&L) predictions were computed at O(1) frequencies for a propagator tensor based on a weakly non-parallel mean flow. Non-parallelism appeared in the analysis at supersonic speeds and only affected the solution within a thin critical layer where the adjoint vector Green's function is singular for the locally parallel mean flow. 
G&L constructed a uniformly valid composite solution for the adjoint Green's function to eliminate the critical-layer singularity at a particular θ . As θ → 0, the dominant contribution to the propagator in their calculations was due to the radial derivative of the Fourier transformed adjoint Green's function for the streamwise mass flux perturbation. This was also confirmed by Karabasov et al.'s 22 numerical calculations and by found a qualitatively similar spatial structure for the dominant dipole-like momentum flux (i.e. fluctuating Reynolds stress) associated propagator component relative to Karabasov et al.'s 28 full numerical solution to the ALEE (see Fig. 16c in Karabasov et al. AIAA 2011-2929 or Fig. 12b the ordinary fluid velocity perturbation when suffix, λ = i = (1, 2, 3), otherwise v λ = v 4 . The latter denotes v 4 := (γ − 1)(h + v 2 /2) ≡ (c 2 ) + (γ − 1)v 2 /2, where h is the fluctuating static enthalpy and (c 2 ) is the fluctuation in the sound speed squared, such that v 4 /(γ − 1) represents the moving frame stagnation enthalpy fluctuation 18 . Since the above relation shows that c = O( √ v 2 ), Eq. (30) in AGF allows the importance of e 4l compared to e il to be quantified by the dimensionless ratio, |e 4l |/(c ∞ |e il |), that remains O( √ v 2 /(c ∞ v )). This can also be expressed as |e 4l |/(c ∞ |e il |) = O(M ), where M = v /c ∞ is proportional to the square root of the fluctuating temperature ratio T /T ∞ when Ma = O(1); this is obtained after linearizing the definition of the speed of sound and using Eq. (2.14) in G & L 19 . Similar arguments were made for heated jets at low subsonic speeds by Morfey et al. (Eq. 22 of Ref. 31) and Lilley ( 32 , p.471). The above scaling is, however, slightly different from Ref. 19 (p.307f.). It implies that for isothermal (or slightly cold) jets, where T ≈ 0, |e 4l |/(c ∞ |e il |) → 0. In other words, v 4 = o(h ) when T ≈ 0, so that the (λ = µ = 4) component of R λ jµl can be set equal to zero.
For heated jets on the other hand, |e 4l |/(c ∞ |e il |) ≠ 0 and is expected to be fairly large, especially for Ma > 1 (see Figs. 1 & 2 in AGF and Fig. 20 in Sharma & Lele 33 ). Comparing Eq. (5.12) to (5.13) in G & L 19 and using appropriate outer products of unit tensors (see also the sentence below Eq. 6) in suffixes (λ , j, σ , m) allows definition of the tensor ε λ jσ m as ε λ jσ m ≡ δ λ σ δ jm − δ λ j δ σ m (γ − 1)/2 in the linear relation for H λ jµl .

III. INCLUDING NON-PARALLEL FLOW EFFECTS AT LOWEST ORDER FOR NON-UNITY TEMPERATURE EFFECTS (T R > 1) IN (5) & (7)

Eq. (5.21) (p.209 & f.) in GSA showed that a significant reduction in complexity of the lowest order inner equations took place when the streamwise mean flow component, U, was taken in place of the radial co-ordinate, r, as one of the independent variables. But this transformation can be easily applied to the ALEE at the outset, prior to any asymptotic analysis. The advantage of this is that when the latter is utilized, in the form of the method of multiple scales and matched asymptotic expansions (in that order), the basic inner equation for the more general heated flow results at once. malized by the O(1) characteristic length D J and time D J /U J respectively, where U J & D J are the mean velocity and nozzle exit diameter respectively. The fluid mechanical variables (ṽ, p, ρ) are , p. 579 and van Oudheusden 36 ), Fig. 4 in Afsar et al. 37 showed that it remains within 2% of the mean flow obtained from a steady RANS calculation of an unheated jet at Ma = 0.9. & (23), where the inner radial co-ordinate is r = O(1), and an outer region where this expansion breaks down, at large radial locations (using the inner variable, r), for which R ≡ εr = O(1).
But as discussed in §.1, GSA show that the long O(1/ε) streamwise variation of non-parallel flow alters the leading order asymptotic structure of propagator Γ λ , j (y|x; ω) everywhere in the flow at Ma = O(1) when g a σ 4 (y, τ|x,t) modulates in time under an appropriate slowly breathing asymptotic scaling. This happens at low frequencies when time variations are slow and (crucially) of the same order as the streamwise variations in the mean flow. Mathematically, g a σ 4 (y, τ|x,t) depends on τ through re-scaled O(1) time variableT ≡ ετ = O(1). In frequency space, the Strouhal number, St, is of the order of the jet spread rate, ε, in the solution G σ (y|x; ω) of (7). Hence, the distinguished asymptotic scaling in the latter occurs when ε → 0 and the scaled frequency, Ω ≡ ω/ε = O(1) is held fixed. It is only in this limit, where the solution to the ALEE becomes asymptotically disparate as ε → 0 and (just as Eqs. 22 & 23) divides into an inner solution where r = O(1) and an outer solution valid at R ≡ εr = O(1). Note that if g a σ 4 (y, τ|x,t) were to depend on τ through O(ε −m ) scaled times in such a manner that G σ (y|x; ω) depends on relatively O(1) frequencies, ω = O(ε m ), for exponent values −N ≤ m ≤ 1, it would confine non-parallel flow effects to supersonic speeds in the thin critical layer where G σ (y|x; ω) is otherwise singular and the mean flow can be reduced to a locally parallel flow away from this region at all frequencies of interest 19 . , (23) & (24).Although an asymptotic expansion ofG r starting asG r = O(1) would therefore cause F (S), on the right side of (18), to drop out of the lowest orderν-equation, this does not turn out to give the richest possible balance. The only other self-consistent asymptotic expansion ofG r , which turns out to also be the least degenerate solution toν is given by the Fourier transform of Eq.(5.6) in GSA; namely, in the present formalism, one whereG (r,φ ) = O(1/ε) at leading order. 
However this would cause S̄ r (y 1 ,U) = O(1) in (25) and (26), which would, on the face of it, balance the O(ε) expansion of the left hand side of (18) after inserting (23) & (24) in the latter. But thankfully F (S) t) are appropriate O(1) slow variables for the observation field point (x 1 ,t). Moreover, Y = const. and dU/dY = X 1 /U represent the characteristic curves (Garabedian 44 , pp. 121-122) of (29). The pre-factor of the second member on the first line of (28) allows the outer boundary conditions in (30) & (31) for the scaled inner solution ν(Y,U) to depend on the observation point, and Y ≥ 0 (note the sign error in Eqs. 5.45 & 5.48 in GSA). Eqs. (29) - (31) show that the Green's function ν(Y,U; Ω) is independent of the jet spread-rate, ε, at lowest order after the numerical solution to (29) is determined in (Y,U) co-ordinates at fixed scaled frequencies, Ω. The matching conditions, (30) & (31), also show that any oscillatory behavior (which Eq. 29 admits near the outer boundary U → 0) of the form ν ∼ UΓe −iΩ ln U/E , where Γ = Γ(Y ) is an arbitrary function and X 1 → E(Y )U as U → 0, is entirely eliminated (see Eqs. 5.40 & 5.47 in GSA). Jet heating does not obviously affect the contribution the nozzle plays to the solution of (29) at Ω = O(1) frequencies. GSA (p.207 & f.) show that the nozzle contribution to the outer boundary condition enters through the inner limit of a 'scattered potential' function that satisfies the homogeneous two-dimensional Helmholtz equation on a half plane extending to upstream infinity at O(r J ) distance from the jet centerline (Morse & Feshbach 30 , p. 891), the axisymmetric mode of which behaves logarithmically. But the inner solution ν(Y,U) generates scattered waves (i.e. is induced by outer incoming waves) through the matching conditions, (30) & (31), and it cannot behave logarithmically as r → ∞ when matched to that outer solution at any T R. Hence Van Dyke's rule 45 shows that ν(Y, 0) will not behave like ln R as R → 0 at O(Ω 0 ).
Or, alternatively, ν(Y,U) Γ * µ,l (y|x; ω) are given by appropriate Taylor expansions (Eqs. B.2, B.3 & B.7 in AGF). The latter phase function, S(y|x), is related to the wavenumber vector by k = (k 1 , k T ) = k ∞ ∇S, where k ∞ = ω/c ∞ is the far-field wavenumber. The transverse wave-number vector, k T = (k 2 , k 3 ), is then defined by the parallel flow Eikonal equation (cf. Eq. 13 in Durbin 47 ) |k T | 2 /k 2 ∞ = |∇ ⊥ S| 2 = (c 2 ∞ / c 2 )(1 − M(y T ) cos θ ) 2 − cos 2 θ when k 1 = k ∞ cos θ as |x| → ∞, and ∇ ⊥ is the gradient operator in the transverse (y 2 , y 3 ) plane (Leib & Goldstein 23 , Eq. 18). (See, for example, Pokora & McGuirk's measurements 48 in Figs. 19-21 and also Fig. 10 of their conference paper, AIAA 2008-3028.) AGF used Pokora & McGuirk's data to propose that R λ jµl (y, η 1 , η ⊥ ; τ) is an axisymmetric tensor, where η ⊥ = |η ⊥ | and η ⊥ = (η 2 , η 3 ). The spectral equivalent of this (lemmas 3.1 and 3.2 in Afsar 49 ) requires that Φ * λ jµl (y, k 1 , k 2 ⊥ ; ω) is axisymmetric with the streamwise direction, k 1 , being the principal direction of invariance. The physical space approximation is consistent with experiments by Morris & Zaman 50 , who show in their Fig. 15 that the transverse and azimuthal correlation lengths are virtually constant across the range St = (0.01 − 1.0) for an isothermal axisymmetric jet. Indeed, the axisymmetric approximation in the form used by AGF and Afsar 49 was corroborated using LES data for a high speed subsonic jet in Afsar et al. 51 and also by AGF using turbulence correlations extracted via PIV data of an incompressible water jet. Since the momentum flux/enthalpy flux propagator term involves a spectral tensor component with an odd number of suffixes (of type Φ * 4 jkl ), an obvious generalization of the axisymmetric representation of this term worked out in AGF (Eq.
C.5) is to allow its three-form defined by Φ * 4 jkl to remain invariant to proper rotations only, thus taking into account any possible sign changes coming about through an improper rotation of axes. This is worked out in App. A. Interestingly, since this tensor has one-pair symmetry (Φ * 4 jkl = Φ * 4 jlk ), the general formula for Φ * 4 jkl given by (A4) reduces to Eq. (C.5) in AGF (i.e. A 2 = A 3 and A 4 = 0 in A4). The 63 independent components of the real-space tensor R λ jµl (y, η 1 , η ⊥ ; τ) then reduce to the same 11 components as AGF. Hence inserting Eqs. (C.4), (C.5) & (C.8) in AGF for the axisymmetric representations of Φ i jkl , Φ 4 jkl and Φ 4 j4l respectively shows that the low frequency acoustic spectrum, (34), corresponding to the peak jet noise can be approximated by the following. Negligible temperature fluctuations, T ≈ 0, imply that the enthalpy fluctuation component of v λ is negligible, i.e. v 4 = o(1). Hence by (35), (8), (9) and the tensor relation defined below it, Φ * 4111 and Φ * 4141 are negligible in this case and the acoustic spectrum for the peak jet noise reduces to that involving only the momentum flux term Φ * 1212 . This is consistent with Fig. 19a in Karabasov et al. 52 , where the full numerical solution of the ALEE reveals that the G 12 propagator component (which multiplies Φ * 1212 in I(x; ω), Eq. 44) varies much more rapidly across the shear layer at the end of the potential core compared to any other component of the symmetric tensor, G i j , at the peak frequency for the 30 • spectrum. Karabasov et al.'s 52 conclusions correspond (asymptotically) to the scaling derived in GSA; namely, that Ḡ 12 = O(1) at fixed Ω = ω/ε = O(1) frequencies.
The latter distinguished limit was derived by GSA using Karabasov et al.'s 28 numerical solution to the ALEE as a guide to discern what spatial/temporal scaling ensures that the lowest order solution to these equations (and therefore theḠ 12 propagator in 37) is everywhere different from that obtained by a locally parallel flow approximation. But since Karabasov's 28 calculation was for an isothermal flow, it does raise the question whether this asymptotic scaling continues to hold in heated flows? While there is no evidence proving if the propagators, (38) & (39), are numerically smaller than |G 12 | for a spatially spreading heated jet at ω O(1), the analysis above (i.e. Eq. 33 and Fig.9d displayed later in the paper) indicates that it must be true. That is, since axisymmetric jet flows possess small spread rates 43 , a slowly diverging mean flow approximation where the radial component, V r , is asymptotically smaller than the streamwise, U, mean flow component (i.e. U(Y, r) = O(1) + ... and V r (Y, r) = O(ε); see Eq. 22) shows that the only dominant balance that could allowḠ 11 = O(1) is forḠ 1 = O(1/ε) at its lowest order of expansion when g a σ 4 (y, τ|x,t) evolves atT = ετ = O(1) times. As we have explained, any other time-scaling would render the non-parallel flow effects as a higher-order correction to the locally parallel flow Green's function and only enters the leading order in the thin critical layer at frequencies larger than ω O(1) 19 . (Note that,Ḡ 11 is used in this proof because this term is contained in both coupling and enthalpy flux propagators, 38 & 39) expand like O(1), soS r must go like ∼ 1/ε to allowḠ 1 = O(1/ε). 
But this result -- which would be equivalent to requiring that F (S) = O(1) after inserting Ḡ r = O(1/ε 2 ) into (26) & (25) using (28) -- is inconsistent with the i = φ component of (7a) and the adjoint energy equation (7b) which, taken together, show that Ḡ (r,φ ) = 0 at lowest order and, therefore, that Ḡ 1 = O(1) and ∂Ḡ 1 /∂Y = O(ε) at this order. Thus, the scaling Ḡ 11 = O(ε) at Ω = O(1) frequencies must hold. Relevant turbulence correlation data are available for isothermal air jets (Karabasov et al. 22 ); isothermal water jets (Pokora & McGuirk 48 ); subsonic heated co-axial jets (Gryazev et al. 53 ) and supersonic mixing layers in isothermal and heated conditions (Sharma & Lele 33 ). For example, Fig. 10 in Gryazev et al. 53 shows the streamwise distribution of the LES-extracted amplitudes of the temperature-associated components of R λ jµl along the jet shear layer. They find that the "correlation amplitudes corresponding to the momentum/temperature terms R 4 jkl are negligible in comparison with the temperature-temperature source terms, R 4 j4 j " for a co-axial jet with subsonic core Mach number of M j = 0.877, where ( j, k, l) = (1, 2, 3) (see p.10 of their paper). Gryazev et al.'s 53 conclusions are more-or-less in line with the other available data sets. Fig. 16c in Sharma & Lele's 33 LES study of a heated/isothermal mixing layer indicates that R 4242 will be almost one-half of R 4141 along the lip line of a splitter plate, when normalized by R 1111 (cf. Fig. 10b of Gryazev et al.). Moreover, R 4122 /R 4111 is likely to be bounded by R 1122 /R 1111 since '41' is at the location y in R 4122 & R 4111 . Figs. 10a & 10b in Karabasov et al., Sharma & Lele (2012) and Fig. 6.25 in Sharma 54 are all basically consistent with Fig. 10 in Gryazev et al. 53 . While Gryazev et al. 53 show that all temperature-related correlations in the auto-covariance tensor R λ jµl are negligible for the co-axial jets in their study, the PIV data shown in AGF (Figs.
1 & 2) from experiments at the NASA Glenn Research Center indicates that R 4111 remains important relative to R 4141 for the single stream jet SP49 (where Ma = 1.48 and T R = 2.7). That is, R 4111 /c ∞ R 1111 ∼ R 4141 /c 2 ∞ R 1111 at these jet speeds and temperatures. Hence we have neglected the contribution of Φ * 4212 (where H 4221 = H 4221 ), Φ * 4122 and Φ * 4242 in (36) but retained Φ * 4111 and Φ * 4141 to allow (among other things) direct comparison between our results and AGF. The asymptotic structure of the propagator and the experimentally deduced scalings of the turbulence components in (36) and Eqs. (26) & (27) of AGF are summarized in Table II. Note that H 4122 = (2 − γ)H 4122 − (γ − 1)H 4111 /2 and H 4111 = (3 − γ)H 4111 /2 − (γ − 1)H 4122 ≈ (3 − γ)H 4111 /2 after using (8) and the linear relations at the end of §.II. There is some similarity in the temporal de-correlation of R 4111 and R 4141 compared to R 1212 at fixed streamwise separation, η 1 , in the space-time structure shown in Fig. 19 of Sharma & Lele 33 , as well as in amplitude along the lip line (Fig. 20 in Sharma & Lele), giving confidence in the possible universality of the normalized components of the generalized auto-covariance tensor. The acoustic predictions are assessed numerically in this section using a formula for the component of the turbulence spectrum, Φ * 1212 , derived in App. (B), that is based on a relatively simple turbulence model for the real-space function R 1212 . The latter model is validated in App. (C) against an LES database for similar jets (Bres et al. 56 ). In Fig. 1 we compare the streamwise and radial profiles of the mean flow component, U(y 1 , r), against PIV data from the NASA Glenn Research Center together with RANS solutions obtained using the WIND code (Nelson & Power 57 ). We use the same computational grids (based on a structured mesh with rectangular cells) for the FLUENT calculation that were used in the WIND code solutions but with a more optimized turbulence model.
The SP49 domain consists of approximately 297,536 cells (converging nozzle) while the SP90 domain has 59,600 cells (convergent-divergent nozzle). A lower mesh density was required for SP90 to achieve a converged solution. CFD calculations are then implemented using the usual pressure-based (ambient) far-field conditions on the left, top, bottom and right boundaries. Total pressure conditions are specified at the nozzle inlet to obtain the required T R and Ma. No-slip boundary conditions are applied on the nozzle walls with symmetry boundary conditions at the jet axis (consistent with an axisymmetric mean flow field). Both domains were simulated in FLUENT using the density-based, steady-state solver with Menter's Shear Stress Transport (SST) turbulence model. No appreciable differences were found when using the (k − ε) model in FLUENT. The streamwise and radial variation of U is shown in Fig. 1. We compare profiles of the normalized streamwise velocity obtained from RANS (FLUENT & WIND code) with PIV data measured at the NASA Glenn Research Center (Bridges & Wernet 2017). There will always be some differences between RANS and PIV measurements of jet flow. Having said this, the centerline (r = 0) streamwise velocity (Figs. 1a & 1b) and the radial distribution at y 1 = 6 (Figs. 1c & 1d) do compare favorably with the NASA PIV data. In general, Fig. 1 shows that both RANS solutions are basically the same, with FLUENT showing slightly closer agreement to PIV. In Fig. 2 we show the (y 1 , r) spatial distribution of the mean flow ṽ i = {U,V r } obtained directly from the FLUENT simulations. X 1 in Figs. 2c & 2f is determined by (11) using the RANS mean flow and central differencing in r to determine the mean flow gradient, ∂U/∂ r. The contours of SP90 in Fig. 2 show slight oscillations because M J > 1 for this jet. Note that Figs. 1a & 1b show that the oscillations are present in the PIV data as well. But this does not introduce a significant impact on the subsequent acoustic predictions.
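As a concrete illustration of the RANS-vs-PIV comparison described above, the sketch below (not the authors' post-processing code; every profile value in it is an illustrative placeholder, not measured data) interpolates a normalized RANS centerline profile onto a set of PIV stations and reports the largest deviation between the two:

```python
# Sketch (not the authors' post-processing code): quantify RANS-vs-PIV
# agreement for a normalized centerline profile U(y1, r=0)/U_J by
# interpolating the RANS solution onto the PIV stations and reporting the
# largest deviation. All profile values below are illustrative placeholders.

def interp(x, xs, ys):
    """Piecewise-linear interpolation of (xs, ys) at x; xs must be ascending."""
    if x <= xs[0]:
        return ys[0]
    for i in range(len(xs) - 1):
        if x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    return ys[-1]

def max_deviation(y_rans, u_rans, y_piv, u_piv):
    """Max |U_RANS - U_PIV| over the PIV stations (already normalized by U_J)."""
    return max(abs(interp(y, y_rans, u_rans) - u) for y, u in zip(y_piv, u_piv))

# Illustrative profiles: potential core (U ~ U_J) followed by centerline decay.
y_rans = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
u_rans = [1.0, 1.0, 1.0, 0.99, 0.95, 0.86, 0.75, 0.66]
y_piv = [0.0, 3.0, 6.0, 9.0, 12.0]
u_piv = [1.0, 1.0, 0.98, 0.91, 0.74]

print(round(max_deviation(y_rans, u_rans, y_piv, u_piv), 3))
```

A small maximum deviation of this kind is what "compare favorably" means quantitatively in the text above.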
We investigated this by filtering out the small oscillations (using the scheme of Vasilyev et al. 58 ) in the same manner as Ref. 59. The effect on the predictions in the peak noise direction of θ = 30 • was found to be less than 0.5dB when the filtered mean flow was used for the propagator calculations in the acoustic spectrum formula (41) instead of the non-filtered flow (used in Figs. 2a & 2c), whilst keeping the turbulence model (C6) fixed.

FIG. 1: Streamwise and radial variation of U(y 1 , r) at fixed r and y 1 respectively for SP90 (Ma = 1.5 & T R = 1) and SP49 (Ma = 1.5 & T R = 2.7). (a). SP90: r = 0; (b). SP49: r = 0; (c). SP90: y 1 = 6; (d). SP49: y 1 = 6.

FIG. 2: Spatial distribution of mean flow components: ṽ i = (U,V r ) and streamwise mean flow advection, X 1 , for SP90 (Ma = 1.5 & T R = 1) and SP49 (Ma = 1.5 & T R = 2.7). (a). SP90: U(y 1 , r); (b). SP90: V r (y 1 , r); (c). SP90: X 1 (y 1 , r); (d). SP49: U(y 1 , r); (e). SP49: V r (y 1 , r); (f). SP49: X 1 (y 1 , r).

Fig. 3 compares the Crocco and Crocco-Busemann approximations to the RANS-based c 2 for SP90 and SP49 respectively. It is clear that both of these approximations are accurate enough for aero-acoustic calculations, with the Crocco relation (for SP90) having a maximum error of 2% (consistent with Dahl's 40 results). But this occurs only at large streamwise distance, y 1 = 14, from the nozzle exit, which is far downstream of the region of maximum turbulence. The Crocco-Busemann relation is equally accurate at most locations within the jet and there is only a small discrepancy near the core region r < 0.1 at y 1 < 6 (possibly due to the non-unity Prandtl number in the RANS calculations), giving a localized error of about 4%. In Fig. 4 we investigate the grid independence of the solution ν(Y,U) determined by (29)-(31). The ν solution and its derivative, ∂ν/∂U, are quite well converged using grid 2 (450 × 300: 135,000 points; see Fig. 4 caption) with only a very slight deviation, of less than 2%, near the inner boundary, U → 1. An increase in grid points does remedy the difference (cf.
grid 3 and 4 in Fig. 4) in this region of the jet. However, given that our experiments reveal a commensurate rise in computation time of (1 − 2) hours when increasing the grid resolution up to (550 × 400) (i.e., grid 4: 220,000 points), performed on an AMD Opteron 6274 processor, the remaining calculations in the paper are based on the (500 × 350) grid 3, which is still virtually identical to the numerical solution to (29)-(31) obtained using the highest dimension grid 4. The dimensions of the grids used in the refinement study are given in the caption of Fig. 4.

B. Spatial structure of propagator terms, (37), (38) & (39)

The spatial structure of the momentum flux propagator, |Ḡ 12 | 2 , in (36) is shown in Fig. 5 at the peak frequency and observation angle of (St, θ ) = (0.2, 30 • ) for SP90 and SP49. The scaled frequency Ω is now Ω = 2πSt/ε, where the Strouhal number St is based on jet exit diameter and velocity.

FIG. 3: Verification of the Crocco relation (Eq. 5.33 in GSA) and Crocco-Busemann relation (17) against the RANS mean flow for SP90 (Ma = 1.5 & T R = 1) and SP49 (Ma = 1.5 & T R = 2.7) respectively at various points in the jet. (a). SP90: y 1 = 6; (b). SP90: y 1 = 10; (c). SP49: y 1 = 6; (d). SP49: y 1 = 10.

For both jets, |Ḡ 12 | 2 peaks at the end of the potential core at y 1 ∼ 12. This is different to Karabasov et al.'s ( 22 , Fig. 13a) calculations and GSA's qualitative analysis, both of which showed that a double-peak structure of |Ḡ 12 | is recovered in subsonic flows. We can explain this by considering the asymptotic structure of the propagator for a locally parallel mean flow. Here the asymptotic solution of |Ḡ 12 | 2 is independent of y 1 and proportional to (∂U/∂ r) 2 /(1 − M(y 1 , r) cos θ ) 6 (Goldstein 1975; Goldstein & Leib 19 ;

FIG. 4: Convergence of ν(Y,U) and ∂ν(Y,U)/∂U at Y = 1.0 and (Ω, θ ) = (0.2, 30 • ) for SP90 and SP49 (Table I). Grid dimensions are: grid 1 - 400 × 250 (100,000); grid 2 - 450 × 300 (135,000); grid 3 - 500 × 350 (175,000) and grid 4 - 550 × 400 (220,000). (a). SP90: ν; (b).
SP49: ν; (c). SP90: ∂ν/∂U; (d). SP49: ∂ν/∂U.

There is a region in Figs. 5c & 5d where |Ḡ 12 | 2 is infinite and therefore invalid; the edge of this region has a (peak) red contour line according to the figure legend. On the other hand, the non-parallel flow solution to (29) is large in magnitude because it prevents the locally parallel solution to ν (i.e. when X 1 = 0 in 29), and hence |Ḡ 12 |, from being singular everywhere in the jet (cf. Figs. 17 and 21 in GSA). The magnitude of |Ḡ 12 | increases for SP49 and is more concentrated over a shorter (streamwise/radial) region for both the non-parallel and locally parallel solutions (Figs. 5a & 5c cf. 5b & 5d respectively).

FIG. 5: Spatial structure of momentum-flux propagator |G 12 | 2 in (37) using the Non-parallel (N-P) and locally-parallel (P) solution to ν(Y,U) via (29) at (St, θ ) = (0.2, 30 • ). (a) & (c) are N-P & P for SP90; (b) & (d) are N-P & P for SP49.

For SP90, Ma = M J , hence a critical layer first comes into play in the locally parallel flow solution to ν(Y,U) at θ c = 48 • where M(y 1 , r) = M J = Ma = 1.5. But there will also be a critical layer for all θ ≤ 48 • in this solution, owing to the reduction in M(y 1 , r) with increasing r at fixed streamwise location, y 1 . SP49 does, of course, have a subsonic jet Mach number in the core region because T R > 1; nevertheless a critical layer in ν(Y,U) continues to persist since the locally parallel flow solution to (29) possesses the pre-factor (1 −U cos θ /c ∞ ) −1 (Eq. 7.1 in GSA). AGF's estimates were based on the scaled adjoint Lilley Green's function at ω = O(1) frequencies (determined by a solution to the adjoint Rayleigh equation) given by Eqs. (4.20), (5.20), (5.22), (C.2) & (C.4) in Ref. 19. When the latter two formulae in Ref. 19 that define the propagator components are inserted into (41) using (38) and (39) in a locally parallel flow, we find that ReΓ 41 Ḡ * 11 is proportional to (1 − M(r) cos θ ) −3 and therefore changes sign as θ → 0 when (Ma, T R) > 1 (AGF, pp.2526-2528).
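The critical-layer angle quoted above can be checked directly: the parallel-flow pre-factor (1 − U cos θ /c ∞ ) −1 is singular where M(y 1 , r) cos θ = 1, so the largest polar angle admitting a critical layer is θ c = arccos(1/Ma). The minimal sketch below (our own illustration, not the paper's code) reproduces θ c ≈ 48 • for Ma = 1.5:

```python
# Sketch: locate the critical layer of the locally parallel flow solution.
# The pre-factor (1 - U cos(theta)/c_inf)^-1 blows up where M cos(theta) = 1,
# so the largest angle at which a critical layer can occur is arccos(1/Ma).
import math

def critical_angle_deg(Ma):
    """Polar angle from the jet axis (deg) below which a critical layer exists."""
    if Ma <= 1.0:
        return None        # no critical layer for a subsonic acoustic Mach number
    return math.degrees(math.acos(1.0 / Ma))

def has_critical_layer(M_local, theta_deg):
    """True if 1 - M cos(theta) vanishes for some Mach number M <= M_local."""
    return M_local * math.cos(math.radians(theta_deg)) >= 1.0

print(round(critical_angle_deg(1.5), 1))   # ~48.2 deg, matching theta_c = 48
print(has_critical_layer(1.5, 30.0))       # the peak-noise angle lies inside it
```

Because M(y 1 , r) decreases with r at fixed y 1 , every θ below θ c also intersects the singular surface somewhere in the jet, which is exactly the statement in the text.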
Note also that AGF's asymptotic estimates did not take into account the correction factors that are required to render the parallel flow Green's function uniformly valid at supersonic speeds across the critical layer. But Eq. (4.52), the last term in Eqs. (6.14), (C.16) & (C.21), all in Ref. 19, show that the peak noise component of the acoustic spectrum involving Φ * 1212 possesses a correction that, while altering the magnitude of the propagator, remains positive-definite for an isothermal flow as θ → 0 for ω = O(1) and at Ma = O(1). Although these correction factors have not been worked out in a heated jet, it seems reasonable to estimate their effect as producing a similar result to the isothermal case; i.e. that the correction changes only the magnitude of the propagator term and not its sign (since jet heating only affects the propagator by a non-uniform variation in c 2 ).

FIG. 6: Spatial structure of propagator terms in (38) & (39) for SP49 at (St, θ ) = (0.2, 30 • ). (a). 2Re Γ 41 Ḡ * 11 : N-P; (b). 2Re Γ 41 Ḡ * 11 : P; (c). |Γ 41 | 2 : N-P; (d). |Γ 41 | 2 : P. See caption of Fig. 5.

Model (43) does not require the additional parameters that Eqs. (47) & (48) in Ref. 23 possess. The numerical analysis in Appendix C justifies this by showing that model (43) gives a slightly more realistic estimation of c ⊥ relative to Eqs. (47) & (48) in Ref. 23 when comparing models of R 1212 (y, 0, 0) against LES-extracted correlation function data of a similar jet flow. The model also captures the anti-correlation region seen in measurements (Harper-Bourne 60 and Afsar et al. 61 ) at η = 0 and τ = 0. The faster decay in (η 1 , τ) away from the cusp of the auto-correlation of R 1212 (y, η 1 , η T , τ) is a feature of higher-order turbulence correlations in more homogeneous settings (see Fig. 36d in Yaglom & Monin 62 , p.249 ff., taken from Frenkiel & Klebanoff 63 ) but it has also been found in (non-homogeneous) jet flows 50,60 ; see Fig. 19 in Pokora & McGuirk 48 .

FIG. 7: Spatial distribution of TKE, k(y 1 , r), and spectral tensor component, Φ * 1212 , at (St, θ ) = (0.2, 30 • ); see Table III and caption to Fig.
8 for the turbulence parameters used in (C6). (a). & (c). show (k, Φ * 1212 ) for SP90; (b). & (d). show (k, Φ * 1212 ) for SP49.

We found little sensitivity of the predictions to non-zero values for (α, k T ) in (43) across Strouhal numbers 0.01 ≤ St ≤ 1.0. Therefore, we focus on the algebraically compact formula (C6) where (α, k T ) = 0 but (a 1 , a 2 ) are slightly non-zero. The turbulence parameters used are summarized in Tables (III) & (IV); see also Fig. 11 and the associated discussion. The a 1212 values are estimated from Fig. 19 & Fig. 20 in Sharma & Lele 33 and (without anything else to go on) we use them for the constant a 1212 values of SP90 and SP49 in this paper. The values of the coefficients (n 2 , n 3 ) in (41) are defined in the caption of Fig. 8 and, following Harper-Bourne 60 , the convection Mach number in (C6) is set at U c = 0.68 for all predictions. The acoustic predictions are compared with far-field data measured at the NASA Glenn Research Center 2 . As mentioned in §2, the asymptotic theory finds greatest applicability at small θ where the peak noise occurs (typically at θ = 30 • ). We, therefore, show 2 polar observation angles on either side of θ = 30 • ; i.e., we consider θ = (23.3 • , 28.6 • , 33.9 • ) for SP90 and (25 • , 30 • , 35 • ) for SP49. The predictions remain accurate up to and beyond the peak noise. For SP90, the agreement lies within 1dB of the NASA data up to St = 0.5 (this is almost at St = 0.6 for θ = 23.3 • in Fig. 8a). Figs. 11c & 11d gave the necessary amplitude scaling for the predictions to remain in agreement with acoustic data. That is, (c 1 , c ⊥ ) = (0.125, 0.022) for SP90 and (c 1 , c ⊥ ) = (0.17, 0.017) for SP49. Note that experiments/simulations 22,48,50 show that the transverse correlation length scale of R 1111 (η 1 , η ⊥ , τ) is reduced by almost an order of magnitude compared to the streamwise one (cf. Fig. 19b to 20b and 21b in Pokora & McGuirk 48 ). As mentioned earlier, R 1111 is only relevant here inasmuch as its normalized space/time structure in (η 1 , η ⊥ , τ) was found to be similar to R 1212 in Semiletov & Karabasov 64 (see their Fig. 1). In Fig.
9, we show the contours of rI(x, y; ω) at the peak noise location (St, θ ) = (0.2, 30 • ).

FIG. 8: Comparison of prediction with NASA experiments using acoustic spectrum formula (44). The coefficients (n 2 , n 3 ) in (41) are chosen to be (2.5, 4.0) respectively. (a). SP90: θ = 23.3 • ; (b). SP90: θ = 28.6 • ; (c). SP90: θ = 33.9 • ; (d). SP49: θ = 25 • ; (e). SP49: θ = 30 • ; (f). SP49: θ = 35 • .

FIG. 9: Spatial distribution of each constituent propagator term in rI(x, y; ω), (41), for SP49. Figs. (a-c) computed at (St, θ ) = (0.2, 30 • ) (see Fig. 8 for turbulence scales in (C6) & (n 2 , n 3 ) values): (a). |G 12 | 2 only; (b). Re {Γ 41 G * 11 } only; (c). |Γ 41 | 2 only; (d). sensitivity of 30 • spectrum to (n 2 , n 3 ).

The amplification of the propagators, (37) & (38), in non-parallel flow is clear from the fact that (29) possesses the algebraic expansion ν(Y,U) ∼ ν 0 (Y ) + U ν 1 (Y ) + O(U 2 lnU) near the outer boundary U → 0, where ν 0 (Y ) is given by the outer boundary condition, (30), and ν 1 (Y ) by Eq. (5.42) in GSA. On the other hand, the locally parallel flow solution, obtained when X 1 = 0 in Eq. (29), can only behave like ν ∼ e −iΩY cos θ /c ∞ /(1 −U cos θ /c ∞ ). In a parallel flow, the momentum flux propagator |Ḡ 12 | 2 is proportional to the square of the local mean flow gradient, which peaks at the initial shear layers where ∂U(y 1 , r)/∂ r is maximum at fixed y 1 , and not further downstream. Unless an appropriate uniformly valid solution is constructed (of the Goldstein & Leib 19 type), |Ḡ 12 | 2 will be singular for the parallel flow approximation to the Green's function in the thin critical layer at ω = O(1) frequencies. Just as the outer boundary conditions (30) & (31) determine the structure of the solution, ν(Y,U), in (29), the outer limit of any inner expansion of Γ 42 (Y,U) and Γ 12 (Y,U) must match onto the far-field (r → ∞) form of the above parallel flow results. Hence we can expect that the Ω = O(1) non-parallel solution to |Γ 42 | 2 should be less directive than |Ḡ 12 | 2 .
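The way the constituent terms of Fig. 9 combine can be sketched structurally. The snippet below is not the paper's Eq. (41); it only illustrates the pattern of three propagator-weighted spectral components (momentum flux, momentum/enthalpy coupling, enthalpy flux), and the exact placement of the empirical weights (n 2 , n 3 ) is an assumption made for illustration:

```python
# Sketch of the *structure* of the spectral-density assembly (not the paper's
# exact Eq. 41): three propagator-weighted spectral-tensor components --
# momentum flux, momentum/enthalpy coupling and enthalpy flux. Where exactly
# the empirical weights (n2, n3) multiply is an assumption for illustration.

def spectral_density(G12, G11, Gam41, phi_1212, phi_4111, phi_4141,
                     n2=2.5, n3=4.0):
    momentum = abs(G12) ** 2 * phi_1212                        # |G_12|^2 term
    coupling = n2 * 2.0 * (Gam41 * G11.conjugate()).real * phi_4111
    enthalpy = n3 * abs(Gam41) ** 2 * phi_4141
    return momentum + coupling + enthalpy

# Illustrative complex propagator values and (real) spectrum amplitudes:
val = spectral_density(G12=0.8 + 0.3j, G11=0.2 - 0.1j, Gam41=0.1 + 0.05j,
                       phi_1212=1.0, phi_4111=0.2, phi_4141=0.1)
print(round(val, 4))
```

With the illustrative numbers above, the |G 12 | 2 term dominates, mirroring the text's conclusion that the momentum flux term carries the peak jet noise.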
FIG. 10: Overall sound pressure level, OASPL (dB), over frequencies 0.01 < St < 0.6. See Table (III) for turbulence parameters in (C6) and caption to Fig. 8 for (n 2 , n 3 ). (a). Fig. (8) values for c ⊥ ; (b). c ⊥ tuned at each θ .

The propagators in (41) are determined by the adjoint vector Green's function g a σ 4 (y, τ|x,t) of the linearized Euler equations (Eq. 2.13 in Goldstein 18 , and Eqs. 3.1-3.3 of Goldstein & Leib 19 ), where σ = 1, ..., 5. But since high Reynolds number jets diverge slowly, F (S) will always drop out of (18) when non-parallel flow effects enter the lowest order Green's function solution, as Karabasov et al.'s (2013) numerical simulations at high jet Reynolds number confirm. We computed the propagator tensor, Γ λ , j (y|x; ω), in the acoustic spectrum formula (41) at the SP90 (Ma = 1.5, T R = 1.0) and SP49 (Ma = 1.5, T R = 2.7) set points, which possess a measurable region of low frequency spectral quietening (Tanna 1 ; Bridges 9 ; Bodony and Lele 5 ). The acoustic spectrum model, (41), required only a single component of the generalized auto-covariance tensor R λ jµl (y, η; τ), namely R 1212 , to be modeled when the other streamwise momentum/temperature fluctuation-related components in (41) (R 4111 and R 4141 ) were assumed, as a first step, to be proportional to it. We validated the model for R 1212 (y, η; τ), given by (43), against LES data in Figs. 11c & 11d for jets that should have more-or-less the same space-time de-correlation. This allowed determination of all relevant turbulence parameters; space-time correlations of this kind have been measured experimentally and computationally by, among others, Pokora & McGuirk (2015) and Karabasov et al. (2010) respectively. These non-parallel flow effects remain important at frequencies St < 0.8 for the SP49 heated jet. Consequently, a Green's function based on a locally parallel flow model will incorrectly estimate both the coupling and enthalpy flux propagators, (38) & (39), or give an unbounded result due to the presence of a critical layer in the momentum flux propagator, (37).

0 + f (|η T |) dξ , (B3) where k 1 = k 1 − ω(l 1 /l 0 ), ω = ωl 0 /U c and k i = l i k i (no sum on suffix i = (1, 2, 3)).
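The OASPL curves of Fig. 10 follow from the SPL spectrum by the standard incoherent (energy) summation over the stated band. The sketch below illustrates that summation; the spectrum values in it are placeholders, and a unit analysis bandwidth is assumed:

```python
# Sketch: the OASPL of Fig. 10 is an incoherent (energy) sum of the SPL
# spectrum over the band 0.01 < St < 0.6. The spectrum below is an
# illustrative placeholder, and a unit analysis bandwidth is assumed.
import math

def oaspl(st, spl_db, st_lo=0.01, st_hi=0.6):
    """OASPL (dB) = 10 log10 of the summed mean-square pressure in the band."""
    energy = sum(10.0 ** (s / 10.0)
                 for f, s in zip(st, spl_db) if st_lo < f < st_hi)
    return 10.0 * math.log10(energy)

# Placeholder narrowband spectrum peaking near St = 0.2:
st = [0.05, 0.1, 0.2, 0.3, 0.4, 0.5]
spl = [105.0, 110.0, 113.0, 111.0, 108.0, 104.0]
print(round(oaspl(st, spl), 1))
```

Because the summation is dominated by the levels near the spectral peak, errors in the model away from (St, θ ) = (0.2, 30 • ) have a comparatively weak effect on the OASPL.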
Note that the square-brackets in (B2) expand in the manner shown by G & L 19 (Eq. 6.31): a 0 − a 1 (1 + ω∂ /∂ ω + ...) − a 2 (1 + k 1 ∂ /∂ k 1 + ...), where further terms of the type a n ω n ∂ /∂ ω n and a m k m 1 ∂ /∂ k m (positive integer (m, n) > 1) represent the oscillations of R 1212 (y, η 1 , η T , τ) at τ = O(1). Convergence of the space-time series (43) in τ and/or η = (η 1 , η T ) is guaranteed since each term will decay exponentially, like e −X(η 1 ,η T ,τ) , when τ → ∞ and |η| = O(1), and vice versa. The spectral tensor component Φ 1212 (y, k 1 , k 2 T ; ω), (B2), on the other hand, decays algebraically (see C6) as ω −4 at infinity (which, by the Abelian theorem in the theory of Fourier transforms, corresponds to the cusp of the auto-correlation of R 1212 at τ = 0 and η = O(1)). The ξ -integral can now be performed using result #867 in Campbell & Forster 68 , thus giving e −α Φ(y, k 1 , k 2 T ; ω, α). The real-space model expands as follows: R 1212 (y, η 1 , 0, τ)/R 1212 (y, 0, 0) = [1 + l 2 r (η 1 − τ)] + ... e −X (C7)

Figure 11 compares (C7) and Eq. (54) in Leib & Goldstein (2011 23 ) against auto-correlation data of R 1212 with τ. The LES data 56 was extracted at the streamwise position roughly at the end of the jet potential core region, y 1 = 8, for three radial locations r = (0.25, 0.5, 0.75) above/below the jet shear layer. The agreement is slightly better for (C7) in the isothermal case (M J = Ma = 1.5), which should have similar turbulence structure to SP90. In general, however, both models appear to compare favorably. For the acoustic predictions, on the other hand, using the Leib & Goldstein 23 R 1212 model to determine (42) in (41) under-predicts the SPL by (8 − 10)dB in both SP90 and SP49.

FIG. 11: Comparison of Fig. 8 predictions with that obtained using Eq. 54 in Leib & Goldstein 23 to determine (42) in (41). (a). SP90: θ = 28 • ; (b). SP49: θ = 30 • ; (c). SP90: R 1212 ; (d). SP49: R 1212 . Figs. (c) & (d) compare (C7) & Eq.
(54) in Leib & Goldstein 23 to LES-extracted turbulence data 56,61 , where SP90 is compared to a (M J , T R) = (1.5, 1.0) jet and SP49 to a (M J , T R) = (1.5, 1.7) one.

TABLE I: Axisymmetric jets used in this study.
Tanna 1 set point   Ma = U J /c ∞   T R = T J /T ∞
SP90   1.48   1.0
SP49   1.48   2.7

Here, * denotes the complex conjugate and the Einstein summation convention is being used, with the Greek tensor suffixes ranging over (λ , µ) = (1, 2, 3, 4) and Latin suffixes (i, j, k, l) = (1, 2, 3) representing the components of a rectangular Cartesian co-ordinate system, where '1' is the streamwise direction and (2, 3) represent components in the transverse plane. The mean flow is now contained within the propagator tensor.

TABLE II: Asymptotic structure of momentum/enthalpy flux coupling and enthalpy flux auto-covariance term in Eqs. (26) & (27) of AGF at Ω = O(1) frequencies.
Component   Propagator at lowest order   Scaling of H ν jµl

TABLE III: Turbulence model parameters used in (C6).
Tanna 1 set point   (c 0 , c 1 , c ⊥ )   (a 1 , a 2 )
SP90   (0.1, 0.125, 0.022)   (0.19, 0.01)
SP49   (0.2, 0.17, 0.017)   (0.20, 0.01)

The propagators (38) & (39) will always remain positive definite (Figs. 6a & 6c cf. Figs. 6b & 6d respectively) and are largely insignificant noise generators throughout the low frequency spectrum when the true non-parallel flow Green's function is used to determine them (see

where K 1 [...] is the modified Bessel function of the second kind. Similar to above, the η 1 -integral is given by 2π times result #917.8 in Campbell & Forster 68 . Whence: e −α Φ(y, k 1 , k 2 T ; ω, α) = ∫ η T dη T e ik T η T × [1 + χ 1/2 f (|η T |) + α 2 ...]. The model remains more consistent with turbulence measurements 48,50 since c ⊥ ≪ c 1 , as Refs. 22, 48, and 50 have found.

TABLE IV: Streamwise/transverse parameters (c 1 , c ⊥ ) needed for accurate peak noise predictions in Figs. 11a & 11b.
Tanna 1 set point   Eq. (C7)   L&G ( 23 , Eq.
54)
SP90   (0.125, 0.022)   (0.125, 0.063)
SP49   (0.17, 0.017)   (0.17, 0.0425)

Appendix A: Generalization of the axisymmetric representation of Φ * 4 jkl for mirror-symmetry breaking

AGF developed a model (Eq. C.5) for the axisymmetric representation of the spectral tensor associated with the coupling term, Φ * 4 jkl (y, k 1 , k 2 T ; ω), using Batchelor's 66 (Eq. 3.3.10) formula for a rank-3 tensor that depends on two independent vectors (k, λ). Batchelor's formulae for homogeneous axisymmetric turbulence (p.43 in Ref. 66) can be used in the non-homogeneous setting of the jet flow problem because the field point, y, is treated as being fixed for the stationary random functions e λ j and e µl in (9). Hence, the tensor R λ jµl (y, η; τ) depends only on η, with the time delay τ acting as a parameter. In other words, as stated in Ref. 49, the kinematic modeling of R λ jµl (y, η; τ) is a locally-homogeneous field problem. In terms of the spectral tensor, at a given fixed field point y, Φ * 4 jkl (y, k 1 , k 2 T ; ω) is a function of the wave number vector, k (defined by 35), and λ, which is a unit vector indicating the direction of symmetry (i.e. λ = e 1 in this case). The axisymmetric model of Φ * 4 jkl (y, k 1 , k 2 T ; ω) is derived by determining all the basic invariants that can be formed from these two vector arguments and the unit tensor δ jk in suffixes ( j, k) when the three-form Φ * 4 jkl a j b k c l remains invariant to the full rotation group about the streamwise axis, which includes improper rotations such as reflections of the transverse (y 2 − y 3 ) co-ordinate plane through the origin ( 62 ).
Hence a more general representation requires that the three-form Φ * 4 jkl a j b k c l is invariant to proper rotations only about the k 1 axis with respect to the vector configuration formed by two field points, y and y + k, separated by k, with λ = e 1 being the principal direction of symmetry (see

The spectral tensor (35) is usually assumed to be a weak function of k ⊥ ; this follows by Watson's lemma because the physical space tensor R λ jµl (y, η; τ), which enters the Fourier transform integral, (35), through (8) and the linear relation below (9), is a rapidly varying function of |η ⊥ | 48,50 . In the limit of the infinitely long streamwise eddy, k ⊥ = |k ⊥ | → 0 and k i = δ i1 k 1 , as first argued by Afsar et al. 51 . When mirror invariance in Φ * 4 jkl is broken, on the other hand, a large number of additional basic invariants can be formed on top of those already contained in Batchelor ( 66 , Eq. 3.3.10) for a rank-3 axisymmetric tensor. More explicitly, where A 1,2,3,4 = A 1,2,3,4 (y, k 2 , k 1 ; ω) are arbitrary scalars that depend on the invariants k 2 = k.k = k 2 1 + k 2 T and k 1 = e 1 .k, and Φ * 4 jkl = Φ * 4 jkl (y, k 1 , k 2 ; ω) is given by the sum of permutations of the Grassman products (i.e. skew symmetric terms; see p. 92 of Bishop & Goldberg 67 ) of the form: where A 4+I(p,m,n) (y, k 2 , k 1 ; ω) is a scalar field corresponding to a basic invariant formed by appropriate tensor multiplication of the unit alternating tensor (Levi-Civita symbol), ε jkl , with the vectors (k i , λ i ), for the nth permutation in suffixes ( j, k, l). The notation perm( j, k, l) denotes permutation over all possible combinations of tensor suffixes ( j, k, l), where the function I(p, m, n) is a mapping of a three-dimensional subspace p = 1, 2, 3, ..., P; m, n = 1, 2, 3 of positive integers in which the index, p, individuates the unique permutation of perm( j, k, l), with P being the total number of individual permutations possible.
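The role of the Levi-Civita terms in breaking mirror symmetry can be verified mechanically. The sketch below (our own illustration) transforms the rank-3 alternating tensor under an improper rotation (a reflection of the y 2 axis) and confirms that every component changes sign, while a proper rotation leaves it invariant:

```python
# Sketch: why the Grassmann (Levi-Civita) terms in (A4) break mirror symmetry.
# A tensor built from eps_{jkl} picks up a factor det(R) under an orthogonal
# transformation R: it is invariant under proper rotations (det = +1) but
# changes sign under an improper one, e.g. a reflection of the y2 axis.

def eps(j, k, l):
    """Levi-Civita symbol for indices in {0, 1, 2}; zero on repeated indices."""
    return (k - j) * (l - j) * (l - k) // 2 if {j, k, l} == {0, 1, 2} else 0

def transform(T, R):
    """Rank-3 tensor transformation: T'_{jkl} = R_ja R_kb R_lc T_{abc}."""
    return [[[sum(R[j][a] * R[k][b] * R[l][c] * T[a][b][c]
                  for a in range(3) for b in range(3) for c in range(3))
              for l in range(3)] for k in range(3)] for j in range(3)]

E = [[[eps(j, k, l) for l in range(3)] for k in range(3)] for j in range(3)]
mirror = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]   # reflection of y2: det(R) = -1

E_ref = transform(E, mirror)
# Every component flips sign under the improper rotation:
print(all(E_ref[j][k][l] == -E[j][k][l]
          for j in range(3) for k in range(3) for l in range(3)))
```

This sign flip is precisely the behavior that the additional invariants in (A4) introduce, and which restricting the invariance group to proper rotations permits.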
Using the Leibniz rule, we can re-write (B5) into the remarkably simple result, changing variables to a cylindrical-polar co-ordinate system in which η T = (η T cos φ , η T sin φ ) and in which the Hankel transform is defined below. Letting f (η T ) ∼ η 4 T gives very good jet noise predictions across various subsonic and supersonic acoustic Mach numbers. They also showed that this model agreed with Harper-Bourne's measured data for R 1111 (y, η 1 , η T , τ) for a low Mach number jet (Ma = 0.22), which itself was remarkably close to an LES calculation of a round jet at Ma = 0.75 (see Fig. 6 therein). The decay in radial separation, however, does not agree with either LES or Harper-Bourne's data over almost all radial locations and at two different frequencies. Hence, inserting a transverse decay function of the form f (η T ) = η 4 T + 2αη 2 T gives the following Hankel transform in the standard form of a zeroth-order Weber integral, after completing the square in the argument of the exponential in F(η T ; χ, α) defined above (B9) and using the standard result in Lebedev 70 (p.132). The unit spectrum, Φ(y, k 1 , k 2 T ; ω, α), is found by substituting the above Hankel transform into (B9).

Appendix C: Algebraic formulae for Φ * 1212 (y, k 1 , k 2 T ; ω)

Since the unit turbulence spectrum Φ(y, k 1 , k 2 T ; ω, 0) depends on the streamwise wavenumber, k 1 , through k 1 in the wavenumber-frequency function χ(k 1 , ω), (B6), we re-write the independent variables (k 1 , ω) in (B12) using the chain rule to transform the derivative ∂ /∂ ω, so that (k 1 , ω) are taken as independent variables where the modified wavenumber is k 1 = k 1 − ω(l 1 /l 0 ). Using (C1) to re-write the derivative with respect to k 1 in (B2) and inserting the unit spectrum Φ(y, k; ω), (B12), into this result shows that, where χ(k 1 , ω) is defined by (B6).
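The zeroth-order Weber integral invoked from Lebedev (p.132) can be checked numerically with nothing but quadrature. The sketch below (our own verification, not from the paper) evaluates both sides of the identity for arbitrary parameter values:

```python
# Numerical check (sketch) of the zeroth-order Weber integral quoted from
# Lebedev (p. 132), which closes the Hankel transform leading to (B9):
#   int_0^inf exp(-p t^2) J0(a t) t dt = exp(-a^2 / (4 p)) / (2 p).
import math

def j0(x, n=400):
    """Bessel J0 via its integral representation (composite midpoint rule):
    J0(x) = (1/pi) * int_0^pi cos(x sin(theta)) d(theta)."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((i + 0.5) * h)) for i in range(n)) * h / math.pi

def weber_lhs(a, p, tmax=12.0, n=3000):
    """Midpoint quadrature of the left-hand side; the integrand decays
    like exp(-p t^2), so truncation at tmax is harmless here."""
    h = tmax / n
    return sum(math.exp(-p * t * t) * j0(a * t) * t
               for t in ((i + 0.5) * h for i in range(n))) * h

a, p = 1.3, 0.7
lhs = weber_lhs(a, p)
rhs = math.exp(-a * a / (4.0 * p)) / (2.0 * p)
print(abs(lhs - rhs) < 1e-4)
```

Completing the square in the exponential, as described above, is what brings the transform of f (η T ) = η 4 T + 2αη 2 T into exactly this standard form.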
The length scales in (C2) are taken to be proportional to the local turbulent kinetic energy, k(y), and the rate of energy dissipation, ε(y), determined by the RANS calculation through (C3), where suffix i = (0, 1, 2, 3) and c i are empirical parameters. Performing the differentiation in (C2) we obtain the final formula, which expands in powers of χ −(3+m) and commensurately as powers of ω −2(m+3) for m = (0, 1/2, 1) when ω → ∞. Thus, the spectral tensor component in (C2) scales as a truncated power series in χ: Φ * 1212 (y, k 1 , k 2 T ; ω) ∼ χ −(2+n) and therefore is bounded as Φ * 1212 = O(ω −2(n+2) ) for n = (0, 1/2, 1, 3/2, 2) at large frequencies. When k T = 0 but (a 1 , a 2 ) ≠ 0, (C4) reduces to the algebraic formula (C6) where, consistent with an axisymmetric jet, we have put l 2 = l 3 = l ⊥ (or c 2 = c 3 = c ⊥ from C3) and a 0 = 1 so that R 1212 (y, 0, 0, 0)/R 1212 (0, 0, 0) = 1. This model now depends on 5 independent parameters that estimate the turbulence structure: the streamwise/temporal length scales, (l 1 , l 0 ); the transverse length scale, l ⊥ ; and the anti-correlation parameters, (a 1 , a 2 ). Note that (C6) remains algebraically bounded at large frequencies ω. This can be deduced by reducing the formula to the case where the anti-correlation (negative loops) region is negligible; i.e. at a 1 = a 2 = 0. The ratio Φ 1212 (y, k 1 , 0; ω)/R 1212 (y, 0, 0) is then ∼ χ −2 and remains O(ω −4 ) as ω → ∞ when k T = 0, thus recovering the fact that the 'spectral extent' of the peak jet noise producing region decays (algebraically in this case) at very high frequencies.
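Both ingredients of this paragraph can be illustrated with a few lines of code. In the sketch below, the proportionality l i = c i k 3/2 /ε is our assumption of the standard RANS-based form for (C3) (the paper's exact expression is not reproduced here), and the χ −2 shape function is used to verify the quoted O(ω −4 ) decay at k T = a 1 = a 2 = 0:

```python
# Sketch: (i) the RANS-based scaling assumed for the length/time scales in
# (C3), l_i = c_i * k^{3/2} / eps (this proportionality is our assumption of
# the standard k-eps form, not quoted verbatim from the paper), and (ii) the
# quoted O(omega^-4) decay of the chi^-2 spectrum shape at k_T = a1 = a2 = 0.

def length_scale(c_i, k, eps):
    """l_i = c_i * k^{3/2} / eps from local TKE k(y) and dissipation eps(y)."""
    return c_i * k ** 1.5 / eps

def phi_over_R(omega_t, k1_t=0.0, l1_over_l0=1.0):
    """chi^-2 spectrum shape with chi = 1 + (k1_t - omega_t * l1/l0)^2."""
    chi = 1.0 + (k1_t - omega_t * l1_over_l0) ** 2
    return chi ** -2

# Doubling omega at large omega reduces the spectrum by ~2^4 = 16,
# i.e. Phi_1212 / R_1212 = O(omega^-4) as omega -> infinity:
ratio = phi_over_R(50.0) / phi_over_R(100.0)
print(round(ratio, 1))
print(round(length_scale(0.125, 1.0, 2.0), 4))
```

The factor-of-16 drop per frequency octave (in the large-ω limit) is the algebraic boundedness property that distinguishes (C6) from an exponentially decaying spectrum.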
Validation of turbulence model (43) and comparison to Leib & Goldstein ( 23 , Eq. 54)

Finally, in this appendix we present a validation of the turbulence model (43) against the post-processed turbulence data reported in Afsar et al. 61 of the Brés et al. 56 LES database at fixed M J > 1. We compare (43), used in the derivation of (42) and the predictions in Fig. 8, to the Leib & Goldstein 23 turbulence model (Eq. 54 in their paper) and to the subsequent predictions obtained by using that model as an alternative to (43) in (44) respectively. Identical turbulence scales, as given in Table III, were used to determine Φ * 1212 using the Leib & Goldstein model via the Fourier transform (42). Moreover, the same Green's function solution, (29)-(31), was used to determine the propagators (37), (38) & (39).

H. K. Tanna, "An experimental study of jet noise. Part I: Turbulent mixing noise," J. Sound & Vib. 50, 405 (1977).

J. E. Bridges and M.
Wernet, "Measurements of turbulent convection speeds in multistream jets using time-resolved PIV," (23rd AIAA/CEAS Aeroacoustics Conference, 2017).
D. Mayo, K. Daniel, K. T. Lowe, and W. F. Ng, "Experimental investigation of a heated supersonic jet with a total temperature non-uniformity," (23rd AIAA/CEAS Aeroacoustics Conference, 2017).
A. Kalyan and S. A. Karabasov, "Theoretical modeling of broadband shock associated noise in asymmetric jets," (21st AIAA/CEAS Aeroacoustics Conference, 2015).
D. J. Bodony and S. K. Lele, "Low frequency sound sources in high-speed turbulent jets," J. Fluid Mech. 617, 231 (2008).
R. G. Hoch, J. P. Duponchel, B. J. Cocking, and W. D. Bryce, "Studies of the influence of density on jet noise," J. Sound & Vib. 28, 649 (1973).
J. M. Seiner, M. K. Ponton, B. J. Jansen, and N. T. Lagen, "The effects of temperature on supersonic jet noise emission," (14th DGLR/AIAA Aeroacoustics Conference, 1992).
S. S. Lee and J. E. Bridges, "Phased-array measurements of single flow hot jet," (11th AIAA Aeroacoustics Conference, 2005).
J. E.
Bridges, "Effect of heat on space-time correlations in jets," (12th AIAA Aeroacoustics Conference, 2006).
T. R. S. Bhat, "Reynolds number and temperature effects on jet noise," (13th AIAA Aeroacoustics Conference, 2007).
M. Harper-Bourne, "Some observations on the noise of heated jets," (13th AIAA Aeroacoustics Conference, 2007).
M. Harper-Bourne, "Jet mixing noise and the effect of temperature," (15th AIAA Aeroacoustics Conference, 2009).
B. J. Tester and C. L. Morfey, "Developments in jet noise modeling - theoretical predictions and comparisons with measured data," J. Sound & Vib. 46, 79 (1976).
H. K. Tanna, P. D. Dean, and M. J. Fischer, "The influence of temperature on shock-free supersonic jet noise," J. Sound & Vib. 39, 429 (1975).
S. Shea, K. T. Lowe, and W. F. Ng, "Eddy convection in cold and heated supersonic jets," (23rd AIAA/CEAS Aeroacoustics Conference, 2017).
M. A. Stuber, K. T. Lowe, and W. F. Ng, "Synthesis of convection measurements in three-stream jets for investigation of noise sources," (23rd AIAA/CEAS Aeroacoustics Conference, 2017).
J. Liu, A. Corrigan, K. Kailasanath, and E. Gutmark, "Effects of temperature on noise generation in supersonic jets," (22nd AIAA/CEAS Aeroacoustics Conference, 2016).
M. E. Goldstein, "A generalized acoustic analogy," J. Fluid Mech. 488, 315 (2003).
M. E. Goldstein and S. J. Leib, "The aero-acoustics of slowly diverging supersonic jets," J. Fluid Mech. 600, 291 (2008).
M. J. Lighthill, "On sound generated aerodynamically: I. General theory," Proc. R. Soc. Lond. A 211, 564 (1952).
G. M. Lilley, "On the noise from jets," (AGARD, 1974), pp. 13.1-13.12.
S. A. Karabasov, M. Z. Afsar, T. P. Hynes, A. P. Dowling, W. A. McMullan, C. D. Pokora, G. J. Page, and J. J. McGuirk, "Jet noise: acoustic analogy informed by large-eddy simulation," AIAA J. 48, 1312 (2010).
S. J. Leib and M. E. Goldstein, "Hybrid source model for predicting high-speed jet noise," AIAA J. 49, 1324 (2011).
M. E. Goldstein and S. J. Leib, "Azimuthal source noncompactness and mode coupling in sound radiation from high-speed axisymmetric jets," AIAA J. 56, 1-11 (2018).
M.
Z. Afsar, "Asymptotic properties of the overall sound pressure level of sub-sonic air jets using isotropy as a paradigm," J. Fluid Mech. 664, 510 (2010).
M. E. Goldstein, "The low frequency sound from multipole sources in axisymmetric shear flows with application to jet noise," J. Fluid Mech. 70, 595-604 (1975).
M. Z. Afsar, M. E. Goldstein, and A. M. Fagan, "Enthalpy flux/momentum flux coupling in the acoustic spectrum of heated jets," AIAA J. 49, 2522 (2011).
S. A. Karabasov, C. Bogey, and T. P. Hynes, "An investigation of the mechanisms of sound generation in initially laminar subsonic jets using the Goldstein acoustic analogy," J. Fluid Mech. 714, 24 (2013).
M. E. Goldstein, A. Sescu, and M. Z. Afsar, "Effect of non-parallel mean flow on the Green's function for predicting the low-frequency sound from turbulent air jets," J. Fluid Mech. 695, 199 (2012).
P. M. Morse and H. Feshbach, "Methods of Theoretical Physics," (McGraw-Hill, USA, 1953).
C. L. Morfey, V. M. Szewczyk, and B. J. Tester, "New scaling laws for hot and cold jet mixing noise based on a geometric acoustics model," J. Sound & Vib. 61, 255 (1978).
G. M. Lilley, "The sound radiated from isotropic turbulence with application to the theory of jet noise," J. Sound & Vib. 190, 463 (1996).
A. Sharma and S. K. Lele, "Effects of heating on noise from turbulent mixing layers with initially laminar and turbulent boundary layers," (50th Aerospace Sciences Meeting, 2012).
L. Crocco, "Sulla trasmissione del calore da una lamina piana a un fluido scorrente ad alta velocità," L'Aerotecnica 12, 181 (1932).
F. M. White, "Viscous Fluid Flow," (McGraw-Hill, USA, 1974).
B. W. van Oudheusden, "Compressibility effects on the extended Crocco relation and the thermal recovery factor in laminar boundary layer flow," J. Fluids Engrg. 126, 32 (2004).
M. Z. Afsar and S. K. Lele, "Predictive capability of low frequency jet noise using an asymptotic theory for the adjoint vector Green's function in non-parallel flow," (22nd AIAA/CEAS Aeroacoustics Conference, 2016).
This will be consistent when the leading order of the mean flow expansion, (22), is considered in §III.
L. Lesshafft, P. Huerre, and P. Sagaut, "Nonlinear global modes in hot jets," J. Fluid Mech. 554, 393 (2006).
M. Dahl, The aero-acoustics of supersonic co-axial jets, PhD dissertation, Penn State University, Department of Aerospace Engrg. (1994), NASA TM 106782.
R. A. Fontaine, G. S. Elliot, J. M. Austin, and J. B. Freund, "Very near-nozzle shear-layer turbulence and jet noise," J. Fluid Mech. 770, 27 (2015).
N. R. Panchapakesan and J. L. Lumley, "Turbulence measurements in axisymmetric jets of air and helium. Part 1. Air jet," J. Fluid Mech. 246, 197 (1993).
S. B. Pope, "Turbulent Flows," (Cambridge University Press, UK, 2000).
P. R. Garabedian, "Partial Differential Equations," (AMS Chelsea Publishing, USA, 1998).
M. Van Dyke, "Perturbation Methods in Fluid Mechanics," (The Parabolic Press, USA, 1975).
D. W. Wundrow and A. Khavaran, "On the applicability of high-frequency approximations to Lilley's equation," J. Sound & Vib. 272, 793 (2004).
P. A. Durbin, "High frequency Green function for aerodynamic noise in moving media, Part I: General theory," J. Sound & Vib. 91, 519-525 (1983).
C. D. Pokora and J. J. McGuirk, "Stereo-PIV measurements of spatio-temporal turbulence correlations in an axisymmetric jet," J. Fluid Mech. 778, 216 (2015).
M. Z. Afsar, "Insight into the two-source structure of the jet noise spectrum using a generalized shell model of turbulence," Euro. J. Mech. B/Fluids 31, 129 (2012).
P. J. Morris and K. Zaman, "Velocity measurements in jets with application to noise source modeling," J. Sound & Vib. 329, 394 (2010).
M. Z. Afsar, C. D. Pokora, and J. J. McGuirk, "Statistical axisymmetric model of the two-point time delayed auto-covariance of the Reynolds stress tensor," (19th Polish Fluid Mechanics Conference, Poznan, Poland, 2010).
S. A. Karabasov, A. P. Dowling, and T. P. Hynes, "Effect of mean-flow evolution on sound propagation through non-uniform jet flows," (13th AIAA Aeroacoustics Conference, 2007).
V. Gryazev, A. P. Markesteijn, and S. A.
Karabasov, "Low-order models of dual-stream jet noise with temperature effects based on the Goldstein generalised acoustic analogy," (25th AIAA Aeroacoustics Conference, 2019).
A. Sharma, Aero-acoustics of turbulent mixing layers, PhD dissertation, Stanford University, Department of Aerospace Engrg. (2012).
V. A. Semiletov and S. A. Karabasov, "On the properties of fluctuating turbulent stress sources for high-speed jet noise," (22nd AIAA Aeroacoustics Conference, 2016).
G. A. Brès, F. Ham, J. W. Nichols, and S. K. Lele, "Unstructured large-eddy simulations of supersonic jets," AIAA J. 55, 1164 (2017).
C. C. Nelson and G. D. Power, "CHSSI project CFD-7: The NPARC Alliance flow simulation system," (39th Aerospace Sciences Meeting and Exhibit, 2001).
O. V. Vasilyev, T. S. Lund, and P. Moin, "A general class of commutative filters for LES in complex geometries," J. Comp. Phys. 146, 82 (1998).
M. Z. Afsar, "Solution of the parallel shear layer Green's function using conservation equations," Int. J. Aero-Ac. 8, 585-601 (2009).
M. Harper-Bourne, "Jet noise measurements: past and present," Int. J. Aero-Ac. 9, 559-588 (2010).
M. Z. Afsar, A. Sescu, V. Sassanis, and S. K. Lele, "Towards the prediction of supersonic jet noise predictions using a unified asymptotic approximation for the adjoint vector Green's function," (23rd AIAA Aeroacoustics Conference, 2017).
A. S. Monin and A. M. Yaglom, "Statistical Fluid Mechanics, Vol. II," (Dover Publications, USA, 2007).
F. Frenkiel and P. Klebanoff, "Higher-order correlations in a turbulent field," Phys. Fluids 10, 507 (1967).
V. A. Semiletov, S. A. Karabasov, and A. P. Markesteijn, "Empiricism-free noise calculation from LES solution based on Goldstein generalized acoustic analogy: volume noise sources and mean flow effects," (21st AIAA/CEAS Aeroacoustics Conference, 2015).
T. Ecker, K. T. Lowe, and W. F. Ng, "Eddy convection in developing heated supersonic jets," AIAA J. 53, 3305 (2015).
G. K. Batchelor, "The Theory of Homogeneous Turbulence," (Cambridge University Press, UK, 1953).
R. L. Bishop and S. I. Goldberg, "Tensor Analysis on Manifolds," (Dover Publications, USA, 1980).
G. A. Campbell and R. M. Foster, "Fourier Integrals for Practical Applications," (American Telephone and Telegraph Co., USA, 1942).
A. Bassetti, C. Morfey, and M. Harper-Bourne, "Impact of the source-correlation model in jet-noise prediction by acoustic analogy," (13th AIAA/CEAS Aeroacoustics Conference, 2007).
N. N. Lebedev, "Special Functions and Their Applications," (Dover Publications, USA, 1972).
[]
[ "Topological quantum catalyst: the case of two-dimensional traversing nodal line states associated with high catalytic performance for hydrogen evolution reaction", "Topological quantum catalyst: the case of two-dimensional traversing nodal line states associated with high catalytic performance for hydrogen evolution reaction" ]
[ "Lirong Wang \nState Key Laboratory of Reliability and Intelligence of Electrical Equipment\nHebei University of Technology\n300130TianjinChina\n\nSchool of Materials Science and Engineering\nHebei University of Technology\n300130TianjinChina\n", "Xiaoming Zhang [email protected] \nState Key Laboratory of Reliability and Intelligence of Electrical Equipment\nHebei University of Technology\n300130TianjinChina\n\nSchool of Materials Science and Engineering\nHebei University of Technology\n300130TianjinChina\n", "Weizhen Meng \nState Key Laboratory of Reliability and Intelligence of Electrical Equipment\nHebei University of Technology\n300130TianjinChina\n\nSchool of Materials Science and Engineering\nHebei University of Technology\n300130TianjinChina\n", "Ying Liu \nSchool of Materials Science and Engineering\nHebei University of Technology\n300130TianjinChina\n", "Xuefang Dai \nSchool of Materials Science and Engineering\nHebei University of Technology\n300130TianjinChina\n", "Guodong Liu \nState Key Laboratory of Reliability and Intelligence of Electrical Equipment\nHebei University of Technology\n300130TianjinChina\n\nSchool of Materials Science and Engineering\nHebei University of Technology\n300130TianjinChina\n" ]
[ "State Key Laboratory of Reliability and Intelligence of Electrical Equipment\nHebei University of Technology\n300130TianjinChina", "School of Materials Science and Engineering\nHebei University of Technology\n300130TianjinChina", "State Key Laboratory of Reliability and Intelligence of Electrical Equipment\nHebei University of Technology\n300130TianjinChina", "School of Materials Science and Engineering\nHebei University of Technology\n300130TianjinChina", "State Key Laboratory of Reliability and Intelligence of Electrical Equipment\nHebei University of Technology\n300130TianjinChina", "School of Materials Science and Engineering\nHebei University of Technology\n300130TianjinChina", "School of Materials Science and Engineering\nHebei University of Technology\n300130TianjinChina", "School of Materials Science and Engineering\nHebei University of Technology\n300130TianjinChina", "State Key Laboratory of Reliability and Intelligence of Electrical Equipment\nHebei University of Technology\n300130TianjinChina", "School of Materials Science and Engineering\nHebei University of Technology\n300130TianjinChina" ]
[]
Topological quantum catalysts (TQCs), where metallic surface states arising from nontrivial band topology serve as the mechanism favoring heterogeneous catalysis processes, have been well demonstrated in three-dimensional (3D) examples but have rarely been discussed at the 2D scale. Here, we develop a design scheme to realize 2D TQCs showing a traversing nodal line at the Brillouin zone boundary, a large Fermi arc on the edge, and a nearly zero Gibbs free energy (ΔGH*) for the hydrogen evolution reaction (HER). We demonstrate that the 2D Cu2C2N4 sheet is such an example. The material manifests an open nodal line traversing the whole k-path S-Y. It shows a long Fermi arc that spans the entire edge boundary, which is robust against spin-orbit coupling and H adsorption. As a result, the edge of the Cu2C2N4 sheet is relatively active for HER catalysis, possessing a ΔGH* as low as 0.10 eV, which is comparable with that of Pt and superior to other traditional catalysts and 3D TQCs as well. Our work offers an effective route to developing high-performance HER catalysts without noble metals by utilizing 2D TQCs with a traversing nodal line.
10.1039/d1ta06553j
[ "https://arxiv.org/pdf/2112.12914v1.pdf" ]
239,655,362
2112.12914
3f7371ef5fb3bbeb05a256e2533936c28aa711ce
Topological quantum catalyst: the case of two-dimensional traversing nodal line states associated with high catalytic performance for hydrogen evolution reaction

Lirong Wang(1,2), Xiaoming Zhang(1,2), Weizhen Meng(1,2), Ying Liu(2), Xuefang Dai(2), Guodong Liu(1,2)
(1) State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, 300130 Tianjin, China
(2) School of Materials Science and Engineering, Hebei University of Technology, 300130 Tianjin, China

Correspondence: Xiaoming Zhang ([email protected])

Keywords: First-principles modelling; Hydrogen evolution reaction; 2D nodal-line semimetal; Topological edge state; Gibbs free energy

Topological quantum catalysts (TQCs), where metallic surface states from nontrivial band topology serve as the mechanism to favor heterogeneous catalysis processes, have been well demonstrated in three-dimensional (3D) examples but have rarely been discussed at the 2D scale.
Here, we develop a design scheme to realize 2D TQCs showing a traversing nodal line at the Brillouin zone boundary, a large Fermi arc on the edge, and a nearly zero Gibbs free energy (ΔGH*) for the hydrogen evolution reaction (HER). We demonstrate that the 2D Cu2C2N4 sheet is such an example. The material manifests an open nodal line traversing the whole k-path S-Y. It shows a long Fermi arc that spans the entire edge boundary, which is robust against spin-orbit coupling and H adsorption. As a result, the edge of the Cu2C2N4 sheet is relatively active for HER catalysis, possessing a ΔGH* as low as 0.10 eV, which is comparable with that of Pt and superior to other traditional catalysts and 3D TQCs as well. Our work offers an effective route to developing high-performance HER catalysts without noble metals by utilizing 2D TQCs with a traversing nodal line.

Introduction

Facing the increasingly serious global warming caused by fossil fuels, tremendous attention has been paid in recent years to innovating energy carriers so as to reduce power-station emissions of greenhouse gases. Hydrogen is believed to be an excellent renewable energy carrier candidate because it offers high energy density while being free of carbon emission [1][2][3]. For hydrogen production, decomposing water is a promising and environmentally friendly method. At present, one of the most crucial research focuses for hydrogen generation is to develop excellent electrocatalysts with high efficiency and good stability for the hydrogen evolution reaction (HER) [4][5][6][7][8]. Precious metals, especially Pt, have been proved to be high-efficiency HER catalysts [9]. Their high efficiency can be traced back to a relatively low Gibbs free energy (ΔGH*) for hydrogen adsorption, locating nearly at the top of the HER volcano plot [10][11][12][13].
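The adsorption free energy behind this ranking is conventionally evaluated as ΔGH* = ΔEH + ΔEZPE − TΔSH. A minimal sketch follows, assuming the widely used ≈0.24 eV combined zero-point-plus-entropy correction for adsorbed hydrogen; the input adsorption energy is an illustrative back-calculation, not a value reported in this paper.

```python
def delta_g_h(delta_e_h, zpe=0.04, minus_t_ds=0.20):
    """dG_H* = dE_H + dE_ZPE - T*dS_H (all values in eV).

    zpe        : assumed zero-point-energy change on H adsorption
    minus_t_ds : assumed -T*dS_H term at ~300 K (about half the
                 gas-phase H2 entropy); zpe + minus_t_ds = 0.24 eV
                 is the common computational-hydrogen-electrode choice
    """
    return delta_e_h + zpe + minus_t_ds

# An (illustrative) DFT adsorption energy of -0.14 eV reproduces a
# dG_H* of 0.10 eV, the value quoted for the Cu2C2N4 edge:
print(round(delta_g_h(-0.14), 2))  # -> 0.1
```

The 0.24 eV lumped correction is a convention; for quantitative work the zero-point and entropy terms are computed per adsorption site.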
However, the main drawbacks of these catalysts lie in their scarcity and the associated high price, which greatly limit their application in large-scale industrial hydrogen production. For this reason, exploring highly efficient HER catalysts without precious metals is urgently needed. Recently, topologically nontrivial materials, known as topological quantum catalysts (TQCs), have raised high hopes of providing another rational catalytic mechanism for designing high-efficiency catalysts [14]. In TQCs, metallic surface states induced by the nontrivial bulk band topology have been proved effective in favoring charge-transfer kinetics during catalytic processes such as hydrogen/oxygen evolution and CO oxidation [15,16]. To date, various TQCs with different topological features have been proposed. As representative examples, the topological insulators Bi2Se3 and Bi2Te3 were reported to be effective for HER and CO catalysis [16,17]. In addition, the Weyl semimetals TaAs, NbAs, and Co3Sn2S2, the multifold degenerate semimetals PtGa, PtAl, and Nb2S2C, and nodal-line semimetals of the TiSi family have also been proposed to be conducive to the HER process. Most of these proposals have been verified by experiments [18][19][20][21][22][23][24][25]. In particular, a recent work finds that the topological state and the catalytic properties of TQCs can be linked through the exchange current parameter I0 [26]. One can notice that these TQC examples are all limited to bulk materials. Considering the facts that: i) similar to bulk materials, two-dimensional (2D) systems can also offer nontrivial band topology and the associated metallic edge states [27,28], which may likewise facilitate the HER catalytic process; and ii) compared with 3D catalysts, 2D ones have shown several advantages, such as higher charge-transfer efficiency owing to their unique morphology, in non-TQC processes [29][30][31]; one may then naturally wonder: can TQCs be extended to the 2D scale?
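The exchange-current link mentioned above can be illustrated with a simplified, symmetric form of Nørskov's kinetic volcano model. This is a sketch only — not the model of ref. [26] — and the prefactor k0 is an arbitrary normalization.

```python
import math

KB_T = 0.0257  # k_B * T in eV at ~298 K

def exchange_current(dg_h, k0=1.0):
    """Simplified volcano relation: exchange current i0 as a function of
    the hydrogen adsorption free energy dg_h (eV). The rate is maximal
    (i0 = k0/2) at dg_h = 0 and falls off for both strong and weak
    hydrogen binding."""
    return k0 / (1.0 + math.exp(abs(dg_h) / KB_T))

# Free energies taken from the surrounding text (magnitudes, illustrative):
for name, dg in [("Pt", 0.09), ("Cu2C2N4 edge", 0.10), ("NbP", 0.31)]:
    print(f"{name:14s} dG_H* = {dg:+.2f} eV -> i0/k0 = {exchange_current(dg):.3g}")
```

The symmetric `abs(dg_h)` form is the usual textbook simplification; the original two-branch model treats adsorption- and desorption-limited regimes separately but yields the same peak at ΔGH* = 0.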
Motivated by above discussions, in this work, we investigate the feasibility of developing 2D TQCs for HER catalysis. We first construct a design scheme, which includes different topological features in 2D scale and evaluates the corresponding nontrivial edge states and potential HER catalysts activity. Then we have successfully identified 2D Cu2C2N4 sheet as a concrete example following the design scheme. The Computational methods The numerical calculations in current work were performed within the Vienna ab initio Simulation Package (VASP) [32] in the framework of density-functional theory (DFT) [33]. The generalized gradient approximation (GGA) of the Perdew−Burke−Ernzerhof (PBE) method [34] was applied for the exchange-correlation potential. To avoid interactions among layers, we build a vacuum space larger than 20 Å for the crystal structure of Cu2C2N4 sheet. The cutoff energy was adopted as 520 eV, and the BZ was sampled with Γ-centered k-point mesh of 11×11×1. The DFT-D2 method [34] was used to consider the long-range van der Waals interactions. The PHONOPY code [35] was used to calculate the phonon spectra of Cu2C2N4 sheet. From the maximally localized Wannier functions [36,37] and the WannierTools package [38], topological features of edge states for Cu2C2N4 sheet were calculated. Design scheme In 3D TQCs, the catalytic activity is found to be highly relative to the density of metallic surface states near the Fermi energy [16,17]. For example, chiral fermions with larger topological charge in PtAl can generate longer Fermi arc surface states and higher HER catalytic activity than traditional Weyl semimetals [39]. Further, Li et. al. argued nodal lines can in principle provide higher surface intensity than those by nodal points, which has induced a nearly zero ΔGH* in 3D nodal line TQC TiSi [25]. However, this theory may not be simply extended to 2D TQC. 
In 3D system, topological band crossings can take the form of 0D nodal point, 1D nodal line, and 2D nodal surface [40][41][42][43][44][45][46][47]. In 2D system, only 0D and 1D band crossings exist [48][49][50][51]. For nodal points in 2D [see Fig. 1 TQC with open nodal lines would be the best electrocatalyst for the HER, and should carry a ΔGH* close to zero. 5 Results and discussions Structure and stability of Cu 2 C 2 N 4 sheet Within this context, in the following we will show Cu2C2N4 sheet is an ideal 2D TQC candidate with hosting open nodal lines, traversing edge Fermi arc states, and relatively low ΔGH* for HER. The 2D Cu2C2N4 sheet is firstly proposed by the 2D materials database (also known as 2DMatPedia), where Cu2C2N4 monolayer along with nearly 2,000 monolayers have been demonstrated to be easily strippable from their 3D counterparts based on the geometry algorithm and exfoliation energy [54]. In Fig. 2(a), we show the crystal structure for Cu2C2N4 sheet in the form of a 1×3 supercell. The shadowed region in Fig. 2 Although 2DMatPedia has demonstrated Cu2C2N4 sheet can be easily exfoliated from the bulk, it is also essential to verify whether the freestanding Cu2C2N4 sheet is stable. For such consideration, we first estimate the thermal stability of Cu2C2N4 sheet by using the ab initio molecular dynamic (AIMD) simulation. In this simulation, the 3×2 supercell of Cu2C2N4 sheet is used with the temperature set as 300K. The AIMD simulation is totally carried out for 2000 fs with 0.5 fs as one step. As shown in Fig. 2(b), one can find that Cu2C2N4 structure nicely retains during the simulation, suggesting at room temperature Cu2C2N4 sheet carries an excellent thermal stability. In addition, the phonon spectrum for Cu2C2N4 sheet is calculated to determine its dynamic stability, as displayed in Fig. 2(d). We can observe no virtual modes throughout the highly symmetric k-paths. This verifies that Cu2C2N4 sheet is also dynamically stable. 
6 Topological band structure and edge states of Cu 2 C 2 N 4 sheet We show the electron band structure and the density of states (DOSs) of Cu2C2N4 sheet in Fig. 3(a). We can find Cu2C2N4 sheet has a metallic electronic structure with sizable DOSs at the Fermi level. From the orbital projected DOSs, we can find the conducting electronic states are mostly contributed by the Cu and N atoms. As shown in the band structure in Fig. 3(a), there are two bands locating near the Fermi level. The two bands overlap together along the k-path S-Y. For other k-paths, the two bands are well separated with each other. From the 3D plotting of the two bands [see Fig. 3(b)], we find they in fact form a nodal line. As shown in Fig. 3(c), the nodal line situates at the 2D BZ boundary and is characterized as an open nodal line. In Fig. 3(d), we show the edge states of the nodal line in Cu2C2N4 sheet. We can find a long Fermi arc appears on the edge originating from the nodal line. To be noted, in this case, the Fermi arc is the longest for 2D nodal line, because it spans the whole edge. After taking into account the spin-orbit coupling (SOC), we find the profile of the band structure does not have much change, as shown in Fig. 4(a). Except that, the doubly-degenerate bands in the S-Y path are slightly split. However, at the S and Y points the band degeneracy retains, which is required by symmetry. In Fig. 4(b), we show the enlarged band structure near the S and Y points. To be noted, in this occasion, the long Fermi arc on the edge still exists because of the presences of band crossing at the S and Y points, as shown in Fig. 4(d). These discussions show the long edge state in Cu2C2N4 sheet is robust against SOC. Catalytic properties of Cu 2 C 2 N 4 sheet The appearance of the long Fermi arc for the open nodal line in Cu2C2N4 sheet motivates us to evaluate its electrocatalytic HER activity on the edge, as conceptually displayed in Fig. 5(a). 
The HER mechanism can be summarized in the following three steps. The first step is the Volmer reaction, during which an electron transfers to a proton and forms an H atom adsorbed on the catalyst surface, described by reaction (1) below. Remarkably, under this state, the calculated ΔGH* for the HER on the Cu2C2N4 sheet is as low as 0.10 eV. As compared in Fig. 5(c), this value is much lower than those of typical 3D Weyl TQCs, including NbP, TaAs, and NbAs (0.31-0.96 eV) [18]. In addition, to compare the performance of the Cu2C2N4 sheet with other typical HER catalysts, the volcano curve is displayed in Fig. 5(d). Notably, the ΔGH* for the HER in the Cu2C2N4 sheet situates near the top of the volcano curve.

Summary

In summary, we have demonstrated the feasibility of developing TQCs for the HER at the 2D scale. The designed material takes a nodal line traversing through the whole Brillouin zone (BZ) boundary and features a long Fermi arc spanning the entire edge. Remarkably, our calculations show that the Cu2C2N4 sheet indeed yields a relatively low ΔGH* (0.10 eV), which is close to the value of the noble metal Pt and situates near the top of the volcano plot for the HER. These results reveal that a 2D TQC with a traversing nodal line is a possible platform for developing HER catalysts free of noble metals.

For nodal points in 2D [see Fig. 1(a)], they produce a nearly zero electronic density in the plane. Due to the presence of Fermi arcs, they show a nonzero electronic density on the edge, where the edge electronic density is, in principle, related to the length of the Fermi arcs. For a traditional closed nodal line in a 2D system [see Fig. 1(b)], although the electronic density in the plane is enhanced, the edge electronic density is still not strong enough because of the partial Fermi arcs. Even worse, such a closed nodal line is topologically trivial and sometimes does not carry definite edge states [52]. To capture the strongest edge electronic density, the edge Fermi arcs are required to traverse the whole BZ boundary. Such a situation can be realized by open nodal lines [53].
For an open nodal line in a 2D system [see Fig. 1(c)], strong electronic density can be realized both in the plane and on the edge. In particular, the edge electronic density is much stronger than in the cases of Fig. 1(a) and (b), because the Fermi arc can now traverse the entire edge. Therefore, a 2D TQC with an open nodal line is the most promising electrocatalyst among these cases.

The shadowed region in Fig. 2(a) indicates the primitive cell of the Cu2C2N4 sheet, which takes a rectangular lattice. From the symmetry point of view, the lattice structure belongs to the space group Pmma (No. 51). One primitive cell of the Cu2C2N4 sheet contains two Cu atoms, two C atoms and four N atoms. In the top view, the Cu-N and C-N bonds form two hexagonal lattice configurations; as shown in the side view, these hexagonal configurations buckle in the out-of-plane direction. The optimized lattice constants of the Cu2C2N4 sheet are a = 9.40 Å and b = 2.96 Å, and the Cu-N and C-N bond lengths are 1.99 Å and 1.24 Å, respectively.

The Cu2C2N4 sheet situates almost at the top of the volcano curve, indicating that its edge is significantly active, thanks to the long edge states derived from the nontrivial nodal line. Notably, these long edge states are retained after H adsorption, apart from a shift of their position [see Fig. 5(b)]. From the volcano curve in Fig. 5(d), we find that, except for Pt, the ΔGH* of the Cu2C2N4 sheet is superior to that of all the HER catalysts displayed in the volcano curve, including the transition metals (Pd, Rh, Ir, Ni, Cu, Ag, Au, etc.) and previously proposed TQCs (PtAl, PtGa, NbP, TaAs, NbAs, etc.) [18][19][20][21][22][23][24][56][57][58]. Meanwhile, the value of |ΔGH*| of Cu2C2N4 is even comparable with that of Pt (0.10 eV versus 0.09 eV). Thus, the Cu2C2N4 sheet may serve as a high-performance HER catalyst free of noble metals. To further capture the relationship between the topological nodal line and the HER activity, we shift the position of the nodal line by artificially tuning the number of electrons (Nelec.) in the Cu2C2N4 sheet. As shown in Fig.
6(a), the nodal line along the S-Y path is lifted away from the Fermi level when 0.5 e- is taken out of the Cu2C2N4 sheet. Conversely, by adding 0.5 e- to the Cu2C2N4 system, the nodal line is pulled below the Fermi level [see Fig. 6(c)]. The case of the native Cu2C2N4 sheet is provided in Fig. 6(b) for comparison, where most of the nodal line lies around the Fermi level. Compared with the native Cu2C2N4 sheet, the nodal line and the corresponding edge Fermi arcs in the cases Nelec. = 49.5 and 50.5 contribute less to the conducting carriers and to the HER process. We further calculate ΔGH* for the three cases of Fig. 6(a)-(c). Just as expected, on pulling the nodal line away from the Fermi level, ΔGH* in the Cu2C2N4 sheet increases significantly (8.65 eV for Nelec. = 49.5 and 1.65 eV for Nelec. = 50.5), as shown in Fig. 6(d). These results fully evidence the HER enhancement arising from the nodal line and the corresponding edge Fermi arcs near the Fermi level in the Cu2C2N4 sheet.

Fig. 1 Design scheme showing momentum-space diagrams and DOS for 2D Weyl points, a 2D closed nodal line, and a 2D open nodal line. (a) A pair of Weyl points in the 2D plane (lower left panel) and the partial Fermi arc on the edge (upper left panel); the corresponding DOS are displayed in the right panels. (b) A closed nodal line in the plane (lower left panel) and the partial Fermi arc on the edge (upper left panel); the corresponding DOS are displayed in the right panels. (c) An open nodal line at the 2D BZ boundary (lower left panel) and the traversing Fermi arc on the edge (upper left panel); the corresponding DOS are displayed in the right panels.

Fig. 2 (a) Crystal structure of the Cu2C2N4 sheet shown in top and side views. The shadowed region indicates the primitive cell of the Cu2C2N4 sheet. (b) Total potential energy fluctuation of the Cu2C2N4 sheet during the AIMD simulation. The temperature is set at 300 K.
The final state of the Cu2C2N4 structure after the AIMD simulation is shown in the inset of (b). (c) The BZ of the 2D Cu2C2N4 sheet. (d) The phonon spectrum of the Cu2C2N4 sheet.

Fig. 3 (a) Electronic band structure, total DOS and partial DOS of the Cu2C2N4 sheet. The framed region in the band structure shows the band overlap along the k-path S-Y. (b) 3D plot of the two bands near the Fermi level; the band crossing (nodal line) is indicated in the figure. (c) The Brillouin zone of the Cu2C2N4 sheet and its edge projection; the position of the nodal line is indicated in the figure. (d) The projected edge state of the nodal line, as pointed out by the arrows.

Fig. 4 (a) Electronic band structure of the Cu2C2N4 sheet under SOC. (b) Enlarged band structures along the S-Y-Γ and X-S-Y k-paths. (c) The position of the crossing points (at the S and Y points) and their edge projection. (d) The projected edge state of the Cu2C2N4 sheet, as pointed out by the arrows.

Fig. 5 (a) Illustration of the effects of the nodal line and the edge Fermi arc on the HER activity of the Cu2C2N4 sheet. (b) The electron depletion (top panel) and accumulation (bottom panel) during H adsorption on the edge of the Cu2C2N4 sheet. The isosurface value is set to 0.008 e Å-3. (c) The free-energy diagram for the HER, where the potential U = 0 is set relative to the standard hydrogen electrode at pH = 0. The free energy of H+ + e- is by definition the same as that of 1/2 H2 at standard conditions of equilibrium. The data for NbAs, NbP (-0.31 eV) and TaAs are taken from Ref. [18]. (d) Volcano plot for the HER of the Cu2C2N4 sheet in comparison with various pure metals and 3D TQCs. The data for pure metals and 3D TQCs are taken from Refs. [18-24, 56-58].

Fig. 6 (a) The model and electronic band structure after taking 0.5 e- out of the Cu2C2N4 system. The native number of electrons (Nelec.) in a Cu2C2N4 cell is 50. (b) and (c) are similar to (a) but for the native Cu2C2N4 case and for adding 0.5 e- to the Cu2C2N4 system, respectively.
(d) Comparison of ΔGH* in the Cu2C2N4 sheet for the cases Nelec. = 49.5, 50, and 50.5.

H+ + e- + * → H*   (1)

where * and H* represent the active center and the adsorbed intermediate, respectively. Then, H2 desorption proceeds via the Tafel reaction or the Heyrovsky reaction, which can be respectively described as:

2H* → H2 + 2*   (2)

H+ + e- + H* → H2 + *   (3)

where H* again serves as the intermediate. Thus, the HER rate is highly sensitive to the binding between the intermediate and the active site. As a result, the Gibbs free energy of hydrogen adsorption, ΔGH*, is a crucial parameter for characterizing the HER activity. It can be obtained using the formula [55]:

ΔGH* = ΔEH + ΔEZPE − TΔSH   (4)

where ΔEH is the adsorption energy of H, and ΔEZPE and ΔSH are the changes in zero-point energy and entropy between adsorbed H and gaseous H, respectively.

For the H adsorption process, the model is constructed by adsorbing one H atom on the edge of a 1×3 supercell of the Cu2C2N4 sheet; the value of ΔGH* is almost unchanged if a larger supercell is used. Placing the H atom on top of the Cu, C, and N sites, we find that the Cu site is the most stable adsorption site, while adsorption on the other sites is less stable. For the most favorable adsorption configuration, we calculated the charge-density difference to trace the charge transfer during adsorption. As shown in Fig. 5(b), charge depletion clearly occurs on the H atom (top panel) while charge accumulation occurs around the Cu atoms (bottom panel).

The ΔGH* of the Cu2C2N4 sheet situates near the top of the volcano curve and is comparable with that of Pt. The current work provides a platform to develop TQCs at the 2D scale and paves a feasible way to exploit HER catalysts free of noble metals. We build an effective design scheme for developing 2D TQCs by taking into account various topological states and the corresponding edge features.
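The bookkeeping in Eq. (4) is elementary once ΔEH, ΔEZPE and TΔSH are known from first-principles calculations. A minimal sketch follows, in which the numerical inputs are purely hypothetical placeholders, not values computed in this work:

```python
def delta_G_H(dE_H, dE_ZPE, T_dS_H):
    """Gibbs free energy of hydrogen adsorption, Eq. (4):
    dG_H* = dE_H + dE_ZPE - T*dS_H (all quantities in eV)."""
    return dE_H + dE_ZPE - T_dS_H

# Hypothetical example values (eV), for illustration only.
print(round(delta_G_H(-0.15, 0.30, 0.05), 2))  # -> 0.1
```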
The design scheme suggests that a 2D open nodal line with a long Fermi arc traversing the entire edge boundary can maximize the HER activity of 2D TQCs. Following this design scheme, we identify a new 2D material, namely the Cu2C2N4 sheet, with an open nodal line at the BZ boundary, and theoretically verify that on its edge, where the long Fermi arc exists, the ΔGH* for the HER is relatively low (0.10 eV), suggesting a high catalytic performance. In particular, the ΔGH* of the Cu2C2N4 sheet situates nearly at the top of the volcano curve.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grants No. ).

References

J. A. Turner, Science, 2004, 305, 972.
X. Wang, K. Maeda, A. Thomas, K. Takanabe, G. Xin, J. M. Carlsson, K. Domen and M. Antonietti, Nat. Mater., 2009, 8, 76.
J. Mahmood, M. A. R. Anjum, S. H. Shin, I. Ahmad, H. J. Noh, S. J. Kim, H. Y. Jeong, J. S. Lee and J. B. Baek, Adv. Mater., 2018, 30, 1805606.
J. Zhang, K. Sasaki, E. Sutter and R. Adzic, Science, 2007, 315, 220.
R. Subbaraman, D. Tripkovic, D. Strmcnik, K. C. Chang, M. Uchimura, A. P. Paulikas, V. Stamenkovic and N. M. Markovic, Science, 2011, 334, 1256.
J. N. Tiwari, S. Sultan, C. W. Myung, T. Yoon, N. N. Li, M. R. Ha, A. M. Harzandi, H. J. Park, D. Y. Kim, S. S. Chandrasekaran, W. G. Lee, V. Vij, H. J. Kang, T. J. Shin, H. S. Shin, G. Lee, Z. Lee and K. S. Kim, Nat. Energy, 2018, 3, 773.
G. Chen, T. Wang, J. Zhang, P. Liu, H. Sun, X. Zhuang, M. Chen and X. Feng, Adv. Mater., 2018, 30, 1706279.
M. F. Li, K. N. Duanmu, C. Z. Wan, T. Cheng, L. Zhang, S. Dai, W. X. Chen, Z. P. Zhao, P. Li, H. L. Fei, Y. M. Zhu, R. Yu, J. Luo, K. T. Zang, Z. Y. Lin, M. N. Ding, J. Huang, H. T. Sun, J. H. Guo, X. Q. Pan, W. A. Goddard, P. Sautet, Y. Huang and X. F. Duan, Nat. Catal., 2019, 2, 495.
N. C. Cheng, S. Stambula, D. Wang, M. N. Banis, J. Liu, A. Riese, B. W. Xiao, R. Li, T.-K. Sham, L.-M. Liu, G. A. Botton and X. L. Sun, Nat. Commun., 2016, 7, 13638.
Z. W. Seh, J. Kibsgaard, C. F. Dickens, I. Chorkendorff, J. K. Nørskov and T. F. Jaramillo, Science, 2017, 355, eaad4998.
J. Greeley, T. F. Jaramillo, J. Bonde, I. Chorkendorff and J. K. Nørskov, Nat. Mater., 2006, 5, 909-913.
X. Zhang, A. Chen, Z. H. Zhang and Z. Zhou, J. Mater. Chem. A, 2018, 6, 18599-18604.
C. Y. Xiong, B. B. Li, H. G. Liu, W. Zhao, C. Duan, H. W. Wu and Y. H. Ni, J. Mater. Chem. A, 2020, 8, 10898-10908.
G. W. Li and C. Felser, Appl. Phys. Lett., 2020, 116, 070501.
J. P. Xiao, L. Z. Kou, C. Y. Yam, T. Frauenheim and B. H. Yan, ACS Catal., 2015, 5, 7063-7067.
H. Chen, W. Zhu, D. Xiao and Z. Zhang, Phys. Rev. Lett., 2011, 107, 056804.
C. R. Rajamathi, U. Gupta, N. Kumar, H. Yang, Y. Sun, V. Suss, C. Shekhar, M. Schmidt, H. Blumtritt, P. Werner, B. Yan, S. Parkin, C. Felser and C. N. R. Rao, Adv. Mater., 2017, 29, 1606202.
H. Zhang, P. An, W. Zhou, B. Y. Guan, P. Zhang, J. Dong and X. W. D. Lou, Sci. Adv., 2018, 4, eaao6657.
G. Li, C. Fu, W. Shi, L. Jiao, J. Wu, Q. Yang, R. Saha, M. E. Kamminga, A. K. Srivastava, E. Liu, A. N. Yazdani, N. Kumar, J. Zhang, G. R. Blake, X. Liu, M. Fahlman, S. Wirth, G. Auffermann, J. Gooth, S. Parkin, V. Madhavan, X. Feng, Y. Sun and C. Felser, Angew. Chem., Int. Ed., 2019, 58, 13107.
G. Li, Q. Xu, W. Shi, C. Fu, L. Jiao, M. E. Kamminga, M. Yu, H. Tuysuz, N. Kumar, V. Suss, R. Saha, A. K. Srivastava, S. Wirth, G. Auffermann, J. Gooth, S. Parkin, Y. Sun, E. Liu and C. Felser, Sci. Adv., 2019, 5, eaaw9867.
L. Li, J. Zeng, W. Qin, P. Cui and Z. Zhang, Nano Energy, 2019, 58, 40.
Q. Yang, C. C. Le, G. W. Li, T. Heine, C. Felser and Y. Sun, Appl. Mater. Today, 2021, 22, 100921.
J. Li, H. Ma, Q. Xie, S. Feng, S. Ullah, R. Li, J. Dong, D. Li, Y. Li and X. Q. Chen, Sci. China Mater., 2018, 61, 23.
Q. N. Xu, G. W. Li, Y. Zhang, Q. Yang, Y. Sun and C. Felser, ACS Catal., 2020, 10, 5042-5048.
B. A. Bernevig, T. L. Hughes and S.-C. Zhang, Science, 2006, 314, 1757-1761.
M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi and S.-C. Zhang, Science, 2007, 318, 766-770.
H. T. Wang, Z. Y. Lu, S. C. Xu, D. S. Kong, J. J. Cha, G. Y. Zheng, P.-C. Hsu, K. Yan, D. Bradshaw, F. B. Prinz and Y. Cui, Proc. Natl. Acad. Sci., 2013, 110, 19701-19706.
T. Yang, J. Zhou, T. T. Song, L. Shen, Y. P. Feng and M. Yang, ACS Energy Lett., 2020, 5, 2313-2321.
G. Kresse and D. Joubert, Phys. Rev. B, 1999, 59, 1758.
J. P. Perdew, K. Burke and M. Ernzerhof, Phys. Rev. Lett., 1996, 77, 3865.
S. Grimme, J. Comput. Chem., 2006, 27, 1787.
A. Togo, F. Oba and I. Tanaka, Phys. Rev. B, 2008, 78, 134106.
N. Marzari and D. Vanderbilt, Phys. Rev. B, 1997, 56, 12847.
A. A. Mostofi, J. R. Yates, Y.-S. Lee, I. Souza, D. Vanderbilt and N. Marzari, Comput. Phys. Commun., 2008, 178, 685.
Q. S. Wu, S. N. Zhang, H.-F. Song, M. Troyer and A. A. Soluyanov, Comput. Phys. Commun., 2018, 224, 405.
Q. Yang, G. W. Li, K. Manna, F. R. Fan, C. Felser and Y. Sun, Adv. Mater., 2020, 32, 1908518.
A. Bansil, H. Lin and T. Das, Rev. Mod. Phys., 2016, 88, 021004.
C.-K. Chiu, J. C. Teo, A. P. Schnyder and S. Ryu, Rev. Mod. Phys., 2016, 88, 035005.
N. P. Armitage, E. J. Mele and A. Vishwanath, Rev. Mod. Phys., 2018, 90, 015001.
X. Wan, A. M. Turner, A. Vishwanath and S. Y. Savrasov, Phys. Rev. B, 2011, 83, 205101.
H. Weng, Y. Liang, Q. Xu, R. Yu, Z. Fang, X. Dai and Y. Kawazoe, Phys. Rev. B, 2015, 92, 045108.
S. A. Yang, H. Pan and F. Zhang, Phys. Rev. Lett., 2014, 113, 046401.
X. M. Zhang, Z.-M. Yu, Z. M. Zhu, W. K. Wu, S.-S. Wang, X.-L. Sheng and S. Y. A. Yang, Phys. Rev. B, 2018, 97, 235150.
W. K. Wu, Y. Liu, S. Li, C. Y. Zhong, Z.-M. Yu, X.-L. Sheng, Y. X. Zhao and S. Y. A. Yang, Phys. Rev. B, 2018, 97, 115125.
J.-Y. You, C. Chen, Z. Zhang, X.-L. Sheng, S. Y. A. Yang and G. Su, Phys. Rev.
S.-S. Wang, Z.-M. Yu, Y. Liu, Y. Jiao, S. Guan, X.-L. Sheng and S. A. Yang, Phys. Rev. Materials, 2019, 3, 084201.
T. L. He, X. M. Zhang, Y. Liu, X. F. Dai, G. D. Liu, Z. M. Yu and Y. G. Yao, Phys. Rev. B, 2020, 102, 075133.
L. Jin, X. M. Zhang, Y. Liu, X. F. Dai, X. Shen, L. Y. Wang and G. D. Liu, Phys. Rev.
S. Li, Z.-M. Yu, Y. Liu, S. Guan, S.-S. Wang, X. Zhang, Y. Yao and S. A. Yang, Phys. Rev. B, 2017, 96, 081106.
L. R. Wang, L. Jin, G. D. Liu, Y. Liu, X. F. Dai and X. M. Zhang, Appl. Mater. Today, 2021, 23, 101057.
J. Zhou, L. Shen, M. D. Costa, K. A. Persson, S. P. Ong, P. Huck, Y. H. Lu, X. Y. Ma, Y. M. Chen, H. Tang and Y. P. Feng, Sci. Data, 2019, 6, 86.
J. K. Nørskov, T. Bligaard, A. Logadottir, J. R. Kitchin, J. G. Chen, S. Pandelov and U. Stimming, J. Electrochem. Soc., 2005, 152, J23-J26.
B. Hinnemann, P. G. Moses, J. Bonde, K. P. Jørgensen, J. H. Nielsen, S. Horch, I. Chorkendorff and J. K. Nørskov, J. Am. Chem. Soc., 2005, 127, 5308-5309.
H. Li, C. Tsai, A. L. Koh, L. L. Cai, A. W. Contryman, A. H. Fragapane, J. H. Zhao, H. S. Han, H. C. Manoharan, F. Abild-Pedersen, J. K. Nørskov and X. L. Zheng, Nat. Mater., 2016, 15, 48-53.
Title: THE SPHERE FORMULA
Author: Oliver Knill
arXiv: 2301.05736 (corpus id 255942293)
DOI: 10.48550/arxiv.2301.05736
PDF: https://export.arxiv.org/pdf/2301.05736v1.pdf

Abstract: The sphere formula states that in an arbitrary finite abstract simplicial complex, the sum of the Euler characteristics of the unit spheres centered at even-dimensional simplices is equal to the sum of the Euler characteristics of the unit spheres centered at odd-dimensional simplices. It follows that if a geometry has constant unit-sphere Euler characteristic, like a manifold, then either all its unit spheres have zero Euler characteristic or the space itself has zero Euler characteristic. In particular, odd-dimensional manifolds have zero Euler characteristic, a fact usually verified in algebraic topology either by using Poincaré duality together with Riemann-Hurwitz, or by deriving it from the existence of a Morse function, using that the Morse indices of the function and of its negative add up to zero in odd dimensions. Gauss-Bonnet also shows that odd-dimensional Dehn-Sommerville spaces have zero Euler characteristic, because they have constant zero curvature. Zero-curvature phenomena can be understood integral-geometrically as index expectation, or via the Dehn-Sommerville relations.
THE SPHERE FORMULA
Oliver Knill
13 Jan 2023

1.2. Let us start with an energy formula which expresses the Euler characteristic as a total potential energy Σ_{x,y} g(x,y) over all pairs of simplices. The potential energy is the Green function g(x,y) = w(x) w(y) w(U(x) ∩ U(y)). This defines a unimodular n × n matrix if G has n elements [71, 95, 76, 84, 78]. This matrix is the inverse of the connection Laplacian, L(x,y) = 1 if x ∩ y ≠ ∅ and L(x,y) = 0 else. If Σ_{x,y} g(x,y) is the total energy, we had shown that the potential at a point, V(x) = Σ_y g(x,y), can be simplified to w(x) w(U(x)). The Euler characteristic of the smallest open set containing x is so linked to the potential at x, which can also be seen as a curvature, dual to w(x) at x. Let us verify this directly and independently.

Theorem 1 (Energy formula). Σ_{x∈G} w(x) w(U(x)) = w(G).

Proof.
Replace the ±1-valued function w(x) by a more general function h(x) and define w_h(G) = Σ_x h(x) and g_h(x) = w(x) w_h(U(x)). The map h → g_h is linear and the energy formula is linear, so the formula only needs to be shown in the case where h is the indicator function of a single simplex x_0, with h(x) = 1 for x = x_0. The right-hand side is then w_h(G) = 1. Now look at the left-hand side Σ_{x∈G} w(x) w_h(U(x)). This is a sum of w(x) over all x for which x_0 ∈ U(x), which means Σ_{x ⊆ x_0} w(x); but this is the Euler characteristic of the simplicial complex {x : x ⊆ x_0} generated by x_0, which is always 1. So the left-hand side is 1 as well.

2.2. Lemma (Unit ball). w(B(x)) = 1 for every x ∈ G.

Proof. Use induction with respect to the number of elements in B(x). If B(x) has one element, then w(B(x)) = 1. We can reduce the size of B(x) by taking away an element y ∈ B(x) different from x. The complex B'(x) = B(x) \ U(y) is now a unit ball B'(x) in the smaller complex G \ U(y). The induction assumption assures that w(B'(x)) = 1, and the local valuation formula gives w(B(x)) = w(B'(x)).

2.3. We get, for all complexes (see [83], Corollary 6):

Theorem 2 (Sphere formula). Σ_{x∈G} w(x) w(S(x)) = 0.

Proof. The property Σ_{x∈G} w(x) w(B(x)) = Σ_{x∈G} w(x) = w(G) reduces to the definition of w, since w(B(x)) = 1. Subtract Σ_{x∈G} w(x) w(U(x)) = w(G) from Σ_{x∈G} w(x) w(B(x)) = w(G) and use the local valuation formula w(B(x)) = w(U(x)) + w(S(x)).

2.4. In analogy to the Green function g(x,y) = w(x) w(y) w(U(x) ∩ U(y)), whose super trace is w(G), one can look at the sphere Green matrix s(x,y) = w(x) w(y) w(S(x) ∩ S(y)). The sphere theorem tells us that the super trace of s is 0. In our experiments, we notice the unexplained fact that the matrix s is always singular. The reader is invited to experiment with the code provided below. This is in contrast to the Green matrix g, which is always unimodular, meaning that its determinant is 1 or -1.

2.5.
Let us mention in this context a discrete analogue of a theorem of Hadwiger, stating that the linear space of valuations has dimension dim(G) + 1 and that the components f_k(G) of the f-vector, counting the k-dimensional simplices, form a basis for k = 0, ..., dim(G). The Euler characteristic w(G) = Σ_{k=0}^{dim(G)} (-1)^k f_k(G) is the only valuation which is preserved by the Barycentric refinement operation, an operation which induces a linear map f → Qf on f-vectors and so a map w → Q^T w, with the concrete matrix Q(x,y) = Stirling(x,y) x!, which has a unique eigenvalue 1. The corresponding eigenvector of Q^T is (1, -1, 1, -1, ...). The dynamics of Barycentric refinement is interesting from a spectral point of view [67, 69].

3. Euler's Gem

3.1. For x ∈ G, the index i(x) = 1 - w(S(x)) tells how the Euler characteristic changes if the open set U(x) is taken away from G. If H = G \ U(x), then w(G) = w(H) + w(B(x)) - w(S(x)) = w(H) + w(U(x)) = w(H) + i(x). This means that under the reduction G → H = G \ U(x), the Euler characteristic changes as w(H) = w(G) - i(x). This process leads to rather general Poincaré-Hopf theorems. We will mention below a version for simplicial complexes.

3.2. Inductively, a complex G is called contractible if there exists x ∈ G such that both the closed sets S(x) and G \ U(x) are contractible. To found the induction, the one-point complex G = 1 is declared to be contractible. We have just seen, in the proof of the unit ball lemma, that every unit ball is contractible.

3.3. A complex G is called a d-manifold if every unit sphere S(x) is a (d-1)-sphere. A complex G is called a d-sphere if it is a d-manifold and G \ U(x) is contractible for some x. The empty complex G = 0 is declared to be the (-1)-sphere.

Theorem 3 (Euler Gem). w(G) = 1 + (-1)^d for every d-sphere G.

Proof. By definition of a sphere, there is x ∈ G such that G \ U(x) is contractible.
The valuation formula and w(B(x)) = 1 give, using induction, 1 = w(G \ U(x)) = w(G) - w(B(x)) + w(S(x)) = w(G) - 1 + (1 + (-1)^(d-1)), so that w(G) = 1 + (-1)^d.

It follows:

Corollary 1. If all unit spheres S(x) of G have the same Euler characteristic, then either w(G) = 0 or w(S(x)) = 0 for all x.

Proof. If w(S(x)) = c for all x, then by the sphere formula Σ_x w(x) c = 0. This implies that either c = 0 or Σ_x w(x) = w(G) = 0.

3.5. A complex G is called homotopic to 1 if a sequence of contraction or inverse extension steps leads from G to 1. One of the simplest complexes which cannot be contracted but which is homotopic to 1 is the dunce hat. There is an implementation of that space with f-vector (17, 52, 36). One could also look at more general spheres, replacing "contractible" by "homotopic to 1". This would be impractical, however, as checking whether a complex G is homotopic to 1 is difficult, even NP-complete, while checking whether it is contractible can be done in polynomial time. Still, also for these more general spaces, the Euler Gem formula holds. One can extend the class even further by looking at Dehn-Sommerville spaces.

3.6. A simplicial complex G can be seen as a special CW complex in which simplices x = x_k are added along a time line, starting with the zero-dimensional simplices, which are attached to the (-1)-sphere 0 and have i(x) = 1; then adding the 1-dimensional simplices, seen as cells attached to 0-spheres, with i(x) = -1; then adding the 2-dimensional cells, etc. As i(x) = w(x) in each step, one can interpret w(G) = Σ_x i(x) as a Poincaré-Hopf formula.
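The energy formula, the sphere formula, and the Euler Gem are all finite combinatorial identities, so they can be checked by brute force on small complexes. The following sketch is our own illustration, not the code referenced in the paper: it models the unit sphere S(x) via the Barycentric refinement graph, in which the simplices of S(x) are the chains of elements comparable to x, and tests the formulas on the full triangle and on cross-polytope spheres.

```python
from itertools import combinations, product

def closure(facets):
    """All nonempty faces of the given facets: a finite abstract simplicial complex G."""
    G = set()
    for f in facets:
        for k in range(1, len(f) + 1):
            G.update(frozenset(s) for s in combinations(f, k))
    return G

def w(x):                      # w(x) = (-1)^dim(x)
    return (-1) ** (len(x) - 1)

def chi(simplices):            # Euler characteristic w(A) = sum of w(x) over A
    return sum(w(x) for x in simplices)

def comparable(a, b):          # proper containment in either direction
    return a < b or b < a

def chi_sphere(G, x):
    """Euler characteristic of the unit sphere S(x): in the Barycentric
    refinement graph, the simplices of S(x) are the nonempty chains of
    elements of G comparable to x."""
    N = [y for y in G if y != x and comparable(x, y)]
    total = 0
    for k in range(1, len(N) + 1):
        for c in combinations(N, k):
            if all(comparable(a, b) for a, b in combinations(c, 2)):
                total += (-1) ** (k + 1)
    return total

def chi_star(G, x):            # w(U(x)) with U(x) = {y in G : x subset of y}
    return chi([y for y in G if x <= y])

def cross_polytope(d):
    """Facets of the boundary of the (d+1)-dimensional cross polytope: a d-sphere."""
    return [tuple(s * i for i, s in zip(range(1, d + 2), signs))
            for signs in product((1, -1), repeat=d + 1)]

G = closure([(1, 2, 3)])       # the full triangle, with w(G) = 1
print(chi(G))                                        # -> 1
print(sum(w(x) * chi_star(G, x) for x in G))         # energy formula -> 1
print(sum(w(x) * chi_sphere(G, x) for x in G))       # sphere formula -> 0
for d in (1, 2, 3):
    print(chi(closure(cross_polytope(d))) == 1 + (-1) ** d)  # Euler Gem -> True
```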
If a cell x_k is attached along a subcomplex S_k to form G_{k+1}, then either S_k is contractible, leading to w(G_{k+1}) = w(G_k) + 1 - w(S_k) = w(G_k), or S_k is a sphere, in which case w(G_{k+1}) = w(G_k) + 1 - w(S_k) = w(G_k) + (-1)^(dim(S_k)+1). The Morse function f(x) = k for x = x_k encodes the build-up, and i_f(x) = 1 - w(S_f^-(x)), with S_f^-(x) = {y ∈ S(x) : f(y) < f(x)}, is the Poincaré-Hopf index. The dimension dim(B(x_k)) is the Morse index, and the Poincaré-Hopf formula w(G) = Σ_x i_f(x) holds.

3.7. For the cube CW complex, for example, one first builds the cube simplicial complex with 8 vertices and 12 edges, then adds 6 cells along circular complexes of Morse index 2 and Poincaré-Hopf index i_f(x) = 1. The Poincaré-Hopf formula is now w(G) = Σ_x i_f(x) = 8 - 12 + 6 = 2, leading to the Euler formula w(G) = v - e + f = 2 for Platonic solids, a discovery of Descartes. The Alexandroff topology, defined by taking the stars U(x) as a basis for the open sets, still attaches a finite topology to the CW complex. If a new cell x_k is added, then for every x with x ⊂ x_k, the new cell x_k is retroactively added to U(x). The data structure of a CW complex is now given by a set G, a basis for a finite topology on G, and a function f: G → N which tells how the structure is built up. The data structure "simplicial complex" is easier to work with but is in general more costly, as one has to deal with more cells: a CW cube has only 6 faces, 12 edges and 8 vertices, while a triangulated cube has 24 faces, 36 edges and 14 = 8 + 6 vertices.

4. Zero Characteristic

4.1. The Euler Gem formula and the sphere formula together immediately give the zero Euler characteristic result:

Corollary 2. All odd-dimensional manifolds satisfy w(G) = 0.

Proof. By the Euler Gem formula, every unit sphere of an odd-dimensional manifold has Euler characteristic 2. The sphere formula then gives 0 = Σ_x w(x) w(S(x)) = Σ_x 2 w(x) = 2 w(G), so that w(G) = 0.

4.2.
As mentioned already, the proof shows that for any complex G for which the unit spheres S(x) have constant Euler characteristic c, we must either have c = 0 or that G has zero Euler characteristic. We can for example take a suspension of an arbitrary even-dimensional manifold of Euler characteristic 2. A small example in three dimensions is a suspension of a disjoint union of two projective planes. There are explicit implementations with f_G = (15, 42, 28). This is not a manifold, but all unit spheres have Euler characteristic 2. The curvature is constant 0. An other 3-dimensional non-manifold case is the suspension of a disjoint union of a torus and a sphere. The construction of more general classes is done below using Dehn-Sommerville.

4.3. The above corollary is usually proven using Poincaré duality, which tells that the Betti vector (b_0, b_1, ..., b_dim(G)) is palindromic, b_k = b_{dim(G)-k}. The Betti number b_k(G) is defined as the nullity of the block L_k in the n x n Hodge matrix L = (d + d*)^2 of the complex G with n elements. The exterior derivative d f(x) = sum_{y subset x, |x|-|y|=1} sign(x|y) f(y) is an n x n matrix too, satisfying d^2 = 0, so that L = d d* + d* d is block diagonal. Functions restricted to x with dim(x) = k are called k-forms. The matrices d and so L depend on an initial arbitrary orientation of the simplices which enters sign(x|y), defined to be 1 if the orientation of y matches the induced orientation of x and -1 else. A complex is orientable if one can fix a compatible orientation on all simplices by fixing the orientation on the maximal simplices. But one does not need an orientable complex to define the derivative d. Changing the orientations is an orthogonal transformation on forms and so just a change of basis in the Hilbert space on which d and L act. The spectrum and so the nullity b_k(G) are not affected.

4.4.
Poincaré duality only holds if G is orientable, meaning that the maximal simplices can be oriented in a way such that the orientations are compatible on intersections. For all simplicial complexes, one has the Euler-Poincaré formula sum_k (-1)^k b_k = w(G). This identity is just linear algebra using the rank-nullity theorem, which in this context is the Hodge relation splitting the range ran(D) = ran(d) + ran(d*) of the Dirac operator D = d + d* off its kernel ker(D). If d_k maps k-forms to (k+1)-forms and d*_{k+1} maps (k+1)-forms to k-forms, one can see b_k = dim(ker(d_k)/ran(d_{k-1})). If f_k is the dimension of the space of k-forms, which is the number of k-simplices in G, this immediately shows sum_k (-1)^k b_k = sum_k (-1)^k f_k. One can also derive the Euler-Poincaré formula by looking at the super trace str(e^{-tL}) of the heat kernel e^{-tL}, where str(A) = sum_{x in G} w(x) A(x,x). McKean-Singer have pointed out that the non-zero eigenvalues on even and odd forms are the same, so that str(e^{-tL}) = w(G) is constant. But since w(G) is an integer, and for large t, e^{-tL} is close to the projection onto the kernel of L, the left hand side is for large t equal to sum_k (-1)^k b_k, while for t = 0 it is sum_k (-1)^k f_k.

4.5. If H is an odd-dimensional manifold that is not orientable, one can look at the orientable double cover G, apply Poincaré duality to see w(G) = 0, then use the Riemann-Hurwitz relation w(H) = w(G/A) = w(G)/|A| for a group A = Z_2 acting on G without fixed points. The Riemann-Hurwitz formula is more general and can also take into account ramification points, points for which the orbits of A are smaller than in general. One can see many complexes as branched covers G of simpler complexes H, which can be seen as H = G/A for a finite group A acting on G. In any case, algebraic topology together with some algebraic geometry allows one to see that odd-dimensional manifolds must have Euler characteristic zero.
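The Euler-Poincaré formula can be verified directly. The following Python sketch (ad hoc, not part of the paper's own code) computes the Betti numbers of the octahedron from its boundary matrices with exact rational arithmetic and checks sum_k (-1)^k b_k = w(G):

```python
from itertools import combinations
from fractions import Fraction

# Octahedron complex, graded by dimension
pairs = [(0, 1), (2, 3), (4, 5)]
simplices = {k: [] for k in range(3)}
for r in (1, 2, 3):
    for s in combinations(range(6), r):
        if not any(a in s and b in s for a, b in pairs):
            simplices[r - 1].append(s)

def boundary(k):
    """Matrix of the boundary map from k-simplices to (k-1)-simplices."""
    rows, cols = simplices[k - 1], simplices[k]
    idx = {s: i for i, s in enumerate(rows)}
    D = [[Fraction(0)] * len(cols) for _ in rows]
    for j, s in enumerate(cols):
        for i in range(len(s)):
            face = s[:i] + s[i + 1:]
            D[idx[face]][j] = Fraction((-1) ** i)
    return D

def rank(M):
    """Rank via Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

f = [len(simplices[k]) for k in range(3)]            # f-vector (6, 12, 8)
r1, r2 = rank(boundary(1)), rank(boundary(2))
betti = [f[0] - r1, (f[1] - r1) - r2, f[2] - r2]
chi = sum((-1) ** k * f[k] for k in range(3))
assert betti == [1, 0, 1]                             # Betti vector of the 2-sphere
assert sum((-1) ** k * betti[k] for k in range(3)) == chi == 2
```

The Betti vector (1, 0, 1) of the 2-sphere is palindromic, as Poincaré duality predicts for this orientable manifold.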
We will mention below an other simple classical approach using Morse theory, using that the indices of a Morse function f on a manifold at a critical point satisfy i_f(x) = -i_{-f}(x) if the dimension of the manifold is odd. While calculus is involved here, note that f_G is a polynomial, and that when dealing with polynomials one stays in a finite setup by just declaring (d/dt) t^n = n t^{n-1} and int_0^t s^n ds = t^{n+1}/(n+1). Seen as such, writing down derivatives and integrals is notation; we do not invoke any limits, even though calculus as a background theory interprets the expressions using limits. For the following, see [77]. Note that the curvature is located on the 0-dimensional part of space.

Theorem 4 (Gauss-Bonnet). f_G(t) - 1 = sum_{v in V} F_{S(v)}(t).

Proof. Every k-simplex z in G carries a charge t^{k+1}, so that f_G(t) - 1 counts the total charge. Every k-simplex y in S(v) defines a (k+1)-simplex z = y + v in U(v) subset G carrying the charge t^{k+2}. It contains k+2 vertices. Distributing this charge equally to these vertices gives each a charge t^{k+2}/(k+2). The curvature F_{S(v)}(t) adds up all the charges which the vertex v receives.

For t = -1 this gives w(G) = sum_{v in V} K(v), where K(v) = -F_{S(v)}(-1) is the Levitt curvature, the discrete analogue of the Gauss-Bonnet-Chern curvature in the continuum [19]. An explicit formula, with f_{-1} = 1, is

K(v) = sum_{k=-1}^{d-1} (-1)^{k+1} f_k(S(v)) / (k+2),

which appeared in [90] and was placed into the Gauss-Bonnet context in [56] (we had not been aware of [90] when writing that paper). It is quite obvious once one realizes that the Euler characteristic is the total energy of the function w(x) = (-1)^{dim(x)} and that we can shove this value to the zero-dimensional parts of space by placing w(x)/(dim(x)+1) on each of the dim(x)+1 vertices v of x. When looking at the total value at a vertex v, we are interested in how much has been sent to us. We have interpreted this curvature also as an expectation of Poincaré-Hopf indices, K(v) = E[i_f(v)], for example by taking the probability measure which assigns constant weights to all colorings f.
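As a concrete check of Gauss-Bonnet, the following Python sketch (ad hoc names, independent of the Mathematica listing) evaluates the explicit curvature formula on the octahedron, where the curvature is constant 1/3 and sums to w(G) = 2:

```python
from itertools import combinations
from fractions import Fraction

# Octahedron complex (2-sphere)
pairs = [(0, 1), (2, 3), (4, 5)]
G = [frozenset(s) for r in (1, 2, 3) for s in combinations(range(6), r)
     if not any(a in s and b in s for a, b in pairs)]
Gset = set(G)

def sphere(x):                                   # S(x) = B(x) \ U(x)
    return [y for y in G if (x | y) in Gset and not x <= y]

def curvature(v):
    """Levitt curvature K(v) = sum_{k>=-1} (-1)^(k+1) f_k(S(v))/(k+2), f_{-1} = 1."""
    K = Fraction(1)                               # the k = -1 term
    for y in sphere(v):
        k = len(y) - 1                            # dimension of the simplex y
        K += Fraction((-1) ** (k + 1), k + 2)
    return K

V = [x for x in G if len(x) == 1]                 # the six vertices
K = [curvature(v) for v in V]
assert all(k == Fraction(1, 3) for k in K)        # constant curvature 1/3
assert sum(K) == 2                                # Gauss-Bonnet: sum_v K(v) = w(G)
```

For each vertex, S(v) is a 4-cycle with f-vector (4, 4), so K(v) = 1 - 4/2 + 4/3 = 1/3, the discrete analogue of the constant curvature of the round sphere.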
To formulate Poincaré-Hopf within simplicial complexes, one can start with a function f : V -> R on vertices, where R is an ordered ring like Z and V = {x in G, dim(x) = 0}. Now define the index i_f(v) = sum_{x : v in x, f is maximal on x at v} w(x). Because the energy w(x) of every simplex has now been moved to the vertex in x where f was maximal, we have the Poincaré-Hopf result w(G) = sum_x w(x) = sum_{v in V} i_f(v). In the Gauss-Bonnet case, the value w(x) was distributed equally to each of the dim(x)+1 vertices in x. We can now write i_f(v) = 1 - w(S^-_f(v)), where S^-_f(v) consists of all simplices in S(v) for which all f-values are smaller than f(v). This set S^-_f(v) is closed.

6. Dehn-Sommerville

6.1. The h-function h_G(x) = (x-1)^d f_G(1/(x-1)) generates coefficients h_k of the form h_G(x) = h_0 + h_1 x + ... + h_d x^d + h_{d+1} x^{d+1}. In other words, it is the generating function for the h-vector (h_0, h_1, ..., h_{d+1}). G is called Dehn-Sommerville if this vector is palindromic, meaning that h_i = h_{d+1-i} for all i = 0, ..., d+1.

Corollary 3. Odd-dimensional Dehn-Sommerville spaces, and especially odd-dimensional manifolds, all have zero Euler characteristic.

Lemma 3. G is Dehn-Sommerville if and only if f_G(t) satisfies f(t) + (-1)^d f(-1-t) = 0, meaning that g(t) = f(t - 1/2) is either even or odd.

Proof. h is palindromic if and only if the roots of h(t) = h_0 + h_1 t + ... + h_{d+1} t^{d+1} = (t-1)^d f(1/(t-1)) are invariant under the involution t -> 1/t.

6.3. Define X_{-1} = {} and inductively X_d = {G | w(G) = 1 + (-1)^d, S(x) in X_{d-1} for all x in G}. If G, H in X, then G + H in X, because S_{G+H}(x) = S(x) + H for x in G and S_{G+H}(x) = G + S(x) for x in H. Also the assumption on the Euler characteristic works, as f_G(-1) = 1 - w(G) is in {-1, 1} and f_{G+H}(-1) = f_G(-1) f_H(-1) is still in {-1, 1}.

A function f on V is called locally injective if f(v) != f(w) for every w in S(v) ∩ V. For such a function, define S^-_f(v) = {x in S(v), f(w) < f(v) for all w in x ∩ V} and S^+_f(v) = {x in S(v), f(w) > f(v) for all w in x ∩ V} = S^-_{-f}(v).
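The two descriptions of the Poincaré-Hopf index agree and sum to the Euler characteristic. A small Python sketch (ad hoc, using the identity function as a locally injective f on the octahedron):

```python
from itertools import combinations

# Octahedron complex (2-sphere): vertex sets with no antipodal pair
pairs = [(0, 1), (2, 3), (4, 5)]
G = [frozenset(s) for r in (1, 2, 3) for s in combinations(range(6), r)
     if not any(a in s and b in s for a, b in pairs)]
Gset = set(G)
w = lambda x: -(-1) ** len(x)
chi = lambda A: sum(w(x) for x in A)
f = {v: v for v in range(6)}                 # locally injective f on vertices

def index(v):
    """i_f(v): every simplex moves its energy w(x) to its f-maximal vertex."""
    return sum(w(x) for x in G if max(x, key=f.get) == v)

def index_sphere(v):
    """i_f(v) = 1 - w(S^-_f(v)), where S^-_f(v) collects the simplices of
    S(v) on which all f-values are smaller than f(v)."""
    Sm = [y for y in G if (y | {v}) in Gset and v not in y
          and all(f[u] < f[v] for u in y)]
    return 1 - chi(Sm)

assert all(index(v) == index_sphere(v) for v in range(6))
assert sum(index(v) for v in range(6)) == chi(G) == 2    # Poincaré-Hopf
```

With f the identity, the index vector is (1, 1, -1, -1, 1, 1), summing to w(G) = 2.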
The Poincaré-Hopf theorem assures that w(G) = sum_{v in V} i_f(v) = sum_{v in V} i_{-f}(v). Following [59], we now have w(G) = sum_{v in V} j_f(v), where j_f(v) = (i_f(v) + i_{-f}(v))/2 is the average of the two indices. The valuation formula gives w(S^-(v)) + w(S^+(v)) = w(S(v)) - w(C(v)), where C(v) consists of the simplices x in S(v) on which f - f(v) changes sign. This naturally becomes a sub-simplicial complex of the Barycentric refinement of G, by looking at the elements as vertices in a graph and taking the Whitney complex. The valuation formula is then equivalent to 2 - w(S(v)) - 2 j_f(v) = w(C_f(v)). We called C_f(v) the center manifold of f at v. In classical Morse theory, this is the manifold {w in S_r(v), f(w) = f(v)} in a small sphere S_r(v) around a critical point of f, which typically is a (d-2)-manifold or empty by the classical Sard theorem.

Proof. This can be shown by induction with respect to dimension. In order to show that the unit sphere of a point in the level surface is a (d-2)-sphere, note that by induction the intersection {f = c} with every unit sphere (a (d-1)-sphere) is a (d-2)-dimensional manifold. But we have more: in general, in G_1 every unit sphere is the join of S^-(x) = {y in S(x), y subset x} and S^+(x) = {y in S(x), x subset y}. Now, since f - c changes sign on S^+(x) but not on S^-(x), the level surface in S^+(x) is by induction a sphere of dimension one lower. The sphere S(x) ∩ {f = c} in the level surface of G is now the join of S^-(x) and the level {f = c} in S^+(x), which is a sphere of co-dimension 1 in S^+(x).

The symmetric index of a locally injective function f is j_f(v) = [2 - w(S(v)) - w(C_f(v))]/2, where C_f(v) is the center manifold in S(v). (See [59].)

7.5. If G is an odd-dimensional d-manifold, then S(v) is an even-dimensional sphere and C_f(v) is a (d-2)-dimensional manifold by the discrete Sard theorem. We then have w(S(v)) - 2 = 0, because S(v) is an even-dimensional sphere. The symmetric index is j_f(v) = -w(C_f(v))/2 = 0, as the (d-2)-manifold C_f(v) is again odd-dimensional.
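The symmetric index formula can be tested numerically. The Python sketch below (ad hoc) computes 2 j_f(v) both as i_f(v) + i_{-f}(v) and via the center manifold C_f(v), realized as the sign-changing simplices of S(v) viewed as a complex in the Barycentric refinement:

```python
from itertools import combinations

# Octahedron complex: vertex sets with no antipodal pair
pairs = [(0, 1), (2, 3), (4, 5)]
G = [frozenset(s) for r in (1, 2, 3) for s in combinations(range(6), r)
     if not any(a in s and b in s for a, b in pairs)]
Gset = set(G)
w = lambda x: -(-1) ** len(x)
chi = lambda A: sum(w(x) for x in A)
f = {v: v for v in range(6)}                  # locally injective f on vertices

def sphere(v):                                 # unit sphere S(v) of a vertex v
    return [y for y in G if (y | {v}) in Gset and v not in y]

def index(v, sign):                            # i_f(v) for sign=+1, i_{-f}(v) for sign=-1
    Sm = [y for y in sphere(v) if all(sign * f[u] < sign * f[v] for u in y)]
    return 1 - chi(Sm)

def center_chi(v):
    """w(C_f(v)): the sign-changing simplices of S(v), seen as a complex in
    the Barycentric refinement (cells = chains of sign-changing simplices)."""
    C = [y for y in sphere(v)
         if any(f[u] < f[v] for u in y) and any(f[u] > f[v] for u in y)]
    def ext(y, sign):                          # signed count of chains starting at y
        return sign + sum(ext(z, -sign) for z in C if y < z)
    return sum(ext(y, 1) for y in C)

total = 0
for v in range(6):
    two_j = index(v, +1) + index(v, -1)        # 2 j_f(v)
    assert two_j == 2 - chi(sphere(v)) - center_chi(v)
    total += two_j
assert total == 2 * chi(G) == 4                # sum_v j_f(v) = w(G) = 2
```

For the vertices with smallest and largest f-value, the center manifold is empty and j = 1; for the middle vertices it consists of four sign-changing edges, giving j = -1.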
We see that index expectation, averaging the two functions f, -f, gives us a curvature that is constant 0. The conclusion is again that w(G) = 0.

7.6. The index formula is also interesting in even dimensions: for a 4-manifold for example, one can see the Gauss-Bonnet curvature as the expectation of j_f(v) = 1 - w(C_f(v))/2. This allows one to see the Euler characteristic of a 4-manifold in terms of the expectation of the Euler characteristic of "random two-dimensional center manifolds" C_f(v). In the positive curvature case, C_f(v) is connected, which from the classification of surfaces shows that w(C_f(v)) <= 2, so that j_f(v) >= 0, corresponding to the observation of Milnor [16] that the Gauss-Bonnet-Chern curvature is non-negative in the positive curvature case for 4-manifolds. This argument is no longer available in dimensions 6 and higher, as the curvature can become negative there [33, 55]. See [44, 6, 9]. The index analysis suggests to look for probability spaces of functions for which j_f(v) >= 0 without being constant 0. This is equivalent to w(C_f(v)) <= 2.

8. Some references

8.1. The notion of finite abstract simplicial complex is due to Dehn and Heegaard [20, 13, 96]. The Dehn-Sommerville relations go back to Dehn and Sommerville [23, 54, 101, 98, 91, 12, 40, 51, 7]. The Euler characteristic was first considered for Platonic solids and experimentally first studied by Descartes; Euler gave the gem formula in the case d = 2 and for graphs which are planar [53, 89, 36, 105]. The Euler characteristic has been seen in the context of invariant valuations [38, 52]: it is the only valuation in the (dim(G)+1)-dimensional space of valuations which is normalized and invariant under Barycentric refinements. For finite topological spaces related to simplicial complexes, see [1, 112, 92]. For McKean-Singer in the continuum, see [93, 19]; for the classical Gauss-Bonnet theorem in higher dimensions [3, 2, 27, 15, 106, 19] and in the discrete [90]. See also [102, 43, 94].
For discrete notions of Morse theory, see [28, 29, 31, 32, 30]. For notions of spheres within digital topology, see [46, 47, 25] and [26], or [11] for discrete differential geometry. While topologists call geometric realizations of simplicial complexes "polytopes", most of the polytope literature considers convex polytopes and so geometric realizations of d-spheres [109, 18, 37, 49, 116]. Discrete notions of homotopy were considered already in [114] and by Evako. A crucial simplification occurred in [14]. Probabilistic notions in geometry, like integral geometry, go back to [10, 108, 4, 5, 100]. The join in graph theory was introduced in [117]. For the history of manifolds, see [48, 110]. In the context of Dehn-Sommerville, the arithmetic of graphs comes in: while manifolds are preserved by disjoint union, they are not preserved by the join operation; but spheres, and more generally Dehn-Sommerville spaces, are join monoids. For graph multiplication, see [111, 107, 45, 39]. The Hopf conjecture has very early on been seen in the context of Gauss-Bonnet [42, 3, 27, 2, 15, 17, 19]. The Hopf conjectures [44] have reappeared in the sixties [9] and [16] and are listed as problems 8) and 10) in the problem collection [115]. For historical remarks on Gauss-Bonnet, see [17]; on manifolds, see [21, 48]. Discrete notions of curvature appeared first in [24] and were then used in graph coloring contexts like [8]. Variants have appeared in two dimensions [34, 103, 104, 41, 99, 113].

8.2. We have explored the topic in the last couple of years, often in a graph theoretical framework, which is almost equivalent, as every graph comes naturally with a Whitney simplicial complex formed by the vertex sets of complete subgraphs, and every simplicial complex G defines a graph in which the elements of G are the vertices and two are connected if one is contained in the other. Many texts treat graphs as one-dimensional simplicial complexes, meaning that the simplicial complex on the graph is the 1-skeleton complex V ∪ E.
In topological graph theory, one studies graphs embedded in surfaces [35] and so deals with 2-dimensional CW complexes, where the connected components of the complement in an embedding serve as 2-cells. There are strong links between graphs and simplicial complexes, because there are various ways to get higher-dimensional simplicial complexes from a graph. The Whitney complex is the most natural one, as all the connectivity, geometric, differential geometric, or cohomological properties match what one expects in geometric realizations. See [22, 97] and especially [50].

8.3. For Poincaré-Hopf, which is related to the energy theorem, see [58, 80, 81, 79]. For the energy theme, see [76, 84, 78, 83]. For index expectation, see [60, 65, 59, 86, 75]. For the Sard theorem, see [68], which tried to be close to the classical Sard theorem. For the Gauss-Bonnet theorem, see [57, 56, 70, 72, 77]. For some discrete work on Hopf type questions, see [85, 86], and on constant curvature [75]. For the Hodge or McKean-Singer theme, see [61, 63]. For the theme of finite topologies on graphs or complexes, see [66, 88]. For overview attempts, see [62, 74, 64]. For our own explorations on the arithmetic of graphs, including the Zykov-Sabidussi ring and its dual, the Shannon ring, see [73, 87, 82].

9. Code

9.1. Here is some code which allows an interested reader to experiment with some of the notions which appeared. The computer uses the Whitney functor to generate random complexes from random graphs. With the parameters given, the random complexes produced are typically 4-5 dimensional. The connection matrix g(G) computed below is unimodular. In experiments we tried to correlate the nullity with the f-vector.

2. Sphere Formula

2.1. The Euler characteristic is a very special functional on simplicial complexes. It is a point in the (n+1)-dimensional space of valuations F, functions that satisfy F(A) + F(B) = F(A ∪ B) + F(A ∩ B) for any subsets A and B of G.

Lemma 1 (Local valuation). w(B(x)) = w(U(x)) + w(S(x)).

Proof.
For any subsets A, B ⊂ G, whether open or closed or neither, we have the valuation formula w(A) + w(B) = w(A ∪ B) + w(A ∩ B), because each of the basic valuations f_k(G), counting the number of k-dimensional simplices, satisfies the formula, and w is a linear combination of such basic valuations.

2.2. The unit ball B(x), defined as the closure of the star U(x) = {y, x ⊂ y}, is a closed set which contains x as well as its core {y, y ⊂ x}, which is a simplicial complex.

Lemma 2 (Unit balls). w(B(x)) = 1 for all x.

4.6. Why does the sphere theorem not apply for the cube or dodecahedron, as we have unit spheres at the vertices of constant Euler characteristic 3? The reason is that one has to look at all the unit spheres in G and not just at the unit spheres of 0-dimensional parts of space. For the cube simplicial complex G, we have eight 0-dimensional points x in G and twelve 1-dimensional points x in G. While w(S(x)) = 3 for the 0-dimensional x, we have w(S(x)) = 2 for the 1-dimensional x in G. Now 8 * 3 = 24 and 12 * 2 = 24, so that the sphere formula holds. The Euler characteristic is 8 - 12 = -4, which can be understood as the Euler characteristic of a 2-sphere with 6 holes, so that 2 - 6 = -4. In the dodecahedron case it is 2 - 12 = -10.

4.7. We will write, in full generality, the Euler characteristic in the next section as a sum of curvatures located on the 0-dimensional simplices. In the case of a 1-dimensional complex, the curvature at a vertex x is K(x) = 1 - |S(x)|/2. Summing up the curvatures, sum_x K(x) = |V| - sum_x |S(x)|/2 = |V| - |E| = f_0(G) - f_1(G), which just invokes the Euler handshake formula sum_v f_0(S(v)) = 2 f_1(G), valid in any one-dimensional complex G. In the case of the cube or dodecahedron complex, the curvature on the vertices is constant -1/2, leading to Euler characteristic 8 * (-1/2) = -4 or 20 * (-1/2) = -10 respectively.

5. Gauss-Bonnet

5.1. Let f = (f_0, ..., f_d) denote the f-vector of G, where f_k(G) is the number of elements in G of length k+1.
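Lemma 1, Lemma 2 and the curvature picture for one-dimensional complexes can be checked on the cube 1-skeleton. A small Python sketch (ad hoc):

```python
from fractions import Fraction

# Cube 1-skeleton: vertices 0..7 as binary triples, edges between vertices
# differing in one bit; a one-dimensional simplicial complex.
V = list(range(8))
G = [frozenset((v,)) for v in V] + \
    [frozenset((a, b)) for a in V for b in V
     if a < b and bin(a ^ b).count("1") == 1]
Gset = set(G)
w = lambda x: -(-1) ** len(x)
chi = lambda A: sum(w(x) for x in A)

U = lambda x: [y for y in G if x <= y]                  # open star
B = lambda x: [y for y in G if (x | y) in Gset]         # unit ball (closed star)
S = lambda x: [y for y in B(x) if not x <= y]           # unit sphere

# Lemma 1 (local valuation) and Lemma 2 (unit balls) for every x:
assert all(chi(B(x)) == chi(U(x)) + chi(S(x)) == 1 for x in G)

# Curvature K(v) = 1 - |S(v)|/2 = -1/2 at each vertex; sum = w(G) = -4
K = [1 - Fraction(len(S(frozenset((v,)))), 2) for v in V]
assert all(k == Fraction(-1, 2) for k in K)
assert sum(K) == chi(G) == -4
```

The constant curvature -1/2 at the 8 cube vertices reproduces the Euler characteristic 8 * (-1/2) = -4 computed above.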
Define the simplex generating function f_G(t) = 1 + f_0 t + f_1 t^2 + ... + f_d t^{d+1} and call it the f-function. Define F_G(t) = int_0^t f_G(s) ds.

5.2. For t = -1, one gets a more traditional form, as one can write the Euler characteristic as a sum of curvatures.

The palindromic property of h means that the roots of h are invariant under the involution t -> 1/t. This is equivalent to the roots of f being invariant under the involution t -> -1-t, and so to the symmetry f(-1-t) = ±f(t) for the f-function. If G is a complex with maximal dimension d and f_G satisfies f(t) + (-1)^d f(-1-t) = 0, then f(-1) = -(-1)^d f(0), so that w(G) = 1 - f(-1) = 1 + (-1)^d.

6.2. Let G + H = G ∪ H ∪ {x ∪ y, x in G, y in H} be the join of G and H. Since f_{G+H}(t) = f_G(t) f_H(t), we immediately see that if G and H are Dehn-Sommerville, then the join G + H is Dehn-Sommerville again. Also Barycentric refinements and connected sums of G and H along a sphere S are Dehn-Sommerville, as are edge refinements of Dehn-Sommerville spaces.

Corollary 4. Every d-manifold of Euler characteristic 1 + (-1)^d is Dehn-Sommerville.

Proof. The reason is that the unit spheres are spheres and so Dehn-Sommerville. The Gauss-Bonnet formula shows that the f-vector of G, as an integral of Dehn-Sommerville expressions, satisfies the Dehn-Sommerville relations.

6.4. Dehn-Sommerville is remarkable. For example, for any 4-sphere, the f-vector satisfies -22 f_1 + 33 f_2 - 40 f_3 + 45 f_4 = 0. For the smallest 4-sphere with f = (f_0, f_1, f_2, f_3, f_4) = (10, 40, 80, 80, 32), one has (10, 40, 80, 80, 32) · (0, -22, 33, -40, 45) = 0.

7. Poincaré-Hopf

7.1. Again, let V ⊂ G be the set of 0-dimensional sets in G. It can naturally be identified with the vertex set (even so it is not the same: the object {{v}} is not the same as {v}). A function on V is locally injective if f(v) != f(w) for every w in S(v) ∩ V.

7.3. For Morse functions f, the space C_f(v) is actually a (d-2)-manifold or empty.
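Lemma 3 and the 4-sphere relation of paragraph 6.4 can be verified numerically. The Python sketch below (ad hoc) builds cross-polytope spheres and checks the symmetry f(t) + (-1)^d f(-1-t) = 0 together with the linear Dehn-Sommerville relation:

```python
from itertools import combinations

def cross_polytope(n):
    """Boundary complex of the n-dimensional cross polytope: an (n-1)-sphere.
    Vertices 2i, 2i+1 are antipodal; simplices avoid antipodal pairs."""
    G = []
    for r in range(1, n + 1):
        for s in combinations(range(2 * n), r):
            if not any({2 * i, 2 * i + 1} <= set(s) for i in range(n)):
                G.append(s)
    return G

def fvector(G):
    d = max(len(x) for x in G)
    f = [0] * d
    for x in G:
        f[len(x) - 1] += 1
    return f

def fpoly(f, t):                       # f_G(t) = 1 + sum_k f_k t^(k+1)
    return 1 + sum(fk * t ** (k + 1) for k, fk in enumerate(f))

# Dehn-Sommerville symmetry f(t) + (-1)^d f(-1-t) = 0 for a d-sphere, d = 3:
f3 = fvector(cross_polytope(4))        # the 16-cell, f = (8, 24, 32, 16)
assert f3 == [8, 24, 32, 16]
d = 3
assert all(fpoly(f3, t) + (-1) ** d * fpoly(f3, -1 - t) == 0
           for t in range(-5, 6))      # degree-4 polynomial: 11 points suffice

# The linear relation for 4-spheres: -22 f1 + 33 f2 - 40 f3 + 45 f4 = 0
f4 = fvector(cross_polytope(5))        # a 4-sphere, f = (10, 40, 80, 80, 32)
assert f4 == [10, 40, 80, 80, 32]
assert sum(a * b for a, b in zip(f4, [0, -22, 33, -40, 45])) == 0
```

Since both sides are polynomials of degree d + 1, checking the identity at more than d + 1 integer points proves it as a polynomial identity.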
For a Morse function on a manifold, one also has the symmetric index j_f(v) = (i_f(v) + i_{-f}(v))/2 = 0 at every critical point, and so can deduce the zero Euler characteristic result again. The Barycentric refinement G_1 is the Whitney complex of the graph in which two sets in G are connected if one is contained in the other:

Theorem 5 (Discrete Sard). If G is a d-manifold and f : G -> R is locally injective, then for any c not in f(V), the level set {f = c} is a discrete (d-1)-manifold in G_1.

Cl[A_]:=If[A=={},{},Delete[Union[Sort[Flatten[Map[Subsets,A],1]]],1]];
Fvector[G_]:=If[Length[G]==0,{},Delete[BinCounts[Map[Length,G]],1]];
Ffunction[G_,t_]:=Module[{f=Fvector[G],n},n=Length[f];
  If[Length[G]==0,1,1+Sum[f[[k]]*t^k,{k,n}]]];
Whitney[s_]:=If[Length[EdgeList[s]]==0,Map[{#}&,VertexList[s]],
  Map[Sort,Sort[Cl[FindClique[s,Infinity,All]]]]];
U[G_,x_]:=Module[{u={}},
  Do[If[SubsetQ[G[[k]],x],u=Append[u,G[[k]]]],{k,Length[G]}];u];
Stars[G_]:=Table[U[G,G[[k]]],{k,Length[G]}];
Spheres[G_]:=Table[u=U[G,G[[k]]];Complement[Cl[u],u],{k,Length[G]}];
w[x_]:=-(-1)^Length[x];
Chi[A_]:=Total[Map[w,A]];
g[G_]:=Module[{V=Stars[G],n=Length[G]},
  Table[w[G[[k]]]*w[G[[l]]]*Chi[Intersection[V[[k]],V[[l]]]],{k,n},{l,n}]];
sg[G_]:=Module[{V=Spheres[G],n=Length[G]},
  Table[w[G[[k]]]*w[G[[l]]]*Chi[Intersection[V[[k]],V[[l]]]],{k,n},{l,n}]];
Curvature[G_,t_]:=Module[{h=Ffunction[G,y]},Integrate[h,{y,0,t}]];
Curvatures[G_,t_]:=Module[{S=Spheres[G]},
  Table[If[Length[G[[k]]]==1,Curvature[S[[k]],t],0],{k,Length[S]}]];
Levitt[G_]:=-Curvatures[G,t]/.t->(-1);
Zykov[A_,B_]:=Module[{q=Max[Flatten[A]],Q,G=A},
  Q=Table[B[[k]]+q,{k,Length[B]}];
  Do[G=Append[G,Union[A[[a]],Q[[b]]]],{a,Length[A]},{b,Length[Q]}];
  G=Union[G,Q];If[A=={},G=B];If[B=={},G=A];G];
DehnSommervilleQ[G_]:=Module[{f},Clear[t];f=Ffunction[G,t];
  Simplify[f]===Simplify[(f/.t->-1-t)]];

P. Alexandroff. Diskrete Räume. Mat. Sb., 2, 1937.
C. Allendoerfer and A. Weil. The Gauss-Bonnet theorem for Riemannian polyhedra. Transactions of the American Mathematical Society, 53:101-129, 1943.
C.B. Allendoerfer. The Euler number of a Riemann manifold. Amer. J. Math., 62:243, 1940.
T. Banchoff. Critical points and curvature for embedded polyhedra. J. Differential Geometry, 1:245-256, 1967.
T.F. Banchoff. Critical points and curvature for embedded polyhedral surfaces. Amer. Math. Monthly, 77:475-485, 1970.
M. Berger. A Panoramic View of Riemannian Geometry. Springer, 2003.
M. Berger. Jacob's Ladder of Differential Geometry. Springer Verlag, Berlin, 2009.
H-G. Bigalke. Heinrich Heesch, Kristallgeometrie, Parkettierungen, Vierfarbenforschung. Birkhäuser, 1988.
R.L. Bishop and S.I. Goldberg. Some implications on the generalized Gauss-Bonnet theorem. Transactions of the AMS, 112:508-535, 1964.
W. Blaschke. Vorlesungen über Integralgeometrie. Chelsea Publishing Company, New York, 1949.
A. Bobenko and Y. Suris. Discrete Differential Geometry, Integrable Structure, volume 98 of Graduate Studies in Mathematics. AMS, 2008.
F. Brenti and V. Welker. f-vectors of barycentric subdivisions. Math. Z., 259(4):849-865, 2008.
G. Burde and H. Zieschang. Development of the concept of a complex. In History of Topology. Elsevier, 1999.
B. Chen, S-T. Yau, and Y-N. Yeh. Graph homotopy and Graham homotopy. Discrete Math., 241(1-3):153-170, 2001. Selected papers in honor of Helge Tverberg.
S.-S. Chern. A simple intrinsic proof of the Gauss-Bonnet formula for closed Riemannian manifolds. Annals of Mathematics, 45, 1944.
S.-S. Chern. The geometry of G-structures. Bull. Amer. Math. Soc., 72:167-219, 1966.
S.-S. Chern. Historical remarks on Gauss-Bonnet. In Analysis, et cetera, pages 209-217. Academic Press, Boston, MA, 1990.
H.S.M. Coxeter. Regular Polytopes. Dover Publications, New York, 1973.
H.L. Cycon, R.G. Froese, W. Kirsch, and B. Simon. Schrödinger Operators, with Application to Quantum Mechanics and Global Geometry. Springer-Verlag, 1987.
M. Dehn and P. Heegaard. Analysis situs. Enzyklopädie d. Math. Wiss., III.1.1:153-220, 1907.
J. Dieudonné. A History of Algebraic and Differential Topology, 1900-1960. Birkhäuser, 1989.
D.L. Ferrario and R.A. Piccinini. Simplicial Structures in Topology. Springer, 2011.
D. Sommerville. The relations connecting the angle sums and volume of a polytope in space of n dimensions. Proceedings of the Royal Society, Series A, 115:103-119, 1927.
E. Eberhard. Morphologie der Polyeder. Teubner Verlag, 1891.
A.V. Evako. Dimension on discrete spaces. Internat. J. Theoret. Phys., 33(7):1553-1568, 1994.
A.V. Evako. The Jordan-Brouwer theorem for the digital normal n-space Z^n. http://arxiv.org/abs/1302.5342, 2013.
W. Fenchel. On total curvatures for Riemannian manifolds (I). J. London Math. Soc., 15:15, 1940.
R. Forman. A discrete Morse theory for cell complexes. In Geometry, topology, and physics, Conf. Proc. Lecture Notes Geom. Topology, IV, pages 112-125. Int. Press, Cambridge, MA, 1995.
R. Forman. Morse theory for cell complexes. Adv. Math., page 90, 1998.
R. Forman. Combinatorial differential topology and geometry. New Perspectives in Geometric Combinatorics, 38, 1999.
R. Forman. The Euler characteristic is the unique locally determined numerical invariant of finite simplicial complexes which assigns the same number to every cone. Discrete Comput. Geom., 23:485-488, 2000.
R. Forman. A user's guide to discrete Morse theory. Séminaire Lotharingien de Combinatoire, 48, 2002.
R. Geroch. Positive sectional curvatures does not imply positive Gauss-Bonnet integrand. Proceedings of the AMS, 54, 1976.
M. Gromov. Hyperbolic groups. In Essays in group theory, volume 8 of Math. Sci. Res. Inst. Publ., pages 75-263. Springer, 1987.
J.L. Gross and T.W. Tucker. Topological Graph Theory. John Wiley and Sons, 1987.
B. Grünbaum. Are your polyhedra the same as my polyhedra? In Discrete and computational geometry, volume 25 of Algorithms Combin., pages 461-488. Springer, Berlin, 2003.
B. Grünbaum. Convex Polytopes. Springer, 2003.
H. Hadwiger. Vorlesungen über Inhalt, Oberfläche und Isoperimetrie. Springer Verlag, Berlin, 1957.
R. Hammack, W. Imrich, and S. Klavžar. Handbook of product graphs. Discrete Mathematics and its Applications. CRC Press, Boca Raton, FL, second edition, 2011. With a foreword by Peter Winkler.
G. Hetyei. The Stirling polynomial of a simplicial complex. Discrete and Computational Geometry, 35:437-455, 2006.
Y. Higuchi. Combinatorial curvature for planar graphs. J. Graph Theory, 38:220-229, 2001.
H. Hopf. Über die Curvatura integra geschlossener Hyperflächen. Math. Ann., 95(1):340-367, 1926.
H. Hopf. Über die Curvatura integra geschlossener Hyperflächen. Mathematische Annalen, 95:340-367, 1926.
H. Hopf. Differentialgeometrie und topologische Gestalt. Jahresbericht der Deutschen Mathematiker-Vereinigung, 41:209-228, 1932.
W. Imrich and S. Klavžar. Product graphs, Structure and recognition. John Wiley and Sons, Inc., New York, 2000.
A. Ivashchenko. Contractible transformations do not change the homology groups of graphs. Discrete Math., 126(1-3):159-170, 1994.
A.V. Ivashchenko. Graphs of spheres and tori. Discrete Math., 128(1-3):247-255, 1994.
J. James. History of topology. In History of Topology, 1999.
J.H. Conway, H. Burgiel, and C. Goodman-Strauss. The Symmetries of Things. A.K. Peters, Ltd., 2008.
J. Jonsson. Simplicial Complexes of Graphs, volume 1928 of Lecture Notes in Mathematics. Springer, 2008.
D. Klain. Dehn-Sommerville relations for triangulated manifolds. http://faculty.uml.edu/dklain/ds.pdf, 2002.
D.A. Klain and G-C. Rota. Introduction to geometric probability. Lezioni Lincee. Accademia Nazionale dei Lincei, 1997.
V. Klee. The Euler characteristic in combinatorial geometry. The American Mathematical Monthly, 70(2):119-127, 1963.
V. Klee. A combinatorial analogue of Poincaré's duality theorem. Canadian J. Math., 16:517-531, 1964.
P.F. Klembeck. On Geroch's counterexample to the algebraic Hopf conjecture. Proc. of the AMS, 59, 1976.
O. Knill. A graph theoretical Gauss-Bonnet-Chern theorem. http://arxiv.org/abs/1111.5395, 2011.
O. Knill. A discrete Gauss-Bonnet type theorem. Elemente der Mathematik, 67:1-17, 2012.
O. Knill. A graph theoretical Poincaré-Hopf theorem. http://arxiv.org/abs/1201.1162, 2012.
O. Knill. An index formula for simple graphs. http://arxiv.org/abs/1205.0306, 2012.
O. Knill. On index expectation and curvature for networks. http://arxiv.org/abs/1202.4514, 2012.
O. Knill. The McKean-Singer formula in graph theory. http://arxiv.org/abs/1301.1408, 2012.
O. Knill. The theorems of Green-Stokes, Gauss-Bonnet and Poincaré-Hopf in graph theory. http://arxiv.org/abs/1201.6049, 2012.
O. Knill. The Dirac operator of a graph. http://arxiv.org/abs/1306.2166, 2013.
O. Knill. Classical mathematical structures within topological graph theory. http://arxiv.org/abs/1402.2029, 2014.
O. Knill. Curvature from graph colorings. http://arxiv.org/abs/1410.1217, 2014.
O. Knill. A notion of graph homeomorphism. http://arxiv.org/abs/1401.2819, 2014.
O. Knill. The graph spectrum of barycentric refinements. http://arxiv.org/abs/1508.02027, 2015.
O. Knill. A Sard theorem for graph theory. http://arxiv.org/abs/1508.05657, 2015.
O. Knill. Universality for Barycentric subdivision. http://arxiv.org/abs/1509.06092, 2015.
O. Knill. Gauss-Bonnet for multi-linear valuations. http://arxiv.org/abs/1601.04533, 2016.
O. Knill. On Fredholm determinants in topology. https://arxiv.org/abs/1612.08229, 2016.
O. Knill. On a Dehn-Sommerville functional for simplicial complexes. https://arxiv.org/abs/1705.10439, 2017.
O. Knill. On the arithmetic of graphs. https://arxiv.org/abs/1706.05767, 2017.
O. Knill. The amazing world of simplicial complexes. https://arxiv.org/abs/1804.08211, 2018.
O. Knill. Constant index expectation curvature for graphs or Riemannian manifolds. https://arxiv.org/abs/1912.11315, 2019.
O. Knill. The counting matrix of a simplicial complex. https://arxiv.org/abs/1907.09092, 2019.
O. Knill. Dehn-Sommerville from Gauss-Bonnet. https://arxiv.org/abs/1905.04831, 2019.
O. Knill. Energized simplicial complexes. https://arxiv.org/abs/1908.06563, 2019.
O. Knill. More on Poincaré-Hopf and Gauss-Bonnet.
O. Knill. A parametrized Poincaré-Hopf theorem and clique cardinalities of graphs.
A parametrized Poincare-Hopf theorem and clique cardinalities of graphs. https://arxiv.org/abs/1906.06611, 2019. Poincaré-Hopf for vector fields on graphs. O , O. Knill. Poincaré-Hopf for vector fields on graphs. https://arxiv.org/abs/1911.04208, 2019. Complexes, Graphs, Homotopy, Products and Shannon Capacity. O , O. Knill. Complexes, Graphs, Homotopy, Products and Shannon Capacity. https://arxiv.org/abs/2012.07247, 2020. Linear Algebra and its Applications. O , 600The energy of a simplicial complexO. Knill. The energy of a simplicial complex. Linear Algebra and its Applica- tions, 600:96-129, 2020. Green functions of energized complexes. O , O. Knill. Green functions of energized complexes. https://arxiv.org/abs/2010.09152, 2020. Integral geometric Hopf conjectures. O , O. Knill. Integral geometric Hopf conjectures. https://arxiv.org/abs/2001.01398, 2020. On index expectation curvature for manifolds. O , O. Knill. On index expectation curvature for manifolds. https://arxiv.org/abs/2001.06925, 2020. Remarks about the arithmetic of graphs. O , O. Knill. Remarks about the arithmetic of graphs. https://arxiv.org/abs/2106.10093, 2021. O , Finite topologies for finite geometries. O. Knill. Finite topologies for finite geometries. http://arxiv.org/abs/2301.03156, 2023. Proofs and Refutations. I Lakatos, Cambridge University PressI. Lakatos. Proofs and Refutations. Cambridge University Press, 1976. The Euler characteristic is the unique locally determined numerical homotopy invariant of finite complexes. N Levitt, Discrete Comput. Geom. 7N. Levitt. The Euler characteristic is the unique locally determined numerical homotopy invariant of finite complexes. Discrete Comput. Geom., 7:59-67, 1992. Pascal triangle, Stirling numbers and the unique invariance of the euler characteristic. A Luzon, M A Moron, arxiv.1202.0663A. Luzon and M.A. Moron. Pascal triangle, Stirling numbers and the unique invariance of the euler characteristic. arxiv.1202.0663, 2012. 
Finite topological spaces. Notes for REU, Chicago. J P May, J.P. May. Finite topological spaces. Notes for REU, Chicago, 2003-2008, 2008. Curvature and the eigenvalues of the Laplacian. H P Mckean, I M Singer, J. Differential Geometry. 11H.P. McKean and I.M. Singer. Curvature and the eigenvalues of the Laplacian. J. Differential Geometry, 1(1):43-69, 1967. Singular points of vector fields under general boundary conditions. M Morse, American Journal of Mathematics. 51M. Morse. Singular points of vector fields under general boundary conditions. American Journal of Mathematics, 51, 1929. A simple elementary proof of The Unimodularity Theorem of Oliver Knill. Linear Algebra and Its applications. S K Mukherjee, S Bera, S.K. Mukherjee and S. Bera. A simple elementary proof of The Unimodularity Theorem of Oliver Knill. Linear Algebra and Its applications, pages 124-127, 2018. . E S Munkholm, H J Munkholm, Poul heegaard, the Dehn-Heegaard Enzyklopädie articleE.S. Munkholm and H.J. Munkholm. Poul hee- gaard, the Dehn-Heegaard Enzyklopädie article (1907). Elements of Algebraic Topology. J R Munkres, Addison-WesleyJ.R. Munkres. Elements of Algebraic Topology. Addison-Wesley, 1984. Face numbers of manifolds with boundary. S Murai, I Novik, S. Murai and I. Novik. Face numbers of manifolds with boundary. http://arxiv.org/abs/1509.05115, 2015. Large-scale curvature of networks. O Narayan, I Saniee, Physical Review E. 84O. Narayan and I. Saniee. Large-scale curvature of networks. Physical Review E, 84, 2011. . L Nicolaescu, Lectures on the Geometry of Manifolds. World Scientific. second editionL. Nicolaescu. Lectures on the Geometry of Manifolds. World Scientific, sec- ond edition, 2009. Applications of Klee's Dehn-Sommerville relations. I Novik, E Swartz, Discrete Comput. Geom. 422I. Novik and E. Swartz. Applications of Klee's Dehn-Sommerville relations. Discrete Comput. Geom., 42(2):261-276, 2009. Sur les courbes definies par les equation differentielle III. 
H Poincaré, Journal de Mathematique pures et appliquées. H. Poincaré. Sur les courbes definies par les equation differentielle III. Journal de Mathematique pures et appliquées, pages 167-244, 1885. Positional information as symmetry of morphogenetic fields. E Presnov, V Isaeva, Forma. 5E. Presnov and V. Isaeva. Positional information as symmetry of morpho- genetic fields. Forma, 5:59-61, 1990. Local and global aspects of biological morphogenesis. E Presnov, V Isaeva, Speculations in Science and Technology. 1468E. Presnov and V. Isaeva. Local and global aspects of biological morphogen- esis. Speculations in Science and Technology, 14:68, 1991. Euler's Gem. D S Richeson, The polyhedron formula and the birth of topology. Princeton, NJPrinceton University PressD.S. Richeson. Euler's Gem. Princeton University Press, Princeton, NJ, 2008. The polyhedron formula and the birth of topology. The Laplacian on a Riemannian Manifold. S Rosenberg, Student Texts. Cambridge University Press31S. Rosenberg. The Laplacian on a Riemannian Manifold, volume 31 of London Mathematical Society, Student Texts. Cambridge University Press, 1997. Graph multiplication. G Sabidussi, Math. Z. 72G. Sabidussi. Graph multiplication. Math. Z., 72:446-457, 1959/1960. Introduction to integral geometry. L A Santalo, Hermann and Editeurs. L.A. Santalo. Introduction to integral geometry. Hermann and Editeurs, Paris, 1953. Theorie der Vielfachen Kontinuität. L Schläfli, Cornell University Library Digital CollectionsL. Schläfli. Theorie der Vielfachen Kontinuität. Cornell University Library Digital Collections, 1901. The concept of manifold, 1850-1950. E Scholz, History of Topology. Elsevier. E. Scholz. The concept of manifold, 1850-1950. In History of Topology. Else- vier, 1999. The zero error capacity of a noisy channel. C Shannon, IRE Transactions on Information Theory. 2C. Shannon. The zero error capacity of a noisy channel. IRE Transactions on Information Theory, 2:8-19, 1956. Finite topological spaces. 
R E Stong, Transactions of the American Mathematical Society. R.E. Stong. Finite topological spaces. Transactions of the American Mathe- matical Society, pages 325-340, 1965. On the polyhedral graphs with positive combinatorial curvature. T Réti, E Bitay, Z Kosztolányi, Acta Polytechnica Hungarica. 2T. Réti, E. Bitay, Z. Kosztolányi. On the polyhedral graphs with positive combinatorial curvature. Acta Polytechnica Hungarica, 2:19-37, 2005. Simplicial spaces, nuclei and m-groups. J H C Whitehead, Proc. London Math. Soc. 451J.H.C. Whitehead. Simplicial spaces, nuclei and m-groups. Proc. London Math. Soc., 45(1):243-327, 1939. Problem section. S T Yau, Seminar on Differential Geometry. Princeton University Press102S.T. Yau. Problem section. In Seminar on Differential Geometry, volume 102 of Annals of Mathematics Studies. Princeton University Press, 1982. G M Ziegler, Lectures on Polytopes. Springer VerlagG.M. Ziegler. Lectures on Polytopes. Springer Verlag, 1995. On some properties of linear complexes. (russian). A A Zykov, Mat. Sbornik N.S. 24662138Department of Mathematics, Harvard UniversityA.A. Zykov. On some properties of linear complexes. (russian). Mat. Sbornik N.S., 24(66):163-188, 1949. Department of Mathematics, Harvard University, Cambridge, MA, 02138
[]
Gradient-Based Automated Iterative Recovery for Parameter-Efficient Tuning

Maximilian Mozes (Google Research; University College London), Tolga Bolukbasi (Google Research), Ann Yuan (Google Research), Frederick Liu (Google Research), Nithum Thain (Google Research), Lucas Dixon (Google Research)
Abstract

Pretrained large language models (LLMs) are able to solve a wide variety of tasks through transfer learning. Various explainability methods have been developed to investigate their decision making process. TracIn (Pruthi et al., 2020) is one such gradient-based method which explains model inferences based on the influence of training examples. In this paper, we explore the use of TracIn to improve model performance in the parameter-efficient tuning (PET) setting. We develop conversational safety classifiers via the prompt-tuning PET method and show how the unique characteristics of the PET regime enable TracIn to identify the cause for certain misclassifications by LLMs. We develop a new methodology for using gradient-based explainability techniques to improve model performance, G-BAIR: gradient-based automated iterative recovery. We show that G-BAIR can recover LLM performance on benchmarks after manually corrupting training labels. This suggests that influence methods like TracIn can be used to automatically perform data cleaning, and introduces the potential for interactive debugging and relabeling for PET-based transfer learning methods.
doi: 10.48550/arxiv.2302.06598
arXiv: 2302.06598
Introduction

Pretrained large language models (LLMs) are Transformer-based models (Vaswani et al., 2017) with hundreds of millions, or even billions, of parameters trained on large datasets containing hundreds of billions of words (Raffel et al., 2020; Brown et al., 2020; Chowdhery et al., 2022).
LLMs have recently become ubiquitous due to their ability to solve a wide range of problems, and their capacity for transfer learning with relatively little data.

Figure 1: Illustration of our G-BAIR method used to recover prompt-tuning model performance drops incurred through data corruption. Clean model performance drops as a result of training data corruption. G-BAIR can be applied to identify and mitigate corrupted examples, thereby recovering clean test set performance better than the compared SentenceT5 baseline.

Transfer learning with LLMs is typically achieved in one of three ways: (1) in-context learning, in which the model is conditioned on task demonstrations without any parameter updates (Brown et al., 2020), (2) fine-tuning the entire model on large datasets containing thousands of examples (Peters et al., 2018; Devlin et al., 2019), and (3) parameter-efficient tuning (PET), in which only a small number of model parameters (e.g., a few thousand) are tuned (Li and Liang, 2021; Liu et al., 2022). This last approach has been shown to outperform in-context learning and achieve comparable performance to fine-tuning given only moderately sized datasets containing hundreds of examples (Agrawal et al., 2022).

An advantage of using smaller datasets and training fewer parameters is that it becomes possible to iteratively improve the resulting model, for example upon observing incorrect predictions on a test set. To do so requires interpreting the underlying cause of incorrect predictions. Various techniques have been developed for this purpose. Popular approaches include saliency methods, like integrated gradients or SHapley Additive exPlanations (SHAP; Lundberg and Lee, 2017), which identify key features that the model is using in its calculation, and training data attribution meth-
Beyond explainability, these techniques have also been applied for mitigation to improve model performance, by manipulating either highlighted features or training examples. In this paper, we demonstrate the efficacy of TracIn for parameter-efficient tuning. This recipe has a number of unique advantages. Using TracIn with whole model fine-tuning is intractable without approximation techniques, like layer selection or gradient projection (Yeh et al., 2022), due to in-practice memory constraints. By contrast with PET, we are working with both a smaller training dataset and a smaller number of training parameters. Thus when using TracIn with PET, we are able to compute the exact influence of each training example on a test prediction. We introduce the Gradient-Based Automated Iterative Recovery (G-BAIR) protocol, by which we iteratively improve a PET model through identifying examples using TracIn that are responsible for lowering model performance in a validation set ( Figure 1). We develop a corrupted data benchmark on two datasets related to offensive content and toxicity detection, PARLAI SINGLE STAN-DARD and PARLAI SINGLE ADVERSARIAL (Dinan et al., 2019), to evaluate our protocol for identifying mislabeled examples and improving model performance. Using the recently proposed T5 BASE, T5 XXL (Raffel et al., 2020), and PALM 62B (Chowdhery et al., 2022) LLMs, we show that our protocol is able to recover a significant portion of the precision that is lost by corrupted data labels for both datasets, thereby outperforming both random and semantics-based baselines. Related work Parameter-efficient tuning and data quality Recently, methods for parameter-efficient tuning of language models have been introduced that are effective with smaller datasets (i.e., 100s of examples; Liu et al., 2022;Agrawal et al., 2022). 
However, commonly used natural language processing (NLP) datasets, including benchmark sets, have been discovered to contain noise in the form of typos, spelling mistakes, and even mislabelings (Ankit Kumar, 2020; Northcutt et al., 2021). The smaller the dataset, the higher the price of mislabeled examples. Automated dataset denoising techniques have been proposed (Müller and Markert, 2019). Although one can employ multiple strategies to achieve cleaner datasets, our goal is to identify the noisy examples that actually affect the model predictions. We take a model interpretability-based approach and choose to ignore examples that appear to have no effect on the model quality. This is different from standard data cleaning approaches, where the focus is on final dataset quality independent of the model.

Influence functions and other applications of TracIn

Earlier methods for studying the influence of training examples on model parameters scale poorly with the number of model parameters and dataset size (Koh and Liang, 2017; Yeh et al., 2018). More recent methods address some of the scalability issues of the original influence functions through approximations (Schioppa et al., 2022). Basu et al. (2021) show that the original formulation is fairly accurate for shallow networks, but often noisy for deeper networks. In this paper, we focus on TracIn (Pruthi et al., 2020), where the original influence problem is reformulated to reduce down to gradient-based similarity. Søgaard et al. (2021) show that TracIn is more robust and accurate compared to older second-order methods. Beyond the traditional use case of fine-tuning smaller models, TracIn has been successfully applied to augmenting task data from the pretraining data (Han and Tsvetkov, 2022) and to fact tracing in large language models (Akyürek et al., 2022).

Dialog safety

Toxicity detection is a longstanding problem in NLP (Wulczyn et al., 2017).
Whereas earlier approaches rely on decision trees and support vector machines (Banik and Rahman, 2019), state-of-the-art classifiers use deep architectures such as transformers (Caselli et al., 2020; Zhou et al., 2021). With the rise of general-purpose chatbots (OpenAI, 2022), particular attention has been paid to the problem of toxicity detection in dialog contexts, many featuring adversarial examples deliberately crafted to trick chatbots (Miller et al., 2017; Xu et al., 2021).

Influence functions and TracIn

Given a training example z = (x, y) and a test example z_test = (x_test, y_test), influence functions estimate the change in L(z_test) (the test example loss) caused by the training example z. Earlier influence function work (Koh and Liang, 2017) computes this by perturbing the training example around the converged checkpoint and measuring the effect this has on L(z_test) through changes in the parameters. This essentially comes down to a second-order approximation of the loss:

$$I(z, z_{\text{test}}) = -\nabla_W L(z_{\text{test}}, \hat{W})^\top H_{\hat{W}}^{-1} \nabla_W L(z, \hat{W}),$$

where $H_{\hat{W}}$ is the Hessian of the loss at the final model checkpoint. In this paper, we use an approach from a more recent method that is less computationally expensive and shows promising results (Søgaard et al., 2021; Han and Tsvetkov, 2022). TracIn formulates the question of attribution as an accounting task throughout training. Every time a training example is seen, it records the change in loss for each test example and accumulates the losses throughout training. Then, it approximates the loss with a first-order Taylor series:

$$I(z, z_{\text{test}}) = -\nabla_W L(z_{\text{test}}, \hat{W})^\top \nabla_W L(z, \hat{W}).$$

The total accounting cost thus reduces down to computing gradients over a set of checkpoints. When gradient similarity is used in this form, outlier examples with abnormally large gradients may dominate the retrieval results. We use cosine similarity to alleviate this effect, following Barshan et al. (2020).
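As a minimal, illustrative sketch (not the paper's implementation), this normalized accumulation over checkpoints can be written as follows, with gradient vectors assumed to be plain Python lists:

```python
import math

def cosine(u, v):
    """Cosine similarity of two gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def tracin_scores(train_grads, test_grads):
    """Influence of each training example on one test example, accumulated
    over checkpoints using normalized (cosine) gradient similarity.

    train_grads[c][j] -- gradient of training example j at checkpoint c
    test_grads[c]     -- gradient of the test example at checkpoint c
    """
    scores = [0.0] * len(train_grads[0])
    for c, g_test in enumerate(test_grads):
        for j, g_train in enumerate(train_grads[c]):
            scores[j] += cosine(g_train, g_test)
    return scores
```

Under prompt tuning, each gradient vector has only as many entries as the prompt has trainable parameters, so scoring an entire training set this way is cheap compared to full-model gradients with millions or billions of entries per example.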
We observe that normalized retrieval tends to return examples that are more semantically related.¹

¹ We report on additional experiments computing similarities using the dot product without normalization in Section 6.3.

TracIn for soft prompts

As mentioned, measuring data influence through TracIn is achieved by computing gradient similarities between a training and a test example. For fine-tuned billion-parameter models, this involves computing and saving gradients of the size of the model (number of trainable parameters) per example. While this is intractable without approximation, we utilize parameter-efficient tuning methods, i.e., prompt-tuning (Lester et al., 2021), to reduce the computational cost. Since prompt-tuning updates only a small subset of parameters during training (i.e., thousands), our gradient representations are low-dimensional (768 for T5 BASE, 4,096 for T5 XXL, and 8,192 for PALM 62B) and we can easily measure their similarities. It is therefore possible to precisely compute the influence of thousands of training examples on a single test example efficiently by simply measuring vector similarities for the samples' prompt gradients. To test this method, we train a soft prompt to classify offensive examples in the PARLAI dataset (Dinan et al., 2019), a popular open-source dialog toxicity classification dataset comprised of conversational statements and labels indicating whether the statement would be acceptable in friendly conversation. We then evaluate our model on the test set, and use TracIn to find the closest, i.e., most influential, training set examples for misclassified validation set examples (see Table 1).

G-BAIR

Having established the advantages of measuring data influence efficiently using prompt-tuning, we here explain how this approach can be used to identify and mitigate corrupt training examples.
We propose Gradient-Based Automated Iterative Recovery (G-BAIR) for parameter-efficient tuning: a protocol for identifying and relabeling mislabeled training examples in a dataset. G-BAIR is meant to be applied iteratively to a training set over a number of n iterations.

Algorithm 1: Gradient-Based Automated Iterative Recovery (G-BAIR)

    Require: language model L, training set T_train, validation set T_val,
             number of iterations n, number of influential examples to
             consider k, number of examples to relabel τ

    T^1_train ← T_train
    for i ∈ {1, ..., n} do
        p ← train_prompt(L, T^i_train)
        V^i ← sample_validation_set(T_val)
        V^i_mis ← get_misclassified(L, p, V^i)
        T^i_inf ← get_inf(L, p, T^i_train, V^i_mis, k, τ)
        T^i_R ← relabel_examples(T^i_inf)
        T^{i+1}_train ← (T^i_train \ T^i_inf) ∪ T^i_R
    end for

The method is illustrated in Algorithm 1. Suppose we are given a language model L, a training set T_train containing a fraction of mislabeled examples, as well as a validation set T_val and a test set T_test containing only correctly labeled examples. In each iteration i, G-BAIR uses TracIn to identify influential training set examples for misclassified validation set examples. To do so, we first train a prompt p on the training set T^i_train using language model L (train_prompt). We then sample a validation subset V^i from T_val (sample_validation_set) and run inference over it, retaining only the misclassified instances from the validation set, denoted V^i_mis (get_misclassified). Using TracIn, we compute the k most influential training set examples for each example in V^i_mis, and rank the retrieved influential examples according to their frequency (get_inf). We then consider the set T^i_inf containing the τ most commonly occurring influential examples to be mislabeled, and relabel them to obtain T^i_R (relabel_examples). This set is used to modify the training set by removing T^i_inf and adding T^i_R.
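One iteration of this loop can be sketched in plain Python (a simplified, hypothetical sketch: predict and influence stand in for prompt-tuned inference and TracIn scoring, and relabeling simply flips the binary label):

```python
from collections import Counter

def g_bair_iteration(train_set, val_subset, predict, influence, k=3, tau=20):
    """One G-BAIR iteration over (x, y) pairs with binary labels.

    predict(x)          -- model's predicted label for input x
    influence(z, z_val) -- TracIn-style influence of training example z on
                           validation example z_val
    Returns the updated training set and the indices that were relabeled.
    """
    # Retain only the misclassified validation examples.
    mis = [z for z in val_subset if predict(z[0]) != z[1]]

    # Each misclassified example votes for its k most influential
    # training examples.
    votes = Counter()
    for z_val in mis:
        ranked = sorted(range(len(train_set)),
                        key=lambda j: influence(train_set[j], z_val),
                        reverse=True)
        votes.update(ranked[:k])

    # Relabel the tau most frequently retrieved training examples.
    relabel = [j for j, _ in votes.most_common(tau)]
    updated = list(train_set)
    for j in relabel:
        x, y = updated[j]
        updated[j] = (x, 1 - y)
    return updated, relabel
```

In the full protocol, the prompt is then retrained on the updated training set and the procedure repeats for n iterations.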
Afterwards, we retrain the prompt p on the modified training set T^{i+1}_train. Following this protocol over multiple iterations, we assess model performance using the prompt at each iteration on the held-out test set T_test.

Experiments

To assess G-BAIR's performance at identifying and mitigating mislabeled training data, we report on a series of experiments using manually corrupted datasets.

Models

We conduct our experiments on three pretrained language models and further prompt-tune them with the datasets described in Section 5.2. The first two are variants of T5 (Raffel et al., 2020), namely the BASE version with 220 million parameters and the XXL version with 11 billion parameters. The third is the 62 billion parameter version of PALM (Chowdhery et al., 2022). We decided to use these three models in order to test whether there may exist a correlation between model size and TracIn performance. Across experiments, we tune soft prompts consisting of 10 token vectors, using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.1 and a weight decay of 0.0001. For T5 models we use a batch size of 32, and for PALM 62B one of 4. We train all models for 20 epochs.

Datasets

We experiment with two datasets from the PARLAI (Dinan et al., 2019) data collection effort, denoted STANDARD and ADVERSARIAL. The PARLAI datasets consist of single-turn conversations annotated based on offensiveness. For the STANDARD dataset, crowdworkers were simply asked to write sentences that they would consider offensive, whereas for the ADVERSARIAL one, the workers were asked to write sentences that are offensive, but that a classifier might predict to be safe. Both datasets come with pre-defined splits of 24,000 examples for training, and 3,000 each for validation and testing. All three language models perform well on the test set portions of the two datasets when prompt-tuned with random samples of 1,000 examples from the training sets.
During sampling, we ensure that the resulting training set is class-balanced (the validation and test sets are imbalanced, with positive examples making up around 10% of the data).

Experimental details

To evaluate our method, we manually corrupt the dataset by randomly flipping 30% of the labels (we denote the corrupted training set as T^C_train). We found this level to be sufficient for causing a significant drop in model accuracy and compare the effects of choosing different levels of corruption in ablation studies (Section 6.2). Figure 2 shows the impact on performance for each model as a result of this corruption. Then we train a classifier on the corrupted dataset, and evaluate on the validation data. Using G-BAIR for n = 10 iterations, we take misclassified examples in the validation set, and identify their most influential training set examples according to TracIn. We collect the τ = 20 most frequently identified training set examples according to this method (aggregated from the k = 3 most influential training set examples for each misclassified validation set example) in each iteration, and relabel them. We furthermore select a subset of 500 examples from the entire validation set (containing 3,000 examples each for STANDARD and ADVERSARIAL) at each iteration to form V^i. Our method aims to iteratively clean up the dataset by repeating this intervention, retraining the classifier each time. For the prompts trained in each iteration, we sample 200 examples from the datasets' validation sets for checkpoint evaluation. Specifically, after each epoch during prompt-tuning, we evaluate performance on the sampled validation set, and select the checkpoint producing the lowest loss for testing.

Baselines

We compare this intervention to two baselines: (1) a Random baseline, which in each iteration relabels τ randomly selected training examples, and (2) an embedding similarity baseline, which relabels the training examples whose SentenceT5 embeddings (Ni et al., 2021) are most similar to those of the misclassified validation examples.
4 The embedding similarity baseline lets us study the effect of semantic similarity in isolation, in order to rule out the possibility that model performance on a validation set example can be predicted through its tokens alone. Evaluation metrics We analyze G-BAIR task performance in two ways. First, we are interested in the recovery performance, measured in terms of average precision (AP), achieved by the method over a range of iterations. The AP is computed as the area under the precision-recall curve. Second, we analyze the method's corrupted instances identification rate (CI 2 R). CI 2 R measures how many relevant (i.e., via the corruption process mis- CI 2 R = 1 n n i=1 |T i inf ∩ T C train | |T i inf | The CI 2 R lies between a value of 0.0 and 1.0, with the former representing a total miss (i.e., none of the identified influential examples have been corrupted) and the latter representing a total hit (i.e., all of the identified influential examples have been corrupted). Table 2: Mean (standard deviation) performance scores in terms of average precision (AP) as well as the CI 2 R for clean, corrupted, and recovered training sets across three seeds. For AP, Clean and Corrupted denote performances on the test set before and after corrupting 30% of the training data. Random and SentenceT5 show the recovered performances using the two baselines, and G-BAIR shows recovered performance using our proposed method. Best performances per metric and model-dataset combination are highlighted in bold. Results Average precision Performance results using the two baselines (Random and SentenceT5) as well as G-BAIR in terms of average precision can be found in Table 2 (area denoted with AP). We report the clean model performance (Clean), the performance after corruption (Corrupted), and the best performance achieved after iterating over and relabeling examples in the dataset using the three methods. 
We first observe that both T5 BASE and T5 XXL exhibit substantial performance hits after corruption (e.g., from 0.91 AP to 0.19 AP for T5 BASE and from 0.95 AP to 0.38 AP for T5 XXL on ADVERSARIAL). This is in contrast to PALM 62B, which shows less substantial decreases in performance after corruption. For both T5 BASE and T5 XXL, we observe that across both datasets, G-BAIR largely outperforms the two baselines in terms of AP recovery over iterations, recovering up to 35% AP (T5 BASE on ADVERSARIAL with 0.19 AP → 0.54 AP and T5 XXL on the same dataset with 0.38 AP → 0.73 AP). The SentenceT5 baseline seems to provide little additional benefit over the Random baseline (with the exception of T5 XXL on ADVERSARIAL, where we observe a difference of 0.13 AP between the two baselines), indicating that relabeling instances based on semantic similarity cannot recover the performance drop incurred through training data corruption. The performance recovery for PALM 62B is less clear compared to T5. We do observe that for both datasets, G-BAIR outperforms the baselines in terms of AP recovery, yet the differences between the baselines and G-BAIR are only 0.01 AP in absolute value for STANDARD and 0.07 AP for ADVERSARIAL. Given the relatively small drop in performance after corruption (0.98 AP → 0.87 AP for STANDARD and 0.96 AP → 0.83 AP for ADVERSARIAL), these results might not be unexpected. The larger model seems less affected by mislabeled examples, a result also observed for in-context learning (Min et al., 2022): it performs well even after 30% corruption, so mislabeled training examples seem to play a less impactful role in model decision making, and mitigating their existence is hence less impactful to the resulting model AP performance scores.

CI²R Performance results in terms of CI²R can be found in Table 2 (area denoted CI²R). Here we report the CI²R for both baselines as well as G-BAIR across models and datasets.
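The AP numbers reported above are areas under the precision-recall curve. One standard way to compute this quantity (a sketch, not the paper's evaluation code) sums the precision at the rank of each positive example:

```python
def average_precision(y_true, scores):
    """AP = mean over positives of precision@rank, with examples ranked
    by decreasing score (area under the precision-recall curve)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, ap_sum = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i]:
            hits += 1
            ap_sum += hits / rank  # precision at this rank
    return ap_sum / sum(y_true)
```

A perfect ranking of positives over negatives gives AP = 1.0; each positive pushed below a negative lowers the score.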
In line with the AP recovery results, we observe that G-BAIR largely outperforms both baselines in terms of CI²R. Both the Random and SentenceT5 baselines exhibit scores of around 0.2 consistently across experiments. For the former this is expected: the Random baseline relabels 20 training examples in each iteration, of which 6 (i.e., 30% of 20) are on average mislabeled. After n = 10 iterations the baseline has then relabeled 10 · 6 = 60 mislabeled examples, which makes up 20% of the 300 corrupted training examples. It is interesting to see that the SentenceT5 baseline does not provide any additional benefit in terms of CI²R over the Random one. G-BAIR, however, exhibits CI²R scores far above the random draw, with scores reaching up to 0.52 (T5 XXL and STANDARD). This demonstrates that G-BAIR is able to use TracIn effectively to identify corrupted training examples, and that gradients encode extra information that is not present in embeddings.

Ablation studies We conduct a series of additional analyses to better understand the impact of validation set size, corruption rate, similarity measure, and intervention method for influential examples when using G-BAIR.

Different validation set sizes We first investigate the impact of the validation set size on the recovery rate of G-BAIR. To do this, we experiment with validation set sizes of 300, 1,000, 2,000, and 3,000 in addition to the 500 used above. Experiments are conducted with T5 BASE on STANDARD. Results can be seen in Figure 4. We observe that performance recovery does not seem to differ dramatically between validation set sizes. This is somewhat unexpected, since one could argue that a larger validation set leads to a larger absolute number of misclassified validation set instances (for a fixed model performance), which in turn creates a larger pool of influential training examples that may better represent the corrupted training set. However, the experimental results hint at a different picture.
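The back-of-the-envelope expectation for the Random baseline quoted above (6 corrupted hits per iteration, 60 after ten iterations) can be checked directly; this sanity-check sketch mirrors the paper's arithmetic and, like it, ignores that previously fixed examples deplete the corrupted pool across iterations. The training set size of 1,000 is inferred from "300 corrupted examples at 30% corruption".

```python
tau = 20             # examples relabeled per iteration
corruption = 0.30    # fraction of flipped labels
n_iters = 10
n_corrupted = 300    # 30% of the (assumed) 1,000 training examples

hits_per_iter = tau * corruption          # expected corrupted hits per iteration
total_hits = hits_per_iter * n_iters      # expected hits after all iterations
coverage = total_hits / n_corrupted       # fraction of the corrupted set relabeled
assert hits_per_iter == 6.0
assert abs(coverage - 0.20) < 1e-12       # matches the ~0.2 scores of the baselines
```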
It seems that even with a validation set of 300 examples, G-BAIR is capable of identifying a reasonable set of corrupted examples, which, when removed from the training set, leads to notable performance recovery on the test set. This finding suggests that G-BAIR may be useful even without a large validation set. (For many datasets, including PARLAI, a proportion of the validation data can also be noisy; our results use the validation data as is and therefore serve as a lower bound to the performance.)

Figure 5: Comparison of G-BAIR recovery performance for different corruption rates (10%, 20%, 30%, 40%) when run on STANDARD with T5 BASE. We show average results across three seeds.

Corruption rate We furthermore experiment with T5 BASE on STANDARD using different corruption rates, i.e., 10%, 20%, and 40% in addition to the results with 30% shown above. The results can be found in Figure 5. It can be seen that the larger the corruption rate, the larger the initial drop in performance on the test sets. However, across corruption rates, we observe that G-BAIR is able to successfully recover performance, indicating that the method is able to identify mislabeled data and mitigate their harms even in the presence of a smaller number (i.e., 10%) of corrupted examples.

Similarity measure We also study whether using the dot product, i.e., the unnormalized cosine similarity, as an alternative measure of similarity has an impact on the recovery performance. As mentioned in Section 3, using unnormalized measures of similarity between gradient representations may lead to the retrieval of outlier examples with large gradient magnitudes, and could potentially hinder the effects obtained from relabeling influential examples. In line with previous experiments, we report results using T5 BASE on STANDARD. The results in Figure 6 show that, in practice, the choice of similarity measure seems to make little difference with respect to G-BAIR recovery performance.
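The difference between the two similarity measures discussed above can be seen on a toy pair of gradient vectors (our own illustration, not the paper's data): with the raw dot product, a large-magnitude outlier gradient wins the retrieval even when another training example is perfectly aligned with the validation gradient, while cosine similarity removes the magnitude effect.

```python
import numpy as np

def dot_scores(train_grads, val_grad):
    """Unnormalized similarity: magnitudes contribute to the score."""
    return train_grads @ val_grad

def cosine_scores(train_grads, val_grad):
    """Normalized similarity: only the gradient directions matter."""
    norms = np.linalg.norm(train_grads, axis=1) * np.linalg.norm(val_grad)
    return (train_grads @ val_grad) / norms

val_grad = np.array([1.0, 0.0])
train_grads = np.array([
    [0.5, 0.0],    # small gradient, perfectly aligned with val_grad
    [10.0, 10.0],  # large-magnitude outlier, only partially aligned
])
assert np.argmax(dot_scores(train_grads, val_grad)) == 1     # outlier wins
assert np.argmax(cosine_scores(train_grads, val_grad)) == 0  # aligned example wins
```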
(G-BAIR also helps in settings where validation data can be very noisy, since cleaning a small set of examples is easier than cleaning the whole training data.) We observe that both measures yield similar recovery results. The standard deviations obtained using the dot product tend to be slightly larger than those obtained with the cosine similarity. This could be explained by the aforementioned argument that unnormalized measures of similarity might retrieve a smaller, more concentrated set of influential examples with large gradient magnitudes, which might result in worse generalization after the relabeling process.

Figure 7: Comparison of G-BAIR recovery performance with two different recovery intervention methods (relabeling and removing) when run on STANDARD with T5 BASE. We show average results across three seeds.

Relabeling or removing instances Finally, we repeat an experiment with T5 BASE on STANDARD in which, instead of relabeling influential training examples, we remove them from the dataset. Removing examples instead of relabeling them has the advantage that it generalizes to non-binary tasks where easy automated relabeling is not possible. Unlike relabeling, however, removal shrinks the model's training set and might lead to scenarios in which too few training examples remain to fit a model via prompt-tuning. Figure 7 shows that, although relabeling tends to work better for G-BAIR, removal performs reasonably well, and we do not observe significant drops in model performance due to the smaller training set. The Random Remove baseline yields fairly constant AP scores across iterations, even though 200 training examples (i.e., 20% of the training set) will have been removed after 10 iterations.

Discussion In this paper we introduced G-BAIR, a protocol for iteratively improving the performance of an LLM trained under the PET regime.
We showed that gradient-based measures of influence on misclassified validation set examples can identify corruptions in the training set. Finally, we presented effective mitigation strategies that enable LLMs to recover from such corruption at a range of different rates (from 10% to 40% corruption). We observed that model size, and accordingly an increased test set performance on clean data, seems to play a role in the effectiveness of recovery. PALM 62B, shown to be robust against a corruption rate of 30% on the training data (Figure 2), exhibited a less clear recovery of AP performance through G-BAIR. Nevertheless, considering performance in terms of CI²R, it is clear that TracIn-based retrieval of influential examples yields far more corrupted examples than embedding similarity-based and random baselines. We also discovered that model performance can be consistently recovered through G-BAIR across validation set sizes (Section 6.1), showing that a few hundred, rather than thousands of, validation examples suffice to identify and mitigate corrupted examples in training sets. A core limiting assumption for our method is that one has access to a golden, correctly labeled validation set. This is of course not always the case, but more fundamentally we presume that golden labels are obtainable for one's task. As LLMs are tasked with increasingly difficult problems, especially ones requiring judgment, the notion of ground truth starts to become elusive (Gordon et al., 2021). When inspecting training examples from our test domain of conversational safety, we observed that reasonable individuals may have genuine disagreements over the acceptability of an utterance. We believe a fruitful area of future work is bringing humans into the iteration loop to see whether more sophisticated interventions, beyond simply removing or relabeling examples, could further improve performance.
Flipped labels are only one (straightforward) example of a data quality issue which lends itself to automated mitigation. In the case of a legitimately ambiguous example, human intervention may be the only recourse. For example, TracIn may identify confusing examples that could be manually edited to provide more signal to the classifier. We envision methods like G-BAIR as tools to ultimately empower humans to more quickly diagnose data quality issues. As methods like parameter-efficient tuning enable us to move toward faster training loops using smaller datasets, data quality becomes even more important, and so do methods for dataset iteration.

A Illustrations of performance recovery

Additional illustrations analogous to the ones in Figure 3 can be found in Figure 8 (T5 BASE on STANDARD), Figure 9 (T5 XXL on STANDARD), Figure 10 (T5 XXL on ADVERSARIAL), Figure 11 (PALM 62B on STANDARD), and Figure 12 (PALM 62B on ADVERSARIAL).

B Baseline results with USE

To assess potential effects from using SentenceT5 as our semantic encoder for the baseline comparison, we conducted additional experiments using universal sentence encoders (USE; Cer et al., 2018) instead of SentenceT5. Performance recovery scores for these experiments can be found in Table 3. In line with the results in the main paper (Table 2), we see little to no improvement when using USE as vector representations for measuring data influence. For AP, USE performs similarly to Random (with the exception of T5 XXL on ADVERSARIAL) and is hence largely outperformed by G-BAIR. A similar picture emerges for CI²R, where the scores for USE are close to the Random baseline across experiments, indicating that using universal sentence encodings does not aid in identifying corrupted examples more than randomly choosing examples from the training set.

Table 3: Mean (standard deviation) performance scores in terms of average precision (AP) as well as the CI²R for clean, corrupted, and recovered training sets across three seeds. For AP, Clean and Corrupted denote performances on the test set before and after corrupting 30% of the training data. Random and USE show the recovered performances using the two baselines, and G-BAIR shows recovered performance using our proposed method.

Researchers have explored three approaches for transfer learning: (1) in-context few-shot learning, which requires only a handful of examples (Radford et al.; Brown et al., 2020; Schick and Schütze, 2021), … (*Work done during an internship at Google Research.) … shows a sample of such pairs of misclassified validation set examples and their most influential training set examples. According to these results, the misclassifications may not indicate a failure by the model to learn the task, but rather the existence of questionably labeled examples in the training set.
Figure 2: Illustration of prompt-tuning model performances (in terms of average precision) before (Clean) and after corrupting 30% of the training data (Corrupted) for both STANDARD (a) and ADVERSARIAL (b).

Figure 3: Illustration of model performance recovery for T5 BASE on ADVERSARIAL in terms of AP (a) and the fraction of identified corrupted examples per iteration (b). Results are averaged across three independent runs with the standard deviations shown.

Figure 3 illustrates the recovery performance for G-BAIR and the two baselines with respect to the AP (a) and the fraction of identified corrupted training examples per iteration (b). For (a), iteration 0 denotes model test set performance when trained on the clean training set and iteration 1 when trained on the corrupted training set. Iterations 2-10 then show the performance recovery for each method. As we can see, G-BAIR shows clear improvements with respect to both evaluation settings. For (b), we additionally observe that in the first iteration, close to 100% of the influential examples identified by G-BAIR were indeed corrupted. The fraction of identified corrupted examples gradually decreases with an increasing number of iterations, indicating that an increasing test set performance yields a decrease in the retrieval of corrupted influential examples. Additional figures illustrating the remaining experiments can be found in Appendix A.
Figure 4: Comparison of G-BAIR recovery performance for different validation set sizes (300, 500, 1,000, 2,000, 3,000) when run on STANDARD with T5 BASE. We show average results across three seeds.

Figure 6: Comparison of G-BAIR recovery performance with two different similarity measures (cosine similarity and dot product) when run on STANDARD with T5 BASE. We show average results across three seeds with their respective standard deviations.

Figures 8-12: Illustration of model performance recovery in terms of AP (a) and the fraction of identified corrupted examples per iteration (b), for T5 BASE on STANDARD (Figure 8), T5 XXL on STANDARD (Figure 9), T5 XXL on ADVERSARIAL (Figure 10), PALM 62B on STANDARD (Figure 11), and PALM 62B on ADVERSARIAL (Figure 12). Results are averaged across three independent runs with the standard deviations shown.
Footnotes: Since we only consider binary datasets in our experiments, relabeling is achieved by swapping the label. The PALM 62B model is very large, so memory constraints limit the batch size we were able to use during prompt-tuning. We conducted additional experiments using the universal sentence encoder (Cer et al., 2018), but did not notice any substantial performance differences; results for these experiments can be found in Appendix B.

References

Priyanka Agrawal, Chris Alberti, Fantine Huot, Joshua Maynez, Ji Ma, Sebastian Ruder, Kuzman Ganchev, Dipanjan Das, and Mirella Lapata. 2022. QAmeleon: Multilingual QA with only 5 examples. arXiv preprint arXiv:2211.08264.

Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, and Kelvin Guu. 2022. Towards tracing factual knowledge in language models back to the training data.

Ankit Kumar, Piyush Makhija, and Anuj Gupta. 2020. Noisy text data: Achilles' heel of BERT. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), pages 16-21.

Nayan Banik and Hasan Rahman. 2019. Toxicity detection on Bengali social media comments using supervised models. In International Conference on Innovation in Engineering and Technology.

Elnaz Barshan, Marc-Etienne Brunet, and Gintare Karolina Dziugaite. 2020. RelatIF: Identifying explanatory training samples via relative influence. In International Conference on Artificial Intelligence and Statistics, pages 1899-1909. PMLR.

Samyadeep Basu, Phil Pope, and Soheil Feizi. 2021. Influence functions in deep learning are fragile. In International Conference on Learning Representations.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4537-4546, Hong Kong, China. Association for Computational Linguistics.

Mitchell L. Gordon, Kaitlyn Zhou, Kayur Patel, Tatsunori Hashimoto, and Michael S. Bernstein. 2021. The disagreement deconvolution: Bringing machine learning performance metrics in line with reality. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-14.

Xiaochuang Han and Yulia Tsvetkov. 2022. ORCA: Interpreting prompted language models via locating supporting data evidence in the ocean of pretraining data. arXiv preprint arXiv:2205.12600.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In International Conference on Machine Learning, pages 1885-1894. PMLR.

Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597.

Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel. 2022. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. arXiv preprint arXiv:2205.05638.

Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research software platform. arXiv preprint arXiv:1705.06476.

Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? In EMNLP.

Nicolas M. Müller and Karla Markert. 2019. Identifying mislabeled instances in classification datasets. In International Joint Conference on Neural Networks (IJCNN).

Jianmo Ni, Gustavo Hernández Ábrego, Noah Constant, Ji Ma, Keith B. Hall, Daniel Cer, and Yinfei Yang. 2021. Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models. arXiv preprint arXiv:2108.08877.

Curtis G. Northcutt, Anish Athalye, and Jonas Mueller. 2021. Pervasive label errors in test sets destabilize machine learning benchmarks. Neural Information Processing Systems, Track on Datasets and Benchmarks, 35.

OpenAI. 2022. ChatGPT: Optimizing language models for dialogue.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.

Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. 2020. Estimating training data influence by tracing gradient descent. Advances in Neural Information Processing Systems, 33.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.

Timo Schick and Hinrich Schütze. 2021. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339-2352.

Andrea Schioppa, Polina Zablotskaia, David Vilar, and Artem Sokolov. 2022. Scaling up influence functions. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8):8179-8186.

Anders Søgaard et al. 2021. Revisiting methods for finding influential examples. arXiv preprint arXiv:2111.04683.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.

Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Proceedings of the 26th International Conference on World Wide Web, pages 1391-1399.

Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. 2021. Bot-adversarial dialogue for safe conversational agents. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2950-2968.

Chih-Kuan Yeh, Joon Sik Kim, Ian En-Hsu Yen, and Pradeep Ravikumar. 2018. Representer point selection for explaining deep neural networks. In Proc. NeurIPS.

Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, and Pradeep Ravikumar. 2022. First is better than last for training data influence. arXiv preprint arXiv:2202.11844.

Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Yejin Choi, and Noah A. Smith. 2021. Challenges in automated debiasing for toxic language detection. In EACL.
Linear Precoding for the MIMO Multiple Access Channel with Finite Alphabet Inputs and Statistical CSI

Yongpeng Wu, Chao-Kai Wen, Xiqi Gao, Chengshan Xiao, and Robert Schober

11 Oct 2014 (arXiv:1410.2963). This paper was presented in part at IEEE ICC 2014.

Abstract—In this paper, we investigate the design of linear precoders for the multiple-input multiple-output (MIMO) multiple access channel (MAC). We assume that statistical channel state information (CSI) is available at the transmitters and consider the problem under the practical finite alphabet input assumption. First, we derive an asymptotic (in the large system limit) expression for the weighted sum rate (WSR) of the MIMO MAC with finite alphabet inputs and Weichselberger's MIMO channel model. Subsequently, we obtain the optimal structures of the linear precoders of the users maximizing the asymptotic WSR and an iterative algorithm for determining the precoders. We show that the complexity of the proposed precoder design is significantly lower than that of MIMO MAC precoders designed for finite alphabet inputs and instantaneous CSI. Simulation results for finite alphabet signalling indicate that the proposed precoder achieves significant performance gains over existing precoder designs.

Index Terms—Finite alphabet, linear precoding, MIMO MAC, statistical CSI.

I. INTRODUCTION

In recent years, the channel capacity and the design of optimum transmission strategies for the multiple-input multiple-output (MIMO) multiple access channel (MAC) have been widely studied [1]-[3]. For instance, it was proved in [1] that the boundary of the MIMO MAC capacity region is achieved by Gaussian input signals.
It was further demonstrated in [2] that, for the MIMO MAC, the optimization of the transmit signal covariance matrices for weighted sum rate (WSR) maximization leads to a convex optimization problem. For sum rate maximization, the authors of [3] developed an efficient iterative water-filling algorithm for finding the optimal input signal covariance matrices for all users. However, the results in [1][2][3] rely on the critical assumption of Gaussian input signals. Although Gaussian inputs are optimal in theory, they are rarely used in practice. Rather, it is well-known that practical communication signals usually are drawn from finite constellation sets, such as pulse amplitude modulation (PAM), phase shift keying (PSK) modulation, and quadrature amplitude modulation (QAM). These finite constellation sets differ significantly from the Gaussian idealization [4][5][6][7][8][9]. Accordingly, transmission schemes designed based on the Gaussian input assumption may result in substantial performance losses when finite alphabet inputs are used for transmission. In [5], the globally optimal linear precoder design for point-to-point communication systems with finite alphabet inputs was obtained, building upon earlier works [10][11][12][13]. For the case of the two-user single-input single-output MAC with finite alphabet inputs, the optimal angle of rotation and the optimal power division between the transmit signals were found in [14] and [15], respectively. For the MIMO MAC with an arbitrary number of users and generic antenna configurations, an iterative algorithm for searching for the optimal precoding matrices of all users was proposed in [6]. The transmission schemes in [5][6][7][8][9][10][11][12][13][14][15] require accurate instantaneous channel state information (CSI) at the transmitters for precoder design. 
If the channels vary relatively slowly, then in frequency division duplex systems the instantaneous CSI can be estimated accurately at the receiver via uplink training and then sent to the transmitters through dedicated feedback links, and in time division duplex systems the instantaneous CSI can be obtained by exploiting the reciprocity of uplink and downlink. Nevertheless, when the mobility of the users increases and channel fluctuations vary more rapidly, the round-trip delays of the CSI become non-negligible with respect to the coherence time of the channels. In this case, the obtained instantaneous CSI at the transmitters might be outdated. Therefore, for these scenarios, it is more reasonable to exploit the channel statistics at the transmitter for precoder design, as the statistics change much more slowly than the instantaneous channel parameters. Transmitter design for statistical CSI has received much attention for the case of Gaussian input signals [16][17][18][19][20][21][22]. For finite alphabet inputs in point-to-point systems, an efficient precoding algorithm for maximization of the ergodic capacity over Kronecker fading channels was developed in [23]. Also, in [24], asymptotic (in the large system limit) expressions for the mutual information of the MIMO MAC with Kronecker fading were derived. Recently, an iterative algorithm for precoder optimization for sum rate maximization of the MIMO MAC with Kronecker fading was proposed in [25]. Despite these previous works, the study of the MIMO MAC with statistical CSI at the transmitter and finite alphabet inputs remains incomplete, for three reasons: First, the Kronecker fading model characterizes the correlations of the transmit and the receive antennas separately, which is often not in agreement with measurements [26,27].
In contrast, jointly-correlated fading models, such as Weichselberger's model [27], account not only for the correlations at both ends of the link, but also characterize their mutual dependence. As a consequence, Weichselberger's model provides a more general representation of MIMO channels. Second, explicit structures of the optimal precoders for the MIMO MAC with statistical CSI and finite alphabet inputs have not been reported yet. Third, in contrast to the sum rate, the weighted sum rate (WSR) enables service differentiation in practical communication systems [28]. Thus, it is of interest to study the WSR optimization problem. In this paper, we investigate the linear precoder design for the K-user MIMO MAC assuming Weichselberger's fading model, finite alphabet inputs, and availability of statistical CSI at the transmitter. By exploiting a random matrix theory tool from statistical physics, referred to as the replica method (Footnote 1), we first derive an asymptotic expression for the WSR of the MIMO MAC for Weichselberger's fading model in the large system regime where the numbers of transmit and receive antennas are both large. The derived expression indicates that the WSR can be obtained asymptotically by calculating the mutual information of each user separately over equivalent deterministic channels. This property significantly reduces the computational effort for calculation of the WSR. Furthermore, we prove that the optimal left singular matrix of each user's optimal precoder corresponds to the eigenmatrix of the transmit correlation matrix of the user. This result facilitates the derivation of an iterative algorithm (Footnote 2) for computing the optimal precoder for each user. The proposed algorithm updates the power allocation matrix and the right singular matrix of each user in an alternating manner along the gradient descent direction.
We show that the proposed algorithm not only provides a systematic precoder design method for the MIMO MAC with statistical CSI at the transmitter, but also reduces the implementation complexity by several orders of magnitude compared to the precoder design for instantaneous CSI. Moreover, precoders designed for statistical CSI can be updated much less frequently than precoders designed for instantaneous CSI, as the channel statistics change very slowly compared to the instantaneous CSI. (Footnote 1: We note that the replica method has been applied to communications problems before [24,[29][30][31]. Footnote 2: It is noted that although we derive the asymptotic WSR in the large system regime, the proposed algorithm can also be applied for systems with a finite number of antennas.) Numerical results demonstrate that the proposed design provides substantial performance gains over systems without precoding and systems employing precoders designed under the Gaussian input assumption, for both systems with a moderate number of antennas and massive MIMO systems [32].

The remainder of this paper is organized as follows. Section II describes the considered MIMO MAC model. In Section III, we derive the asymptotic mutual information expression for the MIMO MAC with statistical CSI at the transmitter and finite alphabet inputs. In Section IV, we obtain a closed-form expression for the left singular matrix of each user's precoder maximizing the asymptotic WSR and propose an iterative algorithm for determining the precoders of all users. Numerical results are provided in Section V and our main results are summarized in Section VI.

The following notation is adopted throughout the paper: Column vectors are represented by lower-case boldface letters, and matrices are represented by upper-case boldface letters. Superscripts (·)^T, (·)^*, and (·)^H stand for the matrix/vector transpose, conjugate, and conjugate-transpose operations, respectively.
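The products and stacking operators introduced in this notation can be checked numerically. A minimal sketch in Python/NumPy (values chosen purely for illustration, not from the paper) contrasting the element-wise (Hadamard) product, the Kronecker product, and column-wise vec(·):

```python
import numpy as np

# Illustrative check of the notation: Hadamard product "⊙", Kronecker
# product, and vec(A) stacking the columns of A.
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

hadamard = A * B              # element-wise product, same shape as A
kron = np.kron(A, B)          # Kronecker product, shape (4, 4)
vec_A = A.flatten(order="F")  # vec(A): stack the columns of A

print(hadamard)   # [[ 5 12] [21 32]]
print(kron.shape) # (4, 4)
print(vec_A)      # [1 3 2 4]
```

Note that `order="F"` (column-major) is what makes `flatten` match the column-stacking convention of vec(·).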
det(·) and tr(·) denote the matrix determinant and trace operations, respectively. diag{b} and blockdiag{A_k}_{k=1}^{K} denote a diagonal matrix and a block diagonal matrix containing on the main diagonal and on the block diagonal the elements of vector b and the matrices A_k, k = 1, 2, ..., K, respectively. diag{B} denotes a diagonal matrix containing on the main diagonal the diagonal elements of matrix B. ⊙ and ⊗ denote the element-wise (Hadamard) product and the Kronecker product of two matrices, respectively. vec(A) is a column vector which contains the stacked columns of matrix A. [A]_{mn} denotes the element in the mth row and nth column of matrix A. ‖X‖_F denotes the Frobenius norm of matrix X. I_M denotes an M × M identity matrix, and E_V[·] represents the expectation with respect to the random variable V, which can be a scalar, vector, or matrix. Finally, DA denotes the integral measure for the real and imaginary parts of the elements of A. That is, for an n × m matrix A, we have

$$ DA = \prod_{i=1}^{n}\prod_{j=1}^{m} \frac{d\,\mathrm{Re}[A]_{ij}\; d\,\mathrm{Im}[A]_{ij}}{\pi}, $$

where Re and Im extract the real and imaginary parts, respectively.

II. SYSTEM MODEL

Consider a single-cell MIMO MAC system with K independent users. We suppose each of the K users has N_t transmit antennas (Footnote 3) and the receiver has N_r antennas. Then, the received signal y ∈ C^{N_r×1} is given by

$$ y = \sum_{k=1}^{K} H_k x_k + v, \qquad (1) $$

where x_k ∈ C^{N_t×1} and H_k ∈ C^{N_r×N_t} denote the transmitted signal and the channel matrix of user k, respectively, and v ∈ C^{N_r×1} is a zero-mean complex Gaussian noise vector with covariance matrix (Footnote 4) I_{N_r}. Furthermore, we make the common assumption (as, e.g., [19,34]) that the receiver has the instantaneous CSI of all users, and each transmitter has the statistical CSI of all users. The transmitted signal vector x_k can be expressed as

$$ x_k = B_k d_k, \qquad (2) $$

where B_k ∈ C^{N_t×N_t} and d_k ∈ C^{N_t×1} denote the linear precoding matrix and the input data vector of user k, respectively. Furthermore, we assume d_k is a zero-mean vector with covariance matrix I_{N_t}.
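The signal model (1)-(2) is straightforward to simulate. A minimal sketch (dimensions, identity precoders, and unit-energy QPSK chosen here for illustration only):

```python
import numpy as np

# Toy instance of the MIMO MAC model: y = sum_k H_k x_k + v with x_k = B_k d_k.
rng = np.random.default_rng(0)
K, Nt, Nr = 2, 2, 2  # assumed small example sizes

# Random complex channels, identity precoders (i.e., "no precoding").
H = [rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt)) for _ in range(K)]
B = [np.eye(Nt, dtype=complex) for _ in range(K)]

# i.i.d. unit-energy QPSK data vectors and CN(0, I) noise.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
d = [rng.choice(qpsk, size=Nt) for _ in range(K)]
v = (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)) / np.sqrt(2)

y = sum(Hk @ (Bk @ dk) for Hk, Bk, dk in zip(H, B, d)) + v
print(y.shape)  # (2,)
```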
Instead of employing the traditional assumption of a Gaussian transmit signal, here we assume d_k is taken from a discrete constellation, where all elements of the constellation are equally likely. In addition, the transmit signal x_k conforms to the power constraint

$$ E_{x_k}\!\left[x_k^H x_k\right] = \mathrm{tr}\!\left(B_k B_k^H\right) \le P_k, \quad k = 1, 2, \cdots, K. \qquad (3) $$

For the jointly-correlated fading MIMO channel, we adopt Weichselberger's model [27] throughout this paper, which is also referred to as the unitary-independent-unitary model [33]. This model jointly characterizes the correlation at the transmitter and receiver side. In particular, for user k, H_k is modeled as

$$ H_k = U_{R_k}\!\left(\tilde{G}_k \odot W_k\right) U_{T_k}^H, \qquad (4) $$

where U_{R_k} = [u_{R_k,1}, u_{R_k,2}, ..., u_{R_k,N_r}] ∈ C^{N_r×N_r} and U_{T_k} = [u_{T_k,1}, u_{T_k,2}, ..., u_{T_k,N_t}] ∈ C^{N_t×N_t} are deterministic unitary matrices, \tilde{G}_k ∈ C^{N_r×N_t} is a deterministic matrix with real-valued nonnegative elements, and W_k ∈ C^{N_r×N_t} is a random matrix with independent identically distributed (i.i.d.) Gaussian elements with zero mean and unit variance. We define G_k = \tilde{G}_k ⊙ \tilde{G}_k and let g_{k,n,m} denote the (n, m)th element of matrix G_k. Here, G_k is referred to as the "coupling matrix" as g_{k,n,m} corresponds to the average coupling energy between u_{R_k,n} and u_{T_k,m} [27]. The transmit and receive correlation matrices of user k can be written as

$$ R_{t,k} = E_{H_k}\!\left[H_k^H H_k\right] = U_{T_k} \Gamma_{T_k} U_{T_k}^H, \qquad R_{r,k} = E_{H_k}\!\left[H_k H_k^H\right] = U_{R_k} \Gamma_{R_k} U_{R_k}^H, \qquad (5) $$

where Γ_{T_k} and Γ_{R_k} are diagonal matrices with main diagonal elements [Γ_{T_k}]_{mm} = Σ_{n=1}^{N_r} g_{k,n,m}, m = 1, 2, ..., N_t, and [Γ_{R_k}]_{nn} = Σ_{m=1}^{N_t} g_{k,n,m}, n = 1, 2, ..., N_r, respectively. We note that (4) is a general model which includes many popular statistical fading models as special cases. For example, if G_k is a rank-one matrix, then (4) reduces to the separately-correlated Kronecker model [35,36].
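The relation between the coupling matrix in (4) and the transmit correlation matrix in (5) can be verified empirically. A sketch with a small hypothetical configuration (random unitary factors and coupling energies, chosen only for illustration), checking that the sample mean of H^H H approaches U_T diag(column sums of G) U_T^H:

```python
import numpy as np

# Draw H = U_R (G_tilde ⊙ W) U_T^H as in Eq. (4) and check Eq. (5) empirically.
rng = np.random.default_rng(1)
Nr, Nt = 4, 3  # assumed example sizes

U_R, _ = np.linalg.qr(rng.standard_normal((Nr, Nr)) + 1j * rng.standard_normal((Nr, Nr)))
U_T, _ = np.linalg.qr(rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt)))
G = rng.uniform(0.5, 1.5, size=(Nr, Nt))  # coupling energies g_{n,m}
G_tilde = np.sqrt(G)                       # amplitudes, since G = G_tilde ⊙ G_tilde

trials = 8000
acc = np.zeros((Nt, Nt), dtype=complex)
for _ in range(trials):
    W = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
    Hk = U_R @ (G_tilde * W) @ U_T.conj().T
    acc += Hk.conj().T @ Hk

R_t_emp = acc / trials
R_t_theory = U_T @ np.diag(G.sum(axis=0)) @ U_T.conj().T  # Eq. (5), [Γ_T]_mm = Σ_n g_{n,m}
print(np.max(np.abs(R_t_emp - R_t_theory)))  # small Monte Carlo error
```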
On the other hand, if U_{R_k} and U_{T_k} are discrete Fourier transform matrices, (4) corresponds to the virtual channel representation for uniform linear arrays [37]. We emphasize that Weichselberger's model avoids the separability assumption of the Kronecker model and can account for arbitrary coupling between the transmitter and receiver ends. Therefore, Weichselberger's model improves the capability to correctly model actual MIMO channels. For example, [27, Fig. 6] shows that Weichselberger's model can provide significantly more accurate estimates of the mutual information for actual MIMO channels than the Kronecker model. This was the motivation for using Weichselberger's model in previous work, e.g. [17,19], and is also the main reason for using it in this paper. Hence, with this flexible model, we can obtain a more accurate theoretical analysis and more realistic performance results for practical communication systems compared to the simple Kronecker model.

III. ASYMPTOTIC WSR OF THE MIMO MAC WITH FINITE ALPHABET INPUTS

We divide all users into two groups, denoted as set A and its complement set A^c: A = {i_1, i_2, ..., i_{K_1}} ⊆ {1, 2, ..., K} and A^c = {j_1, j_2, ..., j_{K_2}}, with K_1 + K_2 = K. Also, we define H_A = [H_{i_1} H_{i_2} ... H_{i_{K_1}}], d_A = [d_{i_1}^T d_{i_2}^T ... d_{i_{K_1}}^T]^T, d_{A^c} = [d_{j_1}^T d_{j_2}^T ... d_{j_{K_2}}^T]^T, B_A = blockdiag{B_{i_1}, B_{i_2}, ..., B_{i_{K_1}}}, and y_A = H_A B_A d_A + v. Then, the achievable rate region (R_1, R_2, ..., R_K) of the K-user MIMO MAC satisfies the following conditions [38]:

$$ \sum_{i \in A} R_i \le I\!\left(d_A; y \,\middle|\, d_{A^c}\right), \quad \forall A \subseteq \{1, 2, \cdots, K\}, \qquad (6) $$

where

$$ I\!\left(d_A; y \,\middle|\, d_{A^c}\right) = E_{H_A}\!\left[ E_{d_A, y_A}\!\left[ \log_2 \frac{p\left(y_A \,\middle|\, d_A, H_A\right)}{p\left(y_A \,\middle|\, H_A\right)} \,\middle|\, H_A \right] \right]. \qquad (7) $$

In (7), p(y_A | H_A) denotes the marginal probability density function (p.d.f.) of p(d_A, y_A | H_A). As a result, we have

$$ I\!\left(d_A; y \,\middle|\, d_{A^c}\right) = -E_{H_A}\!\left[ E_{y_A}\!\left[ \log_2 E_{d_A}\!\left[ e^{-\left\|y_A - H_A B_A d_A\right\|^2} \right] \,\middle|\, H_A \right] \right] - N_r \log_2 e. \qquad (8) $$

The expectation in (8) can be evaluated numerically by Monte-Carlo simulation.
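To illustrate such a Monte Carlo evaluation, here is a toy sketch for a scalar channel with BPSK inputs (the channel gain and sample count are assumptions, not values from the paper). It uses the standard finite-alphabet mutual information formula in the spirit of Eq. (8)/(16); at high SNR the estimate approaches log2(2) = 1 bit:

```python
import numpy as np

# Monte Carlo estimate of I(d; y) for y = h*d + v, d ∈ {+1, -1} (BPSK),
# v ~ CN(0, 1):  I = log2 M - (1/M) Σ_m E_v[ log2 Σ_p exp(-|h(a_m - a_p) + v|² + |v|²) ].
rng = np.random.default_rng(2)
h = 4.0                            # strong scalar channel (assumed)
alphabet = np.array([1.0, -1.0])   # BPSK
M = len(alphabet)

samples = 5000
total = 0.0
for _ in range(samples):
    v = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    for a_m in alphabet:
        diffs = h * (a_m - alphabet) + v
        total += np.log2(np.sum(np.exp(-np.abs(diffs) ** 2 + np.abs(v) ** 2)))

mi = np.log2(M) - total / (samples * M)
print(mi)  # close to 1 bit at this SNR
```

The inner sum over the constellation is what makes the cost grow with the alphabet size, which is the complexity issue the asymptotic expression of Section III avoids.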
However, for a large number of antennas, the associated computational complexity could be enormous. Therefore, by employing the replica method, a classical technique from statistical physics, we obtain an asymptotic expression for (8) as detailed in the following.

A. Some Useful Definitions

We first introduce some useful definitions. Consider a virtual (Footnote 5) MIMO channel defined by

$$ z_A = \sqrt{T_A}\, B_A d_A + \tilde{v}_A. \qquad (9) $$

Here, T_A = blockdiag{T_{i_1}, T_{i_2}, ..., T_{i_{K_1}}} ∈ C^{K_1 N_t × K_1 N_t}, where T_{i_k} ∈ C^{N_t×N_t} is a deterministic matrix, k = 1, 2, ..., K_1, and \tilde{v}_A ∈ C^{K_1 N_r × 1} is a standard complex Gaussian random vector with i.i.d. elements. The minimum mean square error (MMSE) estimate of the signal vector d_A given (9) can be expressed as

$$ \hat{d}_A = E_{d_A}\!\left[ d_A \,\middle|\, z_A, T_A, B_A \right]. \qquad (10) $$

(Footnote 5: The virtual channel model does not relate to a physical channel but it plays an important role in the derivation of our final asymptotic expression (15) in Proposition 1.)

Define the following mean square error (MSE) matrix

$$ \Omega_A = B_A E_A B_A^H, \qquad (11) $$

where

$$ E_A = E_{z_A}\!\left[ E_{d_A}\!\left[ \left(d_A - \hat{d}_A\right)\left(d_A - \hat{d}_A\right)^H \right] \right]. \qquad (12) $$

Define the matrices of the i_k th (i_1 ≤ i_k ≤ i_{K_1}) user, Ω_{i_k} and E_{i_k}, as the submatrices obtained by extracting the ((k−1)N_t + 1)th to the (k N_t)th row and column elements of the matrices Ω_A and E_A, respectively. The component matrices T_{i_k} in T_A are functions of auxiliary variables {R_{i_k}, γ_{i_k}, ψ_{i_k}}, which are the solutions of the following set of coupled equations:

$$ T_{i_k} = U_{T_{i_k}} \mathrm{diag}\!\left(G_{i_k}^T \gamma_{i_k}\right) U_{T_{i_k}}^H \in \mathbb{C}^{N_t \times N_t}, \qquad R_{i_k} = U_{R_{i_k}} \mathrm{diag}\!\left(G_{i_k} \psi_{i_k}\right) U_{R_{i_k}}^H \in \mathbb{C}^{N_r \times N_r}, \qquad (13) $$

where γ_{i_k} = [γ_{i_k,1}, γ_{i_k,2}, ..., γ_{i_k,N_r}]^T and ψ_{i_k} = [ψ_{i_k,1}, ψ_{i_k,2}, ..., ψ_{i_k,N_t}]^T, with

$$ \gamma_{i_k,n} = u_{R_{i_k},n}^H \left(I_{N_r} + R_A\right)^{-1} u_{R_{i_k},n}, \quad n = 1, \cdots, N_r, \qquad \psi_{i_k,m} = u_{T_{i_k},m}^H \,\Omega_{i_k}\, u_{T_{i_k},m}, \quad m = 1, \cdots, N_t, \qquad (14) $$

and R_A = Σ_{k=1}^{K_1} R_{i_k}. Computing T_{i_k} requires finding {R_{i_k}, γ_{i_k}, ψ_{i_k}} through the fixed point equations (13) and (14). We will show later that in the asymptotic regime the mutual information in (8) can be evaluated based on the virtual MIMO channel (Footnote 6) in (9). However, in contrast to the channel matrix H_A in (8), for the virtual MIMO channel, the channel matrix √T_A is deterministic.

B. Asymptotic Mutual Information

Suppose the transmit signal d_k is taken from a discrete constellation with cardinality Q_k. Define M_k = Q_k^{N_t}, let S_k denote the constellation set for user k, and let a_{k,j} denote the jth element of S_k, k = 1, 2, ..., K, j = 1, 2, ..., M_k. We define the large system limit as the scenario where N_r and N_t are large but the ratio β = N_t / N_r is fixed. Now, we are ready to provide a simplified asymptotic expression for (8).

Proposition 1: For the MIMO MAC model (1), in the large system limit the mutual information in (8) can be asymptotically approximated by

$$ I\!\left(d_A; y \,\middle|\, d_{A^c}\right) \simeq \sum_{i_k \in A} I\!\left(d_{i_k}; z_{i_k} \,\middle|\, \sqrt{T_{i_k}}\, B_{i_k}\right) + \log_2 \det\!\left(I_{N_r} + R_A\right) - \log_2 e \sum_{k=1}^{K_1} \gamma_{i_k}^T G_{i_k} \psi_{i_k}, \qquad (15) $$

where

$$ I\!\left(d_{i_k}; z_{i_k} \,\middle|\, \sqrt{T_{i_k}}\, B_{i_k}\right) = \log_2 M_{i_k} - \frac{1}{M_{i_k}} \sum_{m=1}^{M_{i_k}} E_v\!\left[ \log_2 \sum_{p=1}^{M_{i_k}} e^{-\left\| \sqrt{T_{i_k}}\, B_{i_k} \left(a_{k,p} - a_{k,m}\right) + v \right\|^2 + \left\| v \right\|^2} \right]. \qquad (16) $$

Proof: See Appendix A.

Remark 1: The asymptotic expressions provided in Proposition 1 constitute approximations for matrices of finite dimension. In addition, because the derivation of the asymptotic expression is based on the replica method, wherein some steps lack a rigorous proof, we state the result in a proposition rather than a theorem.

Remark 2: As mentioned above, Weichselberger's model is a general channel model. Therefore, the unified expression in Proposition 1 is applicable to many special cases. For example, if G_k is a rank-one matrix, (15) reduces to [25, Eq. (28)], which was derived for the Kronecker model.

Remark 3: γ_{i_k} and ψ_{i_k} in Proposition 1 can be obtained through the fixed-point equations in (14).
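The posterior-mean (MMSE) estimate of Eq. (10), which feeds the MSE matrix (11)-(12), has a simple closed form in the scalar BPSK case. A toy sketch (scalar "virtual channel" gain assumed for illustration) checking the estimator against the well-known tanh expression:

```python
import numpy as np

# MMSE estimate d_hat = E[d | z] for z = t*d + v, d ∈ {+1, -1}, v ~ CN(0, 1).
rng = np.random.default_rng(3)
t = 1.5      # assumed scalar virtual-channel gain
d = 1.0      # transmitted BPSK symbol
v = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
z = t * d + v

alphabet = np.array([1.0, -1.0])
weights = np.exp(-np.abs(z - t * alphabet) ** 2)   # ∝ p(z | d = a) for CN noise
posterior = weights / weights.sum()
d_hat = np.sum(posterior * alphabet)               # posterior mean (MMSE estimate)

# For BPSK this reduces to tanh(2 * t * Re(z)).
print(d_hat, np.tanh(2 * t * z.real))
```

The per-symbol MSE E[(d − d_hat)²] built from such estimates is what enters Ω and ψ in the fixed-point equations.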
From statistical physics, it is known that there are multiple solutions for γ_{i_k,n} and ψ_{i_k,m} that satisfy (14) (Footnote 7). For the problem at hand, the solution minimizing I(d_A; y | d_{A^c}) in (15) yields the mutual information.

Before proceeding, let us recall some notation used in this paper. For the virtual channel model (9), the virtual channel matrix and the corresponding asymptotic parameters obtained for different sets A from the fixed point equations (13) and (14) are different, due to the equality R_A = Σ_{i_k ∈ A} R_{i_k}. Therefore, we define the set A_k = {1, 2, ..., k}. Then, we denote the virtual channel matrix and the corresponding asymptotic parameters obtained from the fixed point equations (13) and (14), (9), and (11) for A = A_k as T_t^{(k)}, R_t^{(k)}, γ_t^{(k)}, and ψ_t^{(k)}, t = 1, 2, ..., k.

IV. LINEAR PRECODING DESIGN FOR THE MIMO MAC

In this section, we first formulate the WSR optimization problem for linear precoder design for the MIMO MAC. Then, we establish the structure of the asymptotically optimal precoders maximizing the WSR in the large system limit. Finally, we propose an iterative algorithm for finding the optimal precoders.

A. Weighted Asymptotic Sum Rate

It is well known that the achievable rate region of the MIMO MAC (R_1, R_2, ..., R_K) can be obtained by solving the WSR optimization problem [1]. Without loss of generality, assume weights μ_1 ≥ μ_2 ≥ ... ≥ μ_K ≥ μ_{K+1} = 0, i.e., the users are decoded in the order K, K−1, ..., 1 [6]. Then, the WSR problem can be expressed as

$$ \mathrm{WSR} = \max_{B_1, B_2, \cdots, B_K} R_{\mathrm{sum}}^w\!\left(B_1, B_2, \cdots, B_K\right), \quad \text{s.t. } \mathrm{tr}\!\left(B_k B_k^H\right) \le P_k, \ \forall k, \qquad (17) $$

where

$$ R_{\mathrm{sum}}^w\!\left(B_1, B_2, \cdots, B_K\right) = \sum_{k=1}^{K} \Delta_k\, f\!\left(B_1, B_2, \cdots, B_k\right), \qquad (18) $$

with Δ_k = μ_k − μ_{k+1} and f(B_1, B_2, ..., B_k) = I(d_1, ..., d_k; y | d_{k+1}, ..., d_K), k = 1, 2, ..., K. When μ_1 = μ_2 = ... = μ_K = 1, (17) reduces to sum rate maximization.
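The rewriting in (18) is an Abel (summation-by-parts) identity: with decoding order K, ..., 1, the cumulative rates f_k satisfy R_k = f_k − f_{k−1}, so Σ_k μ_k R_k = Σ_k Δ_k f_k. A quick numerical check (the weights and cumulative rates below are made-up illustrative numbers):

```python
import numpy as np

# Verify that sum_k mu_k * R_k equals sum_k Delta_k * f_k, Eq. (18).
mu = np.array([3.0, 2.0, 1.5])        # assumed weights, mu_1 >= ... >= mu_K
f = np.array([1.2, 2.0, 2.6])         # assumed cumulative rates f_1, ..., f_K
delta = mu - np.append(mu[1:], 0.0)   # Delta_k = mu_k - mu_{k+1}, with mu_{K+1} = 0

rates = np.diff(np.append(0.0, f))    # per-user rates R_k = f_k - f_{k-1}
wsr_direct = np.sum(mu * rates)
wsr_abel = np.sum(delta * f)
print(wsr_direct, wsr_abel)           # identical
```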
By evaluating I(d_1, ..., d_k; y | d_{k+1}, ..., d_K) based on Proposition 1, we obtain an asymptotic expression R_{sum,asy}^w(B_1, B_2, ..., B_K) for R_{sum}^w(B_1, B_2, ..., B_K) in (18).

B. Asymptotically Optimal Precoder Structure

Consider the singular value decomposition (SVD) of the precoder of user l, B_l = U_{B_l} Γ_{B_l} V_{B_l}, where U_{B_l} and V_{B_l} are unitary matrices, and Γ_{B_l} is a diagonal matrix with nonnegative main diagonal elements. Then, we have the following theorem.

Theorem 1: The left singular matrices U_{B_l} of the asymptotically optimal precoders which maximize the asymptotic WSR R_{sum,asy}^w(B_1, B_2, ..., B_K) are the eigenmatrices U_{T_l} of the transmit correlation matrices in (5), l = 1, 2, ..., K. Using the notation introduced below Remark 3 and based on the optimal precoder structure, (17) simplifies to

$$ \max_{\Gamma_{B_l},\, V_{B_l},\ \mathrm{tr}\left(\Gamma_{B_l}^2\right) \le P_l} \ \sum_{k=l}^{K} \Delta_k\, I\!\left(d_l;\, z_l^{(k)} \,\middle|\, \sqrt{\mathrm{diag}\!\left(G_l^T \gamma_l^{(k)}\right)}\; \Gamma_{B_l} V_{B_l}\right), \qquad (19) $$

for l = 1, 2, ..., K.

Proof: See Appendix B.

Remark 4: We note that for finite alphabet input scenarios, the optimal precoder structure of the left singular matrix has been obtained for point-to-point MIMO systems [5,23]. Also, the optimal precoder structure for the MIMO MAC was implicitly used in [25] for the Kronecker model and finite alphabet inputs without proof. Therefore, the main contribution of Theorem 1 is the explicit presentation of the optimal precoder structure for the MIMO MAC for Weichselberger's model and finite alphabet inputs and its proof.

Remark 5: For Weichselberger's model in (4), if only sum rate maximization is considered, i.e., Δ_1 = Δ_2 = ... = Δ_{K−1} = 0, we can directly optimize the matrix V_{B_L}^H Γ_{B_L}^H diag(G_L^T γ_L^{(k)}) Γ_{B_L} V_{B_L}, as in [25], since the mutual information expression in (19) is concave with respect to this matrix. However, for WSR optimization, since the values of γ_l^{(k)} are different for different k, it is not possible to find a common matrix V_{B_l}^H Γ_{B_l}^H diag(G_l^T γ_l^{(k)}) Γ_{B_l} V_{B_l} to be optimized in (19). Thus, we optimize Γ_{B_l} and V_{B_l} in an alternating manner.

Algorithm 1: Iterative algorithm for WSR maximization with respect to {B_1, B_2, ..., B_K}

1) Initialize Γ_{B_l}^{(1)} and V_{B_l}^{(1)}, l = 1, 2, ..., K, as well as T_t^{(k)}, R_t^{(k)}, ψ_t^{(k),(1)}, and γ_t^{(k),(1)}, t = 1, 2, ..., k, k = 1, 2, ..., K. Set n = 1 and compute R_{sum,asy}^{w,(n)}.
2) Update (Γ_{B_l}^{(n+1)})^2, l = 1, 2, ..., K, along the gradient descent direction in (20).
3) Update V_{B_l}^{(n+1)}, l = 1, 2, ..., K, along the gradient descent direction in (21).
4) Compute B_l^{(n+1)} = U_{T_l} Γ_{B_l}^{(n+1)} V_{B_l}^{(n+1)} and update the asymptotic parameters T_t^{(k)}, R_t^{(k)}, γ_t^{(k),(n+1)}, and ψ_t^{(k),(n+1)}.
5) Compute R_{sum,asy}^{w,(n+1)}. If R_{sum,asy}^{w,(n+1)} − R_{sum,asy}^{w,(n)} is larger than a threshold and n is less than the maximal number of iterations, set n := n + 1 and repeat Steps 2-4; otherwise, stop the algorithm.

Next, we obtain the gradients of R_{sum,asy}^w(B_1, B_2, ..., B_K) with respect to Γ_{B_l}^2 and V_{B_l}, which are given by [5, Eq. (19)] and [40, Eq. (22)] as

$$ \nabla_{\Gamma_{B_l}^2} R_{\mathrm{sum,asy}}^w\!\left(B_1, B_2, \cdots, B_K\right) = \sum_{k=l}^{K} \Delta_k\, \mathrm{diag}\!\left(V_{B_l}^H E_l^{(k)} V_{B_l}\right) \mathrm{diag}\!\left(G_l^T \gamma_l^{(k)}\right), \qquad (20) $$

and

$$ \nabla_{V_{B_l}} R_{\mathrm{sum,asy}}^w\!\left(B_1, B_2, \cdots, B_K\right) = \sum_{k=l}^{K} \Delta_k\, \mathrm{diag}\!\left(G_l^T \gamma_l^{(k)}\right) \Gamma_{B_l}^2 V_{B_l} E_l^{(k)}, \qquad (21) $$

respectively. Now, we are ready to propose an iterative algorithm to determine the optimal precoders B_l numerically.

C. Iterative Algorithm for Weighted Sum Rate Maximization

Based on Theorem 1, (20), and (21), an efficient iterative algorithm can be formulated to determine the optimal precoders B_l numerically. The resulting algorithm is summarized in Algorithm 1.
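The precoder structure of Theorem 1 is easy to instantiate: with B = U_T Γ V and unitary V, the transmit power depends only on the diagonal of Γ². A sketch with an arbitrary (assumed) correlation matrix and power allocation:

```python
import numpy as np

# Build B = U_T Γ V per Theorem 1 and check tr(B B^H) = tr(Γ²) = P.
rng = np.random.default_rng(4)
Nt, P = 3, 6.0  # assumed sizes and power budget

A = rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt))
R_t = A @ A.conj().T                 # a valid (Hermitian PSD) transmit correlation
_, U_T = np.linalg.eigh(R_t)         # eigenmatrix of R_t = left singular matrix of B

gamma2 = np.array([3.0, 2.0, 1.0])   # per-stream power allocation, sums to P
Gamma = np.diag(np.sqrt(gamma2))
V, _ = np.linalg.qr(rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt)))

B = U_T @ Gamma @ V
print(np.trace(B @ B.conj().T).real)  # equals P, since U_T and V are unitary
```

This is why Algorithm 1 only needs to search over Γ and V: the power constraint in (17) depends on Γ alone.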
In Step 2 of Algorithm 1, we optimize (Γ_{B_l}^{(n)})^2 along the gradient descent direction,

$$ \left(\Gamma_{B_l}^{(n+1)}\right)^2 = \left(\Gamma_{B_l}^{(n)}\right)^2 + u\, \nabla_{\Gamma_{B_l}^2} R_{\mathrm{sum,asy}}^w\!\left(B_1, B_2, \cdots, B_K\right), $$

where ∇_{Γ_{B_l}^2} R_{sum,asy}^w(B_1, B_2, ..., B_K) is given by (20) and the step size u is determined by the backtracking line search method [42]. Thereby, the values of the backtracking line search parameters θ and ω are set as θ ∈ (0, 0.5) and ω ∈ (0, 1) [42]. If the updated Γ_{B_l} violates the power constraint, it is projected back onto the feasible set tr(Γ_{B_l}^2) ≤ P_l. In Step 3, we update V_{B_l}^{(n)} along the gradient direction,

$$ \tilde{V}_{B_l}^{(n)} = V_{B_l}^{(n)} + u\, \nabla_{V_{B_l}} R_{\mathrm{sum,asy}}^w\!\left(B_1, B_2, \cdots, B_K\right), $$

where ∇_{V_{B_l}} R_{sum,asy}^w(B_1, B_2, ..., B_K) is given by (21). We compute the SVD \tilde{V}_{B_l}^{(n)} = U_{V_l} Γ_{V_l} V_{V_l}. Then, we project \tilde{V}_{B_l}^{(n)} onto the Stiefel manifold, V_{B_l}^{(n+1)} = U_{V_l} V_{V_l} [41, Sec. 7.4.8]. In Step 4, we compute B_l^{(n+1)} = U_{T_l} Γ_{B_l}^{(n+1)} V_{B_l}^{(n+1)}. Then, we update the asymptotic parameters T_t^{(k)}, R_t^{(k)}, γ_t^{(k),(n+1)}, and ψ_t^{(k),(n+1)} in Proposition 1 based on the updated precoders B_l^{(n+1)}, l = 1, 2, ..., K, and the fixed point equations (13) and (14). In Step 5, we compute R_{sum,asy}^{w,(n+1)} based on B_l^{(n+1)}, l = 1, 2, ..., K, and T_t^{(k)}, R_t^{(k)}, γ_t^{(k),(n+1)}, and ψ_t^{(k),(n+1)}, t = 1, 2, ..., k, k = 1, 2, ..., K. Finally, if R_{sum,asy}^{w,(n+1)} − R_{sum,asy}^{w,(n)} is larger than a threshold and n is less than the maximal number of iterations, we perform the next iteration; otherwise, we stop the algorithm.

Remark 6: We note that the iterative algorithms in [5] and [23] are for point-to-point MIMO systems. Therefore, both algorithms only need to consider the maximization of a single mutual information expression. For the MIMO MAC, the iterative algorithm in [6] optimizes the precoders of all users jointly. Therefore, its implementation complexity is very high. Based on the asymptotic WSR expression, Algorithm 1 optimizes the precoder of each user separately. Thus, its implementation complexity is significantly lower than that of the iterative algorithm in [6].
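The Stiefel-manifold projection in Step 3 can be sketched directly: after a gradient step the iterate is generally not unitary, and the SVD-based map U_V V_V^H returns the nearest unitary matrix. A toy example (step direction chosen at random for illustration):

```python
import numpy as np

# After V_new = V + u * gradient, restore unitarity via the SVD projection
# V <- U_V @ Vh_V (the polar-decomposition / nearest-unitary projection).
rng = np.random.default_rng(5)
Nt = 3
V, _ = np.linalg.qr(rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt)))
step = 0.1 * (rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt)))

V_new = V + step                      # gradient-style update (toy direction)
U_v, _, Vh_v = np.linalg.svd(V_new)   # V_new = U_v diag(s) Vh_v
V_proj = U_v @ Vh_v                   # nearest unitary matrix to V_new

err = np.linalg.norm(V_proj.conj().T @ V_proj - np.eye(Nt))
print(err)  # numerically zero: V_proj is unitary again
```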
Moreover, Algorithm 1 exploits the optimal structure of the precoders for WSR maximization and optimizes the power allocation matrix and the right singular matrix of the precoder of each user in an alternating manner. The iterative algorithm in [25] is for the Kronecker model. For the special case of the Kronecker model, even for WSR optimization, γ_l^{(k)} can be moved outside V_{B_l}^H Γ_{B_l}^H diag(G_l^T γ_l^{(k)}) Γ_{B_l} V_{B_l}, as indicated in [25]. Hence, for the Kronecker model, the algorithm in [25] can also be used to optimize the WSR and may be preferable for numerical calculation. This is because the algorithm in [25] only requires an eigenvalue decomposition where Algorithm 1 requires a SVD. However, as indicated in Remark 5, the algorithm in [25] can not be directly applied to the WSR optimization problem for Weichselberger's model considered in this paper.

Remark 7: We note that calculating the mutual information and the MSE matrix (e.g., in (16), (20), and (21), or in the corresponding expressions in [6]) dominates the computational cost of the precoder design. As a result, the computational complexity of Algorithm 1 is significantly lower than that of the conventional design. We note that this computational complexity reduction is more obvious when the number of transmit antennas or the number of users becomes large. To show this more clearly, we give an example. We consider a practical massive MIMO MAC system where the base station is equipped with a large number of antennas and serves multiple users having much smaller numbers of antennas [32,[43][44][45][46][47]. In particular, we assume N_r = 64, N_t = 4, K = 4, μ_1 = μ_2 = μ_3 = μ_4, and all users employ the same modulation constellation. The numbers of additions required for calculating the mutual information and the MSE matrix in Algorithm 1 and in the precoder design in [6] are listed in Table I for different modulation formats.
We observe from Table I that Algorithm 1 requires a significantly lower number of additions for the MIMO MAC precoder design for finite alphabet inputs compared to the design in [6]. Moreover, since Algorithm 1 is based on the channel statistics {U_{T_k}}, {U_{R_k}}, and {G_k}, it avoids the time-consuming averaging of the mutual information in (7) over the channel realizations. In addition, Algorithm 1 is executed only once since the precoders are constant as long as the channel statistics do not change, whereas the algorithm in [6] has to be executed for each channel realization.

Remark 8: We note that Algorithm 1 never decreases the asymptotic WSR R_{sum,asy}^w(B_1, B_2, ..., B_K) in any iteration, see Step 5. From the expression in (15), we also know that the asymptotic WSR R_{sum,asy}^w(B_1, B_2, ..., B_K) is upper-bounded. This implies that Algorithm 1, which produces non-decreasing sequences that are upper-bounded, is convergent. Due to the non-convexity of the objective function R_{sum,asy}^w(B_1, B_2, ..., B_K), in general, Algorithm 1 will find a local maximum of the WSR. Therefore, we run Algorithm 1 for several random initializations B_k^{(1)} and select the result that offers the maximal WSR as the final design solution [6,48].

V. NUMERICAL RESULTS

In this section, we provide examples to illustrate the performance of the proposed iterative optimization algorithm. We assume equal individual power limits P_1 = P_2 = ... = P_K = P and the same modulation format for all K users. The average SNR for the MIMO MAC with statistical CSI is defined as SNR = E[tr(H_k H_k^H)] P / (N_t N_r). We use GP, NP, FAP, and AL as abbreviations for Gaussian precoding, no precoding, finite alphabet precoding, and the algorithm in [6], respectively.
First, we consider a two-user MIMO MAC with two transmit antennas for each user and two receive antennas, in order to illustrate that, although Algorithm 1 was derived for the large system limit, it also performs well if the numbers of antennas are small. The channel statistics of Weichselberger's model, U_{T_k}, U_{R_k}, and G_k, k = 1, 2, are chosen at random. Figure 1 depicts the average exact sum rate obtained based on (8) and the sum rate obtained with the asymptotic expression in Proposition 1 for different precoding designs and QPSK inputs. For the case without precoding, we set B_1 = B_2 = √(P/N_t) I_{N_t}, and denote the corresponding exact and asymptotic sum rates as "NP, Exact" and "NP, Asymptotic", respectively. Furthermore, we denote the exact and asymptotic sum rates achieved by the design proposed in Algorithm 1 as "FAP, Exact" and "FAP, Asymptotic". From Figure 1, we observe that the asymptotic sum rate expression in Proposition 1 provides a good estimate of the exact sum rate even for small numbers of antennas. On the other hand, if the numbers of antennas are large, evaluating the exact mutual information in (7) numerically via Monte Carlo simulation is extremely time-consuming. In contrast, Proposition 1 provides an efficient method for estimating the ergodic WSR of the MIMO MAC with finite alphabet inputs. Figure 2 illustrates the convergence behavior of Algorithm 1 for different SNR values and QPSK inputs. We set the backtracking line search parameters to θ = 0.1 and ω = 0.5. Figure 2 shows the sum rate in each iteration. We observe that in all considered cases, the proposed algorithm needs only a few iterations to converge. In Figure 3, we show the sum rate for different transmission schemes and QPSK inputs. We employ the Gauss-Seidel algorithm together with stochastic programming (Footnote 9) to obtain the optimal covariance matrices of the users under the Gaussian input assumption [19].
Then, we decompose the obtained optimal covariance matrices {Q_1, Q_2, · · · , Q_K} as Q_k = U_k Λ_k U_k^H, and set B_k = U_k Λ_k^{1/2}, k = 1, 2, · · · , K. Finally, we calculate the average sum rate for this precoding design for QPSK inputs. We denote the corresponding sum rate as "GP with QPSK inputs". For the case without precoding, we set B_1 = B_2 = √(P/N_t) I_{N_t}. We denote the corresponding sum rate as "NP with QPSK inputs". We denote the proposed design in Algorithm 1 as "FAP with QPSK inputs". For comparison purposes, we also show the average sum rate achieved by Algorithm 1 in [6] with instantaneous CSI and denote it as "AL in [12] with QPSK inputs". The sum rates achieved with the Gauss-Seidel algorithm and without precoding for Gaussian inputs are also plotted in Figure 3, and are denoted as "GP with Gaussian input" and "NP with Gaussian input", respectively. From Figure 3, we make the following observations: 1) For QPSK modulation, the proposed iterative algorithm achieves a considerably higher sum rate compared to the other statistical CSI based precoder designs. Specifically, to achieve a target sum rate of 4 b/s/Hz, the proposed algorithm achieves SNR gains of approximately 2.5 dB and 11 dB compared to the "NP with QPSK inputs" design and the "GP with QPSK inputs" design, respectively. 2) The sum rate achieved by the proposed algorithm is close to the sum rate achieved by Algorithm 1 in [6], which requires instantaneous CSI. At a target sum rate of 4 b/s/Hz, the SNR gap between the proposed algorithm and Algorithm 1 in [6] is less than 1 dB. However, the proposed algorithm only requires statistical CSI and its implementation complexity is much lower than that of Algorithm 1 in [6]. 3) The sum rate achieved by the proposed algorithm and the "NP with QPSK inputs" design merge at high SNR, and both saturate at K log_2 M = 8 b/s/Hz. 4) The sum rate achieved by the "GP with QPSK inputs" design remains almost constant for SNRs between 10 dB and 20 dB. This is because the Gauss-Seidel algorithm design implements a "water filling" power allocation policy in this SNR region. As a result, when the SNR is smaller than a threshold (e.g., 20 dB in this case), the precoders allocate most of the available power to the strongest subchannels and allocate little power to the weaker subchannels. Therefore, one eigenvalue of Q_k approaches zero. For example, for SNR = 10 dB, after eigenvalue decomposition Q_k = U_k Λ_k U_k^H of the optimal covariance matrices obtained by the Gauss-Seidel algorithm, we have

Λ_1 = diag{9.9976, 0.0024},  Λ_2 = diag{9.9987, 0.0013}.   (23)

The precoders are given by

B_1 = [ −2.5097 − 0.0000j    0.0298 − 0.0000j
        −1.9168 + 0.1574j   −0.0388 + 0.0032j ]

B_2 = [ −0.8638 + 0.0000j   −0.0349 − 0.0000j
         3.0184 + 0.3767j   −0.0098 − 0.0012j ].   (24)

From the structure of the precoders in (24), we can see that most energy is allocated to one transmitted symbol. For finite alphabet inputs, this power allocation policy may result in allocating most power to the subchannels that are close to saturation.

Footnote 9: We note that the asymptotically optimal precoder design for Gaussian input signals in [20] has a concise structure and a low implementation complexity. However, the main purpose of considering the precoder design under the Gaussian input assumption in this paper is to show that the Gaussian input assumption precoder design departs remarkably from the practical finite alphabet input design. Therefore, we consider the Gauss-Seidel algorithm together with stochastic programming for precoder design, as this method optimizes the exact sum rate of the MIMO MAC with Gaussian inputs. Although this approach is complicated, it achieves the best sum rate performance for Gaussian inputs.
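The decomposition step above (compute Q_k = U_k Λ_k U_k^H, then set B_k = U_k Λ_k^{1/2}) can be sketched as follows. The covariance matrix used here is a hypothetical positive semidefinite example, not one of the paper's optimized Q_k.

```python
import numpy as np

# Hypothetical PSD input covariance (illustration only, not the paper's Q_k).
Q = np.array([[6.0, 2.0 - 1.0j],
              [2.0 + 1.0j, 4.0]])

# Eigendecomposition Q = U diag(lam) U^H, then B = U diag(lam)^{1/2}.
lam, U = np.linalg.eigh(Q)
B = U @ np.diag(np.sqrt(lam))

# B reproduces the target covariance, so the transmit power tr(B B^H)
# equals tr(Q):
assert np.allclose(B @ B.conj().T, Q)
assert np.isclose(np.trace(B @ B.conj().T).real, np.trace(Q).real)
```

Any B of the form U Λ^{1/2} V^H with unitary V yields the same covariance B B^H = Q; the choice V = I is the one used in the "GP" construction above.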
This will lead to a waste of transmit power and impede further improvement of the sum rate performance. This confirms that precoders designed under the ideal Gaussian input assumption may result in a considerable performance loss when adopted directly in practical systems with finite alphabet constraints. In Figure 4, we show the achievable rate region of different precoder designs for SNR = 5 dB and QPSK inputs. The achievable rate regions are obtained by solving the WSR optimization problem in (17) for different precoder designs. We observe from Figure 4 that the proposed design has a much larger rate region than the case without precoding and the design based on the Gaussian input assumption. We note that since, for GP, most energy is allocated to one transmitted symbol, for finite alphabet inputs the achievable sum rate of this transmission design may even be smaller than the single-user rate. Therefore, there is only one point in the achievable rate region for the GP design. A similar phenomenon has also been observed for the MIMO MAC with instantaneous CSI and finite alphabet inputs, see [6, Fig. 6]. To further validate the performance of the proposed design, Figure 5 shows the sum rate performance of different precoding schemes for 16QAM modulation. Figure 5 indicates that the proposed design outperforms the other precoding schemes^10 also for 16QAM modulation. At a sum rate of 8 b/s/Hz, the proposed algorithm achieves SNR gains of about 1.7 dB and 7.5 dB over the "NP with 16QAM inputs" design and the "GP with 16QAM inputs" design, respectively. In the following, we investigate the performance of the proposed precoder design in a practical massive MIMO MAC system where the base station is equipped with a large number of antennas and simultaneously serves multiple users with much smaller numbers of antennas [32], [43]-[45]. We assume N_r = 64 and N_t = 4.
Furthermore, we adopt the 3rd Generation Partnership Project (3GPP) spatial channel model (SCM) in [49]. We set^11 the transmit and receive antenna spacings to half a wavelength, and the velocity^12 of the users to 180 km/h. Figures 6 and 7 show the sum rate performance for different precoder designs, K = 4, and QPSK inputs for the suburban and the urban scenarios of the SCM, respectively. We observe from Figures 6 and 7 that, for QPSK inputs, the proposed algorithm achieves a better performance than the other precoder designs in both scenarios. For a sum rate of 24 b/s/Hz, the SNR gains of the proposed algorithm over the "NP with QPSK inputs" design for the suburban and the urban scenarios are about 5 dB and 4.5 dB, respectively. The SNR gain for the suburban scenario is larger than that for the urban scenario, since the correlation of the transmit antennas is stronger in suburban scenarios. As a result, the precoder design based on statistical CSI is more effective and yields a larger performance gain. Also, the "GP with QPSK inputs" design results in a substantial performance loss in both scenarios. To illustrate the importance of designing the precoders for Weichselberger's channel model, we also show in Figure 7 the sum rate performance of precoders designed for the Kronecker model (i.e., the mutual coupling is ignored for precoder calculation) and denote the corresponding curve as "FAP with QPSK inputs and KR". We observe from Figure 7 that, for a sum rate of 24 b/s/Hz, we lose about 1 dB in performance if we design the precoders for the Kronecker model. Figure 8 shows the average sum rate for different precoder designs as a function of the number of users for QPSK inputs, the urban scenario, and SNR = 0 dB. We observe from Figure 8 that the average sum rate scales linearly with the number of users. This coincides with the conclusion in Proposition 1 that the sum rate can be approximated by the sum of the individual rates of all users.

Footnote 11: The SCM simulation model in [49] has several system parameters, including the number of users, the numbers of antennas, the antenna spacing, the velocity of the users, etc. After setting these parameters, we generated a large number of channel realizations and calculated the statistical CSI based on these channel realizations.

Footnote 12: We consider the scenario where the mobility of the users is high. In such a scenario, it is reasonable to exploit the statistical CSI at the transmitter for precoder design [17].

VI. CONCLUSION

In this paper, we have studied the linear precoder design for the K-user MIMO MAC with statistical CSI at the transmitter. We formulated the problem from the standpoint of finite alphabet inputs based on Weichselberger's MIMO channel model. We first obtained the WSR expression for the MIMO MAC assuming Weichselberger's model in the asymptotic large system regime under the finite alphabet input constraint. Then, we established the optimal structures of the precoding matrices which maximize the asymptotic WSR. Subsequently, we proposed an iterative algorithm to find the precoding matrices of all users for statistical CSI at the transmitter. The proposed algorithm significantly reduces the implementation complexity compared to a previously proposed precoder design method for the MIMO MAC with finite alphabet inputs and instantaneous CSI at the transmitter. Numerical results showed that, for finite alphabet inputs, precoders designed with the proposed iterative algorithm achieve substantial performance gains over precoders designed based on the Gaussian input assumption and over transmission without precoding. These gains can be observed for both MIMO systems with small numbers of antennas and massive MIMO systems.

APPENDIX A
PROOF OF PROPOSITION 1

Before we present the proof, we introduce the following three useful lemmas.
Lemma 1: Let S ∈ C^{m×n}, A_1 ∈ C^{m×n}, and A_2 ∈ C^{m×n} be complex matrices and A_3 ∈ C^{n×n} and A_4 ∈ C^{m×m} positive definite matrices, respectively. Then, the following equality holds [39]:

∫ DS e^{−tr(A_3 S^H A_4 S + A_1^H S − S^H A_2)} = (1/det(A_3 ⊗ A_4)) e^{−tr(A_3^{−1} A_1^H A_4^{−1} A_2)}.   (25)

For A_1 = A_2 = 0 and Gaussian random matrix S, we obtain with this lemma the useful result

∫ DS e^{−tr(A_3 S^H A_4 S)} = 1/det(A_3 ⊗ A_4).   (26)

Lemma 2: The eigen-decomposition of matrix A = b 11^H + (a − b) I_{r+1} ∈ C^{(r+1)×(r+1)}, where 1 ∈ C^{(r+1)×1} is the all-one vector and a and b are arbitrary constants, is

A = F diag(a + rb, a − b, · · · , a − b) F^H   (27)

where F ∈ C^{(r+1)×(r+1)} is the discrete Fourier transform matrix with elements [F]_{nm} = (1/√(r+1)) e^{−j 2π (n−1)(m−1)/(r+1)}.

Lemma 3 (Hubbard-Stratonovich transformation): Let s and a be arbitrary m × 1 complex vectors. Then, we have

e^{a†a} = ∫ Ds e^{−(s†s − a†s − s†a)}.   (28)

The identity can be proven easily by using the definition of a matrix variate Gaussian distribution. The transformation is a convenient tool to reduce a quadratic form to a linear expression by introducing auxiliary variables [51].

Now, we begin with the proof of Proposition 1. We note that throughout this section, the virtual channel model defined in Section III-A is only used if explicitly stated. First, we consider the case K_1 = K. Define H = [H_1 H_2 · · · H_K], B = blockdiag{B_1, B_2, · · · , B_K}, x = [x_1^T x_2^T · · · x_K^T]^T, and d = [d_1^T d_2^T · · · d_K^T]^T. This reformulation is very useful because it allows us to first evaluate E_{y,H}[(Z(y, H))^r] for a positive integer-valued r, and then extend the result to r → 0. Note, however, that the replica method is not rigorous. Nevertheless, it has been widely adopted in the field of statistical physics [51] and has also been used to derive a number of interesting results in information and communication theory [19], [24], [29]-[31], [39], [54].
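Lemma 2 can be verified numerically: the DFT matrix diagonalizes any matrix of the form b 11^H + (a − b)I, with eigenvalue a + rb on the all-one direction and a − b on its orthogonal complement. A short check with arbitrary test values a = 2, b = 0.5, r = 3 (these numbers are illustrative, not from the paper):

```python
import numpy as np

r, a, b = 3, 2.0, 0.5
n = r + 1

# A = b * 11^H + (a - b) * I
A = b * np.ones((n, n)) + (a - b) * np.eye(n)

# DFT matrix with [F]_{nm} = exp(-j 2 pi (n-1)(m-1)/(r+1)) / sqrt(r+1)
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)

# Eigenvalues per Lemma 2: a + r*b once, a - b with multiplicity r.
D = np.diag([a + r * b] + [a - b] * r)

assert np.allclose(F @ D @ F.conj().T, A)
```

The first column of F is the normalized all-one vector, which is exactly the eigenvector carrying the eigenvalue a + rb.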
Some results obtained based on the replica method have recently been confirmed by more rigorous analyses, see e.g. [52], [53]. In a first step, to compute the expectation over Z(y, H), it is useful to introduce r + 1 replicated signal vectors x_k^{(α)}, for α = 0, 1, · · · , r, yielding

E_{y,H}[(Z(y, H))^r] = E_{H,X} ∫ Dy ∏_{α=0}^{r} e^{−‖y − Σ_{k=1}^{K} H_k x_k^{(α)}‖²}   (30)

where X = [X_1^T X_2^T · · · X_K^T]^T, X_k = [x_k^{(0)} x_k^{(1)} · · · x_k^{(r)}], and the {x_k^{(α)}} are i.i.d. with distribution p(x_k). Now, the integration over y can be performed in (30) because it reduces to a Gaussian integral. However, the expectations over H and X are involved. To tackle this problem, we separate the expectations with respect to X and H. Towards this end, define a set of random matrices V = [V_1 V_2 · · · V_K], V_k = [v_{k,1}^T v_{k,2}^T · · · v_{k,N_r}^T]^T, and random vectors v_{k,n} = Σ_m v_{k,n,m}, v_{k,n,m} = [v_{k,n,m}^{(0)} v_{k,n,m}^{(1)} · · · v_{k,n,m}^{(r)}], and v_{k,n,m}^{(α)} = [W_k]_{n,m} [G_k]_{n,m} u_{T_k,m}^H x_k^{(α)} for α = 0, 1, · · · , r. Then, we have from (4)

H_k x_k^{(α)} = Σ_{n=1}^{N_r} Σ_{m=1}^{N_t} v_{k,n,m}^{(α)} u_{R_k,n}.   (31)

Notice that, for given X_k, v_{k,n,m} is a Gaussian random vector with zero mean and covariance matrix Q_{k,n,m}, where Q_{k,n,m} ∈ C^{(r+1)×(r+1)} is a matrix with entries

[Q_{k,n,m}]_{αβ} = E_{[W_k]_{n,m}}[ (v_{k,n,m}^{(α)})^H v_{k,n,m}^{(β)} ] = g_{k,n,m} (x_k^{(α)})^H u_{T_k,m} u_{T_k,m}^H x_k^{(β)}

for α = 0, 1, · · · , r and β = 0, 1, · · · , r. For ease of notation, we further define T_{k,m} = u_{T_k,m} u_{T_k,m}^H and R_{k,n} = u_{R_k,n} u_{R_k,n}^H. Therefore, we have [Q_{k,n,m}]_{αβ} = g_{k,n,m} (x_k^{(α)})^H T_{k,m} x_k^{(β)}. Using (31) and letting Q = {Q_{k,n,m}}_{∀k,n,m}, where ∀k,n,m stands for k = 1, 2, · · · , K, m = 1, 2, · · · , N_t, and n = 1, 2, · · · , N_r, we have

E_H ∫ Dy ∏_{α=0}^{r} e^{−‖y − Σ_{k=1}^{K} H_k x_k^{(α)}‖²} = e^{S(Q)}   (32)

where

S(Q) = ln ∫ Dy E_V ∏_{α=0}^{r} e^{−‖y − Σ_{k=1}^{K} Σ_{n=1}^{N_r} Σ_{m=1}^{N_t} v_{k,n,m}^{(α)} u_{R_k,n}‖²}.   (33)

Clearly, the interactions between H and X depend only on Q.
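Lemma 1 can be sanity-checked numerically in the scalar case m = n = 1, assuming the measure DS carries the usual 1/π normalization per complex dimension (this normalization is an assumption about the convention in [39], not stated in the excerpt):

```python
import numpy as np

# Scalar instance of Lemma 1: with c = a3 * a4 > 0,
#   (1/pi) * Int exp(-c|s|^2 - conj(a1) s + conj(s) a2) ds
#     = (1/c) * exp(-conj(a1) a2 / c).
a3, a4 = 2.0, 1.5
a1, a2 = 0.3 + 0.1j, 0.2 - 0.4j
c = a3 * a4

# Brute-force quadrature over the complex plane (integrand is negligible
# outside |Re s|, |Im s| <= 6 because of the Gaussian factor).
x = np.linspace(-6, 6, 1201)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
s = X + 1j * Y
integrand = np.exp(-c * np.abs(s) ** 2 - np.conj(a1) * s
                   + np.conj(s) * a2) / np.pi
num = integrand.sum() * dx * dx

ref = np.exp(-np.conj(a1) * a2 / c) / c
assert np.allclose(num, ref, atol=1e-6)
```

The numerical integral matches the closed form 1/det(A_3 ⊗ A_4) · e^{−tr(A_3^{−1} A_1^H A_4^{−1} A_2)}, which in the scalar case reduces to (1/c) e^{−ā_1 a_2 / c}.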
Therefore, it is useful to separate the expectation over X in (30) into an integral over all possible Q_{k,n,m} and all possible x_k^{(α)} configurations for a given Q_{k,n,m} by introducing a δ-function,

E_{y,H}[(Z(y, H))^r] = E_X [ ∫ ∏_{k,n,m} ∏_{0≤α≤β≤r} d[Q_{k,n,m}]_{αβ} e^{S(Q)} × ∏_{k,n,m} ∏_{0≤α≤β≤r} δ( g_{k,n,m} (x_k^{(α)})^H T_{k,m} x_k^{(β)} − [Q_{k,n,m}]_{αβ} ) ].   (34)

Let

µ(Q) = E_X [ ∏_{k,n,m} ∏_{0≤α≤β≤r} δ( g_{k,n,m} (x_k^{(α)})^H T_{k,m} x_k^{(β)} − [Q_{k,n,m}]_{αβ} ) ].   (35)

Clearly, (34) can be written as

E_{y,H}[(Z(y, H))^r] = ∫ e^{S(Q)} dµ(Q).   (36)

Now, integrating the function in (34) over y, (33) becomes

e^{S(Q)} = E_V [ (1/(r+1)^N) e^{−tr( (Σ_k U_{R_k} V_k) Σ (Σ_k U_{R_k} V_k)^H )} ]   (37)

where Σ = −(1/(r+1)) 11^H + I_{r+1}. Recalling that v_{k,n,m} is a zero-mean Gaussian vector with covariance matrix Q_{k,n,m}, we obtain that Σ_k (U_{R_k} V_k) is a zero-mean Gaussian random vector with covariance Q ⊗ R = Σ_{k,n} (Σ_m Q_{k,n,m}) ⊗ R_{k,n} ∈ C^{(r+1)N_r × (r+1)N_r}. Thus, applying Lemma 1, we can eliminate V, resulting in

e^{S(Q)} = 1 / [ (r+1)^N det( I_{(r+1)N_r} + QΣ ⊗ R ) ].   (38)

Inserting (38) into (36), we then deal with the expectation over X for a given Q. Notice that only µ(Q) is related to the components of X. Using the inverse Laplace transform of the δ-function^13, µ(Q) can be written in an exponential representation with integrals over auxiliary variables {Q̃_{k,n,m}^{(α,β)}}. (Footnote 13: The inverse Laplace transform of the δ-function is given by [51] δ(x) = (1/(2πj)) ∫_{t−j∞}^{t+j∞} e^{Q̃x} dQ̃, ∀t ∈ R.) For ease of notation, let Q̃_{k,n,m} ∈ C^{(r+1)×(r+1)} be a Hermitian matrix whose elements are the auxiliary variables {Q̃_{k,n,m}^{(α,β)}}. Similar to the definition of Q, we further define the set Q̃ = {Q̃_{k,n,m}}_{∀k,n,m}. In the large dimensional limit, the integrals over {Q̃_{k,n,m}} can be performed by maximizing the exponent in µ(Q) with respect to {Q̃_{k,n,m}} (the saddle point method). Using the saddle point method and following
a similar approach as in [24], we can show that if N_r is large, then µ(Q) is dominated by the exponent

J(Q) = max_{Q̃} { Σ_{k,n,m} tr( Q̃_{k,n,m} Q_{k,n,m} ) − ln E_X e^{Σ_{k,m} tr( Σ_n g_{k,n,m} Q̃_{k,n,m} X_k^H T_{k,m} X_k )} }.   (39)

Similarly, by applying the saddle point method to (36), we have [29], [31]

−ln E_{y,H}[(Z(y, H))^r] ≃ −max_Q { S(Q) − J(Q) } = F.   (40)

The extremum over Q̃ and Q in (39) and (40) can be obtained by seeking the point of zero gradient with respect to Q̃ and Q, respectively, yielding a set of self-consistent equations. To avoid searching for the extremum over all possible Q and Q̃, we make the following replica symmetry (RS) assumption for the saddle point:

Q_{k,n,m} = q_{k,n,m} 11^H + (c_{k,n,m} − q_{k,n,m}) I_{r+1}   (41)

Q̃_{k,n,m} = q̃_{k,n,m} 11^H + (c̃_{k,n,m} − q̃_{k,n,m}) I_{r+1}.   (42)

With this RS assumption, the problem of seeking the extremum in (40) with respect to (Q_{k,n,m}, Q̃_{k,n,m}) is reduced to seeking the extremum over the four parameters (q_{k,n,m}, c_{k,n,m}, q̃_{k,n,m}, c̃_{k,n,m}). Although the RS assumption is heuristic, and cases of RS breaking appear in the literature [51], [54], it is widely used in physics [51] and information theory [19], [24], [29]-[31], [39], [54]. Also, some results obtained based on the RS assumption have been shown to become exact in the large system limit [55]. Applying Lemma 2, one can easily show that

Σ_{k,n,m} tr( Q̃_{k,n,m} Q_{k,n,m} ) = Σ_{k,n,m} (c̃_{k,n,m} + r q̃_{k,n,m})(c_{k,n,m} + r q_{k,n,m}) + r (c̃_{k,n,m} − q̃_{k,n,m})(c_{k,n,m} − q_{k,n,m}).   (44)

Therefore, the last term of (39) can be written as

ln E_X e^{Σ_{k,m} tr( Σ_n g_{k,n,m} Q̃_{k,n,m} X_k^H T_{k,m} X_k )} = ln E_X e^{vec(X)^H T̃ vec(X)}   (45)

where we have used T̃ = diag(T̃_1, T̃_2, . . . , T̃_K) and T̃_k = Σ_m Σ_n g_{k,n,m} Q̃_{k,n,m} ⊗ T_{k,m}. For ease of notation, we define Ξ′ = T′(0) and Ξ = T′(−1), where T′(τ) = blockdiag(T′_1(τ), T′_2(τ), . . . , T′_K(τ)) and T′_k(τ) = Σ_m ( Σ_n g_{k,n,m} (τ c̃_{k,n,m} + q̃_{k,n,m}) ) T_{k,m}.
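The trace identity (44) under the RS parameterization follows from Lemma 2 (both matrices are diagonalized by the same DFT basis) and can be checked numerically for a small replica number; the parameter values below are arbitrary test values:

```python
import numpy as np

# Check (44): with the RS forms
#   Q  = q  * 11^H + (c  - q ) I,
#   Qt = qt * 11^H + (ct - qt) I     (size r+1),
# tr(Qt Q) = (ct + r qt)(c + r q) + r (ct - qt)(c - q).
r = 4
c, q, ct, qt = 1.3, 0.4, 0.7, 0.2

n = r + 1
J = np.ones((n, n))
Q = q * J + (c - q) * np.eye(n)
Qt = qt * J + (ct - qt) * np.eye(n)

lhs = np.trace(Qt @ Q)
rhs = (ct + r * qt) * (c + r * q) + r * (ct - qt) * (c - q)
assert np.isclose(lhs, rhs)
```

Both matrices share the eigenbasis of Lemma 2, so the trace of the product is the sum of the products of matched eigenvalues: one pair (c + rq)(c̃ + rq̃) plus r pairs (c − q)(c̃ − q̃).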
Using the above definitions in (45) yields

ln E_X e^{vec(X)^H T̃ vec(X)} = ln E_X e^{ (Σ_{α=0}^{r} √Ξ′ x^{(α)})^H (Σ_{α=0}^{r} √Ξ′ x^{(α)}) − Σ_{α=0}^{r} (x^{(α)})^H Ξ x^{(α)} }.   (46)

Now, we decouple the first quadratic term in the exponent of (46) by using the Hubbard-Stratonovich transformation in Lemma 3 and introducing the auxiliary vector z. As a result, (46) becomes

ln ∫ Dz E_X e^{−g(z)}   (47)

where

g(z) = z^H z + (Σ_{α=0}^{r} √Ξ′ x^{(α)})^H z + z^H (Σ_{α=0}^{r} √Ξ′ x^{(α)}) − Σ_{α=0}^{r} (x^{(α)})^H Ξ x^{(α)}.   (48)

Inserting (43), (44), and (47) into (40), we obtain F under the RS assumption. The parameters {c_{k,n,m}, q_{k,n,m}, c̃_{k,n,m}, q̃_{k,n,m}} are determined by seeking the point of zero gradient with respect to {c_{k,n,m}, q_{k,n,m}, c̃_{k,n,m}, q̃_{k,n,m}}. It is easy to check that c̃_{k,n,m} = 0, ∀k,n,m, and c_{k,n,m} = tr(T_{k,m}), ∀k,n,m. Motivated by the exponent of the first term on the right hand side of (50), we define a virtual MIMO channel as in (9), where z_A := z, T_A := Ξ′, and B_A d_A := x. This virtual MIMO channel does not relate to any physical channel model and is introduced only for clarity of notation. In particular, we will show that the first term on the right hand side of (50) can be written as the mutual information of the virtual MIMO channel when taking the derivative of T(r) with respect to r at r = 0. Recall from (29) and (40) that we are only interested in the derivative of F at r = 0. Let γ_{k,n,m} = q̃_{k,n,m} and ψ_{k,n,m} = c_{k,n,m} − q_{k,n,m}. Hence, from (49), at r = 0 we have Ξ = T, where T = blockdiag(T_1, T_2, . . . , T_K), T_k = Σ_m (Σ_n g_{k,n,m} γ_{k,n,m}) T_{k,m}, and R = Σ_{k,n} (Σ_m ψ_{k,n,m}) R_{k,n}. The parameters γ_{k,n,m} and ψ_{k,n,m} are determined by seeking the point of zero gradient of F with respect to ψ_{k,n,m} and γ_{k,n,m}, respectively. Hence, we have

γ_{k,n,m} = tr( (I_{N_r} + R)^{−1} R_{k,n} )

and

ψ_{k,n,m} = ln 2 · ∂/∂γ_{k,n,m} I(x; z | √Ξ) = g_{k,n,m} tr( Ω_k T_{k,m} )

where the derivative of the mutual information follows from the relationship between the mutual information and the MMSE revealed in [40].
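The last step invokes the mutual-information/MMSE relationship from [40]. As an illustrative sanity check, the scalar real-valued version of that identity (Guo-Shamai-Verdú: dI/ds = (1/2)·mmse(s) in nats, for y = √s·x + n with BPSK x and unit-variance Gaussian n) can be verified numerically. This scalar BPSK setting is chosen for illustration only and is not the vector setting used in the proof.

```python
import numpy as np

# Scalar I-MMSE check: y = sqrt(s) x + n, x in {-1, +1} equiprobable,
# n ~ N(0, 1).  Then E[x|y] = tanh(sqrt(s) y) and dI/ds = mmse(s)/2.
y = np.linspace(-12, 12, 4001)
dy = y[1] - y[0]

def gauss(y, mu):
    return np.exp(-(y - mu) ** 2 / 2) / np.sqrt(2 * np.pi)

def mi_nats(s):
    # I(x; y) = h(y) - h(y|x), with h(y|x) = h(n) = (1/2) ln(2 pi e).
    p = 0.5 * (gauss(y, np.sqrt(s)) + gauss(y, -np.sqrt(s)))
    h_y = -np.sum(p * np.log(p)) * dy
    return h_y - 0.5 * np.log(2 * np.pi * np.e)

s = 1.0
p = 0.5 * (gauss(y, np.sqrt(s)) + gauss(y, -np.sqrt(s)))
mmse = 1.0 - np.sum(p * np.tanh(np.sqrt(s) * y) ** 2) * dy
dI_ds = (mi_nats(s + 1e-4) - mi_nats(s - 1e-4)) / 2e-4
assert np.isclose(dI_ds, 0.5 * mmse, atol=1e-4)
```

The same structural relation, with the MMSE matrix Ω_k in place of the scalar mmse, is what produces ψ_{k,n,m} = g_{k,n,m} tr(Ω_k T_{k,m}) above.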
Let γ_{k,n} = γ_{k,n,m} and ψ_{k,m} = tr(Ω_k T_{k,m}), for m = 1, 2, . . . , M. Using (51) and substituting the definitions of γ_{k,n}, ψ_{k,m}, T_A, B_A, and model (9), we obtain (15) for the case K_1 = K. The case of arbitrary values of K_1 can be proved following a similar approach as above.

APPENDIX B
PROOF OF THEOREM 1

Using the notation introduced in Section III-B and according to Proposition 1, we obtain an asymptotic expression for I(d_1, · · · , d_k; y | d_{k+1}, · · · , d_K) as

I(d_{A_k}; y | d_{A_k^c}) ≃ Σ_{t=1}^{k} I(d_t; z_t^{(k)} | T_t^{(k)} B_t) + log_2 det(I_{N_r} + R_{A_k}) − log_2 e · Σ_{t=1}^{k} (γ_t^{(k)})^T G_t ψ_t^{(k)}.   (54)

To investigate the optimal precoders which maximize R^w_{sum,asy}(B_1, B_2, · · · , B_K), we consider the gradient of R^w_{sum,asy}(B_1, B_2, · · · , B_K) with respect to B_l. It is noted from (13)-(17) that the parameters I(d_t; z_t^{(k)} | T_t^{(k)} B_t), γ_{t,n}^{(k)}, and ψ_{t,m}^{(k)} depend on B_l. Therefore, the gradient of R^w_{sum,asy}(B_1, B_2, · · · , B_K) with respect to B_l, l = 1, 2, · · · , K, follows by the chain rule through these quantities.

Acknowledgment: The work of R. Schober was supported by the Alexander von Humboldt Foundation. The work of Y. Wu and X. Gao was also supported in part by the National Natural Science Foundation of China under Grants 61320106003 and 61222102, the China High-Tech 863 Plan under Grant 2012AA01A506, the National Science and Technology Major Project of China under Grants 2013ZX03003004 and 2014ZX03003006-003, and the Program for Jiangsu Innovation Team. The work of C.-K. Wen was supported in part by the MOST of Taiwan under Grant MOST103-2221-E-110-029-MY3. The work of C. Xiao was supported in part by the National Science Foundation under Grants CCF-0915846 and ECCS-1231848. Part of this work was carried out while C. Xiao was a visiting professor at Universität Erlangen-Nürnberg.

Y. Wu and R. Schober are with the Institute for Digital Communications, Universität Erlangen-Nürnberg, Cauerstrasse 7, D-91058 Erlangen, Germany (Email: [email protected]; [email protected]). Y.
Wu was with the National Mobile Communications Research Laboratory, Southeast University, Nanjing, 210096, P. R. China (Email: [email protected]). C.-K. Wen is with the Institute of Communications Engineering, National Sun Yat-sen University, Kaohsiung 804, Taiwan (Email: [email protected]). C. Xiao is with the Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO 65409, USA (Email: [email protected]). X. Gao is with the National Mobile Communications Research Laboratory, Southeast University, Nanjing, 210096, P. R. China (Email: [email protected]).

elements, we set those to zero and normalize Γ. The computation of the mutual information and the MSE matrix (cf. [5], [6, Eq. (24)]) involves additions over the modulation signal space, which scales exponentially with the number of transmit antennas. The computational complexity of other operations, such as the matrix product, solving the fixed point equations, etc., is a polynomial function of the numbers of transmit and receive antennas. Therefore, for ease of analysis, we only compare the computational complexity of calculating the mutual information and the MSE matrix here. When N_t increases, the computational complexity of Algorithm 1 is dominated by the required number of additions in calculating R^{w,(n)}_{sum,asy}(B_1, B_2, · · · , B_K), ∇_{Γ_{B_l}} R^w_{sum,asy}(B_1, B_2, · · · , B_K), and ∇_{V_{B_l}} R^w_{sum,asy}(B_1, B_2, · · · , B_K) based on (16) in Steps 1 and 5, Step 2, and Step 3, respectively. Eq. (16) implies that Algorithm 1 only requires additions over each user's own possible transmit vectors to design the precoders. Accordingly, the

Fig. 1: Average sum rate for different precoding designs.
Fig. 2: Sum rate vs. iteration index for different SNRs, QPSK inputs, θ = 0.1, and ω = 0.5.
Fig. 3: Average sum rate of two-user MIMO MAC with QPSK modulation.
Fig. 4: Achievable rate regions of two-user MIMO MAC with QPSK modulation.
Fig. 5: Average sum rate of two-user MIMO MAC with 16QAM modulation.
Fig.
6: Average sum rate of four-user massive MIMO MAC in suburban scenario with QPSK modulation.
Fig. 7: Average sum rate of four-user massive MIMO MAC in urban scenario with QPSK modulation.
Fig. 8: Average sum rate of massive MIMO MAC in urban scenario with QPSK modulation.

computational complexity^8 of the proposed Algorithm 1 in calculating the mutual information and the MSE matrix grows linearly with Σ_{k=1}^{K} Q_k^{2N_t}. In contrast, the conventional precoder design for instantaneous CSI at the transmitter in [6] requires additions over all possible transmit vectors of all users. For this reason, the computational complexity of the conventional precoding design scales linearly with (∏_{k=1}^{K} Q_k)^{2N_t}.

TABLE I: Number of additions required for calculating the mutual information and the MSE matrix.

Modulation            | QPSK      | 8PSK     | 16QAM
Algorithm 1           | 262144    | 6.7e+007 | 1.7e+010
Design Method in [6]  | 1.85e+019 | 7.9e+028 | 3.4e+038

From (8), the mutual information of the MIMO MAC can be expressed as I(d; y) = F − N_r log_2 e, where F = −E_{y,H}[log_2 Z(y, H)] and Z(y, H) = E_x e^{−‖y − Hx‖²}. The expectations over y and H are difficult to perform because the logarithm appears inside the average. The replica method [50] circumvents this difficulty by rewriting F as

F = −log_2 e · lim_{r→0} ∂/∂r ln E_{y,H}[(Z(y, H))^r].   (29)

Based on the RS assumption, S(Q) becomes S(Q) = −N_r ln(· · ·), and F = Σ_{k,n,m} [(c̃_{k,n,m} + r q̃_{k,n,m})(c_{k,n,m} + r q_{k,n,m}) + r (c̃_{k,n,m} − q̃_{k,n,m})(c_{k,n,m} − q_{k,n,m})] + · · · .   (50)

The gradient of R^w_{sum,asy} is assembled from the terms ∂R^w_{sum,asy}(B_1, B_2, · · · , B_K)/∂γ_{t,n}^{(k)}, ∂R^w_{sum,asy}(B_1, B_2, · · · , B_K)/∂ψ_{t,m}^{(k)}, and ∇_{B_l} ψ_{t,m}^{(k)} via the chain rule.

Footnote: For ease of notation, we only consider the case where all users have the same number of transmit antennas. Note that all results in this paper can be easily extended to the case when this restriction does not hold.

Footnote: To simplify our notation, in this paper, without loss of generality, we normalize the power of the noise to unity.
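A standard heuristic behind the replica identity quoted above is the small-r expansion (this is the usual motivating argument, not a rigorous proof):

```latex
% Small-r expansion motivating
% F = -log2(e) * lim_{r -> 0} d/dr  ln E[Z^r]:
\[
\ln \mathbb{E}\left[Z^{r}\right]
  = \ln \mathbb{E}\left[e^{r \ln Z}\right]
  = r\,\mathbb{E}[\ln Z] + O(r^{2}),
\qquad\text{hence}\qquad
\lim_{r \to 0} \frac{\partial}{\partial r}\,
  \ln \mathbb{E}\left[Z^{r}\right]
  = \mathbb{E}[\ln Z].
\]
```

Evaluating E[Z^r] for integer r and then continuing to r → 0, with the implied interchange of limit, derivative, and expectation, is precisely the non-rigorous step of the replica method acknowledged above.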
Footnote: The virtual MIMO channel model in (9) is only used to evaluate the asymptotic mutual information of the actual channel model in (1) in the large system regime. Therefore, the dimensionality of the virtual MIMO channel model does not need to be identical to that of the channel model in (1). The dimensionality of the virtual MIMO channel model is detailed in Appendix A.

T_t^{(k)}, R_t^{(k)}, γ_t^{(k)}, ψ_t^{(k)}, z_t^{(k)}, E_t^{(k)}, and Ω_t^{(k)}, t = 1, 2, · · · , k. Here, the indices [i_1, i_2, · · · , i_{K_1}] in (13) and (14) are [1, 2, · · · , k], and t refers to the individual users in the set [1, 2, · · · , k].

Footnote: This effect is called the phenomenon of phase coexistence. For details on this phenomenon, please refer to [29].

Footnote 8: The average over the noise vector v in (16) can be evaluated by employing the accurate approximation in [23, Eq. (57)]. Therefore, its computational burden is negligible compared to that of computing the expectation over d_{i_k} in (16).

Footnote 10: We note that the implementation complexity of Algorithm 1 in [6] is prohibitive for 16QAM inputs. In contrast, the complexity of the algorithm proposed in this paper is manageable for 16QAM inputs throughout the entire SNR region.

Based on the definition of γ_{t,n}^{(k)} and ψ_{t,m}^{(k)} in Appendix A, we know that γ_{t,n}^{(k)} and ψ_{t,m}^{(k)} are obtained by setting the partial derivatives of the asymptotic mutual information expression in Proposition 1 with respect to γ_{t,n}^{(k)} and ψ_{t,m}^{(k)} to zero. Then, according to the definition of R^w_{sum,asy} in (18), we obtain the partial derivatives ∂R^w_{sum,asy}(B_1, B_2, · · · , B_K)/∂γ_{t,n}^{(k)} and ∂R^w_{sum,asy}(B_1, B_2, · · · , B_K)/∂ψ_{t,m}^{(k)}. As a result, we have (58). From (58), we know that the asymptotic WSR maximization problem is equivalent to the following K subproblems (59), where l = 1, 2, · · · , K. From (16), and defining the eigenvalue decomposition of Q_l as Q_l = U_{q,l} Γ_{q,l} U_{q,l}^H, we have (62), where V_{q,l} ∈ C^{N_t×N_t} is an arbitrary unitary matrix. According to (13) and (62), B_l can be expressed as in (64). Any B_l which conforms to the expression in (64) satisfies B_l^H T_l^{(k)} B_l = Q_l.
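The power-comparison step (a) in the transmit-power bound that follows cites [56, Eq. (3.158)], a trace inequality of von Neumann type. As a numerical sanity check, assuming the Hermitian positive semidefinite version of the inequality (the exact statement of Eq. (3.158) in [56] is not reproduced in this excerpt), the bounds can be verified on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Von Neumann-type bounds for Hermitian PSD A, B with eigenvalues
# sorted in descending order a_1 >= ... >= a_n, b_1 >= ... >= b_n:
#   sum_i a_i b_{n-i+1}  <=  tr(AB)  <=  sum_i a_i b_i.
n = 5
for _ in range(100):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Y = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    A = X @ X.conj().T          # random Hermitian PSD matrix
    B = Y @ Y.conj().T
    a = np.sort(np.linalg.eigvalsh(A))[::-1]
    b = np.sort(np.linalg.eigvalsh(B))[::-1]
    t = np.trace(A @ B).real
    assert a @ b[::-1] - 1e-9 <= t <= a @ b + 1e-9
```

The upper bound is tight exactly when A and B share an eigenbasis with matched ordering, which mirrors the condition V_{q,l}^H U_{T_l} = I under which the equality in the proof holds.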
Furthermore, the transmit power of B_l is given by (66), where (a) is obtained based on [56, Eq. (3.158)]. The equality in (66) holds when V_{q,l}^H U_{T_l} = I_{N_t}. By substituting this condition into (64), we obtain (67). Eq. (67) indicates that, for I(d_l; z_l | B_l), setting the left singular matrix equal to U_{T_l} minimizes the transmit power tr(B_l B_l^H). This conclusion holds for all k = l, l + 1, · · · , L in (59). We assume that the optimal precoder is B*_l, l = 1, 2, · · · , K. If the left singular matrix of B*_l is not equal to U_{T_l}, we can always find another precoder with the left singular matrix equal to U_{T_l} which achieves the same value of Q_l as B*_l. Therefore, this precoder can achieve the same mutual information in (59) as B*_l, but with a smaller transmit power according to (66). Then, by increasing the transmit power of this precoder until the power constraint is met with equality, a larger mutual information value in (59) is achieved. This contradicts the assumption that B*_l is the optimal precoder. As a result, the left singular matrix of B*_l must be equal to U_{T_l}. Substituting the optimal B_l in (67) into (17), we obtain (19). This completes the proof.

REFERENCES

A. Goldsmith, S. A. Jafar, N. Jindal, and S. Vishwanath, "Capacity limits of MIMO channels," IEEE J. Sel. Areas Commun., vol. 21, pp. 684-702, Jun. 2003.
W. Yu, "Competition and cooperation in multi-user communication environments," Ph.D. dissertation, Stanford Univ., Stanford, CA, 2002.
W. Yu, W. Rhee, S. Boyd, and J. M. Cioffi, "Iterative water-filling for Gaussian vector multiple-access channels," IEEE Trans. Inform. Theory, vol.
50, pp. 145-152, Jan. 2004.
A. Lozano, A. M. Tulino, and S. Verdú, "Optimum power allocation for parallel Gaussian channels with arbitrary input distributions," IEEE Trans. Inform. Theory, vol. 52, pp. 3033-3051, Jul. 2006.
C. Xiao, Y. R. Zheng, and Z. Ding, "Globally optimal linear precoders for finite alphabet signals over complex vector Gaussian channels," IEEE Trans. Signal Process., vol. 59, pp. 3301-3314, Jul. 2011.
M. Wang, C. Xiao, and W. Zeng, "Linear precoding for MIMO multiple access channels with finite discrete input," IEEE Trans. Wireless Commun., vol. 10, pp. 3934-3942, Nov. 2011.
Y. Wu, C. Xiao, Z. Ding, X. Gao, and S. Jin, "Linear precoding for finite alphabet signaling over MIMOME wiretap channels," IEEE Trans. Veh. Technol., vol. 61, pp. 2599-2612, Jul. 2012.
Y. Wu, M. Wang, C. Xiao, Z. Ding, and X. Gao, "Linear precoding for MIMO broadcast channels with finite-alphabet constraints," IEEE Trans. Wireless Commun., vol. 11, pp. 2906-2920, Aug. 2012.
Y. Wu, C. Xiao, X. Gao, J. D. Matyjas, and Z.
Ding, "Linear precoder design for MIMO interference channels with finite-alphabet signaling," IEEE Trans. Commun., vol. 61, pp. 3766-3780, Sep. 2013.
C. Xiao and Y. R. Zheng, "On the mutual information and power allocation for vector Gaussian channels with finite discrete inputs," in Proc. IEEE Global Telecommun. Conf. (GLOBECOM 2008), New Orleans, USA, Dec. 2008, pp. 1-5.
--, "Transmit precoding for MIMO systems with partial CSI and discrete-constellation inputs," in Proc. IEEE Int. Telecommun. Conf. (ICC 2009), Dresden, Germany, Jun. 2009, pp. 1-5.
M. Payaró and D. P. Palomar, "On optimal precoding in linear vector Gaussian channels with arbitrary input distributions," in Proc. IEEE Int. Symp. Inform. Theory (ISIT 2009), Seoul, Korea, Jun. 2009, pp. 1085-1089.
M. Lamarca, "Linear precoding for mutual information maximization in MIMO systems," in Proc. Int. Symp. Wireless Commun. Sys. (ISWCS 2009), Siena, Italy, 2009, pp. 1-5.
J. Harshan and B. S.
Rajan, "On two-user Gaussian multiple access channels with finite input constellations," IEEE Trans. Inform. Theory, vol. 57, pp. 1299-1327, Mar. 2011.
J. Harshan and B. Rajan, "A novel power allocation scheme for two-user GMAC with finite input constellations," IEEE Trans. Wireless Commun., vol. 12, pp. 818-827, Feb. 2013.
A. M. Tulino, A. Lozano, and S. Verdú, "Capacity-achieving input covariance for single-user multi-antenna channels," IEEE Trans. Wireless Commun., vol. 5, pp. 662-671, Mar. 2006.
X. Gao, B. Jiang, X. Li, A. B. Gershman, and M. R. McKay, "Statistical eigenmode transmission over jointly-correlated MIMO channels," IEEE Trans. Inform. Theory, vol. 55, pp. 3735-3750, Aug. 2009.
E. Bjornson, R. Zakhour, D. Gesbert, and B. Ottersten, "Cooperative multicell precoding: Rate region characterization and distributed strategies with instantaneous and statistical CSI," IEEE Trans. Signal Process., vol. 58, pp. 4298-4310, Aug. 2010.
C.-K. Wen, S. Jin, and K.-K. Wong, "On the sum-rate of multiuser MIMO uplink channels with jointly-correlated Rician fading," IEEE Trans. Commun., vol. 59, pp. 2883-2895, Oct. 2011.
C Romain, M Debbah, J W Silverstein, IEEE Trans. Inform. Theory. 57C. Romain, M. Debbah, and J. W. Silverstein, "A deterministic equiv- alent for the analysis of correlated MIMO multiple access channels," IEEE Trans. Inform. Theory, vol. 57, pp. 3493-3514, Jun. 2011. Statistical eigenmodebased SDMA for two-user downlink. J Wang, S Jin, X Gao, K.-K Wong, E Au, IEEE Trans. Signal Process. 60J. Wang, S. Jin, X. Gao, K.-K. Wong, and E. Au, "Statistical eigenmode- based SDMA for two-user downlink," IEEE Trans. Signal Process., vol. 60, pp. 5371-5383, Oct. 2012. Transmit designs for the MIMO broadcast channel with statistical CSI. Y Wu, S Jin, X Gao, M R Mckay, C Xiao, IEEE Trans. Signal Process. 62Y. Wu, S. Jin, X. Gao, M. R. McKay, and C. Xiao, "Transmit designs for the MIMO broadcast channel with statistical CSI," IEEE Trans. Signal Process., vol. 62, pp. 4451-4466, Sep. 2014. Linear precoding for finitealphabet inputs over MIMO fading channels with statistical CSI. W Zeng, C Xiao, M Wang, J Lu, IEEE Trans. Signal Process. 60W. Zeng, C. Xiao, M. Wang, and J. Lu, "Linear precoding for finite- alphabet inputs over MIMO fading channels with statistical CSI," IEEE Trans. Signal Process., vol. 60, pp. 3134-3148, Jun. 2012. Asymptotic analysis of spatially correlated MIMO multiple-access channels with arbitrary signaling inputs for joint and separate decoding. C.-K Wen, K.-K Wong, IEEE Trans. Inform. Theory. 53C.-K. Wen and K.-K. Wong, "Asymptotic analysis of spatially correlated MIMO multiple-access channels with arbitrary signaling inputs for joint and separate decoding," IEEE Trans. Inform. Theory, vol. 53, pp. 252- 268, Jan. 2007. Large-system analysis of correlated MIMO multiple access channels with arbitrary signaling in the presence of interference. M Girnyk, M Vehkaperä, L K Rasmussen, IEEE Trans. Wireless. Commun. 4M. Girnyk, M. Vehkaperä, and L. K. 
Rasmussen, "Large-system analysis of correlated MIMO multiple access channels with arbitrary signaling in the presence of interference," IEEE Trans. Wireless. Commun., vol. 4, pp. 2060-2073, Apr. 2014. Deficiencies of 'Kronecker' MIMO radio channel model. H Ozcelik, M Herdin, W Weichselberger, J Wallace, E Bonek, Electron. Lett. 39H. Ozcelik, M. Herdin, W. Weichselberger, J. Wallace, and E. Bonek, "Deficiencies of 'Kronecker' MIMO radio channel model," Electron. Lett., vol. 39, pp. 1209-1210, Aug. 2003. A stochastic MIMO channel model with joint correlation of both link ends. W Weichselberger, M Herdin, H Ozcelik, E Bonek, IEEE Trans. Wireless. Commun. 5W. Weichselberger, M. Herdin, H. Ozcelik, and E. Bonek, "A stochastic MIMO channel model with joint correlation of both link ends," IEEE Trans. Wireless. Commun., vol. 5, pp. 90-100, Jan. 2006. Weighted sum-rate maximization using weighted MMSE for MIMO-BC beamforming design. S S Christensen, R Agarwal, E Carvalho, J M Cioffi, IEEE Trans. Wireless. Commun. 7S. S. Christensen, R. Agarwal, E. de Carvalho, and J. M. Cioffi, "Weighted sum-rate maximization using weighted MMSE for MIMO- BC beamforming design," IEEE Trans. Wireless. Commun., vol. 7, pp. 4792-4799, Dec. 2008. A statistical-mechanics approach to large-system analysis of CDMA multiuser detectors. T Tanaka, IEEE Trans. Inform. Theory. 48T. Tanaka, "A statistical-mechanics approach to large-system analysis of CDMA multiuser detectors," IEEE Trans. Inform. Theory, vol. 48, pp. 2888-2910, Nov. 2002. Vector precoding for wireless MIMO systems and its replica analysis. R Müller, D Guo, A Moustakas, IEEE J. Sel. Areas Commun. 26R. Müller, D. Guo, and A. Moustakas, "Vector precoding for wireless MIMO systems and its replica analysis," IEEE J. Sel. Areas Commun., vol. 26, pp. 486-496, Apr. 2008. Randomly spread CDMA: Asymptotics via statistical physics. D Guo, S Verdú, IEEE Trans. Inform. Theory. 516D. Guo and S. 
Verdú, "Randomly spread CDMA: Asymptotics via statistical physics," IEEE Trans. Inform. Theory, vol. 51, no. 6, pp. 1983-2010, June 2005. Noncooperative cellular wireless with unlimited numbers of base station antennas. T L Marzetta, IEEE Trans. Wireless. Commun. 9T. L. Marzetta, "Noncooperative cellular wireless with unlimited num- bers of base station antennas," IEEE Trans. Wireless. Commun., vol. 9, pp. 3590-3600, Nov. 2010. Impact of correlation on the capacity of multi-antenna channels. A M Tulino, A Lozano, S Verdú, IEEE Trans. Inform. Theory. 51A. M. Tulino, A. Lozano, and S. Verdú, "Impact of correlation on the capacity of multi-antenna channels," IEEE Trans. Inform. Theory, vol. 51, pp. 2491-2509, Jul. 2005. Optimum power allocation for single-user MIMO and multi-user MIMO-MAC with partial CSI. A Soysal, S Ulukus, IEEE J. Sel. Areas Commun. 25A. Soysal and S. Ulukus, "Optimum power allocation for single-user MIMO and multi-user MIMO-MAC with partial CSI," IEEE J. Sel. Areas Commun., vol. 25, pp. 1402-1412, Sep. 2007. Fading correlation and its effect on the capacity of multielement antenna systems. D.-S Shiu, G J Foschini, M J Gans, J M Kahn, IEEE Trans. Commun. 48D.-S. Shiu, G. J. Foschini, M. J. Gans, and J. M. Kahn, "Fading correlation and its effect on the capacity of multielement antenna systems," IEEE Trans. Commun., vol. 48, pp. 502-513, Mar. 2000. A discretetime model for triply selective MIMO Rayleigh fading channels. C Xiao, J Wu, S Y Leong, Y R Zheng, K B Letaief, IEEE Trans. Wireless Commun. 3C. Xiao, J. Wu, S. Y. Leong, Y. R. Zheng, and K. B. Letaief, "A discrete- time model for triply selective MIMO Rayleigh fading channels," IEEE Trans. Wireless Commun., vol. 3, pp. 1678-1688, Sep. 2004. Deconstructing multiantenna fading channels. A M Sayeed, IEEE Trans. Signal Process. 50A. M. Sayeed, "Deconstructing multiantenna fading channels," IEEE Trans. Signal Process., vol. 50, pp. 2563-2579, Oct. 2002. 
T M Cover, J A Thomas, Elements of Information Theory. New York: Wiely2nd edT. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. New York: Wiely, 2006. MIMO capacity through correlated channels in the presence of correlated interferers and noise: A (not so) large n analysis. A L Moustakas, S H Simon, A M Sengupta, IEEE Trans. Inform. Theory. 49A. L. Moustakas, S. H. Simon, and A. M. Sengupta, "MIMO capacity through correlated channels in the presence of correlated interferers and noise: A (not so) large n analysis," IEEE Trans. Inform. Theory, vol. 49, pp. 2545-2561, Oct. 2003. Gradient of mutual information in linear vector Gaussian channels. D P Palomar, S Verdú, IEEE Trans. Inform. Theory. 52D. P. Palomar and S. Verdú, "Gradient of mutual information in linear vector Gaussian channels," IEEE Trans. Inform. Theory, vol. 52, pp. 141-154, Jan. 2006. R Horn, C Johnson, Matrix Analysis. New YorkCambridge University PressR. Horn and C. Johnson, Matrix Analysis. New York: Cambridge University Press, 1985. S Boyd, L Vandenberghe, Convex Optimization. New YorkCambridge University PressS. Boyd and L. Vandenberghe, Convex Optimization. New York: Cambridge University Press, 2004. Scaling up MIMO: Opportunities and challenges with very large arrays. F Rusek, D Persson, B K Lau, E G Larsson, T L Marzetta, O Edfors, F Tufvesson, IEEE Signal Process. Mag. 30F. Rusek, D. Persson, B. K. Lau, E. G. Larsson, T. L. Marzetta, O. Edfors, and F. Tufvesson, "Scaling up MIMO: Opportunities and challenges with very large arrays," IEEE Signal Process. Mag., vol. 30, pp. 40-60, Jan. 2013. Joint spatial division and multiplexing-The large-scale array regime. A Adhikary, J Nam, J-Y. Ahn, G Caire, IEEE Trans. Inform. Theory. 59A. Adhikary, J. Nam, J-Y. Ahn, and G. Caire, "Joint spatial division and multiplexing-The large-scale array regime," IEEE Trans. Inform. Theory, vol. 59, pp. 3735-3750, Oct. 2013. Massive MIMO for next generation wireless systems. 
E G Larsson, F Tufvesson, O Edfors, T L Marzetta, IEEE Commun. Mag. 52E. G. Larsson, F. Tufvesson, O. Edfors, and T. L. Marzetta, "Massive MIMO for next generation wireless systems," IEEE Commun. Mag., vol. 52, pp. 186-195, Feb. 2014. Ergodic rate analysis for multi-pair two-way relay large-scale antenna system. X Liang, S Jin, X Gao, K.-K Wong, Proc. IEEE Int. Telecommun. Conf. (ICC 2014). IEEE Int. Telecommun. Conf. (ICC 2014)Sydney, AustraliaX. Liang, S. Jin, X. Gao, and K.-K. Wong, "Ergodic rate analysis for multi-pair two-way relay large-scale antenna system," in Proc. IEEE Int. Telecommun. Conf. (ICC 2014), Sydney, Australia, Jun. 2014, pp. 1-5. Performance limits of massive MIMO systems based on bayes-optimal inference. C.-K Wen, Y Wu, K.-K Wong, R Schober, P Ting, submitted to Proc. IEEE Int. Telecommun. Conf. (ICC 2015). OnlineC.-K. Wen, Y. Wu, K.-K. Wong, R. Schober, and P. Ting, "Performance limits of massive MIMO systems based on bayes-optimal inference," submitted to Proc. IEEE Int. Telecommun. Conf. (ICC 2015), [Online]. MIMO Gaussian channels with arbitrary input: Optimal precoding and power allocation. F Pérez-Cruz, M R D Rodrigues, S Verdú, IEEE Trans. Inform. Theory. 56F. Pérez-Cruz, M. R. D. Rodrigues, and S. Verdú, "MIMO Gaussian channels with arbitrary input: Optimal precoding and power allocation," IEEE Trans. Inform. Theory, vol. 56, pp. 1070-1084, Mar. 2010. J Salo, G Galdo, J Salmi, P Kyösti, M Milojevic, D Laselva, C Schneider, 3GPP TR 25.996MATLAB implementation of the 3GPP Spatial Channel Model. J. Salo, G. Del Galdo, J. Salmi, P. Kyösti, M. Milojevic, D. La- selva, and C. Schneider. (2005, Jan.) MATLAB implementation of the 3GPP Spatial Channel Model (3GPP TR 25.996) [Online]. Available: http://www.tkk.fi/Units/Radio/scm/. Theory of spin glasses. S F Edwards, P W Anderson, J. of Physics F: Metal Physics. 5S. F. Edwards and P. W. Anderson, "Theory of spin glasses," J. of Physics F: Metal Physics, vol. 5, pp. 965-974, May 1975. 
Statistical physics of spin glasses and information processing: An introduction. H Nishimori, Ser. Number 111 in Int. Series on Monographs on Physics. Oxford University PressH. Nishimori, Statistical physics of spin glasses and information pro- cessing: An introduction. Ser. Number 111 in Int. Series on Monographs on Physics. Oxford University Press, 2001. A new approach for mutual information analysis of large dimensional multi-antenna channels. W Hachem, O Khorunzhiy, P Loubaton, J Najim, L Pastur, IEEE Trans. Inform. Theory. 54W. Hachem, O. Khorunzhiy, P. Loubaton, J. Najim, and L. Pastur, "A new approach for mutual information analysis of large dimensional multi-antenna channels," IEEE Trans. Inform. Theory, vol. 54, pp. 3987- 4004, Sep. 2008. Applications of the Lindeberg principle in communications and statistical learning. S Korada, A Montanari, IEEE Trans. Inform. Theory. 57S. Korada and A. Montanari, "Applications of the Lindeberg principle in communications and statistical learning," IEEE Trans. Inform. Theory, vol. 57, pp. 2440-2450, Apr. 2011. Vector precoding for Gaussian MIMO broadcast channels: Impact of replica symmetry breaking. B M Zaidel, R R Müller, A L Moustakas, R De Miguel, IEEE Trans. Inform. Theory. 58B. M. Zaidel, R. R. Müller, A. L. Moustakas, and R. de Miguel, "Vector precoding for Gaussian MIMO broadcast channels: Impact of replica symmetry breaking," IEEE Trans. Inform. Theory, vol. 58, pp. 1413- 1440, Mar. 2012. The dynamics of message passing on dense graphs, with applications to compressed sensing. M Bayati, A Montanari, IEEE Trans. Inform. Theory. 57M. Bayati and A. Montanari, "The dynamics of message passing on dense graphs, with applications to compressed sensing," IEEE Trans. Inform. Theory, vol. 57, pp. 764-785, Feb. 2011. MIMO Transceiver Design via Majorization Theory. D Palomar, Y Jiang, Now PublishersDelft, The NetherlandsD. Palomar and Y. Jiang, MIMO Transceiver Design via Majorization Theory. 
Delft, The Netherlands: Now Publishers, 2006.
[]
[ "High-frequency Graviton from Inflaton Oscillation", "High-frequency Graviton from Inflaton Oscillation" ]
[ "Yohei Ema ", "Ryusuke Jinno ", "Kazunori Nakayama \nDepartment of Physics\nFaculty of Science\nThe University of Tokyo\nBunkyo-ku113-0033TokyoJapan (\n\nKavli IPMU (WPI)\nThe University of Tokyo\n277-8583KashiwaChibaJapan\n", "\nNotkestrabe 85D-22607HamburgGermany (\n" ]
[ "Department of Physics\nFaculty of Science\nThe University of Tokyo\nBunkyo-ku113-0033TokyoJapan (", "Kavli IPMU (WPI)\nThe University of Tokyo\n277-8583KashiwaChibaJapan", "Notkestrabe 85D-22607HamburgGermany (" ]
[]
We point out that there is a high-frequency tail of the stochastic inflationary gravitational wave background that scales as f −1/2 with frequency f . This contribution comes from the graviton vacuum fluctuation amplified by the inflaton coherent oscillation during the reheating stage. It contains information on inflaton properties such as the inflaton mass as well as the thermal history of the early Universe.
10.1088/1475-7516/2020/09/015
[ "https://arxiv.org/pdf/2006.09972v1.pdf" ]
219,721,369
2006.09972
7e5a3d82cd9365e5fbe150f18f4712a011fe6516
High-frequency Graviton from Inflaton Oscillation

17 Jun 2020

Yohei Ema, Ryusuke Jinno, Kazunori Nakayama

Department of Physics, Faculty of Science, The University of Tokyo, Bunkyo-ku, 113-0033 Tokyo, Japan; Kavli IPMU (WPI), The University of Tokyo, 277-8583 Kashiwa, Chiba, Japan; Notkestraße 85, D-22607 Hamburg, Germany

We point out that there is a high-frequency tail of the stochastic inflationary gravitational wave background that scales as $f^{-1/2}$ with frequency $f$. This contribution comes from the graviton vacuum fluctuation amplified by the inflaton coherent oscillation during the reheating stage. It contains information on inflaton properties such as the inflaton mass as well as the thermal history of the early Universe.

Introduction

Gravitational waves (GWs) provide us with a new way to probe our universe. They interact only very weakly with matter, and hence preserve the information on source objects imprinted in their spectrum during propagation. In particular, GWs are a unique way to probe the early universe, as all the other messengers (such as photons and neutrinos) interact strongly with matter and hence lose their information in the (sufficiently) early universe. For instance, inflation generically predicts GWs that are excited during the quasi de Sitter phase [1-3]. They are one of the main targets of the modern cosmological observatories since they provide us with information on the inflationary energy scale. GWs are also expected to be produced during the course of the thermal history after inflation, for example by preheating [4], cosmological defects [5], and phase transitions [6,7], although the existence of these contributions is more model-dependent (see Ref. [8] for a recent review). In this paper, we point out the existence of yet another source of GWs.
In general, an inflaton starts to oscillate around the bottom of its potential after inflation, before eventually decaying into other particles and hence completing the reheating. This coherent oscillation of the inflaton during the inflaton oscillation epoch produces GWs through gravitational interaction, which can be interpreted as inflaton annihilation into gravitons mediated by gravity itself [9]. This contribution extends toward the high frequency region beyond the inflationary GWs. Since it is produced from the inflaton oscillation, the GW spectrum contains a variety of information on the inflaton sector, such as the inflaton mass scale and more generally the shape of the inflaton potential around its bottom. It also depends on the inflaton decay rate and hence the reheating temperature. It is quite challenging to detect this contribution with the current and near-future GW detectors [10][11][12][13][14][15][16] (see also Refs. [17][18][19][20][21] for ideas for high-frequency graviton detection), not only because of its overall normalization but also because of its high characteristic frequency. Furthermore, there are other GW sources in the high frequency region that we expect to be present in general, such as the contribution from the standard model (SM) thermal plasma [22,23], the bremsstrahlung from the inflaton decay [24,25] as well as those from preheating [4]. These other contributions can hide our GWs, depending on the model parameters and the frequency. Nevertheless, we think it meaningful to point out the existence of the GWs produced during the inflaton oscillation epoch, as this contribution imprints quite interesting information on the inflaton sector that is usually hard to reach. We hope that a development of GW detection technology eventually enables us to probe the high frequency region such that we can gain information on the inflaton sector in the future. This paper is organized as follows. In Sec. 
2, we review the equation of motion of the graviton and the graviton production during inflation. Sec. 3 is the main part of this paper, where we compute the GW production from the inflaton oscillation after inflation both analytically and numerically. In Sec. 4, we show the resultant GW spectrum, especially its dependence on the model parameters such as the inflaton mass and the reheating temperature. Sec. 5 is devoted to the discussion, where we compare our contribution with the contributions from the SM thermal plasma and the bremsstrahlung from the inflaton decay.

Graviton in inflationary universe

We consider the Einstein-Hilbert action plus the inflaton action as
$$ S = \int dt\, d^3x\, \sqrt{-g}\left[ \frac{M_P^2}{2} R + \mathcal{L}_\phi \right]. \tag{1} $$
The metric is expanded as
$$ ds^2 = -dt^2 + a^2(t)\,(\delta_{ij} + h_{ij})\,dx^i dx^j = a^2(\tau)\left[ -d\tau^2 + (\delta_{ij} + h_{ij})\,dx^i dx^j \right], \tag{2} $$
where we have taken the transverse-traceless gauge: $h^i_{\ i} = \partial_i h_{ij} = 0$. The graviton action is given by
$$ S = \int dt\, d^3x\, a^3\, \frac{M_P^2}{8}\left[ (\dot h_{ij})^2 - \frac{1}{a^2}(\partial_l h_{ij})^2 \right] = \sum_{\lambda=+,\times} \int d\tau \int \frac{d^3k}{(2\pi)^3}\, \frac{1}{2}\left[ |h'_\lambda(k)|^2 - \omega_k^2\, |h_\lambda(k)|^2 \right], \qquad \omega_k^2 \equiv k^2 - \frac{a^2 R}{6}, \tag{3} $$
where the prime denotes the derivative with respect to the conformal time $\tau$, and $\lambda = +,\times$ denotes the two polarization states of the graviton. We have defined the canonical graviton in momentum space as
$$ \frac{a M_P}{2}\, h_{ij}(t, \vec x) = \sum_{\lambda=+,\times} \int \frac{d^3k}{(2\pi)^3}\, h_\lambda(\vec k, \tau)\, e^{i\vec k\cdot\vec x}\, \epsilon^\lambda_{ij}, \tag{4} $$
where $\epsilon^\lambda_{ij}$ denotes the polarization tensor, which satisfies $\epsilon^\lambda_{ij}\,\epsilon^{\lambda'\,ij} = \delta^{\lambda\lambda'}$. The free graviton action (3) is the same as that of a minimally-coupled massless scalar field. Thus gravitational production of gravitons during the inflation and reheating era is treated in the same way as for the minimal scalar field, which is extensively studied in Refs. [9,26-28]. Let us introduce creation and annihilation operators for the graviton:
$$ h_\lambda(\vec k, \tau) = h_\lambda(k, \tau)\, a_{\lambda,\vec k} + h^*_\lambda(k, \tau)\, a^\dagger_{\lambda,-\vec k}, \tag{5} $$
where they satisfy the commutation relation $[a_{\lambda,\vec k},\, a^\dagger_{\lambda',\vec k'}] = (2\pi)^3\, \delta(\vec k - \vec k')\, \delta_{\lambda\lambda'}$.
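During exact de Sitter inflation, $a(\tau) = -1/(H_{\rm inf}\tau)$ and $R = 12H_{\rm inf}^2$, so the frequency in Eq. (3) becomes $\omega_k^2 = k^2 - 2/\tau^2$. The resulting mode equation can be integrated from plane-wave (Bunch-Davies-like) initial data as a sanity check of the superhorizon freeze-out of the canonical mode, $|h_\lambda| \to aH_{\rm inf}/(\sqrt{2}\,k^{3/2}) = 1/(\sqrt{2}\,k^{3/2}|\tau|)$. A minimal sketch with a hand-rolled RK4 stepper (the exact de Sitter background and the chosen integration range are assumptions of this sketch, not the paper's full setup):

```python
import math, cmath

# Mode equation h'' + (k^2 - 2/tau^2) h = 0, i.e. Eq. (3)'s omega_k^2
# evaluated on an exact de Sitter background (a = -1/(H tau), R = 12 H^2).
def rhs(tau, h, dh, k):
    return dh, -(k * k - 2.0 / (tau * tau)) * h

def integrate(k, tau0, tau1, n):
    # Plane-wave (Bunch-Davies-like) initial data deep inside the horizon:
    # h = e^{-ik tau}/sqrt(2k), h' = -ik h; then RK4 toward tau -> 0^-.
    h = cmath.exp(-1j * k * tau0) / math.sqrt(2.0 * k)
    dh = -1j * k * h
    step = (tau1 - tau0) / n
    tau = tau0
    for _ in range(n):
        a1, b1 = rhs(tau, h, dh, k)
        a2, b2 = rhs(tau + step / 2, h + step / 2 * a1, dh + step / 2 * b1, k)
        a3, b3 = rhs(tau + step / 2, h + step / 2 * a2, dh + step / 2 * b2, k)
        a4, b4 = rhs(tau + step, h + step * a3, dh + step * b3, k)
        h += step / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
        dh += step / 6 * (b1 + 2 * b2 + 2 * b3 + b4)
        tau += step
    return h

k = 1.0
h_late = integrate(k, -50.0, -0.01, 200_000)
# Superhorizon freeze-out: |h| -> 1/(sqrt(2) k^{3/2} |tau|), so this ratio -> 1
print(abs(h_late) * math.sqrt(2.0) * k ** 1.5 * 0.01)  # ≈ 1
```

The few-percent deviation from unity comes from truncating the initial data to a pure plane wave at $k\tau_0 = -50$.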
The equation of motion is given by
$$ h''_\lambda(k) + \omega_k^2\, h_\lambda(k) = 0. \tag{6} $$
The solution to the equation of motion that satisfies the Bunch-Davies boundary condition during inflation, and its approximate form in the high and low frequency limits, is given by
$$ h_\lambda(k, \tau) = -\frac{1}{\sqrt{2k}} \sqrt{\frac{-\pi k\tau}{2}}\, H^{(1)}_{3/2}(-k\tau) \simeq \begin{cases} \dfrac{1}{\sqrt{2k}}\, e^{-ik\tau} & \text{for } -k\tau \gg 1, \\[1.5ex] \dfrac{i\, a H_{\rm inf}}{\sqrt{2}\, k^{3/2}} & \text{for } -k\tau \ll 1, \end{cases} \tag{7} $$
where $H^{(1)}_{3/2}(x)$ denotes the Hankel function of the first kind and we used $\tau = -(aH_{\rm inf})^{-1}$ during inflation, with $H_{\rm inf}$ being the inflationary Hubble scale. It is well known that the superhorizon modes ($-k\tau_{\rm end} \ll 1$, where the subscript "end" represents the end of inflation) have a (nearly) scale invariant power spectrum:#1
$$ \mathcal{P}_h(k, \tau_{\rm end}) \equiv \frac{k^3}{\pi^2 a^2}\, \left| h_\lambda(k) \right|^2 = \frac{H_{\rm inf}^2}{2\pi^2} \qquad \text{for } -k\tau_{\rm end} \ll 1. \tag{8} $$
On the other hand, shorter wavelength modes ($-k\tau_{\rm end} \gg 1$) never exit the horizon. However, this does not mean that shorter wavelength modes are not excited. Below we will evaluate the production of these short wavelength graviton modes during the inflaton oscillation epoch after inflation.

#1 Often the graviton power spectrum is defined in the original basis before the canonical rescaling. In such a case the graviton power spectrum is given by $P_h(k) \equiv (2/M_P)^2\, \mathcal{P}_h(k) = 2H_{\rm inf}^2/(\pi M_P)^2$ and the tensor-to-scalar ratio is defined as $r = P_h(k)/P_\zeta(k)$, with $P_\zeta$ being the power spectrum of the curvature perturbation.

High frequency graviton production

Let us consider the high-frequency modes that never exit the horizon: $-k\tau_{\rm end} \gg 1$. After inflation ends, the inflaton coherent oscillation begins and the graviton wave function is modified through the (rapidly-oscillating) $a^2 R$ term in the equation of motion. In this case it is convenient to parameterize the wave function in terms of the Bogoliubov coefficients $\alpha_k$, $\beta_k$ as
$$ h_\lambda(k, \tau) = \alpha_k(\tau)\, v_k(\tau) + \beta_k(\tau)\, v^*_k(\tau), \tag{9} $$
where
$$ v_k(\tau) = \frac{e^{-i\Omega_k(\tau)}}{\sqrt{2\omega_k}}, \qquad \Omega_k(\tau) \equiv \int^\tau \omega_k\, d\tau'. \tag{10} $$
The equation of motion is rewritten as
$$ \alpha'_k(\tau) = \frac{\omega'_k}{2\omega_k}\, \beta_k(\tau)\, e^{2i\Omega_k}, \qquad \beta'_k(\tau) = \frac{\omega'_k}{2\omega_k}\, \alpha_k(\tau)\, e^{-2i\Omega_k}. \tag{11} $$
They satisfy the normalization condition $|\alpha_k(\tau)|^2 - |\beta_k(\tau)|^2 = 1$. The initial condition is $\alpha_k = 1$ and $\beta_k = 0$ for $-k\tau \to \infty$. The renormalized graviton energy density is expressed as
$$ a^4(\tau)\, \rho_h(\tau) = 2 \int \frac{d^3k}{(2\pi)^3}\, \omega_k\, |\beta_k(\tau)|^2. \tag{12} $$
We define the graviton energy spectrum as
$$ \rho_h(\tau) = \int \rho_{h,k}(\tau)\, d\ln k, \qquad a^4(\tau)\, \rho_{h,k}(\tau) = \frac{k^3 \omega_k}{\pi^2}\, |\beta_k(\tau)|^2. \tag{13} $$
Thus it is sufficient to evaluate $\beta_k(\tau)$ to obtain the graviton energy spectrum. Numerically, one can integrate equation (11) to obtain $\beta_k(\tau)$ given an inflation model. Fig. 1 shows the result of our numerical calculation. We assumed a chaotic inflation model with a quadratic potential $V = m_\phi^2 \phi^2/2$ for concreteness,#2 and solved the following equations:
$$ 3 M_P^2 H^2 = a^2\, (\rho_\phi + \rho_r), \tag{14} $$
$$ \phi'' + (2H + a\Gamma_\phi)\,\phi' + a^2\, \frac{dV}{d\phi} = 0, \tag{15} $$
$$ \rho'_r + 4H\rho_r = a\Gamma_\phi\, \rho_\phi, \tag{16} $$
where $\rho_\phi = \phi'^2/2a^2 + V$ is the inflaton energy density, $\rho_r$ is the radiation energy density, $\Gamma_\phi$ is the inflaton total decay width and $H = a'/a$ denotes the conformal Hubble scale. Eq. (11) is solved numerically with this background. The inflaton decay rate is (hypothetically) taken to be zero in the left panel and the spectrum is evaluated during the inflaton domination. In the right panel the inflaton decay rate is taken to be $\Gamma_\phi = (10^{-1}, 10^{-2}, 10^{-3}) \times m_\phi$ and the spectrum is evaluated during the radiation domination. One can see that the graviton energy spectrum shows $k^{-1/2}$ behavior as expected. Below we compare this result with an analytic estimate.
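Equations (11) can be integrated directly with any ODE stepper. The sketch below uses a hand-rolled RK4 and a toy oscillating frequency, $\omega_k^2(\tau) = k^2 + A\cos(2m\tau)$, standing in for the $a^2R/6$ term of the full inflaton background (the toy frequency, its parameter values and the integration range are illustrative assumptions, not the paper's setup); it checks that $|\alpha_k|^2 - |\beta_k|^2 = 1$ is preserved while $|\beta_k|^2$ grows from zero inside a resonance band:

```python
import math, cmath

def omega(tau, k, A, m):
    # Toy frequency mimicking the oscillating a^2 R / 6 contribution
    return math.sqrt(k * k + A * math.cos(2.0 * m * tau))

def domega(tau, k, A, m):
    # d(omega)/d(tau)
    return -A * m * math.sin(2.0 * m * tau) / omega(tau, k, A, m)

def rhs(tau, y, k, A, m):
    # State y = (alpha, beta, Omega); Eq. (11) plus Omega' = omega
    alpha, beta, Om = y
    w, dw = omega(tau, k, A, m), domega(tau, k, A, m)
    f = dw / (2.0 * w)
    return (f * beta * cmath.exp(2j * Om),
            f * alpha * cmath.exp(-2j * Om),
            w)

def rk4_step(y, tau, h, k, A, m):
    def add(u, v, s):
        return tuple(a + s * b for a, b in zip(u, v))
    k1 = rhs(tau, y, k, A, m)
    k2 = rhs(tau + h / 2, add(y, k1, h / 2), k, A, m)
    k3 = rhs(tau + h / 2, add(y, k2, h / 2), k, A, m)
    k4 = rhs(tau + h, add(y, k3, h), k, A, m)
    return tuple(y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(3))

k, A, m = 1.0, 0.5, 1.0          # toy parameters; m ~ k sits inside the resonance band
y, tau, h = (1.0 + 0j, 0.0 + 0j, 0.0), 0.0, 1e-3
for _ in range(30_000):           # integrate to tau = 30 with vacuum initial data
    y = rk4_step(y, tau, h, k, A, m)
    tau += h
alpha, beta, _ = y
print(abs(alpha) ** 2 - abs(beta) ** 2)  # ~ 1: normalization preserved
print(abs(beta) ** 2)                    # > 1: nonzero particle production
```

The same stepper applies to the realistic case once $\omega_k^2 = k^2 - a^2R/6$ is supplied from a numerical solution of Eqs. (14)-(16).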
In order to evaluate the graviton energy spectrum, it is convenient to interpret the graviton production during the reheating era as the inflaton annihilation into a graviton pair, as emphasized in Refs. [9,26,27]. Taking account of the two polarization states of the graviton, the effective inflaton annihilation rate is estimated as#3
$$ \Gamma^{\rm (grav)}(\phi\phi \to hh) \simeq \frac{1}{192\pi}\, \frac{\rho_\phi\, m_\phi}{M_P^4}. \tag{17} $$
Each annihilation produces a pair of gravitons with energy $m_\phi$ that are then redshifted away. Such a process continues until the end of reheating, resulting in a continuum graviton spectrum in the present universe. The graviton spectrum is calculated as
$$ \rho_{h,k}(t_0) = C \times \rho_\phi(t_k)\, \frac{\Gamma^{\rm (grav)}(\phi\phi \to hh)}{H_k}\, \left( \frac{a(t_k)}{a_0} \right)^4 \tag{18} $$
$$ \simeq \begin{cases} \dfrac{3C}{64\pi}\, m_\phi H_{\rm end}^3 \left( \dfrac{m_\phi a_{\rm end}}{k} \right)^{1/2} \left( \dfrac{a_{\rm end}}{a_0} \right)^4 & \text{for } k \lesssim k_{\rm high}, \\[2.5ex] \dfrac{3C}{64\pi}\, m_\phi H_{\rm end}^3 \left( \dfrac{m_\phi a_{\rm end}}{k_{\rm high}} \right)^{1/2} \left( \dfrac{a_{\rm end}}{a_0} \right)^4 e^{-2k^2/k_{\rm high}^2} & \text{for } k \gtrsim k_{\rm high}, \end{cases} \tag{19} $$
where $C$ is an $O(1)$ coefficient and $t_k$ is defined through $k = a(t_k)\, m_\phi$, i.e., the cosmic time at which the present graviton frequency $k/a_0$ was emitted. Here $k_{\rm high}$ is defined as $k_{\rm high} = C_k\, m_\phi\, a(H = \Gamma_\phi)$, with $C_k$ being an $O(1)$ coefficient. The analytic estimate (19) with $C = 3$ and $C_k = 1.5$ is also plotted in Fig. 1 and it agrees well with the numerical result. We used the function $g$ in Eq. (22) to interpolate between $k \lesssim k_{\rm high}$ and $k \gtrsim k_{\rm high}$. Note that there is a small deviation around $k \sim a_{\rm end} H_{\rm end}$. This is because there is an intermediate epoch around the end of inflation, in which the inflaton oscillation may not be regarded as a harmonic oscillation, while the analytic estimate (19) assumes the harmonic inflaton oscillation. For low scale inflation models with a large hierarchy between $m_\phi$ and $H_{\rm inf}$, it is more difficult to treat this intermediate epoch, but the high frequency behavior $k \gg a_{\rm end} m_\phi$ is expected to be well described by the above picture. In the above picture, the typical momentum of gravitons being produced is constant in time and is around the inflaton mass $m_\phi$. Therefore, it is crucial to use the inflaton equation of motion (15) to get the correct result, since otherwise the scale $m_\phi$ never appears in the system. In order to stress this point, in Fig. 1 we also show as the dashed lines the graviton spectrum computed assuming a smooth background evolution
$$ \rho_\phi = \frac{\rho_{\phi,\rm inf}}{1 + (a/a_{\rm end})^3\, e^{\Gamma_\phi (t - t_{\rm end})}} \simeq \begin{cases} \rho_{\phi,\rm inf} & \text{for } t < t_{\rm end}, \\[1ex] \rho_{\phi,\rm inf} \left( \dfrac{a}{a_{\rm end}} \right)^{-3} e^{-\Gamma_\phi (t - t_{\rm end})} & \text{for } t > t_{\rm end}, \end{cases} \tag{20} $$
together with Eqs. (14) and (16), which do not have the timescale $m_\phi$. One can clearly see that the resultant graviton spectrum is highly suppressed at high frequencies compared to the solid lines. Thus, the inflaton oscillation is crucial for the high-frequency behavior of the spectrum.

#2 The chaotic inflation [29] with a quadratic potential is now disfavored by the cosmological observation, but a slight modification of the potential makes the model viable [30,31]. Since we are mainly interested in the inflaton oscillation regime, such a modification is irrelevant for the discussion below.

#3 Ref. [28] calculated the gravitational production rate analytically, including the $O(1)$ numerical factor, for a scalar particle. The same result applies to graviton production, since the graviton action is the same as that of the minimal massless scalar.

Figure 1 panels: (Left) The inflaton decay rate is taken to vanish, and the spectrum is evaluated during the inflaton domination. The ratio $\rho_h/\rho_\phi$ is multiplied with the ratio of the scale factor at the inflation end and at the evaluation time to cancel out the dependence on the evaluation time. (Right) The inflaton decay rate is taken to be $\Gamma_\phi = 10^{-1} m_\phi$ (blue), $10^{-2} m_\phi$ (red) and $10^{-3} m_\phi$ (green), and the spectrum is evaluated during the radiation domination.

Stochastic gravitational wave background revisited

Now we plot the present stochastic GW background spectrum in terms of $\Omega_{\rm GW}(k) \equiv \rho_{h,k}/\rho_{\rm cr}$.
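The shape of the oscillation-induced spectrum, a $k^{-1/2}$ tail smoothly cut off at $k_{\rm high}$, can be sanity-checked numerically. A minimal sketch in Python (the explicit interpolating function $g(k)$ below is a reconstruction consistent with the two branches of Eq. (19), not a form quoted verbatim from the text, and the overall normalization is dropped):

```python
import math

def g(k, k_high):
    # Interpolating function between the two branches of Eq. (19):
    # ~ 1 for k << k_high, ~ (k/k_high)^{1/2} e^{-2k^2/k_high^2} for k >> k_high.
    return (1.0 + math.sqrt(k / k_high)) * math.exp(-2.0 * (k / k_high) ** 2)

def spectrum(k, k_low, k_high):
    # rho_{h,k} up to a constant prefactor: (m_phi a_end / k)^{1/2} times g(k),
    # with k_low = a_end * m_phi playing the role of the low-frequency end.
    return math.sqrt(k_low / k) * g(k, k_high)

k_low, k_high = 1.0, 1.0e4

# Power-law index well inside the tail: d ln(rho) / d ln(k) ≈ -1/2
k1, k2 = 1.0, 10.0
slope = math.log(spectrum(k2, k_low, k_high) / spectrum(k1, k_low, k_high)) / math.log(k2 / k1)
print(slope)  # ≈ -0.5

# Exponential suppression beyond k_high
print(spectrum(5 * k_high, k_low, k_high) / spectrum(k_high, k_low, k_high))  # tiny
```

The slope check reproduces the $\Omega_{\rm GW} \propto f^{-1/2}$ scaling emphasized in the abstract, and the last line shows the exponential cutoff above $k_{\rm high}$.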
First, the energy spectrum of subhorizon modes induced by the inflaton oscillation is given by Eq. (21), where we used $C \simeq 3$ and
$$ g(k) \simeq \left[ 1 + \left( \frac{k}{k_{\rm high}} \right)^{1/2} \right] e^{-2k^2/k_{\rm high}^2}. \tag{22} $$
The low and high frequency ends of the spectrum are given respectively as
$$ f_{\rm low} = \frac{m_\phi}{2\pi}\, \frac{a_{\rm end}}{a_0} \simeq 1.1 \times 10^6\,{\rm Hz}\, \left( \frac{m_\phi}{10^{13}\,{\rm GeV}} \right) \left( \frac{T_R}{10^{10}\,{\rm GeV}} \right)^{1/3} \left( \frac{10^{13}\,{\rm GeV}}{H_{\rm end}} \right)^{2/3}, \tag{23} $$
and
$$ f_{\rm high} = \frac{k_{\rm high}}{2\pi a_0} \simeq 2.9 \times 10^{13}\,{\rm Hz}\, \left( \frac{m_\phi}{10^{13}\,{\rm GeV}} \right) \left( \frac{10^{10}\,{\rm GeV}}{T_R} \right), \tag{24} $$
where we used $C_k \simeq 1.5$. For $f > f_{\rm high}$ the spectrum decays exponentially. Remember that the present frequency $f$ is related to the comoving wavenumber $k$ through $f = k/(2\pi a_0)$. On the other hand, there are also contributions from the superhorizon modes that exit the horizon during inflation and reenter the horizon after inflation [32-34]. The shape of the present GW spectrum depends on the equation of state of the Universe. In particular, the GW spectrum scales as $\Omega_{\rm GW} \propto k^0$ ($k^{-2}$) for modes that enter the horizon during radiation (matter) domination [35-37]. The GW spectrum is evaluated as
$$ \Omega^{\rm (inf)}_{\rm GW}(k) \simeq \Omega_m^2\, \frac{3r}{128}\, P_\zeta(k_0)\, \left( \frac{k_0}{k} \right)^{2-n_t} \frac{g_*(T_k)}{g_*(T_{\rm eq})} \left( \frac{g_{*s}(T_{\rm eq})}{g_{*s}(T_k)} \right)^{4/3} T_1\!\left( \frac{k}{k_{\rm eq}} \right) T_2\!\left( \frac{k}{k_R} \right), \tag{25} $$
where $k_0/a_0 = H_0$ is the Hubble parameter at present, $r$ is the tensor-to-scalar ratio, $n_t = -r/8$ is the tensor spectral index, $T_1(x) \simeq 1 + (32/9)x^2$ and $T_2(x) \simeq (1 + x^2)^{-1}$, and
$$ f_R = \frac{H_R}{2\pi}\, \frac{a_R}{a_0} \simeq 2.6 \times 10^2\,{\rm Hz}\, \left( \frac{g_{*s}(T_R)}{106.75} \right)^{1/6} \left( \frac{T_R}{10^{10}\,{\rm GeV}} \right). \tag{26} $$
This GW spectrum is cut at the frequency $f_{\rm end}$:
$$ f_{\rm end} = \frac{H_{\rm end}}{2\pi}\, \frac{a_{\rm end}}{a_0} \simeq 1.1 \times 10^6\,{\rm Hz}\, \left( \frac{H_{\rm end}}{10^{13}\,{\rm GeV}} \right)^{1/3} \left( \frac{T_R}{10^{10}\,{\rm GeV}} \right)^{1/3}. \tag{27} $$
Fig. 2 shows the stochastic GW background spectrum for $H_{\rm inf} = 10^{14}$ GeV, $H_{\rm end} = m_\phi = 10^{13}$ GeV and $T_R = 10^{12}$ GeV (left) and $10^{10}$ GeV (right). The solid lines correspond to the vacuum contribution that is amplified due to the inflaton oscillation during the reheating era and the dashed lines correspond to the inflationary GW that is amplified during the inflation stage.

Discussion

We have shown that there is inevitably a contribution to the stochastic GW background from the reheating era. It is the subhorizon graviton excitation amplified by the inflaton oscillation. As shown in Fig. 2, this extends to the high-frequency tail, which scales as $\Omega_{\rm GW} \propto f^{-1/2}$, in addition to the well-known inflationary GWs that exit the horizon during inflation and reenter the horizon after inflation. This high frequency tail contains a lot of information about the properties of the inflaton: the inflaton mass, the inflaton lifetime (or the reheating temperature) and so on. Although we have focused on the simple quadratic inflaton potential, it is expected that the high frequency tail exhibits more nontrivial structure for a more general form of the inflaton potential. We will come back to this issue in a separate work. Lastly we discuss other contributions to the high-frequency stochastic GW background spectrum, which can hide the vacuum contributions that we found. The Standard Model thermal plasma emits gravitons through scattering processes and they also constitute a stochastic GW background [22,23]. The typical frequency of the emitted graviton at the temperature $T$ is of order $T$ and it is redshifted as $a^{-1}(t)$. Since the temperature is also redshifted as $a^{-1}(t)$, the typical comoving frequency (or the frequency observed today) is roughly the same independently of the temperature. The overall amount of GW is dominated by those emitted at the earliest epoch for all the frequency range, i.e., at the highest temperature $T_R$, and the result is#4
$$ \Omega^{\rm (th)}_{\rm GW}(k) \sim 2 \times 10^{-13}\, \left( \frac{T_R}{10^{10}\,{\rm GeV}} \right) \left( \frac{k}{a_* T_*} \right)^3 \varphi\!\left( \frac{k}{a_* T_*} \right), \tag{28} $$
where $T_*$ denotes the reference temperature, taken to be the electroweak scale, and $\varphi(x) \simeq 1$ for $x \lesssim 1$ and exponentially decreases for $x \gtrsim 1$. Another contribution comes from the graviton bremsstrahlung processes associated with the perturbative inflaton decay.
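The characteristic frequencies are simple power-law fitting formulas, so the quoted normalizations can be reproduced directly. A minimal sketch (Python; the prefactors $1.1\times 10^6$ Hz, $2.9\times 10^{13}$ Hz and $2.6\times 10^2$ Hz are taken from Eqs. (23), (24) and (26) as given, not rederived):

```python
def f_low(m_phi, T_R, H_end):
    # Eq. (23): low-frequency end of the oscillation-induced tail (GeV in, Hz out)
    return 1.1e6 * (m_phi / 1e13) * (T_R / 1e10) ** (1 / 3) * (1e13 / H_end) ** (2 / 3)

def f_high(m_phi, T_R):
    # Eq. (24): high-frequency cutoff of the tail (GeV in, Hz out)
    return 2.9e13 * (m_phi / 1e13) * (1e10 / T_R)

def f_R(T_R, g_star_s=106.75):
    # Eq. (26): frequency of modes entering the horizon at the end of reheating
    return 2.6e2 * (g_star_s / 106.75) ** (1 / 6) * (T_R / 1e10)

# Reference point of the text: m_phi = H_end = 1e13 GeV, T_R = 1e10 GeV
print(f_low(1e13, 1e10, 1e13))   # 1.1e6 Hz
print(f_high(1e13, 1e10))        # 2.9e13 Hz
print(f_R(1e10))                 # 260.0 Hz
```

Note that $f_{\rm high} \propto 1/T_R$: lowering the reheating temperature stretches the $f^{-1/2}$ tail over a wider band, consistent with the two panels of Fig. 2.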
The spectrum is given by [24,25]
$$ \Omega^{\rm (brem)}_{\rm GW}(k) \simeq \Omega_r\, \frac{m_\phi^2}{16\pi^2 M_P^2}\, \frac{f}{f_{\rm high}}, \tag{29} $$
for $f \lesssim f_{\rm high}$. Note that the coupling that is responsible for the inflaton decay may also induce preheating and resonant particle production if the coupling is relatively large [38-42]. It may act as a classical source of GWs, resulting in a more abundant GW background than Eq. (29) [4], but it is rather model dependent and we do not go into details here. Although in most parameter regions these contributions are larger than those from the inflaton oscillation, it might be possible to remove these contributions from the data and find the inflaton oscillation signal, which will provide us with rich information on the early universe and the nature of the inflaton. We emphasize that the high frequency GWs induced by the inflaton oscillation may not be regarded as classical waves, since they never exit the horizon and the occupation number is much smaller than unity. Thus detection of such high frequency GWs may be regarded as a direct test of the quantum nature of the graviton.

#4 The dilute plasma before the completion of the reheating also emits gravitons. However, one can show that this contribution goes like $k^{4.6}$ toward lower frequency and is hidden by the $k^3$ tail of (28).

Figure 1: Graviton energy spectrum after inflation. The numerical solutions of Eq. (11) with Eqs. (14)-(16) are shown as the solid lines, while the analytic estimates, i.e. Eq. (19), are shown as the gray-dotted line, for the quadratic inflaton potential $V = m_\phi^2\phi^2/2$. For comparison, the numerical results without the inflaton oscillation (Eq. (20)) are shown as the dashed line. The modes shown here never exit the horizon.

Figure 2: Stochastic GW background spectrum for $H_{\rm inf} = 10^{14}$ GeV, $H_{\rm end} = m_\phi = 10^{13}$ GeV, and $T_R = 10^{12}$ GeV (left) and $10^{10}$ GeV (right).
The solid lines correspond to the vacuum contribution that is amplified due to the inflaton oscillation during the reheating era, and the dashed lines correspond to the inflationary GW that is amplified during the inflation stage.

References

[1] A. A. Starobinsky, "Spectrum of relict gravitational radiation and the early state of the universe," JETP Lett. 30 (1979) 682-685. [Pisma Zh. Eksp. Teor. Fiz. 30, 719 (1979)].
[2] M. Maggiore, Gravitational Waves. Vol. 2: Astrophysics and Cosmology. Oxford University Press, 2018.
[3] M. Giovannini, "Primordial backgrounds of relic gravitons," Prog. Part. Nucl. Phys. 112 (2020) 103774, arXiv:1912.07065 [astro-ph.CO].
[4] S. Y. Khlebnikov and I. I. Tkachev, "Relic gravitational waves produced after preheating," Phys. Rev. D 56 (1997) 653-660, arXiv:hep-ph/9701423.
[5] A. Vilenkin and E. P. S. Shellard, Cosmic Strings and Other Topological Defects. Cambridge University Press, 2000.
[6] E. Witten, "Cosmic Separation of Phases," Phys. Rev. D 30 (1984) 272-285.
[7] C. J. Hogan, "Gravitational radiation from cosmological phase transitions," Mon. Not. Roy. Astron. Soc. 218 (1986) 629-636.
[8] C. Caprini and D. G. Figueroa, "Cosmological Backgrounds of Gravitational Waves," Class. Quant. Grav. 35 no. 16 (2018) 163001, arXiv:1801.04268 [astro-ph.CO].
[9] Y. Ema, R. Jinno, K. Mukaida, and K. Nakayama, "Gravitational Effects on Inflaton Decay," JCAP 1505 (2015) 038, arXiv:1502.02475 [hep-ph].
[10] S. Kawamura et al., "The Japanese space gravitational wave antenna DECIGO," Class. Quant. Grav. 23 (2006) S125-S132.
[11] M. Punturo et al., "The Einstein Telescope: A third-generation gravitational wave observatory," Class. Quant. Grav. 27 (2010) 194002.
[12] G. Janssen et al., "Gravitational wave astronomy with the SKA," PoS AASKA14 (2015) 037, arXiv:1501.00127 [astro-ph.IM].
[13] LIGO Scientific and Virgo Collaborations, B. P. Abbott et al., "Upper Limits on the Stochastic Gravitational-Wave Background from Advanced LIGO's First Observing Run," Phys. Rev. Lett. 118 no. 12 (2017) 121101, arXiv:1612.02029 [gr-qc]. [Erratum: Phys. Rev. Lett. 119, 029901 (2017)].
[14] LISA Collaboration, P. Amaro-Seoane et al., "Laser Interferometer Space Antenna," arXiv:1702.00786 [astro-ph.IM].
[15] MAGIS Collaboration, P. W. Graham, J. M. Hogan, M. A. Kasevich, S. Rajendran, and R. W. Romani, "Mid-band gravitational wave detection with precision atomic sensors," arXiv:1711.02225 [astro-ph.IM].
[16] AEDGE Collaboration, Y. A. El-Neaj et al., "AEDGE: Atomic Experiment for Dark Matter and Gravity Exploration in Space," EPJ Quant. Technol. 7 (2020) 6, arXiv:1908.00802 [gr-qc].
[17] F. Li, R. M. L. Baker, Z. Fang, G. V. Stephenson, and Z. Chen, "Perturbative Photon Fluxes Generated by High-Frequency Gravitational Waves and Their Physical Effects," Eur. Phys. J. C 56 (2008) 407-423, arXiv:0806.1989 [gr-qc].
[18] F. Li, N. Yang, Z. Fang, R. M. L. Baker, G. V. Stephenson, and H. Wen, "Signal Photon Flux and Background Noise in a Coupling Electromagnetic Detecting System for High Frequency Gravitational Waves," Phys. Rev. D 80 (2009) 064013, arXiv:0909.4118 [gr-qc].
[19] A. Ejlli, D. Ejlli, A. M. Cruise, G. Pisano, and H. Grote, "Upper limits on the amplitude of ultra-high-frequency gravitational waves from graviton to photon conversion," Eur. Phys. J. C 79 no. 12 (2019) 1032, arXiv:1908.00232 [gr-qc].
[20] A. Ito, T. Ikeda, K. Miuchi, and J. Soda, "Probing GHz gravitational waves with graviton-magnon resonance," Eur. Phys. J. C 80 no. 3 (2020) 179, arXiv:1903.04843 [gr-qc].
[21] A. Ito and J. Soda, "A formalism for magnon gravitational wave detectors," arXiv:2004.04646 [gr-qc].
[22] J. Ghiglieri and M. Laine, "Gravitational wave background from Standard Model physics: Qualitative features," JCAP 1507 (2015) 022, arXiv:1504.02569 [hep-ph].
[23] J. Ghiglieri, G. Jackson, M. Laine, and Y. Zhu, "Gravitational wave background from Standard Model physics: Complete leading order," arXiv:2004.11392 [hep-ph].
[24] K. Nakayama and Y. Tang, "Stochastic Gravitational Waves from Particle Origin," Phys. Lett. B 788 (2019) 341-346, arXiv:1810.04975 [hep-ph].
[25] D. Huang and L. Yin, "Stochastic Gravitational Waves from Inflaton Decays," Phys. Rev. D 100 no. 4 (2019) 043538, arXiv:1905.08510 [hep-ph].
[26] Y. Ema, R. Jinno, K. Mukaida, and K. Nakayama, "Gravitational particle production in oscillating backgrounds and its cosmological implications," Phys. Rev. D 94 no. 6 (2016) 063517, arXiv:1604.08898 [hep-ph].
[27] Y. Ema, K. Nakayama, and Y. Tang, "Production of Purely Gravitational Dark Matter," JHEP 09 (2018) 135, arXiv:1804.07471 [hep-ph].
[28] D. J. H. Chung, E. W. Kolb, and A. J. Long, "Gravitational production of super-Hubble-mass particles: an analytic approach," JHEP 01 (2019) 189, arXiv:1812.00211 [hep-ph].
[29] A. D. Linde, "Chaotic Inflation," Phys. Lett. B 129 (1983) 177-181.
[30] C. Destri, H. J. de Vega, and N. Sanchez, "MCMC analysis of WMAP3 and SDSS data points to broken symmetry inflaton potentials and provides a lower bound on the tensor to scalar ratio," Phys. Rev. D 77 (2008) 043509, arXiv:astro-ph/0703417.
[31] K. Nakayama, F. Takahashi, and T. T. Yanagida, "Polynomial Chaotic Inflation in the Planck Era," Phys. Lett. B 725 (2013) 111-114, arXiv:1303.7315 [hep-ph].
[32] M. Maggiore, "Gravitational wave experiments and early universe cosmology," Phys. Rept. 331 (2000) 283-367, arXiv:gr-qc/9909001.
[33] T. L. Smith, M. Kamionkowski, and A. Cooray, "Direct detection of the inflationary gravitational wave background," Phys. Rev. D 73 (2006) 023504, arXiv:astro-ph/0506422.
[34] L. A. Boyle and P. J. Steinhardt, "Probing the early universe with inflationary gravitational waves," Phys. Rev. D 77 (2008) 063504, arXiv:astro-ph/0512014.
[35] K. Nakayama, S. Saito, Y. Suwa, and J. Yokoyama, "Space laser interferometers can determine the thermal history of the early Universe," Phys. Rev. D 77 (2008) 124001, arXiv:0802.2452 [hep-ph].
[36] K. Nakayama, S. Saito, Y. Suwa, and J. Yokoyama, "Probing reheating temperature of the universe with gravitational wave background," JCAP 0806 (2008) 020, arXiv:0804.1827 [astro-ph].
[37] S. Kuroyanagi, T. Chiba, and N. Sugiyama, "Precision calculations of the gravitational wave background spectrum from inflation," Phys. Rev. D 79 (2009) 103501, arXiv:0804.3249 [astro-ph].
[38] A. Dolgov and D. Kirilova, "On particle creation by a time dependent scalar field," Sov. J. Nucl. Phys. 51 (1990) 172-177.
[39] J. H. Traschen and R. H. Brandenberger, "Particle Production During Out-of-equilibrium Phase Transitions," Phys. Rev. D 42 (1990) 2491-2504.
[40] L. Kofman, A. D. Linde, and A. A. Starobinsky, "Reheating after inflation," Phys. Rev. Lett. 73 (1994) 3195-3198, arXiv:hep-th/9405187.
[41] Y. Shtanov, J. H. Traschen, and R. H. Brandenberger, "Universe reheating after inflation," Phys. Rev. D 51 (1995) 5438-5455, arXiv:hep-ph/9407247.
[42] L. Kofman, A. D. Linde, and A. A. Starobinsky, "Towards the theory of reheating after inflation," Phys. Rev. D 56 (1997) 3258-3295, arXiv:hep-ph/9704452.
[]
[ "S-Walk: Accurate and Scalable Session-based Recommendation with Random Walks", "S-Walk: Accurate and Scalable Session-based Recommendation with Random Walks" ]
[ "Minjin Choi ", "Jinhong Kim [email protected] ", "Joonseok Lee [email protected] ", "Hyunjung Shim ", "Jongwuk Lee [email protected] ", "Minjin Choi ", "Jinhong Kim ", "Joonseok Lee ", "Hyunjung Shim ", "Jongwuk Lee ", "\nSungkyunkwan University\nRepublic of Korea\n", "\nNaver Corp\nRepublic of Korea\n", "\nSeoul National University United States\nRepublic of Korea\n", "\nYonsei University\nRepublic of Korea\n", "\nSungkyunkwan University\nRepublic of Korea\n" ]
[ "Sungkyunkwan University\nRepublic of Korea", "Naver Corp\nRepublic of Korea", "Seoul National University United States\nRepublic of Korea", "Yonsei University\nRepublic of Korea", "Sungkyunkwan University\nRepublic of Korea" ]
[ "Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining (WSDM '22)" ]
Session-based recommendation (SR) predicts the next items from a sequence of previous items consumed by an anonymous user. Most existing SR models focus only on modeling intra-session characteristics but pay less attention to inter-session relationships of items, which has the potential to improve accuracy. Another critical aspect of recommender systems is computational efficiency and scalability, considering practical feasibility in commercial applications. To account for both accuracy and scalability, we propose a novel session-based recommendation with a random walk, namely S-Walk. Precisely, S-Walk effectively captures intra-and inter-session correlations by handling high-order relationships among items using random walks with restart (RWR). By adopting linear models with closed-form solutions for transition and teleportation matrices that constitute RWR, S-Walk is highly efficient and scalable. Extensive experiments demonstrate that S-Walk achieves comparable or state-of-the-art performance in various metrics on four benchmark datasets. Moreover, the model learned by S-Walk can be highly compressed without sacrificing accuracy, conducting two or more orders of magnitude faster inference than existing DNN-based models, making it suitable for large-scale commercial systems.
10.1145/3488560.3498464
[ "https://arxiv.org/pdf/2201.01091v1.pdf" ]
245,668,738
2201.01091
2be2e23c1ba6c2c62a85ac2ab7968d5e657ed8c6
S-Walk: Accurate and Scalable Session-based Recommendation with Random Walks

Minjin Choi (Sungkyunkwan University, Republic of Korea), Jinhong Kim (Naver Corp., Republic of Korea), Joonseok Lee (Seoul National University, Republic of Korea), Hyunjung Shim (Yonsei University, Republic of Korea), Jongwuk Lee* (Sungkyunkwan University, Republic of Korea). *Corresponding author.

In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining (WSDM '22), February 21-25, 2022, Tempe, AZ, USA. ACM, New York, NY, USA. ISBN 978-1-4503-9132-0/22/02.

CCS CONCEPTS: Information systems → Recommender systems; Collaborative filtering; Expert systems.

KEYWORDS: Collaborative filtering; Session-based recommendation; Random walks; Closed-form solution.

ABSTRACT: Session-based recommendation (SR) predicts the next items from a sequence of previous items consumed by an anonymous user. Most existing SR models focus only on modeling intra-session characteristics but pay less attention to inter-session relationships of items, which have the potential to improve accuracy. Another critical aspect of recommender systems is computational efficiency and scalability, considering practical feasibility in commercial applications. To account for both accuracy and scalability, we propose a novel session-based recommendation with a random walk, namely S-Walk. Precisely, S-Walk effectively captures intra- and inter-session correlations by handling high-order relationships among items using random walks with restart (RWR).
By adopting linear models with closed-form solutions for the transition and teleportation matrices that constitute RWR, S-Walk is highly efficient and scalable. Extensive experiments demonstrate that S-Walk achieves comparable or state-of-the-art performance in various metrics on four benchmark datasets. Moreover, the model learned by S-Walk can be highly compressed without sacrificing accuracy, conducting two or more orders of magnitude faster inference than existing DNN-based models, making it suitable for large-scale commercial systems.

INTRODUCTION

Modern recommender systems (RS) are indispensable for addressing the enormous information overload in various real-world applications, such as e-commerce and online multimedia platforms, e.g., Amazon, Alibaba, YouTube, Netflix, and Spotify. Classical recommender systems [12,15,19,44] usually assume that user accounts and users' long-term interactions are available. However, this assumption rarely holds: the user may not log in, or multiple users may share one account, e.g., family members. The user may also exhibit different behaviors depending on the context. Thus, it is necessary to provide personalized recommendations without explicit user information. Recently, session-based recommendation (SR) [3,9,22,42,51] has gained considerable attention for predicting the next items from the sequential behavior of an anonymous user. Unlike conventional RS, SR relies only on the user's actions in an ongoing session. This setting is well-suited for real-world scenarios but inherently results in a severe data sparsity problem. To address this issue, it is essential to understand the unique characteristics of sessions, that is, intra-session properties. First, the items within a session are coherent with the user's hidden intent, e.g., a list of products in the same category, referred to as item consistency (or long-term dependency).
Second, some items are strictly consumed in chronological order, namely sequential dependency (or short-term dependency), e.g., consecutive episodes of a TV series. Lastly, the user can repeatedly consume the same items, called repeated item consumption, e.g., user's favorite tracks. Most SR models utilize deep neural networks (DNNs) to learn intra-session relationships. Recurrent neural networks (RNNs) [16][17][18] and attention mechanisms [31,32] have been used to model the sequential dependency of items. Recently, graph neural networks (GNNs) [1,13,40,41,50,53,54] have been used to effectively represent both item consistency and sequential dependency. Unfortunately, they suffer from performance degradation when dealing with complex and long sessions, where it is difficult to understand user intent. For this, inter-session relationships among items are valuable clues. Figure 1 describes intra-session and inter-session relationships; some items do not occur within a single session but share their neighboring items for multiple sessions, implying potential item correlations. Several studies [36,50,52] have attempted to consider inter-session relationships using neighboring sessions or global item graphs. However, they incur substantial computational costs and are infeasible for large-scale commercial systems. High-capacity DNN models have achieved state-of-the-art performances but usually require heavy computational overhead for runtime speed and memory consumption. Although competing for the best performance is a reasonable mission for most research problems, computational efficiency and scalability are also critical factors for dealing with practical restrictions in commercial recommender systems. [11,33] suggested neighborhood-based models for session-based recommendations. Owing to their simplicity, they are highly scalable. 
Moreover, [34,35] reported that the neighborhood-based models achieved comparable performance to DNN-based models on several benchmark datasets. Our primary objective is to design an SR model that accounts for both accuracy and scalability. To this end, we propose a novel session-based recommendation with a random walk, namely S-Walk, (i) exploiting intra- and inter-session relationships among items to improve accuracy, and (ii) supporting cost-effective, real-time performance at scale. (i) Whereas the basic SR learns hidden patterns between items only within a session, S-Walk additionally introduces global item graphs, modeling item-item relationships across all sessions. By applying random walks with restart (RWR) on this graph, a random surfer can jump from one item to another adjacent item by traversing the item graph, or restart from an arbitrary item in the current session. Therefore, S-Walk can capture high-order correlations among items using multi-hop connections on an item graph. S-Walk can exploit both the local patterns within a session and the global patterns involving the same items in other sessions. (ii) Recently, linear item-item models [23,45-47] have shown competitive performance in conventional RS. Motivated by their success, we devise linear item models to build two probability matrices, the item transition and item teleportation matrices, which formulate the stochastic process of RWR. The item transition matrix captures the sequential dependency of items, generalizing a Markov chain model on items. The item teleportation matrix reflects the restart probability, allowing various items to participate in the model training depending on the ongoing session. Instead of allowing restarts at arbitrary items, we utilize the co-occurrence relationship among items: the items that co-occur with the current session's items are used for restart.
Notably, training our linear models is highly efficient and scalable because they have closed-form solutions whose computational complexity is determined by the number of items, independent of the number of sessions or user actions. To summarize, the key advantages of S-Walk are as follows: (i) It can effectively capture inter- and intra-session relationships via RWR. (ii) Without complicated model tuning, it is highly efficient and scalable owing to the closed-form solutions of the linear models. (iii) It achieves competitive or state-of-the-art performance in various metrics (i.e., HR, MRR, recall, and MAP) on four benchmark datasets (i.e., YooChoose, Diginetica, RetailRocket, and NowPlaying). (iv) The model learned by S-Walk can be highly compressed without sacrificing accuracy, supporting fast inference time.

PRELIMINARIES

Notations. Given a set of sessions S = {s^(1), ..., s^(m)} over a set of items I = {i_1, ..., i_n}, an arbitrary session s ∈ S is represented by a sequence of items s = (s_1, s_2, ..., s_|s|) with length |s|. Here, s_t ∈ I is the t-th consumed item, e.g., clicked, watched, or purchased. For simplicity, s ∈ S is represented by a binary vector x ∈ {0,1}^n, where x_i = 1 if item i is consumed, and x_i = 0 otherwise. By stacking sessions, let X ∈ R^{m×n} denote the session-item interaction matrix, where m is the number of sessions. As a straightforward variant, the binary values in x can be converted to real values that quantify the importance of items within a session.

Problem statement. Given a sequence of items previously consumed by an anonymous user in a session, session-based recommendation predicts the next items that the user is most likely to consume. Formally, a session-based recommender model takes a session s = (s_1, ..., s_t) as input and returns a ranked list of top-N candidate items as the recommended next items (s_{t+1}, ..., s_{|s|}). Note that this is more generalized (and challenging) than predicting only the next single item s_{t+1}, which has been used in existing work.
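The notation above can be illustrated with a small sketch. The helper `session_item_matrix` and the toy sessions are mine, not from the paper; this is only meant to make the binary session-item matrix X concrete:

```python
import numpy as np

def session_item_matrix(sessions, n_items):
    """Binary session-item matrix X in {0,1}^{m x n}: X[s, i] = 1 iff item i
    occurs in session s. Repeated items within a session collapse to one 1."""
    X = np.zeros((len(sessions), n_items))
    for row, items in enumerate(sessions):
        X[row, sorted(set(items))] = 1.0
    return X

# Two toy sessions over n = 4 items; item 1 is repeated in the first session.
X = session_item_matrix([[0, 1, 1, 2], [2, 3]], n_items=4)
```

Real-valued variants would simply replace the 1.0 entries with per-item importance weights.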
(See Section 4 for the generalized evaluation metrics for this setting.)

Random walk models. The key concept behind random walk models is to reflect the direct and transitive relations among items. Conventional recommender models using random walks [6-8, 20, 37, 38, 55] are based on a user-item bipartite graph G = (U ∪ I, E), where U and I are the sets of users and items. Each edge e ∈ E represents the relationship between a user and an item. Thus, the core part of a random walk model is to determine a transition probability matrix used to compute proximity scores for items. In general, there are two possible solutions for computing the proximity scores of items. First, we can utilize the k-step landing probability distribution of a random walker. Starting from a source user u, the proximity scores for all items can be computed as u R^(k), where u is the user vector and R^(k) is the k-step transition probability matrix. Although simple yet effective, existing studies [6,7] are vulnerable to popularity bias; popular items tend to have high proximity scores as k increases (e.g., k ≥ 3), thereby degrading the recommendation quality. As an alternative, we can compute the stationary distribution of random walks with restart (RWR), well known as personalized PageRank [26]. It is effective for capturing high-order relationships among vertices. Because RWR leverages restarts in addition to sequential transitions, it can alleviate the problem of popularity bias, concentrating on the central node at the k-th step. For this reason, we adopt RWR for session-based recommendation. (See Section 5 for empirical comparisons between the two methods, k-step and RWR.)

Adopting the random walk in session-based recommendation has the following advantages: (i) The random walk model utilizes high-order item correlations across sessions. Because sessions are usually sparse by nature, it is useful for alleviating the data sparsity problem by capturing profound relationships among items.
(ii) Compared to GNN-based SR models [13,40,41,53,54], it is efficient, without requiring complicated hyper-parameter tuning.

Linear item-item models. Given the session-item matrix X, the goal of linear models [39,46] is to estimate an item-item similarity matrix B ∈ R^{n×n}. As a pioneering work, SLIM [39] formulated a linear model subject to the constraint that all entries of B are non-negative and its diagonal is zero:

argmin_B ||X − X·B||_F² + λ_1 ||B||_1 + λ_2 ||B||_F²  s.t. diag(B) = 0, B ≥ 0,   (1)

where ||·||_1 and ||·||_F are the entry-wise ℓ1-norm and the matrix Frobenius norm, respectively, λ_1 and λ_2 are regularization coefficients, and diag(B) ∈ R^n denotes the vector of diagonal elements of B. Although SLIM [39] shows competitive accuracy, it suffers from a high computational training cost. Recently, EASE [46] and its variants [45,47] consider only the zero-diagonal constraint, removing the non-negativity of B and the ℓ1-norm penalty from Eq. (1):

argmin_B ||X − X·B||_F² + λ·||B||_F²  s.t. diag(B) = 0.   (2)

Owing to this simpler formulation, EASE is solved by a closed-form equation via Lagrange multipliers:

B̂ = I − P̂ · diagMat(1 ⊘ diag(P̂)),   (3)

where P̂ = (X^⊤X + λI)^{−1}. Here, 1 ∈ R^n is the vector of ones, ⊘ denotes element-wise division, and diagMat(x) denotes the diagonal matrix expanded from the vector x. (See [46] for the detailed derivation of the closed-form solution.) Although inverting the regularized Gram matrix X^⊤X is the computational bottleneck for large-scale datasets (i.e., the time complexity is O(|I|^2.376) with the Coppersmith-Winograd algorithm), the closed-form solution is advantageous in terms of efficiency. The training complexity of EASE [46] is proportional to the number of items, which is usually much smaller than the number of sessions (n ≪ m). Besides, the linear model is beneficial for accelerating inference, because computing the top-N recommended items requires only a single matrix multiplication.
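The closed form of Eq. (3) fits in a few lines of NumPy. The helper name and the toy matrix below are mine; this is a minimal sketch of the formula, not the authors' code:

```python
import numpy as np

def ease_closed_form(X, lam):
    """Eq. (3): B = I - P @ diagMat(1 / diag(P)), with P = (X^T X + lam*I)^{-1}.
    Dividing column j of P by P[j, j] makes the diagonal of B exactly zero."""
    n = X.shape[1]
    P = np.linalg.inv(X.T @ X + lam * np.eye(n))
    return np.eye(n) - P / np.diag(P)  # broadcast: column j divided by P[j, j]

# Toy session-item matrix (2 sessions, 3 items); lam is a hypothetical value.
X = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
B = ease_closed_form(X, lam=0.5)
```

Scoring a session is then the single matrix product x @ B mentioned above.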
Most recently, SLIST [5] reported competitive accuracy with linear models for session-based recommendation. Although SLIST [5] tackled various characteristics of session data, it did not consider inter-session relationships. In contrast, we devise linear models with random walks, taking both intra- and inter-session correlations into account.

S-WALK: THE PROPOSED MODEL

In this section, we propose a novel session-based recommendation with a random walk, namely S-Walk. While existing random-walk-based recommender models [6-8, 20, 37, 38, 55] are based on a user-item bipartite graph, it is non-trivial to adopt them for session-based recommendation. Since user information is unavailable, we rely only on item information to learn the underlying patterns. Furthermore, the session-item matrix is extremely sparse. To address these issues, we first present the overall architecture of S-Walk using global item graphs (Section 3.1). Intuitively, walking on a global item graph can describe inter-session relationships among items, because the walker can move from an item of the current session to the items of other sessions. Then, we develop two linear models to build the transition matrix and the teleportation matrix used in S-Walk (Sections 3.2-3.3). Note that these linear models can be replaced by others as long as they are efficient and scalable; notably, our linear models satisfy this condition. Finally, we explain the model training and inference of S-Walk (Section 3.4). Figure 2 overviews the S-Walk model. Given a session-item interaction matrix X, we first design two models for item transition and teleportation to capture different characteristics of sessions (the blue and orange boxes in Figure 2, respectively). We then constitute the final global item graph using random walks with restart (RWR).
Specifically, each model produces its own relevance matrix over the transition graph G_R = (I, E_R) and the teleportation graph G_T = (I, E_T), where each node corresponds to an item and each edge indicates the relevance between a pair of items. The transition matrix R is the adjacency matrix of G_R, which encodes the sequential dependency and repeated item consumption in a session. The teleportation matrix T, on the other hand, is the adjacency matrix of G_T, which captures item consistency in the session. Introducing the two matrices captures different intra-session relationships among items, but they do not address the inter-session relationship.

Model Architecture

By adopting RWR on the two graphs, where a random walker jumps from one node to another or restarts at an arbitrary node regardless of the current position, we account for the inter-session relationship, capturing high-order relationships among items, i.e., multi-hop connections on the item graphs. Conceptually, the RWR on the two item graphs G_R and G_T can be thought of as tossing a biased coin that yields heads with probability α: (1) If the coin lands heads (with probability α), the walker moves to one of the items adjacent to the current item through the transition matrix R ∈ R^{n×n}. (2) If the coin lands tails (with probability 1 − α), the walker restarts at one of the items adjacent to the start item through the teleportation matrix T ∈ R^{n×n}.

Figure 2: The overall architecture of S-Walk. Given a session-item matrix, two different linear models build the transition graph and the teleportation graph with adjacency matrices R and T, using Eq. (7) and Eq. (10), respectively. Then, in Eq. (12), the random walk with restart is used to build the final graph with adjacency matrix M, capturing high-order relationships.
The random walk using these two matrices is a stochastic process, which can also be seen as a Markov chain on items over homogeneous discrete time. Formally, we formulate the RWR as follows:

x_(t+1) = α x_(t) R + (1 − α) x_(0) T,  where t = 0, ..., ∞.   (4)

Here, α is the damping factor that controls the proportion of random walks to restarts, x_(0)^⊤ ∈ R^n is the initial item vector, and x_(t)^⊤ ∈ R^n is the updated proximity score for items after the t-th step. As t increases, x_(t) converges to a limiting distribution. Through the RWR, we obtain the stationary probabilities that the random walker lands on each node, expressed as the green graph in Figure 2. Finally, we generate the recommendation list using the final graph G_M = (I, E_M). In this process, we devise linear models for the two components, which brings the following advantages: (i) they achieve comparable performance without complicated tuning, and (ii) training and inference are much faster than for DNN-based session recommender models [13, 16-18, 31, 32, 40, 41, 53, 54].

Item Transition Model

First, we develop a linear transition model to build the item transition matrix R. As a natural way of representing the transition of item sequences, we introduce partial session representations. A session is divided into two sub-sessions, past and future, at each time step t = 1, ..., |s|. The past partial session consists of the items consumed before the t-th item, i.e., s_{1:t−1} = {s_1, ..., s_{t−1}}. The future partial session consists of the items consumed at or after the t-th item, i.e., s_{t:|s|} = {s_t, ..., s_{|s|}}. For each time t = 2, ..., |s|, we produce one past-future pair, yielding |s| − 1 pairs per session. By stacking the |s| − 1 pairs for all s ∈ S, we build two matrices, the past session matrix Y ∈ R^{m′×n} and the future session matrix Z ∈ R^{m′×n}, where m′ is the total number of partial sessions, i.e., m′ = Σ_{j=1}^{m} (|s^(j)| − 1).
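The fixed-point iteration of Eq. (4) can be sketched directly. The helper name and the small row-stochastic matrices below are hypothetical placeholders; in S-Walk, R and T come from the linear models of Sections 3.2-3.3:

```python
import numpy as np

def rwr_scores(x0, R, T, alpha, n_iter=200):
    """Iterate Eq. (4): x_{t+1} = alpha * x_t @ R + (1 - alpha) * x_0 @ T,
    approximating the stationary RWR distribution."""
    restart = (1.0 - alpha) * (x0 @ T)  # the restart term is constant in t
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        x = alpha * (x @ R) + restart
    return x

# Hypothetical row-stochastic transition/teleportation matrices over 3 items.
R = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
T = np.array([[0.5, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5]])
x0 = np.array([1.0, 0.0, 0.0])  # walk starts from item 0
scores = rwr_scores(x0, R, T, alpha=0.5)
```

Because α < 1, the update is a contraction and the iteration converges regardless of the starting vector; since R and T are row-stochastic, the scores remain a probability distribution.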
Finally, the item transition matrix R is learned from the two matrices Y and Z according to the partial session representation. To represent the temporal proximity of items, we adjust the weights of items in Y and Z. As adopted in [5, 11], we use the position gap between two items as the weight of items within a session:

w_pos(i, j, s) = exp(−|p(i, s) − p(j, s)| / δ_pos), (5)

where δ_pos is the hyper-parameter that controls the position decay in partial sessions, and p(i, s) is the position of item i in the session s. In this way, we decay the relevance between items i and j as they get farther apart. Formally, the item transition model is formulated as

argmin_{B_tran} ‖Z − Y · B_tran‖²_F + λ‖B_tran‖²_F, (6)

where B_tran is the item-item relevance matrix learned from the sequential dependency between Y and Z, and λ is the L2 regularization weight. As Y ≠ Z, we naturally avoid the trivial solution B_tran = I. Unlike Eq. (2), we can remove the zero-diagonal constraint on B_tran. The closed-form solution is given by

B̂_tran = P̂′ · (Yᵀ Z), (7)

where P̂′ = (Yᵀ Y + λI)^{−1} ∈ R^{n×n}. The computational complexity is independent of the number of users, as shown in Eq. (3). (See Appendix A for a detailed derivation of our solution in the supplementary material.)

To utilize the item transition matrix in random walks, each element should be the transition probability from one node to another; that is, every element must be non-negative and each row must sum to 1. However, B̂_tran is not generally a probability matrix. To satisfy the non-negativity constraint, we first replace all negative values in B̂_tran with zero¹, denoting the result B̂_tran≥0. We then normalize B̂_tran≥0 as follows:

R = diagMat(B̂_tran≥0 · 1)^{−1} B̂_tran≥0, (8)

where R is the item transition probability matrix, in which each row is normalized, and 1 is a column vector of length n filled with ones.

Item Teleportation Model

The item teleportation matrix is designed to capture the item consistency within a session. For this reason, we focus on modeling the co-occurrence between items.
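The transition-model solution in Eqs. (7)-(8) amounts to a ridge regression followed by clipping and row normalization. A minimal sketch, with invented toy matrices Y and Z and an invented weight λ = 1; the extra guard for all-zero rows (items that never occur as "past") is our addition, not from the paper.

```python
import numpy as np

lam = 1.0                                  # L2 weight, chosen for illustration
# Toy past/future partial-session matrices over 4 items (invented).
Y = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
Z = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

# Eq. (7): ridge-regression closed form B_tran = (Y^T Y + lam*I)^{-1} (Y^T Z).
B_tran = np.linalg.inv(Y.T @ Y + lam * np.eye(4)) @ (Y.T @ Z)

# Eq. (8): clip negatives, then row-normalize into transition probabilities.
B_nn = np.maximum(B_tran, 0.0)
row_sums = B_nn.sum(axis=1, keepdims=True)
# Items with no outgoing mass keep an all-zero row instead of dividing by zero.
R = np.divide(B_nn, row_sums, out=np.zeros_like(B_nn), where=row_sums > 0)
```

Each nonzero row of R is a probability distribution over the next item.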
A session is treated as a set of items, s = {i_1, i_2, . . . , i_{|s|}}, ignoring the order of items. By stacking sessions, we build a binary session-item matrix X. Note that repeated items in a session are treated as a single item. Given the matrix X, we devise a linear teleportation model. It is formulated with the same input and output matrix as used in the existing linear models [39, 46]. Meanwhile, we relax the zero-diagonal constraint on B to handle repeated item consumption, as discussed in [5]. When the diagonal elements of B are loosely penalized, the model is allowed to repeatedly predict the current item as the next item:

argmin_{B_tele} ‖X − X · B_tele‖²_F + λ‖B_tele‖²_F, s.t. diag(B_tele) ≤ ξ, (9)

where B_tele is the item-item relevance matrix for item consistency, and ξ is the hyper-parameter that controls the diagonal constraint. When ξ = 0, it is equivalent to the zero-diagonal constraint on B_tele. When ξ = ∞, there is no constraint on the diagonal elements of B_tele. Note that the objective function of EASE [46] is a special case of Eq. (9) with ξ = 0. We can obtain the closed-form solution of B_tele as follows:

B̂_tele = I − P̂ · diagMat(γ̂), where γ̂_j = λ if 1 − λP̂_jj ≤ ξ, and γ̂_j = (1 − ξ)/P̂_jj otherwise, (10)

where P̂ = (Xᵀ X + λI)^{−1}, and γ̂ ∈ R^n is the vector used to enforce the diagonal constraint on B_tele. Because of the inequality condition on the diagonal elements of B_tele, γ̂_j is determined by the bound ξ and by P̂_jj. If ξ = 0, γ̂_j equals 1/P̂_jj, corresponding to γ̂ = 1 ⊘ diag(P̂). If ξ = ∞, the solution becomes B̂_tele = I − λP̂. Similar to B̂_tran, the solution B̂_tele does not satisfy the non-negativity and normalization conditions of a probability matrix. We compute the item teleportation probability matrix T by replacing the negative values with zero and normalizing B̂_tele≥0:

T = β · diagMat(B̂_tele≥0 · 1)^{−1} B̂_tele≥0 + (1 − β) I, (11)

where β is a hyper-parameter that controls the importance of the self-loop to guarantee the convergence of random walks.
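The constrained closed form in Eqs. (10)-(11) can be sketched numerically. All values below (λ, ξ, β, the random binary X) are invented for illustration; the zero-row guard in the normalization is likewise our addition.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, xi, beta = 5.0, 0.3, 0.8             # illustrative hyper-parameter values
X = (rng.random((8, 4)) < 0.5).astype(float)   # toy binary session-item matrix

P = np.linalg.inv(X.T @ X + lam * np.eye(4))
diagP = np.diag(P)
# Eq. (10): keep gamma_j = lam where the unconstrained diagonal 1 - lam*P_jj
# already satisfies the bound xi; otherwise pick gamma_j so the diagonal hits xi.
gamma = np.where(1.0 - lam * diagP <= xi, lam, (1.0 - xi) / diagP)
B_tele = np.eye(4) - P * gamma            # column scaling equals P @ diagMat(gamma)

# Eq. (11): clip, row-normalize, and mix in a self-loop with weight (1 - beta).
B_nn = np.maximum(B_tele, 0.0)
row_sums = B_nn.sum(axis=1, keepdims=True)
T = beta * np.divide(B_nn, row_sums, out=np.zeros_like(B_nn), where=row_sums > 0) \
    + (1.0 - beta) * np.eye(4)
```

By construction, every diagonal entry of B_tele respects the bound ξ, and each row of T is a probability distribution.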
(See Appendix C for a detailed analysis in the supplementary material.)

Random-walk Training and Inference

Training. To compute the stationary distribution of S-Walk, we utilize the power method [26]. Each proximity score corresponds to the limiting distribution of Eq. (4):

x_(∞) = lim_{t→∞} α^t x_(0) Rᵗ + Σ_{t=0}^{∞} α^t (1 − α) x_(0) T Rᵗ ≈ x_(0) Σ_{t=0}^{∞} α^t (1 − α) T Rᵗ = x_(0) M. (12)

Here, M = Σ_{t=0}^{∞} α^t (1 − α) T Rᵗ is the item-item matrix trained by S-Walk; since α < 1, the first term vanishes. (Appendix B provides the detailed pseudo-code for computing the proximity score using the power method.)

Inference. Given a new session s_new, we represent a session vector x_new and compute the proximity score using M. The proximity score for predicting the next item is finally given by x_new M. In this process, we decay the importance of items in x_new to rely more on recent consumption, similar to Eq. (5):

w_inf(i, s_new) = exp(−(|s_new| − p(i, s_new)) / δ_inf), (13)

where δ_inf is the hyper-parameter used to control the item weight decay, and |s_new| is the length of the session s_new.

EXPERIMENTAL SETUP

Benchmark datasets. We use four public datasets collected from e-commerce and music streaming services: YooChoose² (YC), Diginetica³ (DIGI), RetailRocket⁴ (RR), and NowPlaying⁵ (NOWP). Following existing studies [16, 31, 32, 53], we use single training-test split (single-split) datasets (i.e., YC-1/4, DIGI1). To minimize the risk of random effects, we also split the datasets (i.e., YC5, DIGI5, RR, NOWP) into five folds (five-split) which are contiguous in time, as used in recent extensive evaluations [33-35]. Following the convention [33-35], we discard the sessions with only one interaction and the items that occur fewer than five times. Also, we split the training and test datasets chronologically and use a portion of the training set for validation, such that the validation set size is equal to that of the test set. (See Appendix D for the detailed statistics of all benchmark datasets.)

Competing models.
We compare our model to nine competing models. Among the non-neural models, we compare to AR [2], SR [25], STAN [11], and SLIST [5]. AR [2] is a simple model using association rules, and SR [25] is a Markov chain model combined with association rules. STAN [11] is an improved variant of SKNN [21], a session-based kNN algorithm. SLIST [5] is a linear model designed for SR. Among the neural models, we compare to NARM [31], STAMP [32], SR-GNN [53], NISER+ [13], and GCE-GNN [52]. Notably, SR-GNN [53] employs GNNs to model the complex sequential dependency of items. To overcome overfitting and popularity bias, NISER+ [13] uses normalized item and session embeddings on top of SR-GNN. Lastly, GCE-GNN [52] considers inter-session relationships by modeling item transitions over all sessions. (See Appendix E for the implementation details of the baselines.) All source code is available at https://github.com/jin530/SWalk.

Evaluation protocol and metrics. As in the common protocol [18, 31, 53], we use the iterative revealing scheme, which iteratively exposes an item from a session to the model, to reflect sequential user behavior throughout the entire session. We adopt several metrics to handle two scenarios: (i) to evaluate the next single item, we use Hit Rate (HR) and Mean Reciprocal Rank (MRR), which have been widely used in existing studies [17, 32, 53]; HR and MRR measure the next item's existence and rank in the recommendation list, respectively. (ii) To evaluate all subsequent items, we use two common IR measures, Recall (R@k) and Mean Average Precision (MAP@k), which measure the subsequent items' existence and rank in the recommendation list, respectively. We use k ∈ {5, 10, 20, 50, 100}, where k = 20 is the default.

EVALUATION RESULTS

We evaluate the accuracy and efficiency of S-Walk by comparing it with the competing models. Based on thorough and extensive evaluations, we draw the following conclusions.

• S-Walk exhibits state-of-the-art performance on multiple datasets.
For R@20, MAP@20, and HR@20, S-Walk consistently outperforms the other models, by up to 3.16%, 4.11%, and 3.65%, respectively (Section 5.1).
• The inference of S-Walk is up to 8.9 times faster than that of DNN-based models. Surprisingly, S-Walk can be compressed (sparsified by zeroing out entries via thresholding) without sacrificing its accuracy (Section 5.2).
• For long sessions, where user intent is more difficult to capture, S-Walk significantly outperforms existing models (Section 5.3).
• S-Walk converges within 3-5 steps and is superior to the k-step method because it utilizes the teleportation model for restarts (Section 5.4).

Evaluation of Accuracy

Table 1 reports the accuracy of S-Walk and the other competitors. No single competing model shows the best performance over all the datasets, as reported in existing studies [33-35]. Remarkably, the existing models behave differently on single-split and five-split datasets; while most neural models surpass non-neural models on the single-split datasets, their performance degrades on the five-split datasets. We conjecture that the parameters of the neural models reported in existing studies are mostly tuned for the single-split datasets. On the five-split datasets, the variance of the neural models is also larger than that of the non-neural models. Based on these observations, it is challenging for one model to consistently achieve outstanding performance on all the datasets.

Nevertheless, S-Walk shows competitive or state-of-the-art performance. Notably, the gain of S-Walk is up to 4.08% and 8.15% in R@20 and MAP@20 over the best competing model. On all five-split datasets, S-Walk consistently surpasses the existing models in HR@20, R@20, and MAP@20. These empirical results indicate that S-Walk captures various intra- and inter-correlations among items without being biased toward specific datasets.
Compared to the other metrics, S-Walk is slightly weaker in MRR@20, particularly on the YC and NOWP datasets. Based on the gap between SR [25] and STAN [11], we observe that the MRR@20 scores on these datasets are mostly governed by intra-session relationships. We thus address this by taking fewer random-walk steps on these datasets; e.g., S-Walk (1) considers mostly intra-session relationships, and achieves superior performance in MRR@20 as well.

Evaluation of Scalability

Table 2 compares the inference time of S-Walk and the other best models on several datasets. Whereas the computational cost of S-Walk is proportional only to the number of items, that of the neural models also depends on the number of layers and their dimensions. Although S-Walk runs on a CPU while the other models run on a GPU, S-Walk shows about five times faster inference owing to its simpler structure, even with better accuracy. This property is highly desirable for deploying S-Walk in real-world applications.

We also attempt model pruning for S-Walk. As a simple strategy, we adopt magnitude pruning [10], i.e., the model is globally sparsified by zeroing out the parameter values with the smallest magnitudes. For instance, 100× compression means that we retain only the 1% of non-zero entries in M with the highest magnitude while zeroing out the remaining 99%. Surprisingly, S-Walk preserves its accuracy even at extreme (100×) compression ratios, as depicted in Figure 3. Random pruning serves as a simple baseline, i.e., parameters are removed at random under the given compression ratio. This result indicates that the learned relationships between items are disentangled with locality, as observed in existing studies [4, 27-30]. Owing to this valuable property, the compressed S-Walk is memory-efficient and suitable for low-resource devices, e.g., mobile and embedded applications.

Effect of Various Session Data

To further investigate the accuracy gains of S-Walk, we carefully design case studies and examine the effects of session length and retrieval size. (See Appendix F for the effect of the data size on S-Walk and the competing models.) To observe how the session length affects performance, we compare S-Walk with the other best models (i.e., STAN [11], SR-GNN [53], and NISER+ [13]) by categorizing the entire set of sessions into two groups: short (≤ 5 items) and long sessions (> 5 items). Note that the ratio of short sessions is 77.2% (DIGI5) and 79.2% (RR), respectively. Figure 4 indicates that long sessions are much more challenging for effectively predicting users' hidden intents. Nonetheless, S-Walk significantly outperforms NISER+ [13] on long sessions, by 6.2%-20.2% and 5.8%-21.0% in R@20 and MAP@20, respectively. Based on this result, we confirm that S-Walk effectively captures complicated item patterns, further improving the accuracy on longer sessions.

Table 1: Accuracy comparison of S-Walk and competing models, following the experimental setup in [34, 35]. Gains indicate how much better the best proposed model is than the best baseline model. S-Walk (1) is a variant of S-Walk trained only up to the first step M_(1). The best model is marked in bold and the second best model is underlined.

Figure 5 depicts the comparison between S-Walk and the competitive models for various numbers of recommended items. For all cut-off sizes, we observe that S-Walk consistently surpasses NISER+ by 5.2%-16.9% and 5.4%-17.2% in Recall and MAP, respectively. In this sense, S-Walk can expand item coverage by increasing the number of steps in the random-walk process on the item graph.

Ablation Study

We analyze the effect of each component of S-Walk, i.e., the transition model and the teleportation model. As shown in Table 3, the complete S-Walk shows the best performance, compared to using SR [25] for the transition model.
For the teleportation model, AR [2] shows worse accuracy than the identity matrix on the YC-1/4 dataset, where AR [2] performs worst. This implies that an incorrect teleportation model may hinder the random walks.

Table 3: R@20 and MAP@20 of S-Walk with various transition and teleportation models. SR [25] and AR [2] are simple Markov- and co-occurrence-based models. I denotes the identity matrix. R and T denote the transition and teleportation models, respectively.

Figure 6 depicts the effect of the damping factor α in S-Walk, which controls the ratio of walks to restarts. With α = 0.7, S-Walk shows the best performance, implying that the transition model is more dominant than the teleportation model. Finally, with various numbers of random-walk steps, we compare S-Walk with the k-step method, which uses the k-step landing probability on the transition graph. (i) Without restarts through the teleportation graph, the performance degrades significantly; (ii) with k ≥ 2 in the k-step method, the accuracy continues to decrease; and (iii) S-Walk usually converges within 3-5 steps, achieving the best performance on both datasets.

RELATED WORK

Random walk-based models. With the success of PageRank [26], the idea of the random walk has been widely adopted to address the data sparsity problem in recommender systems [55]. TrustWalker [20] addressed cold-start user/item problems using trust information between users. Starting from a target user vertex, [6, 7] estimated the proximity score using the transition probabilities after short random walks on the user-item bipartite network. Random-walk-based models have also been successfully deployed in large-scale industrial systems [8]. Recently, RecWalk [37, 38] leveraged the spectral properties of nearly decoupled Markov chains and combined an item model with random walks.

Session-based recommendation (SR). SR models are categorized into three groups: Markov chain models, neighborhood-based models, and DNN-based models.
For more details, please refer to recent survey papers [3, 22]. First, Markov chains (MC) are useful for modeling consecutive item dependency. FPMC [43] proposed tensor factorization based on MF, and FOSSIL [14] combined FISM [24] with factorized MCs. SR [25] proposed an improved MC model by combining association rules. Although these models are effective at addressing short-term item dependency, they do not utilize the various other patterns among items. Second, Jannach and Ludewig [21] adopted the k-nearest neighbor (kNN) approach for SR, and STAN [11] improved SKNN with various weighting schemes to further reflect item dependency. Recently, [33-35] reported that the kNN-based models show competitive performance on various datasets. However, they are generally limited in representing high-order dependency among items. Lastly, various DNN-based models have been used for SR. GRU4Rec [16, 17] employed gated recurrent units (GRU) for SR. Later, NARM [31] and STAMP [32] utilized attention mechanisms to distinguish short- and long-term item dependency. To further analyze complex item transitions, SR-GNN [53] recently exploited gated graph neural networks (GGNN). However, SR-GNN [53] is often vulnerable to overfitting [13] owing to extreme data sparsity.

CONCLUSION

In this work, we propose S-Walk, a session-based recommendation model using item random walks. To complement the drawbacks of existing models, we utilize the random walk with restart to fully capture intra- and inter-correlations of sessions. We incorporate efficient linear item models, i.e., the transition model and the teleportation model, into the item random-walk process. Our extensive evaluation shows that S-Walk achieves comparable or state-of-the-art accuracy, high scalability, and fast inference speed over various benchmark datasets.

A CLOSED-FORM SOLUTIONS

We provide a detailed derivation of the closed-form solutions for the item transition model and the item teleportation model in Sections 3.2 and 3.3.
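As a quick numerical sanity check of the transition-model closed form (Eq. (7) in the main text), the sketch below verifies with invented matrices that the ridge solution zeroes the first-order derivative; λ = 3 and the matrix shapes are arbitrary choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 3.0                                  # illustrative L2 weight
Y, Z = rng.random((7, 4)), rng.random((7, 4))

# Closed form: B_tran = (Y^T Y + lam*I)^{-1} (Y^T Z).
B = np.linalg.inv(Y.T @ Y + lam * np.eye(4)) @ (Y.T @ Z)

# Half-gradient of the ridge objective, (Y^T Y + lam*I) B - Y^T Z,
# evaluated at the closed-form solution; it should vanish.
grad = (Y.T @ Y + lam * np.eye(4)) @ B - Y.T @ Z
```

The gradient being numerically zero confirms that B is a stationary point of the convex objective, hence its global minimizer.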
Based on the closed-form solutions, we can further apply row-level weights to reflect the timeliness of sessions, i.e., more recent sessions are more important.

A.1 Linear Item Transition Model

Given the input matrix Y ∈ R^{m′×n} and the output matrix Z ∈ R^{m′×n}, the objective function is expressed as

argmin_{B_tran} L(B_tran) = ‖Z − Y · B_tran‖²_F + λ‖B_tran‖²_F. (14)

The first-order derivative of Eq. (14) with respect to B_tran is then given by

(1/2) · ∂L/∂B_tran = (−Yᵀ)(Z − Y · B_tran) + λB_tran = (YᵀY + λI) · B_tran − YᵀZ. (15)

Setting Eq. (15) to 0 and solving for B_tran gives the closed-form solution of Eq. (14):

B̂_tran = P̂′ · (YᵀZ), (16)

where P̂′ = (YᵀY + λI)^{−1}.

A.2 Linear Item Teleportation Model

To solve the constrained optimization problem, we define a new objective function L(B_tele, μ) by applying a Lagrangian multiplier and the KKT conditions:

L(B_tele) = ‖X − X · B_tele‖²_F + λ‖B_tele‖²_F, s.t. diag(B_tele) ≤ ξ, (17)

L(B_tele, μ) = ‖X − X · B_tele‖²_F + λ‖B_tele‖²_F + 2μᵀ diag(B_tele − ξI), (18)

where μ ∈ R^n is the vector of KKT multipliers satisfying μ_j ≥ 0 for all j. Then, we differentiate L(B_tele, μ) with respect to B_tele to minimize Eq. (18):

(1/2) ∂L(B_tele, μ)/∂B_tele = (−Xᵀ)(X − X · B_tele) + λB_tele + diagMat(μ)
= −XᵀX + XᵀX · B_tele + λB_tele + diagMat(μ)
= (XᵀX + λI)B_tele − XᵀX + diagMat(μ). (19)

Setting this to 0 and solving for B_tele gives the optimal B̂_tele as

B̂_tele = (XᵀX + λI)^{−1} · [XᵀX + λI − λI − diagMat(μ)]
= P̂[P̂^{−1} − λI − diagMat(μ)]
= I − P̂[λI + diagMat(μ)]
= I − λP̂ − P̂ · diagMat(μ), (20)

where P̂ = (XᵀX + λI)^{−1}. Besides, a KKT multiplier μ_j is zero only if B̂_tele,jj ≤ ξ; otherwise, μ_j takes a non-zero value. In this case, μ_j regularizes the value of B̂_tele,jj so that B̂_tele,jj = ξ. For B̂_tele,jj, we can develop the following equation:

B̂_tele,jj = ξ = 1 − λP̂_jj − μ_j P̂_jj. (21)

Finally, μ_j can be expressed as follows:

μ_j = (1 − ξ − λP̂_jj) / P̂_jj = (1 − ξ)/P̂_jj − λ. (22)

Substituting μ into Eq.
(20) and enforcing non-negative elements in B̂_tele gives B̂_tele:

B̂_tele = I − P̂ · diagMat(γ̂), (23)

γ̂_j = λ if 1 − λP̂_jj ≤ ξ, and γ̂_j = (1 − ξ)/P̂_jj otherwise. (24)

Here, γ̂ is a vector defined by γ̂ = μ + λ · 1.

B TRAINING OF S-WALK

Algorithm 1 describes how the final item-item matrix M is trained using the transition matrix R and the teleportation matrix T. First, we define M_(0) as the identity matrix. Then, we repeatedly update M_(t) using R and T. For example, M_(1) can be thought of as R with a probability of α and T with a probability of (1 − α). After M_(t) converges, we use M_(t) as the item-item matrix for inference.

C THEORETICAL ANALYSIS

Property of the item transition matrix. The item transition matrix R can be viewed as a generalized Markov model. When representing partial sessions, the importance of items can differ. When δ_pos in Eq. (5) is small, R is close to a simple Markov model [25], capturing consecutive transitions between items. We can also consider a broader range when representing the transitions between items. While the simple Markov model is mostly based on the frequency of consecutive items, our linear model can estimate the sequential correlation between items. As a result, it can outperform existing Markov models [14, 24, 25, 43].

Property of the item teleportation matrix. As discussed in [46], assume that the training matrix X consists of |S| samples of |I| random variables, i.e., x ∼ N(0, Σ), following a Gaussian distribution with zero mean and covariance matrix Σ ∈ R^{n×n}. Here, the estimate of the covariance matrix is Σ̂ = XᵀX/|S|, and P̂ = Σ̂^{−1} is the estimate of the precision (or concentration) matrix. By solving the closed-form equation in Eq. (10), the item teleportation matrix T can be interpreted through the precision matrix estimated from the co-occurrence matrix XᵀX/|S|. In other words, the precision matrix can be viewed as a similarity matrix between items, which has been used in recent neighborhood-based approaches [48, 49].

Model convergence.
The graph G_M is defined by a combination of R and T. The non-negative values in both matrices imply positive correlations, and the edges based on these correlations form the graph G_M. Besides, T has a self-loop connection, as in Eq. (11). Based on this structure, the transition matrix R defines an ergodic Markov chain, i.e., irreducible and aperiodic. That is, the landing probabilities of S-Walk converge to a limiting distribution. Empirically, S-Walk converges within 3-5 steps, as observed in Section 5.4.

D DATASET DESCRIPTION

Table 4 summarizes the detailed statistics of all the benchmark datasets. These datasets are collected from e-commerce and music streaming services: YooChoose (YC), Diginetica (DIGI), RetailRocket (RR), and NowPlaying (NOWP). For YC and DIGI, we use single-split datasets (i.e., YC-1/4 and DIGI1), following existing studies [16, 31, 32, 53]. To evaluate on large-scale datasets, we further experiment on five-split datasets (i.e., YC5, DIGI5, RR, NOWP), used in recent empirical analyses [33-35]. For the single-split datasets, we use the last day as the test set for YC-1/4 and the sessions of the last seven days as the test set for DIGI1, as done in [33-35]. For the five-split datasets, we divide each dataset into five disjoint successive splits. For each split, the last N days of sessions are used as the test set. We choose N as follows: one day for YC, two for RR, five for NOWP, and seven for DIGI.

Table 5: Hyper-parameter settings of S-Walk. λ is the L2 weight decay, α is the damping factor, and β is the self-loop factor. δ_pos is the weight decay by item position and δ_inf is the weight decay by inference item position.

Hyper-parameter setting. Table 5 reports all the hyper-parameters of S-Walk. The hyper-parameters in the left column control the involvement of the random-walk models: α is the damping factor that determines the proportion of the transition matrix R and the teleportation matrix T, and β controls the probability that the surfer jumps to an adjacent item rather than to itself. The hyper-parameters in the right column are the model hyper-parameters used for the transition model and the teleportation model.

F ADDITIONAL EXPERIMENTAL RESULTS

We also evaluate the accuracy of S-Walk and the competing models by varying the size of the entire session data from 10% to 100%. As shown in Figure 8, S-Walk consistently achieves better accuracy than the other baselines. Even though all models degrade with a smaller training set, S-Walk suffers the least from sparser data. Notably, the DNN-based models show significant degradation in accuracy with smaller training sets owing to their complex structures and enormous numbers of parameters. (For the other datasets, we observe similar tendencies.) Based on this observation, S-Walk is still more effective than the baseline models, even on small-scale datasets.

Figure 1: Illustration of intra- and inter-session relationships. The red shoes and the gray shoes (in the blue box) do not appear within a single session, but they are correlated because both are related to the orange shirt (in the red boxes).

Figure 2: The overall architecture of S-Walk. Given a session-item matrix, two different linear models build the transition graph and the teleportation graph with adjacency matrices R and T, using Eq. (7) and Eq. (10), respectively. Then, in Eq. (12), the random walk with restart is used to build the final graph with adjacency matrix M, capturing high-order relationships.

Figure 3: R@20 and MAP@20 of S-Walk over various compression ratios.

Figure 4: R@20 and MAP@20 of S-Walk and competitor models over different session lengths (Short ≤ 5, Long > 5).

Figure 5: R@N and MAP@N of S-Walk and state-of-the-art models over various cut-offs on DIGI5 and RR.

Figure 6: R@20 and MAP@20 of S-Walk over the varied damping factor α.
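The convergence claim in Appendix C can be checked numerically. The sketch below iterates the power-method update described in Appendix B (M_(0) = I, then mixing R-steps with probability α and T-restarts with probability 1 − α) and compares the fixed point against the closed form (1 − α) T (I − α R)^{−1}, i.e., the geometric series in Eq. (12). The 3-item matrices, α = 0.7, and the tolerance are invented for illustration.

```python
import numpy as np

alpha, eps = 0.7, 1e-9
R = np.array([[0.0, 0.8, 0.2],
              [0.5, 0.0, 0.5],
              [0.3, 0.7, 0.0]])           # invented row-stochastic transition matrix
T = np.eye(3)                             # identity teleportation, for simplicity

M = np.eye(3)                             # M_(0) = I
for _ in range(1000):
    M_next = alpha * M @ R + (1 - alpha) * T
    if np.abs(M_next - M).sum() <= eps:   # L1 convergence criterion
        M = M_next
        break
    M = M_next

# The fixed point equals the series sum_{t>=0} alpha^t (1 - alpha) T R^t of Eq. (12).
M_closed = (1 - alpha) * T @ np.linalg.inv(np.eye(3) - alpha * R)
```

Because R is row-stochastic and α < 1, the iteration contracts by a factor α per step, so convergence is geometric.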
Figure 7: R@20 of S-Walk using the random walk with restart (RWR) and a k-step landing probability.

Algorithm 1: Training procedure for S-Walk.
Input: transition matrix R, teleportation matrix T.
Output: S-Walk model M.
1: M_(0) ← I
2: repeat
3:   update M_(t) using R and T
4: until ‖M_(t) − M_(t−1)‖₁ ≤ ε
5: M ← M_(t)

Figure 8: R@20 and MAP@20 of S-Walk and competing models over various sizes of the training session matrices.

All experiments are run on a desktop with 2 NVidia TITAN RTX GPUs, 256 GB memory, and 2 Intel Xeon Processor E5-2695 v4 (2.10 GHz, 45M cache).

Table 2: The number of floating-point operations and runtime (in seconds) for inference of S-Walk and the DNN-based models. Gains indicate how much faster S-Walk is than GCE-GNN [52]. Runtime for the DNN models was measured on a GPU, and that for S-Walk on a CPU.

              YC1/4            DIGI5              RR
Models     GFLOPs   Time    GFLOPs   Time    GFLOPs  Time
SR-GNN     1282.8   70.8     765.4   49.2     247.2  12.7
NISER+     2605.8   87.1    1551.0   59.7     501.8  15.7
GCE-GNN   51094.8  108.8   10445.9   74.0    9446.0  19.8
S-Walk       11.0   20.5       4.9    8.3       2.3   5.2
Gain      4632.3x   5.3x   2131.3x   8.9x   4133.2x  3.8x

Table 4: Statistics of the benchmark datasets. #Actions indicates the number of entire user-item interactions.

Split    Dataset   #Actions   #Sessions  #Items  #Actions/Sess.  #Items/Sess.
1-split  YC-1/4   7,909,307  1,939,891  30,638       4.08           3.28
         DIGI1      916,370    188,807  43,105       4.85           4.08
5-split  YC5      5,426,961  1,375,128  28,582       3.95           3.17
         DIGI5      203,488     41,755  32,137       4.86           4.08
         RR         212,182     59,962  31,968       3.54           2.56
         NOWP       271,177     27,005  75,169      10.04           9.38

¹ As the negative values in B̂_tran mean negative correlations among items, they are less important than positive correlations. In our empirical study, it is observed that this conversion does not harm the accuracy of the linear model.
² https://www.kaggle.com/chadgostopp/recsys-challenge-2015
³ https://competitions.codalab.org/competitions/11161
⁴ https://www.kaggle.com/retailrocket/ecommerce-dataset
⁵ https://drive.google.com/drive/folders/1ritDnO_Zc6DFEU6UND9C8VCisT0ETVp5

REFERENCES

[1] Sami Abu-El-Haija, Amol Kapoor, Bryan Perozzi, and Joonseok Lee. 2020. N-GCN: Multi-scale graph convolution for semi-supervised node classification. In Uncertainty in Artificial Intelligence (UAI).
[2] Rakesh Agrawal, Tomasz Imielinski, and Arun N. Swami. 1993. Mining Association Rules between Sets of Items in Large Databases. In SIGMOD. 207-216.
[3] Geoffray Bonnin and Dietmar Jannach. 2014. Automated Generation of Music Playlists: Survey and Experiments. ACM Comput. Surv. 47, 2 (2014), 26:1-26:35.
[4] Minjin Choi, Yoonki Jeong, Joonseok Lee, and Jongwuk Lee. 2021. Local Collaborative Autoencoders. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining. 734-742.
[5] Minjin Choi, Jinhong Kim, Joonseok Lee, Hyunjung Shim, and Jongwuk Lee. 2021. Session-aware Linear Item-Item Models for Session-based Recommendation. In WWW. 2186-2197.
[6] Fabian Christoffel, Bibek Paudel, Chris Newell, and Abraham Bernstein. 2015. Blockbusters and Wallflowers: Accurate, Diverse, and Scalable Recommendations with Random Walks. In RecSys. 163-170.
[7] Colin Cooper, Sang-Hyuk Lee, Tomasz Radzik, and Yiannis Siantos. 2014. Random walks in recommender systems: exact computation and simulations. In WWW (Companion Volume). 811-816.
[8] Chantat Eksombatchai, Pranav Jindal, Jerry Zitao Liu, Yuchen Liu, Rahul Sharma, Charles Sugnet, Mark Ulrich, and Jure Leskovec. 2018. Pixie: A System for Recommending 3+ Billion Items to 200+ Million Users in Real-Time. In WWW. 1775-1784.
[9] Hui Fang, Guibing Guo, Danning Zhang, and Yiheng Shu. 2019. Deep Learning-Based Sequential Recommender Systems: Concepts, Algorithms, and Evaluations. In ICWE. 574-577.
[10] Jonathan Frankle and Michael Carbin. 2019. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. In ICLR.
[11] Diksha Garg, Priyanka Gupta, Pankaj Malhotra, Lovekesh Vig, and Gautam M. Shroff. 2019. Sequence and Time Aware Neighborhood for Session-based Recommendations: STAN. In SIGIR. 1069-1072.
[12] David Goldberg, David A. Nichols, Brian M. Oki, and Douglas B. Terry. 1992. Using Collaborative Filtering to Weave an Information Tapestry. Commun. ACM 35, 12 (1992), 61-70.
[13] Priyanka Gupta, Diksha Garg, Pankaj Malhotra, Lovekesh Vig, and Gautam M. Shroff. 2019. NISER: Normalized Item and Session Representations with Graph Neural Networks. CoRR (2019).
[14] Ruining He and Julian J. McAuley. 2016. Fusing Similarity Models with Markov Chains for Sparse Sequential Recommendation. In ICDM. 191-200.
[15] Jonathan L. Herlocker, Joseph A. Konstan, Al Borchers, and John Riedl. 1999. An Algorithmic Framework for Performing Collaborative Filtering. In SIGIR. 230-237.
[16] Balázs Hidasi and Alexandros Karatzoglou. 2018. Recurrent Neural Networks with Top-k Gains for Session-based Recommendations. In CIKM. 843-852.
[17] Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2016. Session-based Recommendations with Recurrent Neural Networks. In ICLR.
[18] Balázs Hidasi, Massimo Quadrana, Alexandros Karatzoglou, and Domonkos Tikk. 2016. Parallel Recurrent Neural Network Architectures for Feature-rich Session-based Recommendations. In RecSys. 241-248.
[19] Yifan Hu, Yehuda Koren, and Chris Volinsky. 2008. Collaborative Filtering for Implicit Feedback Datasets. In ICDM. 263-272.
[20] Mohsen Jamali and Martin Ester. 2009. TrustWalker: a random walk model for combining trust-based and item-based recommendation. In KDD. 397-406.
[21] Dietmar Jannach and Malte Ludewig. 2017. When Recurrent Neural Networks meet the Neighborhood for Session-Based Recommendation. In RecSys. 306-310.
[22] Dietmar Jannach, Malte Ludewig, and Lukas Lerche. 2017. Session-based item recommendation in e-commerce: on short-term intents, reminders, trends and discounts. User Model. User Adapt. Interact. 27, 3-5 (2017), 351-392.
[23] Olivier Jeunen, Jan Van Balen, and Bart Goethals. 2020. Closed-Form Models for Collaborative Filtering with Side-Information. In RecSys. 651-656.
[24] Santosh Kabbur, Xia Ning, and George Karypis. 2013. FISM: factored item similarity models for top-N recommender systems. In KDD. 659-667.
[25] Iman Kamehkhosh, Dietmar Jannach, and Malte Ludewig. 2017. A Comparison of Frequent Pattern Techniques and a Deep Learning Method for Session-Based Recommendation. In RecSys. 50-56.
[26] Amy N. Langville and Carl D. Meyer. 2006. Google's PageRank and Beyond: The Science of Search Engine Rankings. Princeton University Press, USA.
[27] Joonseok Lee, Samy Bengio, Seungyeon Kim, Guy Lebanon, and Yoram Singer. 2014. Local collaborative ranking. In WWW. 85-96.
[28] Joonseok Lee, Seungyeon Kim, Guy Lebanon, and Yoram Singer. 2013. Local low-rank matrix approximation. In ICML. 82-90.
[29] Joonseok Lee, Seungyeon Kim, Guy Lebanon, and Yoram Singer. 2013. Matrix approximation under local low-rank assumption. In ICLR.
[30] Joonseok Lee, Seungyeon Kim, Guy Lebanon, Yoram Singer, and Samy Bengio. 2016. LLORMA: Local Low-Rank Matrix Approximation. Journal of Machine Learning Research 17 (2016), 15:1-15:24.
[31] Jing Li, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Tao Lian, and Jun Ma. 2017. Neural Attentive Session-based Recommendation. In CIKM. 1419-1428.
STAMP: Short-Term Attention/Memory Priority Model for Session-based Recommendation. Qiao Liu, Yifu Zeng, Refuoe Mokhosi, Haibin Zhang, Qiao Liu, Yifu Zeng, Refuoe Mokhosi, and Haibin Zhang. 2018. STAMP: Short- Term Attention/Memory Priority Model for Session-based Recommendation. In KDD. 1831-1839. Evaluation of session-based recommendation algorithms. Malte Ludewig, Dietmar Jannach, User Model. User Adapt. Interact. 28Malte Ludewig and Dietmar Jannach. 2018. Evaluation of session-based recom- mendation algorithms. User Model. User Adapt. Interact. 28, 4-5 (2018), 331-390. Empirical Analysis of Session-Based Recommendation Algorithms. Malte Ludewig, Noemi Mauro, Sara Latifi, Dietmar Jannach, CoRR abs/1910.12781Malte Ludewig, Noemi Mauro, Sara Latifi, and Dietmar Jannach. 2019. Empirical Analysis of Session-Based Recommendation Algorithms. CoRR abs/1910.12781 (2019). Performance comparison of neural and non-neural approaches to session-based recommendation. Malte Ludewig, Noemi Mauro, Sara Latifi, Dietmar Jannach, RecSys. Malte Ludewig, Noemi Mauro, Sara Latifi, and Dietmar Jannach. 2019. Per- formance comparison of neural and non-neural approaches to session-based recommendation. In RecSys. 462-466. Junhua Fang, and Victor S. Sheng. 2020. Collaborative Self-Attention Network for Session-based Recommendation. Anjing Luo, Pengpeng Zhao, Yanchi Liu, Fuzhen Zhuang, Deqing Wang, Jiajie Xu, Anjing Luo, Pengpeng Zhao, Yanchi Liu, Fuzhen Zhuang, Deqing Wang, Jiajie Xu, Junhua Fang, and Victor S. Sheng. 2020. Collaborative Self-Attention Network for Session-based Recommendation. In IJCAI. 2591-2597. RecWalk: Nearly Uncoupled Random Walks for Top-N Recommendation. N Athanasios, George Nikolakopoulos, Karypis, WSDM. Athanasios N. Nikolakopoulos and George Karypis. 2019. RecWalk: Nearly Uncoupled Random Walks for Top-N Recommendation. In WSDM. 150-158. Boosting Item-based Collaborative Filtering via Nearly Uncoupled Random Walks. 
N Athanasios, George Nikolakopoulos, Karypis, ACM Trans. Knowl. Discov. Data. 1426Athanasios N. Nikolakopoulos and George Karypis. 2020. Boosting Item-based Collaborative Filtering via Nearly Uncoupled Random Walks. ACM Trans. Knowl. Discov. Data 14, 6 (2020), 64:1-64:26. Xia Ning, George Karypis, SLIM: Sparse Linear Methods for Top-N Recommender Systems. In ICDM. Xia Ning and George Karypis. 2011. SLIM: Sparse Linear Methods for Top-N Recommender Systems. In ICDM. 497-506. Star Graph Neural Networks for Session-based Recommendation. Zhiqiang Pan, Fei Cai, Wanyu Chen, Honghui Chen, Maarten De Rijke, CIKM. Zhiqiang Pan, Fei Cai, Wanyu Chen, Honghui Chen, and Maarten de Rijke. 2020. Star Graph Neural Networks for Session-based Recommendation. In CIKM. 1195- 1204. Rethinking Item Importance in Session-based Recommendation. Zhiqiang Pan, Fei Cai, Yanxiang Ling, Maarten De Rijke, SIGIR. Zhiqiang Pan, Fei Cai, Yanxiang Ling, and Maarten de Rijke. 2020. Rethinking Item Importance in Session-based Recommendation. In SIGIR. 1837-1840. Sequence-Aware Recommender Systems. Massimo Quadrana, Paolo Cremonesi, Dietmar Jannach, ACM Comput. Surv. 5136Massimo Quadrana, Paolo Cremonesi, and Dietmar Jannach. 2018. Sequence- Aware Recommender Systems. ACM Comput. Surv. 51, 4 (2018), 66:1-66:36. Factorizing personalized Markov chains for next-basket recommendation. Steffen Rendle, Christoph Freudenthaler, Lars Schmidt-Thieme, WWW. Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2010. Factor- izing personalized Markov chains for next-basket recommendation. In WWW. 811-820. 2015. Recommender Systems Handbook. Francesco Ricci, Lior Rokach, and Bracha ShapiraSpringerFrancesco Ricci, Lior Rokach, and Bracha Shapira (Eds.). 2015. Recommender Systems Handbook. Springer. Collaborative Filtering via High-Dimensional Regression. Harald Steck, CoRR abs/1904.13033Harald Steck. 2019. Collaborative Filtering via High-Dimensional Regression. CoRR abs/1904.13033 (2019). 
Embarrassingly Shallow Autoencoders for Sparse Data. Harald Steck, WWW. Harald Steck. 2019. Embarrassingly Shallow Autoencoders for Sparse Data. In WWW. 3251-3257. Markov Random Fields for Collaborative Filtering. Harald Steck, NeurIPS. Harald Steck. 2019. Markov Random Fields for Collaborative Filtering. In NeurIPS. 5474-5485. Unifying nearest neighbors collaborative filtering. Koen Verstrepen, Bart Goethals, RecSys. Koen Verstrepen and Bart Goethals. 2014. Unifying nearest neighbors collabora- tive filtering. In RecSys. 177-184. Effective Latent Models for Binary Feedback in Recommender Systems. Maksims Volkovs, Guang Wei Yu, SIGIR. Maksims Volkovs and Guang Wei Yu. 2015. Effective Latent Models for Binary Feedback in Recommender Systems. In SIGIR. 313-322. A Collaborative Session-based Recommendation Approach with Parallel Memory Modules. Meirui Wang, Pengjie Ren, Lei Mei, Zhumin Chen, Jun Ma, Maarten De Rijke, SIGIR. Meirui Wang, Pengjie Ren, Lei Mei, Zhumin Chen, Jun Ma, and Maarten de Rijke. 2019. A Collaborative Session-based Recommendation Approach with Parallel Memory Modules. In SIGIR. 345-354. Shoujin Wang, Longbing Cao, Yan Wang, arXiv:1902.04864A Survey on Session-based Recommender Systems. Shoujin Wang, Longbing Cao, and Yan Wang. 2019. A Survey on Session-based Recommender Systems. arXiv:1902.04864 (2019). Global Context Enhanced Graph Neural Networks for Session-based Recommendation. Ziyang Wang, Wei Wei, Gao Cong, Xiao-Li Li, Xianling Mao, Minghui Qiu, SIGIR. Ziyang Wang, Wei Wei, Gao Cong, Xiao-Li Li, Xianling Mao, and Minghui Qiu. 2020. Global Context Enhanced Graph Neural Networks for Session-based Rec- ommendation. In SIGIR. 169-178. Session-Based Recommendation with Graph Neural Networks. Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, Tieniu Tan, AAAI. Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, and Tieniu Tan. 2019. Session-Based Recommendation with Graph Neural Networks. In AAAI. 346-353. 
Graph Contextualized Self-Attention Network for Session-based Recommendation. Chengfeng Xu, Pengpeng Zhao, Yanchi Liu, Victor S Sheng, Jiajie Xu, Fuzhen Zhuang, Junhua Fang, and Xiaofang Zhou. Chengfeng Xu, Pengpeng Zhao, Yanchi Liu, Victor S. Sheng, Jiajie Xu, Fuzhen Zhuang, Junhua Fang, and Xiaofang Zhou. 2019. Graph Contextualized Self- Attention Network for Session-based Recommendation. In IJCAI. 3940-3946. A random walk method for alleviating the sparsity problem in collaborative filtering. Hilmi Yildirim, Mukkai S Krishnamoorthy, RecSys. Hilmi Yildirim and Mukkai S. Krishnamoorthy. 2008. A random walk method for alleviating the sparsity problem in collaborative filtering. In RecSys. 131-138. We implemented the proposed model and STAN [11] using NumPy. We used public source code for SR-GNN 10 [53] and implemented NISER+ [13] using SR-GNN code. We used all source code for the other baseline models 11 in [35]. For reproducibility, we conducted the reported results for all the baseline models, and actually verified whether the performance of baseline models could be reproduced with an error of 1-2% or less in our implementation environments. E REPRODUCIBILITY Implementation details. For S-Walk, we tuned and among {0.1, 0.3, 0.5, 0.7, 0.9}, pos and inf among {0. 1258} using the validation set selected from the training set for the same period as the testing set. For the baseline models, we used the best hyper-parametersE REPRODUCIBILITY Implementation details. For S-Walk, we tuned and among {0.1, 0.3, 0.5, 0.7, 0.9}, pos and inf among {0.125, 0.25, 0.5, 1, 2, 4, 8} using the validation set selected from the training set for the same period as the testing set. For the baseline models, we used the best hyper-parameters reported in [33, 34]. We implemented the proposed model and STAN [11] using NumPy. We used public source code for SR-GNN 10 [53] and implemented NISER+ [13] us- ing SR-GNN code. We used all source code for the other baseline models 11 in [35]. 
For reproducibility, we conducted the reported results for all the baseline models, and actually verified whether the performance of baseline models could be reproduced with an error of 1-2% or less in our implementation environments. We conducted 6 https://www.kaggle.com/chadgostopp/recsys-challenge-2015
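The tuning protocol described above is a plain exhaustive grid search over the two listed value sets. The sketch below illustrates that procedure; the two weight parameter names (`w1`, `w2`) and the `evaluate` callable are placeholders, since the original symbol names did not survive extraction:

```python
import itertools

# Value grids as listed above; the first grid is shared by the two
# unnamed model weights, the second by the "pos" and "inf" parameters.
WEIGHT_GRID = (0.1, 0.3, 0.5, 0.7, 0.9)
DECAY_GRID = (0.125, 0.25, 0.5, 1, 2, 4, 8)

def grid_search(evaluate):
    """Score every configuration with `evaluate` (a user-supplied
    validation-metric callable, hypothetical here) and return the
    best-scoring configuration together with its score."""
    best_cfg, best_score = None, float("-inf")
    for w1, w2, pos, inf_ in itertools.product(
            WEIGHT_GRID, WEIGHT_GRID, DECAY_GRID, DECAY_GRID):
        cfg = {"w1": w1, "w2": w2, "pos": pos, "inf": inf_}
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

With five values for each weight and seven for each decay parameter, this scores 5 · 5 · 7 · 7 = 1225 configurations on the validation split.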
[ "https://github.com/jin530/SWalk." ]
[ "Spanning Trees in Graphs of High Minimum Degree with a Universal Vertex II: A Tight Result", "Spanning Trees in Graphs of High Minimum Degree with a Universal Vertex II: A Tight Result" ]
[ "Bruce Reed ", "Maya Stein " ]
[]
[]
We prove that, if m is sufficiently large, every graph on m + 1 vertices that has a universal vertex and minimum degree at least ⌊ 2m 3 ⌋ contains each tree T with m edges as a subgraph. Our result confirms, for large m, an important special case of a conjecture by Havet, Reed, Stein, and Wood.The present paper builds on the results of a companion paper in which we proved the statement for all trees having a vertex that is adjacent to many leaves.
10.1002/jgt.22899
[ "https://arxiv.org/pdf/1905.09806v3.pdf" ]
250,699,037
1905.09806
55fcd1a9c6c85c95209d9a347b284438cfd4e483
Spanning Trees in Graphs of High Minimum Degree with a Universal Vertex II: A Tight Result

Bruce Reed, Maya Stein

20 Jul 2022, arXiv:1905.09806v3 [math.CO]

We prove that, if m is sufficiently large, every graph on m + 1 vertices that has a universal vertex and minimum degree at least ⌊2m/3⌋ contains each tree T with m edges as a subgraph. Our result confirms, for large m, an important special case of a conjecture by Havet, Reed, Stein, and Wood. The present paper builds on the results of a companion paper in which we proved the statement for all trees having a vertex that is adjacent to many leaves.

1 Introduction

It is easy to see that any graph of minimum degree at least m contains a copy of each tree with m edges, and that this bound is sharp. Variants replacing the minimum degree condition with another degree condition have also been proposed. The average degree is used in the well-known Erdős-Sós conjecture (see [BPS21, Ro19] for recent results), and the median degree is used in the Loebl-Komlós-Sós conjecture, which was approximately solved in [HKP+17a, HKP+17b, HKP+17c, HKP+17d]. These variants are strengthenings of the observation at the beginning of the paragraph. If, however, one wishes to strengthen the observation by simply weakening the imposed bound on the minimum degree of the host graph, the problem becomes impossible. For this, it suffices to consider the disjoint union of complete graphs of order m. This graph has minimum degree m − 1 and contains no tree with m edges. But, if we are restricting our attention to spanning trees, it is still possible to embed bounded degree trees using a weaker minimum degree condition. Komlós, Sárközy and Szemerédi showed in [KSS01] that for every δ > 0, every large enough (m + 1)-vertex graph of minimum degree at least (1 + δ)m/2 contains each tree with m edges whose maximum degree is bounded by O(n/log n).
From the example given above, it is clear, though, that an analogue of the result from [KSS01] could not be true for trees that are much smaller than the host graph, even if we required a minimum degree of just below the size of the tree we are looking for. So, it seems natural to seek an additional condition to impose on the host graph to make a statement in this direction come true. A condition on the maximum degree is an obvious candidate, since we may have to embed a star with m edges. The following conjecture in this respect has been put forward recently.

Conjecture 1.1 (Havet, Reed, Stein, and Wood [HRSW20]). Let m ∈ N. If a graph has maximum degree at least m and minimum degree at least ⌊2m/3⌋, then it contains every tree with m edges as a subgraph.

We remark that if the minimum degree condition is replaced by the much stronger bound (1 − γ)m, for a tiny constant γ, a result along the lines of Conjecture 1.1 is true [HRSW20]. The conjecture also holds if the maximum degree condition is replaced by a large function in m [HRSW20]. Furthermore, an approximate version of Conjecture 1.1 holds for bounded degree trees and dense host graphs [BPS19]. Such an approximate version even holds for a generalised form of Conjecture 1.1, where the bound on the minimum degree is allowed to be any value between m/2 and 2m/3, with the maximum degree obeying a corresponding bound between 2m and m (see [BPS20] for details). As further evidence for Conjecture 1.1, we prove that it holds when the graph has m + 1 vertices, if m is large enough. That is, we show the conjecture for the case when we are looking for a spanning tree in a large graph.

Theorem 1.2. There is an m_0 ∈ N such that for every m ≥ m_0, every graph on m + 1 vertices that has minimum degree at least ⌊2m/3⌋ and a universal vertex contains every tree T with m edges as a subgraph.

Clearly, our theorem can also be understood as a variant of the result by Komlós, Sárközy and Szemerédi mentioned above.
The proof of Theorem 1.2 follows quickly from a result obtained in the companion paper [RS19a], and a second result, Lemma 2.2, which will be proved in the present paper. We present the two lemmas and give the short proof of Theorem 1.2 in the next section, deferring the proof of Lemma 2.2 to the subsequent sections.

2 Proof of Theorem 1.2

In the companion paper [RS19a], we showed the following lemma.

Lemma 2.1 ([RS19a, Lemma 1.3]). For every δ > 0, there is an m_δ such that for any m ≥ m_δ the following holds for every graph G on m + 1 vertices that has minimum degree at least ⌊2m/3⌋ and a universal vertex. If T is a tree with m edges, and some vertex of T is adjacent to at least δm leaves, then T embeds in G.

Lemma 2.1 covers the proof of our main result for all trees which have a vertex with many leaves, namely at least δm leaves, for some fixed δ, but is of no help for trees which have no such vertex. This latter case is covered by the next lemma, which will be proved in the present paper.

Lemma 2.2. There are m_1 ∈ N and δ > 0 such that the following holds for every m ≥ m_1, and every graph G on m + 1 vertices that has minimum degree at least ⌊2m/3⌋ and a universal vertex. If T is a tree with m edges such that no vertex of T is adjacent to more than δm leaves, then T embeds in G.

The proof of Lemma 2.2 is given in the next section, Section 3. It will rely on four auxiliary lemmas, Lemmas 3.2, 3.4, 3.6, and 3.7, of which one is proved right away in Section 3, one is from [RS19a], and the remaining two will be proved in later sections of the present paper. With Lemma 2.1 and Lemma 2.2 at hand, the proof of our main result, Theorem 1.2, is straightforward.

Proof of Theorem 1.2. We choose our output m_0 for Theorem 1.2 by taking the maximum value of m_1 and m_δ, where m_1 and δ are given by Lemma 2.2, and m_δ is given for input δ by Lemma 2.1.
Given now T and G as in the theorem, Lemma 2.2 covers the case that T has no vertex adjacent to more than δm leaves, and Lemma 2.1 covers the remaining case.

3 The proof of Lemma 2.2

We start by giving a quick overview of the proof of Lemma 2.2 in Section 3.1. As mentioned earlier, we formally organise the proof of Lemma 2.2 by splitting it up into four auxiliary lemmas, namely Lemma 3.2, Lemma 3.4, Lemma 3.6, and Lemma 3.7. These four auxiliary lemmas will be stated in Section 3.2. Section 3.3 then contains the proof of Lemma 2.2, under the assumption that the four auxiliary lemmas hold. The easy proof of Lemma 3.2 is given in Section 3.4. Lemma 3.6 was proved in [RS19a]. So, at the end of this section, there will be only two lemmas, Lemma 3.4 and Lemma 3.7, left to prove. In Sections 4 and 5, we state and prove two new lemmas, Lemma 4.1 and Lemma 5.1, which together imply Lemma 3.4. The last section of the paper, Section 6, is devoted to the proof of Lemma 3.7.

3.1 Idea of the proof of Lemma 2.2

The idea of the proof is to first reserve a random set S ⊆ V(G) for later use. Then, we embed into G − S a very small subtree T* of the tree T we wish to embed. Actually, we will only embed T* − L, having chosen a subset L ⊆ V(T*) of some low degree vertices (either leaves or vertices of degree 2). The vertices from L will be left out of the embedding for now, as they will only be embedded at the very end. The set L is slightly larger than the set S. This gives us some free space when we embed T − T*, which will be useful. In fact, this freedom makes it possible for us to use a lemma from [RS19a] (stated as Lemma 3.6 in the present paper) for embedding T − T*, unless the graph G has a very special structure, in which case an ad-hoc embedding is provided by Lemma 3.7. After this, there is a small leftover set of vertices of G, which, together with the set S, serves for embedding the vertices from L, by using an absorption argument.
3.2 Four auxiliary lemmas

In the present section, we present our four auxiliary lemmas, Lemma 3.2, Lemma 3.4, Lemma 3.6, and Lemma 3.7. We start with the simplest of our lemmas, Lemma 3.2. This lemma enables us to find a convenient subtree T* of a tree T. We need a quick definition before we give the lemma.

Definition 3.1 (γ-nice subtree, type 1, type 2). Let T be a tree with m edges. Call a subtree T* of T with root t* a γ-nice subtree if (i) |V(T*)| ≤ γm; and (ii) every component of T − T* is adjacent to t*. Consider the following additional conditions: (1) T* contains at least ⌈γm/20⌉ disjoint paths of length 5, and all vertices on these paths have degree at most 2 in T. (2) T* contains at least ⌈γm/40⌉ leaves from T. If the former condition holds, we say T* is of type 1, and if the latter condition holds, we say T* is of type 2.

We are now ready to state the lemma that finds a γ-nice subtree of one of the two types.

Lemma 3.2. For all 0 < γ ≤ 1, any tree with m ≥ 200/γ edges has a γ-nice subtree of type 1 or of type 2.

The proof of Lemma 3.2 is straightforward, but we prefer to leave it to the end of the present section, namely to Subsection 3.4, in order to be able to first focus on the proof of the main result. Next, we exhibit a lemma that will enable us to transfer the embedding problem of the tree to an embedding problem of almost all of the tree, under the condition that we already embed a small part of it, i.e. a γ-nice subtree, beforehand. For convenience, let us use the following notation: we call a graph G m-good if it has m + 1 vertices, minimum degree at least ⌊2m/3⌋, and a universal vertex.

Lemma 3.4. There is an m_0 ∈ N such that the following holds for all m ≥ m_0, and all γ with 2/10^7 ≤ γ < 1/30. Let G be an m-good graph, with universal vertex w. Let T be a tree with m edges, such that no vertex of T is adjacent to more than m/10^23 leaves. Let T* be a γ-nice subtree of T, of type 1 or 2, rooted at vertex t*. Then there are sets L ⊆ V(T*) \ {t*} and S ⊆ V(G) satisfying |S| ≤ |L| − ⌈(γ/2)^4 m⌉.
Furthermore, for any w′ ∈ V(G) − S with w′ ≠ w, there is an embedding of T* − L into G − S, with t* embedded in w′, such that the following holds. Any embedding of T − L into G − S extending our embedding of T* − L can be extended to an embedding of all of T into G.

As mentioned earlier, later on we shall split Lemma 3.4 into two lemmas, Lemma 4.1 and Lemma 5.1, depending on the type of the γ-nice subtree. We will state and prove Lemma 4.1 in Section 4, and state and prove Lemma 5.1 in Section 5. Together, Lemmas 4.1 and 5.1 imply Lemma 3.4. In order to state the remaining two of our four auxiliary lemmas, we need a simple definition. This definition describes the extremal case, where the graph G has a very specific structure (and therefore, the approach from the companion paper [RS19a] does not work).

Definition 3.5. Let γ > 0. We say a graph G on m + 1 vertices is γ-special if V(G) consists of three mutually disjoint sets X_1, X_2, X_3 such that
• m/3 − 3γm ≤ |X_i| ≤ m/3 + 3γm for each i = 1, 2, 3; and
• there are at most (γ/10)|X_1| · |X_2| edges between X_1 and X_2.

The following lemma, which excludes the extremal situation, was proved in the companion paper [RS19a].

Lemma 3.6 ([RS19a, Lemma 7.3]). For all γ < 1/10^6 there are m_0 ∈ N and λ > 0 such that the following holds for all m ≥ m_0. Let G be an m-good graph, which is not γ-special. Let T be a tree with m edges such that T ⊆ G and no vertex in T is adjacent to more than λm leaves. Let T* be a γ-nice subtree of T, with root t*, let L ⊆ V(T*) \ {t*}, and let S ⊆ V(G) such that |S| ≤ |L| − ⌈(γ/2)^4 m⌉. Assume that for any W ⊆ V(G) − S with |W| ≥ γm, there is an embedding φ_W of T* − L into G − S, with t* embedded in W. Then there is a set W ⊆ V(G) − S with |W| ≥ γm, and an embedding of T − L into G − S that extends φ_W.

Our last auxiliary lemma deals with the extremal case described in Definition 3.5.

Lemma 3.7.
There are m_0 ∈ N, β ≤ 1/10^10, and γ_0, γ_1 ≤ 1/50 such that the following holds for all m ≥ m_0. Suppose G is a γ_0-special (m + 1)-vertex graph of minimum degree at least ⌊2m/3⌋, and suppose T is a tree with m edges such that none of its vertices is adjacent to more than βm leaves. Let T* be a γ_1-nice subtree of T, with root t*, and let L ⊆ V(T*) \ {t*}. Assume there is a set S ⊆ V(G) such that |S| ≤ |L| − ⌈(γ_1/2)^4 m⌉. Assume that for any W ⊆ V(G) − S with |W| ≥ γ_1 m, there is an embedding φ_W of T* − L into G − S, with t* embedded in W. Then there is a set W ⊆ V(G) − S with |W| ≥ γ_1 m, and an embedding of T − L into G − S that extends φ_W.

We prove Lemma 3.7 in Section 6.

3.3 Proving Lemma 2.2

We now show how our four auxiliary lemmas imply Lemma 2.2.

Proof of Lemma 2.2. First, we apply Lemma 3.7 to obtain four numbers β, γ_0, γ_1 > 0 and m_0^{Lem 3.7} ∈ N. Next, we apply Lemma 3.4 to obtain a number m_0^{Lem 3.4}. Finally, we apply Lemma 3.6 with input γ_0 to obtain another integer m_0^{Lem 3.6} as well as a number λ > 0. For the output of Lemma 2.2, we will take

Now, consider an m-good graph G, and a tree T with m edges as in the statement of Lemma 2.2. Use Lemma 3.2 together with Lemma 3.4, once for each input γ_0, γ_1, to obtain, for i = 0, 1, a γ_i-nice tree T*_i with root t*_i, and sets S_i, L_i satisfying |S_i| ≤ |L_i| − (γ_i/2)^4 m. Moreover, for i = 0, 1, there are embeddings of T*_i − L_i into G − S_i that map the vertex t*_i to any given vertex, except possibly the universal vertex of G. Furthermore, Lemma 3.4 guarantees that, in order to embed T into G, we only need to extend, for either i = 0 or i = 1, the embedding of T*_i − L_i given by the lemma to an embedding of all of T − L_i into G − S. For this, we will use Lemmas 3.6 and 3.7. More precisely, if G is not γ_0-special, then we can apply Lemma 3.6 to G with sets S_0 and L_0, together with the tree T*_0.
If G is γ_0-special, we can apply Lemma 3.7 to G with sets S_1 and L_1, together with the tree T*_1. This finishes the proof of the lemma.

3.4 Proof of Lemma 3.2

We finish Section 3 by giving the short proof of Lemma 3.2.

Proof of Lemma 3.2. As an auxiliary measure, we momentarily fix any leaf v_L of the given tree T as the root of T. Next, we choose a vertex t* in T having at least ⌈γm/2⌉ descendants, such that it is furthest from v_L having this property. Then, each component of T − t* that does not contain v_L has size at most ⌈γm/2⌉. So, there is a subset S* of these components such that γm/2 ≤ Σ_{S∈S*} |S| ≤ γm. Now, consider the tree T* formed by the union of the trees in S* and the vertex t*. Clearly, T* fulfills items (i) and (ii) of Definition 3.1. If T* contains at least ⌈γm/40⌉ leaves of T, then T* is γ-nice of type 2, and we are done. Otherwise, T* has at most ⌊γm/40⌋ leaves, and a standard calculation shows that T* has at most ⌊γm/40⌋ vertices of degree at least 3. Delete these vertices from T*. It is easy to see that this leaves us with a set of at most γm/20 paths, together containing at least (19/40)γm vertices. All vertices of these paths have degree at most 2 in T. Deleting at most four vertices on each path we can ensure all paths have lengths divisible by five, and together contain at least (19/40)γm − 4 · (γm/20) ≥ γm/4 + 5 vertices. Dividing each of the paths into paths of length five we obtain a set P of at least ⌈γm/20⌉ disjoint paths in T*. So, T* is γ-nice of type 1.

4 The proof of Lemma 4.1

This section is devoted to the proof of the following lemma, which proves Lemma 3.4 for all γ-nice trees of type 1.

Lemma 4.1. There is an m_0 ∈ N such that the following holds for all m ≥ m_0, and for all γ > 0 with 2/10^7 ≤ γ < 1/30. Let G be an m-good graph. Let T be a tree with m edges, such that no vertex of T is adjacent to more than m/10^23 leaves. Let T have a γ-nice subtree T* of type 1, with root t*.
Then there are sets L ⊆ V(T*) \ {t*} and S ⊆ V(G) satisfying |S| ≤ |L| − (γ/2)^4 m. Furthermore, for any w ∈ V(G) − S, there is an embedding of T* − L into G − S, with t* embedded in w, such that any embedding of T − L into G − S extending our embedding of T* − L can be extended to an embedding of all of T into G.

In the proof of Lemma 4.1, some random choices are going to be made, and in order to see we are not far from the expected outcome, it will be useful to have the well-known Chernoff bounds at hand (see for instance [McD89]). For the reader's convenience let us state these bounds here. Let X_1, ..., X_n be independent random variables satisfying 0 ≤ X_i ≤ 1. Let X = X_1 + ... + X_n and set µ := E[X]. Then for any ε ∈ (0, 1), it holds that

P[X ≥ (1 + ε)µ] ≤ e^{−ε²µ/(2+ε)} and P[X ≤ (1 − ε)µ] ≤ e^{−ε²µ/2}.  (1)

We are now ready for the proof of Lemma 4.1.

Proof of Lemma 4.1. We choose m_0 = 10^25. Now assume that for some m ≥ m_0, we are given an m-good graph G, and a tree T with m edges such that none of its vertices is adjacent to more than 10^{−23}m leaves. We are also given a γ-nice subtree T* of T, with root t*, and a set P of disjoint paths of length five such that |P| = ⌈γm/20⌉, for some γ as in the lemma. We now define L as the set that consists of the fourth vertex (counting from the vertex closest to t*) of each of the paths from P. Clearly,

|L| = ⌈γm/20⌉ ≥ ⌈m/10^8⌉,  (2)

by our assumptions on γ. In order to prove Lemma 4.1, we need to do three things. First of all, we need to find a set S ⊆ V(G) of size at most |L| − (γ/2)^4 m. Then, given any vertex w ∈ V(G) − S, we have to embed T* − L into G − S, with t* going to w. Finally, we need to make sure that any extension of this embedding to an embedding of all of T − L into G − S can be completed to an embedding of all of T.
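The Chernoff bounds in (1) can be checked empirically on a toy case; the small simulation below, with sums of fair coin flips, is illustrative only and not part of the proof:

```python
import math
import random

def chernoff_upper(mu, eps):
    """Upper-tail bound from (1): P[X >= (1+eps)·mu] <= exp(-eps²·mu/(2+eps))."""
    return math.exp(-eps * eps * mu / (2 + eps))

def empirical_upper_tail(n, eps, trials=5000, seed=1):
    """Estimate P[X >= (1+eps)·mu] for X a sum of n independent fair
    {0,1} coin flips, so that mu = n/2."""
    rng = random.Random(seed)
    threshold = (1 + eps) * (n / 2)
    hits = sum(
        sum(rng.random() < 0.5 for _ in range(n)) >= threshold
        for _ in range(trials)
    )
    return hits / trials
```

For n = 200 and ε = 0.2, the bound evaluates to exp(−0.04 · 100 / 2.2) ≈ 0.16, while the true tail probability of Bin(200, 1/2) beyond 120 is far smaller, so the empirical frequency sits comfortably below the bound.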
It is clear that for the last point to go through, it will be crucial to have chosen both S and the set N of the images of the neighbours of the vertices in L carefully, in order to have the necessary connections between N and S. Our solution is to choose both S and N randomly. More precisely, choose a set S of size

|S| = |L| − ⌈(γ/2)^4 m⌉  (3)

uniformly and independently at random in V(G − w). Also, choose a set N of size

|N| = 2|L|  (4)

uniformly and independently at random in V(G − w − S). Now, we can proceed to embed T′ := T* − L into G − S. We will start by embedding the neighbours of vertices in L arbitrarily into N. Let us keep track of these by calling n_1(x) and n_2(x) the images of the neighbours of x, for each x ∈ L. Next, we embed t* into w, and then proceed greedily, using a breadth-first order on T* (skipping the vertices of L and those already embedded into N). Each vertex we embed has at most two neighbours that have been embedded earlier (usually this is just the parent, but parents of vertices embedded into N have two such neighbours, and the root of T′ has none). So, since G has minimum degree at least ⌊2m/3⌋ and given the small size of T′, we can easily embed all of T′ as planned. It remains to prove that any extensions of this embedding can be completed to an embedding of all of T. This will be achieved by the following claim, which finishes the proof of Lemma 4.1.

Claim 4.2. For any set R ⊆ V(G) of |L| − |S| vertices, there is a bijection between L and S ∪ R mapping each vertex x ∈ L to a common neighbour of n_1(x) and n_2(x).

In order to prove Claim 4.2, we define an auxiliary bipartite graph H having V(G − w) on one side, and L on the other. We put an edge between v ∈ V(G − w) and x ∈ L if v is adjacent to both n_1(x) and n_2(x). We are interested in the subgraph H′ of H that is obtained by restricting the V(G − w)-side to the set S ∪ R (but sometimes it is enough to consider degrees in H).
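Claim 4.2 amounts to finding a perfect matching of S ∪ R into L in the bipartite graph H′. The kind of matching computation involved can be sketched with the standard augmenting-path (Kuhn) algorithm on a toy bipartite graph; this is an illustration only, since the claim itself is established probabilistically via Hall's condition:

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm. `adj[u]` lists the right-side
    vertices adjacent to left vertex u; returns the matching size."""
    match_right = [-1] * n_right  # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # Use v if it is free, or if its current partner can be
                # rerouted along an alternating path.
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))
```

A perfect matching of the left side exists exactly when the returned size equals the number of left vertices; by Hall's theorem, this fails precisely when some set of left vertices has a joint neighbourhood smaller than itself, which is the dichotomy exploited in the proof.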
By the minimum degree condition on G, the expectation of the degree in H of any vertex v ∈ V(G − w) is E[deg_H(v)] ≥ (199/300)²|L|, since v has at least ⌊2m/3 − 1⌋ ≥ (199/300)m neighbours in G − w, and thus, for any given x ∈ L, each n_i(x) is adjacent to v with probability at least 199/300. Therefore, the probability that all vertices of V(G − w) have degree at least d := (198/300)²|L| in H is bounded from below by

P[deg_H(v) ≥ d for all v ∈ V(G − w)] ≥ 1 − Σ_{v∈V(G−w)} P[deg_H(v) < d] ≥ 1 − (m + 1)·e^{−(397/(199·300))²·|L|/2} ≥ 0.9999,

where we used (1) (Chernoff's bound) with ε = (199² − 198²)/199² = 397/199², our bound on the size of L as given in (2) and the fact that m ≥ 10^25.

Furthermore, since G has minimum degree at least ⌊2m/3⌋, we know that for each x ∈ L, vertices n_1(x) and n_2(x) have at least m/3 − 3 common neighbours in G − w. Therefore, every vertex of L has degree at least m/3 − 3 in H. However, we are interested in the degree of these vertices into the set S. For a bound on this degree, first note that the expected degree of any vertex of L into the set S is bounded from below by (999/3000)|S|. Now again apply (1) (Chernoff's bound), together with the fact that |S| ≥ 10^17, to obtain that with probability greater than 0.9999, every element of L is incident to at least (998/3000)|S| vertices of S.

Summarising, we can say that with probability greater than 0.999 we chose the sets S and N such that the resulting graph H obeys the following degree conditions:

(A) the minimum degree of V(G − w) into L is at least (198/300)²|L|; and
(B) the minimum degree of L into S is at least (998/3000)|S|.

Let us from now on assume that we are in the likely situation that both (A) and (B) hold. Further, assume there is no matching from S ∪ R to L in H′.
Then by Hall's theorem (which can be found in any standard textbook; it states that a bipartite graph with bipartition classes A and B either has a matching covering all of A, or there is an 'obstruction': a set A′ ⊆ A such that |N(A′)| < |A′|), there is a partition of L into sets L′ and L′′ and a partition of S ∪ R into sets J′ and J′′ such that there are no edges from L′ to J′′, and such that |J′| < |L′| and |L′′| < |J′′|. Since J′′ ≠ ∅, and since by (A), each vertex in J′′ has degree at least (198/300)²|L| into L, and thus into L′′, we deduce that

|J′′| > |L′′| ≥ (198/300)²|L|.   (5)

Since also L′ ≠ ∅, and by (B), each of its elements has at least (998/3000)|S| neighbours in S ∩ J′, we see that |L′| > |J′| ≥ (998/3000)|S|. Thus, using (2) and (3), as well as our upper bound on γ, we can calculate that

|L′′| = |L| − |L′| ≤ |S| + ⌈(γ/2)^4 m⌉ − (998/3000)|S| ≤ (2003/3000)|S|.   (6)

Let us iteratively define a subset S* of S ∩ J′′ as follows. We start by putting an arbitrary vertex v_0 ∈ S ∩ J′′ into S*, and while there is a vertex of S ∩ J′′ whose neighbourhood contains m/(1000 log m) vertices which are not in the neighbourhood of S*, we augment S* by adding any such vertex v that maximises |N(v) − N(S*)|. We stop when there is no suitable vertex that can be added to S*. Note that |S*| ≤ 1000 log m.

Our plan is to show next that the set S* has certain properties which are unlikely to be had by any set having certain other properties that S* has (for instance, having size at most 1000 log m). More precisely, the probability that a set like S* exists will be bounded from above by 0.005. This will finish the proof of Claim 4.2, as we then know that with probability at least 0.99 we chose sets S and N such that in the resulting graph H′, the desired matching exists, and thus Claim 4.2 holds.

So, let us define Q as the set of all subsets of V(G − w) having size at most 1000 log m. For each Q ∈ Q, let V_1(Q) be the set consisting of all vertices of G − w which have less than m/(1000 log m) neighbours outside N(Q) (in the graph G − w).
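The matching-or-obstruction dichotomy used here is exactly Hall's theorem, and on small instances it can be made concrete with an augmenting-path computation. The following sketch (ours, not part of the proof; all names and the example graph are illustrative) either returns a matching covering the left side, or a Hall violator A′ with |N(A′)| < |A′|.

```python
def hall_matching(adj, left):
    # adj: dict mapping each left vertex to an iterable of right vertices.
    # Returns (True, matching) covering all of `left`, or (False, violator),
    # where violator A' satisfies |N(A')| < |A'| (Hall's obstruction).
    match = {}  # right vertex -> left vertex

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    for u in left:
        seen = set()
        if not augment(u, seen):
            # u together with the left endpoints reached by alternating paths
            violator = {u} | {match[v] for v in seen}
            return False, violator
    return True, {match[v]: v for v in match}

# three left vertices competing for only two right vertices
ok, m = hall_matching({'x': ['a', 'b'], 'y': ['a', 'b'], 'z': ['a']},
                      ['x', 'y', 'z'])
assert not ok and m == {'x', 'y', 'z'}  # N({x, y, z}) = {a, b}, and 2 < 3
```

In the proof above, the obstruction returned in the failing case plays the role of the partition L = L′ ∪ L′′, S ∪ R = J′ ∪ J′′ with no edges from L′ to J′′.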
Finally, let Q′ ⊆ Q contain all Q ∈ Q for which

m/10^9 ≤ |V_1(Q)| ≤ m/3 + m/log m + 2.   (7)

Observe that, for Q ∈ Q′ fixed, the expected size of V_1(Q) ∩ S is E[|V_1(Q) ∩ S|] = |V_1(Q)|·|S|/m, because S was chosen at random in G − w. So by (3) and (2), and by (7), we see that

(1/2)·(m/10^17) ≤ E[|V_1(Q) ∩ S|] ≤ |S|/3 + |S|/log m + 2 ≤ (38/100)|S|,   (8)

where the last inequality follows from the fact that m ≥ 10^25. Now, we can use (1) (Chernoff's bound) and the first inequality of (8) to bound the probability that |V_1(Q) ∩ S| exceeds its expectation by a factor of at least 20/19 as follows:

P[|V_1(Q) ∩ S| ≥ (20/19)·E[|V_1(Q) ∩ S|]] ≤ e^{−E[|V_1(Q) ∩ S|]/820} ≤ e^{−m/(164·10^18)} ≤ 0.001/m^{log m}.

Since by (8), we know that (20/19)·E[|V_1(Q) ∩ S|] < (41/100)|S|, and since there are at most m^{log m} sets in Q′, we can deduce that

P[∃Q ∈ Q′ with |V_1(Q) ∩ S| ≥ (41/100)|S|] ≤ 0.001.   (9)

Now, let us turn back to the set S*. First of all, we note that by the definition of S*, we have S ∩ J′′ ⊆ V_1(S*). Thus, we can use (5) and (3) to deduce that

|V_1(S*) ∩ S| ≥ |J′′| − |R| ≥ (198/300)²|L| − ⌈(γ/2)^4 m⌉ ≥ (197/300)²|S| > (43/100)|S|.   (10)

So, by (2) and (3), the first inequality of (7) holds for Q = S*. For a moment, assume that |N(S*)| ≤ (999/1000)m. Then, also the second inequality of (7) holds for Q = S*, as otherwise, each of the at least m/1000 vertices of V(G − w) \ N(S*) sees at least m/log m vertices of V_1(S*), and so we have that

(m/1000)·(m/log m) ≤ e(V_1(S*), V(G − w) \ N(S*)) < |V_1(S*)|·(m/(1000 log m)) ≤ m²/(1000 log m),

where the strict inequality holds by the definition of V_1(S*), a contradiction. Hence S* ∈ Q′. But then, according to (9), we know that (10) is not likely to happen. So, with probability at least 0.998, we chose S in a way that all three of (A), (B), and

(C) |N(S*)| ≥ (999/1000)m

hold. We will from now on assume that we are in this likely case.

Consider the set Q′′ which consists of all sets Q ∈ Q for which the first inequality in (7) holds, and for which |N(Q)| ≥ (999/1000)m. By (10) and by (C), S* ∈ Q′′. Call Q′′_+ the set of all Q ∈ Q′′ for which at least one of the following holds:

• Q has a vertex of degree at least 2m/3 + m/100; or
• Q has two vertices v, v′ such that each sees at least m/100 vertices outside the neighbourhood of the other one.
We are going to show that the sets Q ∈ Q ′′ + typically have larger neighbourhoods in L than S * has, and will thus be able to conclude that S * / ∈ Q ′′ + , which will be crucial for the very last part of the proof. For this, let X(Q) be the set of unordered pairs {v, v ′ } of distinct vertices which have a common neighbour in Q, for each Q ∈ Q ′′ . Then, because of the minimum degree condition we imposed on the graph G, we know that each vertex v ∈ N(Q) is in at least ⌊ 2m 3 ⌋ − 2 pairs of X(Q). So, since N was chosen at random in V (G − w), and because of the definition of Q ′′ , we know that for any fixed set Q ∈ Q ′′ , and any fixed vertex x ∈ L, the probability that n 1 (x) and n 2 (x) have a common neighbour in Q can be bounded as follows: P[{n 1 (x), n 2 (x)} ∈ X(Q)] ≥ 999m 1000 · (⌊ 2m 3 ⌋ − 2) m 2 . However, if we take any fixed Q ∈ Q ′′ + , and any fixed x ∈ L, the bound becomes P[{n 1 (x), n 2 (x)} ∈ X(Q)] ≥ 999m 1000 (⌊ 2m 3 ⌋ − 2) + min{( 2m 3 + m 100 ) m 100 , ( m 3 − 2) m 100 } m 2 ≥ 669 1000 , where the two entries in the minimum stand for the two scenarios that may cause the set Q to belong to Q ′′ + . In order to to see the term for the second scenario, observe that vertices v and v ′ have at least m 3 − 2 common neighbours, and each of these neighbours belongs to at least ⌊ 2m 3 ⌋ − 2 + m 100 pairs of X(Q). Therefore, fixing Q ∈ Q ′′ + , and letting L(Q) denote the sets of all x ∈ L with {n 1 (x), n 2 (x)} ∈ X(Q), we know that the expected size of L(Q) is bounded by E |L(Q)| ≥ 669 1000 |L|. As above, we can apply the Chernoff bound (1) to see that with very high probability, |L(Q)| is not much smaller than its expectation: Because of (6) (and (3)), and since L ′′ ⊇ L(S * ), this means that P |L(Q)| ≤ 668 669 · E[|L(Q)|] ≤ e − E[|L(Q)|] 2·669 2 ≤ e − |L| 2·10 6 ≤ e − mS * / ∈ Q ′′ + . In particular, the degree of v 0 (in G − w) is less than 2m 3 + m 100 , and each vertex of S * has less than m 100 neighbours outside N(v 0 ). 
Moreover, by the choice of S*, we can deduce that

every vertex in S ∩ J′′ has less than m/100 neighbours outside N(v_0).   (11)

By (3) and by (6), and since |R| = |L| − |S|, we know that

|S ∩ J′′| ≥ |J′′| − |R| ≥ (198/300)²|L| − ⌈(γ/2)^4 m⌉ > (2/5)|S|.   (12)

Fix a subset Z of size m/4 of G − w − N(v_0), and let us look at the average degree d̄ of the vertices of Z into S ∩ J′′. We have

d̄·(m/4) = Σ_{v∈Z} deg(v, S ∩ J′′) = Σ_{v∈S∩J′′} deg(v, Z) ≤ m·|S ∩ J′′|/100,

where for the last inequality we used (11). Thus d̄ ≤ |S ∩ J′′|/25. Now use (12) to see that the average degree of the vertices of Z into S is bounded from above by |S| − (48/125)|S| < (2/3 − 3/100)|S|. This means that there must be at least one vertex in Z, say the vertex z, which has degree at most (2/3 − 3/100)|S| into S. However, by Chernoff's bound (1), and since the expected degree of any vertex of G − w into S is at least (2/3 − 1/1000)|S|, we know that this would only happen with probability at most 0.001. So we can assume we are in a situation where no such vertex z exists, and reach a contradiction, as desired.

Summarising, we know that with probability at least 0.995, our choice of S and N guarantees that a set S* as above does not exist in the resulting auxiliary graph H′, and thus, Hall's condition holds in H′. This means we find the desired matching, which finishes the proof of Claim 4.2, and with it the proof of Lemma 4.1.

5 The proof of Lemma 5.1

This section is devoted to the proof of the following lemma, which proves Lemma 3.4 for all γ-nice trees of type 2. So, since γ-nice trees of type 1 are covered by Lemma 4.1, this finishes the proof of Lemma 3.4.

Lemma 5.1. There is an m_0 ∈ N such that the following holds for all m ≥ m_0, and all γ > 0 with 2/10^7 ≤ γ < 1/30. Let G be an m-good graph, with universal vertex w. Let T be a tree with m edges, such that no vertex of T is adjacent to more than m/10^23 leaves. Let T have a γ-nice subtree T* of type 2, with root t*.
Then there are sets L ⊆ V(T*) \ {t*} and S ⊆ V(G) satisfying |S| ≤ |L| − (γ/2)^4 m. Furthermore, for any w′ ∈ V(G) − (S ∪ {w}), there is an embedding of T* − L into G − S, with t* embedded in w′, such that any embedding of T − L into G − S extending our embedding of T* − L can be extended to an embedding of all of T into G.

In the proof of Lemma 5.1 we will use Azuma's inequality, which can be found for instance in [McD89]. This well-known inequality states that for any sub-martingale {X_0, X_1, X_2, ...} which for each k almost surely satisfies |X_k − X_{k−1}| < c_k for some c_k, we have that

P[X_n − X_0 ≤ −t] ≤ e^{−t²/(2·Σ_{k=1}^n c_k²)}   (13)

for all n ∈ N_+ and all positive t. Let us now give the proof of Lemma 5.1.

Proof of Lemma 5.1. We choose m_0 ∈ N large enough so that certain inequalities below are satisfied. Let G be an m-good graph, with universal vertex w. Let T be a tree with m edges, such that no vertex of T is adjacent to more than m/10^23 leaves. We are also given a γ-nice subtree T* of T, with root t*, and since T* is of type 2, there is a set L* ⊆ V(T*) \ {t*} of |L*| = ⌈γm/40⌉ leaves of T. Instead of L*, we will work with the set L which is obtained from L* by deleting all neighbours of t*. Clearly, L consists of |L| ≥ ⌈γm/41⌉ ≥ ⌈m/10^9⌉ leaves of T. In order to prove Lemma 5.1, it suffices to find a set S ⊆ V(G) satisfying |S| ≤ |L| − (γ/2)^4 m, to embed T* − L into G − S, and show that any extension of this embedding to an embedding of T − L into G − S can be completed to an embedding of all of T into G.

For this, let us define t as the vertex of T* that is adjacent to most leaves from L. Define α so that t is incident to ⌈αm⌉ leaves and call L_t the set of these leaves. By the assumptions of the lemma,

α ≤ 10^{−23}.   (14)

We now randomly embed T* − L in a top down fashion, where we start by putting t* into w′.
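Inequality (13) can be checked numerically in the simplest case of a martingale with increments bounded by c_k = 1, namely a sum of independent fair ±1 coin flips, where the lower tail can also be computed exactly. This sketch (ours, not from the paper; the parameters n and t are illustrative) verifies that the exact tail never exceeds the Azuma bound.

```python
from math import comb, exp

def azuma_bound(t, cs):
    # (13): P[X_n - X_0 <= -t] <= exp(-t^2 / (2 * sum_k c_k^2))
    return exp(-t**2 / (2 * sum(c * c for c in cs)))

def srw_lower_tail(n, t):
    # exact P[S_n <= -t] for S_n a sum of n independent fair ±1 flips:
    # S_n = 2*H - n with H ~ Bin(n, 1/2), so S_n <= -t iff H <= (n - t)/2
    kmax = (n - t) // 2
    return sum(comb(n, k) for k in range(kmax + 1)) / 2**n

n, t = 100, 20
exact = srw_lower_tail(n, t)
bound = azuma_bound(t, [1] * n)  # c_k = 1 for all k, so the bound is e^{-t^2/(2n)}
assert exact <= bound
```

Here the bound is e^{−2} ≈ 0.135, while the exact tail is roughly 0.028; as in the proof above, the point is not sharpness but an exponential decay that survives a union bound.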
At each moment, when we embed a vertex v ≠ t, we choose a uniformly random neighbour of the image of the (already embedded) parent p(v) of v. When we reach t, we embed t into w, the universal vertex of G. (This gives us some leeway when we later have to embed L.) We do not have to worry about the connection of w to the image of p(t) because of the universality of w. For every x ∈ L, let us call n(x) the image of p(x).

Next, we pick a set S of size

|S| = |L| − ⌈(γ/2)^4 m⌉

uniformly and independently at random in what remains of G. It only remains to prove the following analogue of Claim 4.2 to finish the proof of Lemma 5.1.

Claim 5.2. For any set R ⊆ V(G) of |L| − |S| vertices, there is a bijection between L and S ∪ R mapping each vertex x ∈ L to a neighbour of n(x).

In order to prove Claim 5.2, consider a set R of size |L| − |S| such that there is no matching from L to S ∪ R in the auxiliary bipartite graph H which is defined as follows. The bipartition classes of this graph H are L and S ∪ R, and every vertex x ∈ L is joined to all unoccupied neighbours of the image n(x) of the parent of x in S ∪ R. Our aim is to derive a contradiction from the assumption that such a set R exists.

Our first observation is that by Chernoff's bound (1) and by our assumption on the minimum degree of G, we know that with probability at least 0.999, every vertex of L has degree at least (2/3 − 2/10^4)|L| in H. Furthermore, as there is no matching from L to S ∪ R in H, we can apply Hall's theorem. This gives a partition of L into sets L′ and L′′ and a partition of S ∪ R into sets J′ and J′′ such that there are no edges from L′ to J′′, and such that furthermore, |J′| < |L′| and |L′′| < |J′′|.
As L′ ≠ ∅, we know that |J′| ≥ (2/3 − 2/10^4)|L| and therefore,

|J′′| ≤ (1/3 + 2/10^4)|L|.   (15)

Since L′′ contains all the children of t (this follows from the definition of H and from the fact that |J′| < m), and because of the definition of α, we know that L′′ has size at least ⌈αm⌉ and hence

|J′′| > ⌈αm⌉.   (16)

We now consider the set V* of vertices of G which are adjacent to at most (1/3 + 2/10^4)|L| vertices of L in H. (The vertices in V* are those that serve only for relatively few leaves in L as a possible image.) Note that the size of V* depends on how we embedded T* − L (which was done randomly). We plan to show that with probability ≥ 0.99, we embedded T* − L such that

|V*| < αm.   (17)

Then, by (16) there is a vertex v ∈ J′′ \ V*. As the neighbours of v in H are contained in L′′, we get that |J′′| > |L′′| ≥ (1/3 + 2/10^4)|L|, which is a contradiction to (15). This would prove Claim 5.2.

So, it only remains to show (17). For this, we start by bounding the probability that a specific vertex v is in V*. Consider any vertex p that is the parent of some subset L_p of L, and recall that p was embedded randomly in the neighbourhood N_p of the image of the parent of p. By our minimum degree condition on G, we know that v is incident to at least (499/1000)|N_p| vertices of N_p. Hence, the probability that v is adjacent to p in G, and thus to all of L_p in H, is bounded from below by 499/1000. Since T* − L is very small, this bound actually holds independently of whether v is adjacent to L_{p′} for some other parent p′. Therefore, the expected degree of v into L_p is at least

(499/1000)|L_p|,   (18)

for each p. Our plan is to use Azuma's inequality (i.e., inequality (13) above). For this, order the set P of parents p of subsets L_p of L as above, writing P = {p_1, p_2, ..., p_n}. For 1 ≤ i ≤ n, write d_i for the degree of v into L_{p_i}. Now, define the random variable

X_k := Σ_{1≤i≤k} d_i + (499/1000)·Σ_{k<i≤n} |L_{p_i}|.
By (18), this is a sub-martingale. Observe that X_0 = (499/1000)·|L| and X_n = deg(v, L). We set c_k := |L_{p_k}| for all k ≤ n. Then Σ_{k=1}^n c_k = |L|, and furthermore, by our choice of the vertex t in the beginning of the proof of Lemma 5.1, we know that c_k ≤ αm, for all k ≤ n. This, together with Azuma's inequality (13), tells us that the probability that v is in V* can be bounded as follows:

P[v ∈ V*] ≤ P[deg(v, L) ≤ (336/1000)|L|] = P[X_n − X_0 ≤ −(163/1000)|L|] ≤ e^{−((163/1000)|L|)²/(2αm·Σ_{k=1}^n c_k)} ≤ e^{−163²/(2α·10^15)} ≤ e^{−1/(10^11·α)}.

So, the expected size of V* is at most m·e^{−1/(10^11·α)}. Using Markov's inequality we see that the probability that V* contains more than αm vertices is bounded from above by

e^{−1/(10^11·α)}/α ≤ 0.01,

where we used the fact that α ≤ 10^{−23} by (14). This proves (17), and thus finishes the proof of Claim 5.2, and of Lemma 5.1.

6 The proof of Lemma 3.7

The whole section is devoted to the proof of Lemma 3.7. We employ an ad-hoc strategy, which we briefly outline now. First, we clean up the γ_0-special host graph G, ensuring a convenient minimum degree between and inside the three sets X_i (the witnesses to the fact that G is γ_0-special, see Definition 3.5). Then, given the tree T with its γ_1-nice subtree T*, rooted at t*, we preprocess the part T − T* we have to embed. We do this by strategically choosing a small set Z ⊆ V(T − (T* − t*)), and divide the set A of all components of T − (T* − t*) − Z into two sets A_1 and A_2, which have certain useful properties (see Claim 6.1). We embed T − L, extending the given embedding of T* − L. We now distinguish three cases. In the first two cases, many elements of A are three-vertex paths, and we embed them into X_2 ∪ X_3 and embed the rest into X_1 ∪ X_3. In the third case, there are not so many elements of A that are three-vertex paths, and we will use the partition A_1 ∪ A_2 of A.
Components from sets A_1 will be embedded into X_1 ∪ X_3, and components from A_2 will be embedded into X_2 ∪ X_3. Let us now formally give the proof of Lemma 3.7.

Setting up the constants and summarising the situation. Now, assume we are given a γ_0-special (m + 1)-vertex graph G of minimum degree at least ⌊2m/3⌋, for some m ≥ m_0, together with a tree T having m edges, such that none of the vertices of T is adjacent to more than βm leaves. Assume T has a γ_1-nice subtree T* rooted at t*, and there are sets L ⊆ V(T*) \ {t*} and S ⊆ V(G) such that |S| ≤ |L| − ⌈(γ_1/2)^4 m⌉. Furthermore, for any large enough set W, it is possible to embed T* − L into a subset ϕ(T* − L) of V(G) − S, with t* going to W. (We will specify below which set W we will use.) Once T* − L is embedded, our task is to embed the rest of T − L into G − (ϕ(T* − L) ∪ S). Observe that because of the discrepancy of the sizes of the sets L and S, we can count on a margin of at least ⌈(γ_1/2)^4 m⌉ vertices, that is, we know our embedding will leave at least ⌈(γ_1/2)^4 m⌉ vertices of G − (ϕ(T* − L) ∪ S) unused.

Preparing G for the embedding. Since G is γ_0-special, there are sets X_1, X_2, X_3 partitioning V(G) such that

m/3 − 3γ_0m ≤ |X_i| ≤ m/3 + 3γ_0m   (19)

for each i = 1, 2, 3, and such that there are at most

γ_0^10·|X_1|·|X_2|   (20)

edges between X_1 and X_2. Using the minimum degree condition on G, and using (20), an easy calculation shows that we can eliminate at most γ_0^5·m vertices from each of the sets X_i, for i = 1, 2, so that the vertices of the thus obtained subsets X′_i ⊆ X_i each have degree at least ⌊2m/3⌋ − γ_0^5|X_{3−i}| into X′_i ∪ X_3, for i = 1, 2. Then, because of (19), we can deduce that there are at least (1 − 6γ_0)|X′_i||X_3| edges between the sets X′_i and X_3, for i = 1, 2.
So, we can eliminate at most 2√(6γ_0)·m vertices from X_3, obtaining a set X′_3, so that each of the vertices in X′_3 has degree at least (1 − 6√γ_0)|X′_i| into X′_i, for i = 1, 2. Summarising, we eliminated a few vertices from each of the sets X_1, X_2, X_3 to obtain three sets X′_1, X′_2, X′_3 satisfying

|X′_i| ≥ |X_i| − 5√γ_0·m   (21)

such that for i = 1, 2, and any vertex v in X′_3, the degree of v into X′_i is at least

|X′_i| − 3√γ_0·m.   (22)

Furthermore, for i = 1, 2, for any v ∈ X′_i and any X ∈ {X′_i, X′_3}, the degree of v into X is at least

|X| − 6√γ_0·m.   (23)

Indeed, in order to see (23) for X = X′_i, we use (19) to calculate that

deg(v, X′_i) = deg(v, X′_i ∪ X_3) − deg(v, X_3) ≥ ⌊2m/3⌋ − γ_0^5|X_{3−i}| − |X_3| ≥ ⌊m/3⌋ − (γ_0^5 + 3γ_0)m ≥ |X′_i| − 6√γ_0·m,

and for (23) for X = X′_3, we calculate similarly, also using (21), to see that

deg(v, X′_3) ≥ deg(v, X′_i ∪ X_3) − |X′_i| − (|X_3| − |X′_3|) ≥ ⌊2m/3⌋ − γ_0^5|X_{3−i}| − |X_i| − 5√γ_0·m ≥ |X′_3| − 6√γ_0·m.

Finding Z and grouping the components. Let us next have a closer look at the to-be-embedded forest T − T*. This forest might have relatively large components, which, for reasons that will become clearer below, might add unnecessary difficulties to our embedding strategy. In order to avoid these difficulties, we will now find a set Z ⊆ V(T − (T* − t*)) of up to three vertices so that all components in T − (T* − t*) − Z have controlled sizes, and can be grouped into convenient sets. (Note that t* may or may not lie in Z.) More precisely, our aim is to prove the following statement.

Claim 6.1.
There are an independent set Z ⊆ V(T) \ V(T* − t*) with |Z| ≤ 3 and a partition of the set A of components of T − (T* − t*) − Z into sets A_1, A_2 such that for i = 1, 2,

(i) all but at most one T̃ ∈ A have exactly one vertex r_T̃ neighbouring Z;
(ii) m/3 + γ_1m ≤ |⋃_{T̃∈A_i} V(T̃)| ≤ 2m/3 − γ_1m; and
(iii) if |⋃_{T̃∈A_i} V(T̃)| ≥ |⋃_{T̃∈A} V(T̃)|/2 + 1/γ_0, then each T̃ ∈ A_i has at least 1/γ_0 vertices.

For proving Claim 6.1, we plan to use the following folklore argument, and for completeness, we include its short proof.

Claim 6.2. Every tree D has a vertex t_D such that each component of D − t_D has size at most |D|/2.

Proof. In order to see Claim 6.2, temporarily root D at any leaf vertex v_L. Let t_D be a vertex that is furthest from v_L having the property that t_D and its descendants constitute a set of at least |D|/2 vertices. Then each component of D − t_D, including the one containing v_L, has at most |D|/2 vertices.

We can now prove Claim 6.1.

Proof of Claim 6.1. Set T′ := T − (T* − t*) and apply Claim 6.2 to T′. We obtain a vertex z. Let A_z be the set of all components of T′ − z. First assume there is a set A_1 ⊆ A_z with

|⋃_{T̃∈A} V(T̃)|/2 ≤ |⋃_{T̃∈A_1} V(T̃)| ≤ 2m/3 − γ_1m.   (24)

We can assume that A_1 is smallest possible with (24). This choice guarantees that either A_1 has no components with at most 1/γ_0 vertices, or |⋃_{T̃∈A_1} V(T̃)| < |⋃_{T̃∈A} V(T̃)|/2 + 1/γ_0. So Z := {z}, A_1 and A_2 := A \ A_1 are as desired.

Now assume there is no set A_1 as in (24). Then there is no set A′ ⊆ A_z with

m/3 + γ_1m ≤ |⋃_{T̃∈A′} V(T̃)| ≤ 2m/3 − γ_1m   (25)

(since if there was such a set A′, then either A′ or A_z \ A′ would qualify as A_1). We claim that T′ − z has three components C_1, C_2, C_3 such that

m/3 − 2γ_1m ≤ |C_i| ≤ m/3 + γ_1m   (26)

for i = 1, 2, 3 (additionally, T′ − z might have a set of very small components). Indeed, take a subset A′ ⊆ A_z such that |⋃_{T̃∈A′} V(T̃)| is maximised among all A′ with |⋃_{T̃∈A′} V(T̃)| ≤ 2m/3 − γ_1m.
Because of (25), we know that |⋃_{T̃∈A′} V(T̃)| < m/3 + γ_1m, and moreover, for any component C from A_z \ A′ we have that |V(C) ∪ ⋃_{T̃∈A′} V(T̃)| > 2m/3 − γ_1m. So, |V(C)| > m/3 − 2γ_1m for any such C, and Claim 6.2 implies that |V(C)| ≤ m/2. Hence there are exactly two such components, C_1 and C_2, both of which fulfill (26), and A_z = A′ ∪ {C_1, C_2}. A similar argument (using the fact that we did not choose C_1 together with a subset of A′ instead of choosing A′) gives that A′ contains a component C_3 for which (26) holds, and that

|V(T − T* − C_1 − C_2 − C_3)| ≤ 3γ_1m.   (27)

Apply Claim 6.2 to each of the three components C_1, C_2, C_3, obtaining three vertices z_1, z_2, z_3, such that for i = 1, 2, 3, z_i ∈ C_i and the components of C_i − z_i have size at most m/6 + (γ_1/2)m.

First assume that one of the vertices z_i, say z_1, is not adjacent to z. Then we set Z := {z_1, z}. For A_1, we choose C_2 and some of the components of C_1 − z_1, in a way that (ii) of Claim 6.1 holds for A_1. Let A_2 be the set of the remaining components of T′ − Z. As before, we can ensure (iii) by shifting some of the small components from one of A_1, A_2 to the other, until they have almost the same number of vertices, or the larger one has no small components. Note that at most one component of A = A_1 ∪ A_2 is adjacent to both z_1 and z, which ensures (i).

Now assume z_iz is an edge, for each i = 1, 2, 3. Then we set Z := {z_1, z_2, z_3}. Observe that the set A of the components of T′ − Z is comprised of all components of C_i − z_i, for i = 1, 2, 3, plus a component containing z and all vertices outside C_1 ∪ C_2 ∪ C_3. Each tree in A has exactly one vertex neighbouring Z, as desired for (i). Moreover, as these trees each have size at most m/6 + (γ_1/2)m, it is easy to group them into two sets A_1 and A_2 fulfilling (ii), and as before, we can shift some of the small trees to ensure (iii).

We now embed T − T*, distinguishing three cases.
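Claim 6.2 is easy to test computationally. Rather than the root-and-walk argument from its proof, the following sketch (ours, for illustration only) simply brute-forces over all vertices of a small tree and returns one whose removal leaves only components of size at most half the tree.

```python
def separator_vertex(n, edges):
    # Claim 6.2: every tree on vertices 0..n-1 has a vertex t_D whose removal
    # leaves components of size at most n/2 (a "centroid"-type vertex).
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def comp_sizes(root):
        # sizes of the components of the tree minus `root`
        sizes, seen = [], {root}
        for start in adj[root]:
            stack, size = [start], 0
            seen.add(start)
            while stack:
                x = stack.pop()
                size += 1
                for y in adj[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            sizes.append(size)
        return sizes

    for v in range(n):
        if all(2 * s <= n for s in comp_sizes(v)):
            return v
    raise AssertionError("unreachable for a tree")

# a path on 5 vertices: only the middle vertex works
assert separator_vertex(5, [(0, 1), (1, 2), (2, 3), (3, 4)]) == 2
```

For a star, the centre is the separator vertex; for a path, the middle. The brute force is quadratic, which is fine for a sketch, though the proof's argument yields a linear-time procedure.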
For convenience, let us define A* ⊆ A as the set of those components that contain t* or are adjacent to more than one vertex of Z. By Claim 6.1 (i), |A*| ≤ 2. Also, call T̃ ∈ A bad if T̃ is isomorphic to a 3-vertex path whose middle vertex has degree 2 in T. Let B be the set of all bad components in A \ A*.

Embedding T − T* if B is large. We show that if

|⋃_{T̃∈B} V(T̃)| > m/2,   (28)

then we can embed T − T*. Indeed, choose W as the set X′_1. That is, we let T* − L be embedded into ϕ(T* − L) ⊆ (X_1 ∪ X_2 ∪ X_3) \ S, with t* embedded into any vertex from X′_1. We also embed all vertices from Z \ {t*} into vertices from X′_1, respecting possible adjacencies to t*. After doing this, we define, for i = 1, 2, 3,

S_i := X′_i \ (ϕ(T* − L) ∪ ϕ(Z) ∪ S).

Note that, for i = 1, 2, 3, we have that

m/3 + 3γ_0m ≥ |S_i| ≥ m/3 − 3γ_0m − 5√γ_0·m − γ_1m − 4 ≥ m/3 − (11/10)γ_1m,

because of (19) and (21). Consider the following way to embed trees from B into S_2 ∪ S_3: we put the first vertex into S_3, the second vertex into S_2, and the third vertex into S_3. Embed as many trees from B as possible in this way. Because of (28), and because of (22) and (23), we will use all but at most 3γ_0m + 3√γ_0·m of S_2 (and about half of S_3). For the embedding of the remaining trees from A (including those trees from B that have not been embedded yet), note that for any tree T̃ ∈ A \ A*, we can embed the larger of its bipartition classes (if both classes have the same size, we choose one class arbitrarily), minus the root r_T̃ of T̃, into S_3, and the other bipartition class into S_1. For the trees T̃ ∈ A* we can proceed similarly, only taking special care when embedding the parent p of a vertex that is already embedded (either t* or a vertex from Z). We will embed p into either S_1 or S_3 (as planned), respecting the adjacencies to its two already embedded neighbours (both of which see almost all of S_1 ∪ S_3, so this is not a problem).
Note that if vertex t* belongs to the class that was chosen to be embedded into S_3, we 'spoil' our plan by one vertex, since t* has been embedded into S_1. We embed trees from A as long as we can in the manner described above. The next tree T̃ is embedded with its larger bipartition class into S_1 ∪ S_3, and the smaller class into S_1, using as much as possible of S_3. Because of (22) and (23), we will use all but at most 6√γ_0·m + 1 vertices of S_3. The remaining trees from A are embedded into S_1, which finishes the embedding.

Embedding T − T* if B is medium sized. We now show how to embed T − T* if

(4/9)m < |⋃_{T̃∈B} V(T̃)| ≤ m/2.   (29)

In this case, we choose W as the set X′_3 if t* ∈ Z, that is, we let T* − L be embedded into ϕ(T* − L) ⊆ (X_1 ∪ X_2 ∪ X_3) \ S, with vertex t* embedded into a vertex ϕ(t*) from X′_3. If t* ∉ Z, we choose W as the set X′_1. Now assume that T* − L has been embedded. We next embed all vertices from Z \ {t*} into X′_3, respecting possible adjacencies to t*. We then set, for i = 1, 2, 3,

S_i := X′_i \ (ϕ(T* − L) ∪ S ∪ ϕ(Z ∪ {t*})),

and because of (19) and (21), we have

m/3 + 3γ_0m ≥ |S_i| ≥ m/3 − (3γ_0 + 5√γ_0)m − (γ_1m − ⌈(γ_1/2)^4 m⌉) − 4 ≥ m/3 − (11/10)γ_1m.   (30)

We will now embed some trees T̃ ∈ B in the following way: embed the first and the third vertex of T̃ into S_2, while the second vertex may go to either S_2 or S_3. We embed as many trees from B as possible in this way, and fill as much as possible of S_2 with them. Then, because of (22), (23), (29) and (30), we will have used all but at most 6γ_0m vertices of S_2, and we will also have used at least m/9 − 3γ_0m vertices of S_3. If we did not embed all of B, we have used about half of the set S_3, and we embed the few remaining trees from B into S_1. We finish the embedding by putting all the remaining components into S_1 ∪ S_3, as follows. Consider any tree T̃ ∈ A \ (A* ∪ B), and let r_T̃ denote its root.
As the parent of r_T̃ was embedded into S_3, we have to embed r_T̃ into S_1, but then we could either embed T̃ − r_T̃ so that the even levels go to S_1 and the odd levels go to S_3, or we could embed T̃ − r_T̃ the other way around (if there is enough space). This means that for each T̃ ∈ A, we can embed its larger bipartition class, except possibly for r_T̃, into S_3, and the rest into S_1. Even better, since any vertex in S_1 is adjacent to almost all of S_1, we note that any of the vertices that went to S_3 could alternatively have been placed in S_1. Hence, we can embed T̃ such that for any given t ≤ ⌈(|T̃| − 1)/2⌉, exactly t vertices go to S_3, and the rest go to S_1.

So, as long as there is reasonable space left in both sets S_1 and S_3, we know that for each tree T̃ ∈ A \ A* with |V(T̃)| ≥ 5, one can embed two fifths of its vertices (or less, if desired) into S_3 (as (2/5)|V(T̃)| ≤ ⌈(|T̃| − 1)/2⌉ for these trees). For trees T̃ ∈ A \ A* with |V(T̃)| < 5, it is easy to see that T̃ ∉ B ensures that at least half of its vertices can be embedded into S_3 (or less, if desired). For the trees in A* we can argue analogously, except that the vertex t* is already embedded into the set X′_1, and any neighbour of a vertex from Z is forced to go to S_1. Therefore we might have two vertices less than expected going to S_3, but this does not matter for the overall strategy. Thus, we can embed all trees from A \ B into S_1 ∪ S_3, which finishes the embedding in this case.

Embedding T − T* if B is small. We finally show how to embed T − T* if

|⋃_{T̃∈B} V(T̃)| ≤ (4/9)m.   (31)

As in the previous case, we set W := X′_3 if t* ∈ Z and set W := X′_1 otherwise. Without loss of generality, let us assume that t* lies in a component from A_1 (otherwise we rename A_1 and A_2). Now assume that T* − L has been embedded. We now embed Z \ {t*} into X′_3, respecting possible adjacencies to t*. We then embed the at most 4βm leaves adjacent to Z \ {t*} anywhere in G, using (22) and (23).
For i = 1, 2, 3, let S_i be the set of all unused vertices from X′_i \ S. By (19) and (21), and since β ≪ γ_0, we calculate similarly as for (30) that

m/3 + 3γ_0m ≥ |S_i| ≥ m/3 − (11/10)γ_1m.   (32)

We will next embed the components from A. As in the previous case, we see that for any tree T̃ ∈ A \ (A* ∪ B), for any i ∈ {1, 2}, and for any t ≤ ⌈(|T̃| − 1)/2⌉, we can embed T̃ into S_i ∪ S_3 with exactly t vertices going to S_3. For the trees in A* the same is true if we replace t with t − 1. So, as above, the trees in B can be embedded with a third of their vertices (or less, if desired) going to S_3. The trees in A \ B having size less than 1/γ_0 can be embedded with two fifths of their vertices (or less, if desired) going to S_3. For the at most two trees in A* \ B the same is true, but we might have (in total) two vertices less in S_3. For the trees in A having size at least 1/γ_0, however, we can work under the stronger assumption that half of their vertices (or less, if desired) may be embedded into S_3. This is so because there are at most γ_0m such trees, and hence for embedding their roots we will use at most γ_0m vertices, which is small enough to play no role in the calculations.

We will now see that the above implies that, for i = 1, 2, we can embed all trees from A_i into S_i ∪ S_3, thus concluding the proof of Lemma 3.7. Indeed, if both A_1 and A_2 contain elements of B, then by Claim 6.1 (iii), they contain roughly the same number of vertices. By (31), each A_i has few enough components from B to ensure that there is a reasonable number of vertices in components which can be embedded with at least two fifths in S_3. So we can embed all trees from A_i, leaving at most 15√γ_0·m vertices from S_i unused (here we also use (22), (23), and (32)). On the other hand, if only the smaller set among A_1 and A_2 contains elements of B, then we can embed this set as before.
For the other set we recall that since it does not contain any small trees, half of its vertices (or less, if desired) can be embedded into S 3 . So we finish the embedding without a problem.

Definition 3.3 (m-good graph). Let m ∈ N. Call a graph m-good if it has m + 1 vertices, minimum degree at least ⌊2m/3⌋, and a universal vertex.

δ := min{β, λ, 10^{−23}}.

Each of the m/1000 vertices of V(G − w) \ N(S*) sees at least m/log m vertices of V_1(S*), and so, by the definition of S*, we have that (m/1000) · (m/log m) ≤ e(V_1(S*), V(G − w) \ N(S*)); here we use (2) and the fact that m ≥ 10^25. So with probability at least 0.997, we have chosen N in a way that (A), (B), (C), and also

(D) |L(Q)| > (668/1000)|L| = (2004/3000)|L| for every Q ∈ Q′′_+

hold. Since |Q| ≤ m/log m for each Q ∈ Q, we can deduce that (18) is at most 0.001 · m/log m. Since by (8), we know that (20/19) · E[|V_1(Q) ∩ S|] < (41/100)|S|.

Hall's theorem can be found in any standard textbook; it states that a bipartite graph with bipartition classes A and B either has a matching covering all of A, or there is an 'obstruction': a set A′ ⊆ A such that |N(A′)| < |A′|. If both classes have the same size, we choose one class arbitrarily.

M. Ajtai, J. Komlós, and E. Szemerédi. On a conjecture of Loebl. In Graph theory, combinatorics, and algorithms, Vol. 1, 2 (Kalamazoo, MI, 1992), 1135–1146. Wiley, New York, 1995.

N. Alon and J. H. Spencer. The probabilistic method. Wiley-Interscience Series in Discrete Mathematics and Optimization.

G. Besomi, M. Pavez-Signé, and M. Stein. Degree conditions for embedding trees. SIAM Journal on Discrete Mathematics, 33(3): 1521–1555, 2019.
G. Besomi, M. Pavez-Signé, and M. Stein. Maximum and minimum degree conditions for embedding trees. SIAM Journal on Discrete Mathematics, 34(4): 2108–2123, 2020.

G. Besomi, M. Pavez-Signé, and M. Stein. On the Erdős-Sós conjecture for trees with bounded degree. Combinatorics, Probability and Computing, 30(5): 741–761, 2021.

P. Erdős, Z. Füredi, M. Loebl, and V. T. Sós. Discrepancy of trees. Studia Sci. Math. Hungar., 30(1-2): 47–57, 1995.

F. Havet, B. Reed, M. Stein, and D. Wood. A Variant of the Erdős-Sós Conjecture. J. Graph Theory, 94(1): 131–158, 2020.

[HKP+17a] J. Hladký, J. Komlós, D. Piguet, M. Simonovits, M. Stein, and E. Szemerédi. The approximate Loebl-Komlós-Sós Conjecture I: The sparse decomposition. SIAM Journal on Discrete Mathematics, 31(2) (2017), pages 945–982.

[HKP+17b] J. Hladký, J. Komlós, D. Piguet, M. Simonovits, M. Stein, and E. Szemerédi. The approximate Loebl-Komlós-Sós Conjecture II: The rough structure of LKS graphs. SIAM Journal on Discrete Mathematics, 31(2) (2017), pages 983–1016.

The approximate Loebl-Komlós-Sós Conjecture III: The finer structure of LKS graphs.
[HKP+17c] J. Hladký, J. Komlós, D. Piguet, M. Simonovits, M. Stein, and E. Szemerédi. SIAM Journal on Discrete Mathematics, 31(2) (2017), pages 1017–1071.

[HKP+17d] J. Hladký, J. Komlós, D. Piguet, M. Simonovits, M. Stein, and E. Szemerédi. The approximate Loebl-Komlós-Sós Conjecture IV: Embedding techniques and the proof of the main result. SIAM Journal on Discrete Mathematics, 31(2) (2017), pages 1072–1148.

J. Komlós, G. Sárközy, and E. Szemerédi. Spanning Trees in Dense Graphs. Combinatorics, Probability and Computing, Vol. 5, 397–416 (2001).

C. McDiarmid. On the method of bounded differences. In Surveys in combinatorics, 1989 (Norwich, 1989), volume 141 of London Math. Soc. Lecture Note Ser., pages 148–188. Cambridge Univ. Press, Cambridge, 1989.

V. Rozhoň. A local approach to the Erdős-Sós conjecture. SIAM Journal on Discrete Mathematics, 33(2): 643–664, 2019.

B. Reed and M. Stein. Embedding Spanning Trees in Graphs of High Minimum Degree which have a Universal Vertex I: An approximate asymptotic version.
Accepted for publication in J. Graph Theory.
[]
[ "A signature invariant geometric algebra framework for spacetime physics and its applications in relativistic dynamics of a massive particle and gyroscopic precession", "A signature invariant geometric algebra framework for spacetime physics and its applications in relativistic dynamics of a massive particle and gyroscopic precession" ]
[ "Bofeng Wu [email protected] \nDepartment of Physics\nCollege of Sciences\nNortheastern University\n110819ShenyangChina\n" ]
[ "Department of Physics\nCollege of Sciences\nNortheastern University\n110819ShenyangChina" ]
[]
A signature invariant geometric algebra framework for spacetime physics is formulated. By following the original idea of David Hestenes in the spacetime algebra of signature (+, −, −, −) , the techniques related to relative vector and spacetime split are built up in the spacetime algebra of signature (−, +, +, +) . The even subalgebras of the spacetime algebras of signatures (±, ∓, ∓, ∓) share the same operation rules, so that they could be treated as one algebraic formalism, in which spacetime physics is described in a signature invariant form. Based on the two spacetime algebras and their "common" even subalgebra, rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime are constructed. A signature invariant treatment of the general Lorentz boost with velocity in an arbitrary direction and the general spatial rotation in an arbitrary plane is presented. For a massive particle, the spacetime splits of the velocity, acceleration, momentum, and force four-vectors with the normalized four-velocity of the fiducial observer, at rest in the coordinate system of the spacetime metric, are given, where the proper time of the fiducial observer is identified, and the contribution of the bivector connection is considered, and with these results, a threedimensional analogue of Newton's second law for this particle in curved spacetime is achieved. Finally, as a comprehensive application of the techniques constructed in this paper, a geometric algebra approach to gyroscopic precession is provided, where for a gyroscope moving in the Lense-Thirring spacetime, the precessional angular velocity of its spin is derived in a signature invariant manner.William Kingdon Clifford introduced geometric algebra (GA) based on the earlier work of Hamilton and Grassmann 1 , and then, David Hestenes developed it by inventing geometric calculus and formulating spacetime algebra (STA) 2 . 
GA is a unified language for mathematics and physics 3 , and has important applications in theoretical physics 4-16 . STA, as the GA for spacetime, provides a synthetic framework for spacetime physics 17 . One of the remarkable advantages of STA is that Lorentz boost and spatial rotation can be handled with rotor techniques in an elegant and highly condensed manner 17-20 . Therefore, for those topics involving a knowledge of Lorentz boost and spatial rotation, such as gyroscopic precession 21,22 , it could be expected that a more efficient approach to dealing with them will be found in the language of STA. STA can be generated by an orthonormal frame with respect to the Minkowski metric. Since the signature (+, −, −, −) is widely used in STA 17 whereas the opposite signature (−, +, +, +) is often adopted in literatures on relativity 23 , when STA is applied to relativistic physics the change of signature from one to another will cause inconvenience. In fact, the STA of signature (−, +, +, +) is also used 24-29 , and however, because the techniques related to relative vector and spacetime split have not been developed in this algebraic formalism, its applications are quite limited. One of the purposes of this paper is to build up these techniques in the STA of signature
10.1038/s41598-022-06895-0
[ "https://arxiv.org/pdf/2111.07353v3.pdf" ]
244,116,909
2111.07353
abc1c70b769b9ba71387dc4ab865c7388ea30c47
Scientific Reports | (2022) 12:3981 | https://doi.org/10.1038/s41598-022-06895-0

(−, +, +, +) by following the original idea of David Hestenes in the STA of signature (+, −, −, −) , which will definitely facilitate the study of relativistic physics in the language of GA. Throughout the paper, the following notation and rules are adopted unless stated otherwise:
• For two multivectors A and B in spacetime, their geometric product, inner product, outer product, and commutator product are represented by AB, A · B, A ∧ B, and A × B, respectively;
• For a multivector M in spacetime, M̃ and ⟨M⟩_p (p = 0, 1, 2, 3, 4) denote its reverse and p-vector part, respectively, where ⟨M⟩_0 is abbreviated as ⟨M⟩;
• The Greek letters, denoting the spacetime indices, range from 0 to 3, whereas the Latin letters, denoting the space indices, range from 1 to 3;
• The sum is taken over repeated indices within a term;
• The international system of units is used.

Let {γ^+_α} and {γ^−_α} be orthonormal frames with respect to the Minkowski metrics in the signatures (+, −, −, −) and (−, +, +, +) , respectively, and the STAs of the two signatures can be generated by them.
In these two STAs, we find the following important conclusions:

• Denote {γ^α_±} as the reciprocal frames of {γ^±_α}; frames of relative vectors are constructed by {σ^±_k := γ^±_k γ^0_±}, where both {σ^+_k} and {σ^−_k}, spanning the relative spaces orthogonal to the timelike vectors γ^+_0 and γ^−_0, respectively, provide representation-free versions of the Pauli matrices;
• The two relative spaces are both Euclidean spaces of dimension 3 with {σ^±_k} as right-handed orthonormal bases, where the inner product and the cross product in these two spaces can be defined as their conventional ones, respectively;
• The even subalgebras of the STAs of signatures (±, ∓, ∓, ∓) are generated by {σ^±_k}, and they share the same operation rules;
• For vectors b_± = b^α_± γ^±_α, their spacetime splits with γ^±_0 are b_± γ^0_± = b^0_± + b_±, where b_± = b^i_± σ^±_i, as bivectors in spacetime, are called the relative vectors of b_±;
• For operators ∂_± := γ^α_± ∂_α, their spacetime splits with γ^±_0 are γ^±_0 ∂_± = ∂_0 + ∇_±, where ∂_µ := ∂/∂x^µ and ∇_± := σ^k_± ∂_k, with x^µ and {σ^k_± := γ^±_0 γ^k_±} as coordinates in spacetime and the reciprocal frames of {σ^±_k} in the relative spaces, respectively.

Since the even subalgebras of the two STAs share the same operation rules, we will no longer distinguish them strictly and will treat them as one algebraic formalism hereafter. In Appendix B of this paper, a detailed presentation of this algebraic formalism is given. It will be shown that the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) actually provides a signature invariant GA framework for spacetime physics. In order to give an application paradigm of the two STAs and their "common" even subalgebra, we need to make use of them to study some specific problems in spacetime physics, and gyroscopic precession is such a typical topic.
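The statement that the relative vectors {σ^±_k} provide a representation-free version of the Pauli matrices can be made concrete numerically. The following sketch (plain NumPy; the 2×2 matrix representation is a standard fact and is not part of the paper) checks the relative-vector relations σ_i σ_j = δ_ij + ε_ijk σ_k I and I² = −1, with the pseudoscalar I represented by i times the 2×2 identity.

```python
import numpy as np

# Pauli matrices: a concrete 2x2 matrix representation of the
# relative-vector generators sigma_k of the "common" even subalgebra.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

# Levi-Civita symbol eps_ijk.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1

# Pseudoscalar I = sigma_1 sigma_2 sigma_3; here it is represented by
# i times the identity, so I**2 = -1 as required.
pseudo = s[0] @ s[1] @ s[2]
assert np.allclose(pseudo, 1j * I2)
assert np.allclose(pseudo @ pseudo, -I2)

# Check sigma_i sigma_j = delta_ij + eps_ijk sigma_k I for all i, j.
for i in range(3):
    for j in range(3):
        rhs = (i == j) * I2 + sum(eps[i, j, k] * s[k] @ pseudo
                                  for k in range(3))
        assert np.allclose(s[i] @ s[j], rhs)
print("Pauli representation satisfies the relative-vector algebra")
```

In this representation the geometric product is ordinary matrix multiplication, which is why the check reduces to matrix identities.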
According to the prediction of General Relativity, the spin of a gyroscope precesses relative to the asymptotic inertial frames as it moves around a rotating spherical source 22 . The conventional method to describe gyroscopic precession under the weak-field and slow-motion (WFSM) approximation in tensor language is presented in Refs. 21,22 . For a uniformly rotating spherical source, the external gravitational field is stationary, and only the leading pole moments need to be considered, so that the spacetime geometry is described by the Lense-Thirring metric 30 . As a result, the corresponding spacetime is known as the Lense-Thirring spacetime. When a torquefree gyroscope is moving in this spacetime, there exist three types of precession for its spin, namely, the de Sitter precession, the Lense-Thirring precession, and the Thomas precession, where these phenomena are, respectively, resulted from gyroscopic motion through the spacetime curved by the mass of the source, rotation of the source, and gyroscopic non-geodesic motion 31 . In the traditional description for gyroscopic precession based on tensor language, one always needs to work with the components of some tensor in a chosen coordinate frame, which often leads to many equations with a low degree of clarity. The language of STA could provide a physically clear approach to dealing with this topic, since one just involves geometric objects during calculation 32 . As a preliminary attempt, another purpose of the present paper is to handle gyroscopic precession by applying the STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra, so that for a gyroscope moving in the Lense-Thirring spacetime, a signature invariant derivation of the precessional angular velocity of its spin could be achieved. 
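For orientation, the magnitudes of the de Sitter and Lense-Thirring effects mentioned above can be estimated from the standard weak-field expressions Ω_dS = (3/2)(GM/(c²r³)) r × v and Ω_LT = (G/(c²r³))[3(r̂ · J)r̂ − J]. The sketch below uses these textbook formulas together with assumed Gravity Probe B-like orbit parameters; neither the concrete 3-vector formulas nor the numbers are taken from this paper.

```python
import numpy as np

# Illustrative weak-field estimates of the de Sitter (geodetic) and
# Lense-Thirring precession rates for a gyroscope on a circular polar
# orbit around the Earth (Gravity Probe B-like parameters, assumed).
G, c = 6.674e-11, 2.998e8
M = 5.972e24          # mass of the Earth (kg)
J = 5.86e33           # angular momentum of the Earth (kg m^2/s)
r = 6.371e6 + 6.42e5  # orbital radius: Earth radius + 642 km altitude (m)

v = np.sqrt(G * M / r)                      # circular orbital speed
omega_ds = 1.5 * G * M * v / (c**2 * r**2)  # |(3/2) GM/(c^2 r^3) r x v|
omega_lt = G * J / (2 * c**2 * r**3)        # polar-orbit average of
                                            # (G/(c^2 r^3))[3(rhat.J)rhat - J]

# Convert rad/s to milliarcseconds per year.
rad_per_s_to_mas_per_yr = 3.156e7 * 180 / np.pi * 3600e3
print("de Sitter:      %.0f mas/yr" % (omega_ds * rad_per_s_to_mas_per_yr))
print("Lense-Thirring: %.0f mas/yr" % (omega_lt * rad_per_s_to_mas_per_yr))
```

The geodetic term comes out three orders of magnitude above the frame-dragging term, which is why detecting the latter required a dedicated mission.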
For brevity, in later applications, the signs "±" associated with multivectors and operators will be suppressed, and for equalities like A = F(±B) and C = G(∓D) , the signs " + " and "−" in the former equation correspond to the cases in the signatures (+, −, −, −) and (−, +, +, +) , respectively, and the situation in the latter equation is reverse. Before analyzing gyroscopic precession, rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime need to be addressed in the two STAs. Rotor techniques are available in the STA of signature (+, −, −, −) [17][18][19][20] , and however, since the STA of signature (−, +, +, +) is rarely employed, these techniques have not been fully developed in this algebraic formalism, where in particular the expressions of the rotors inducing Lorentz boost and spatial rotation should be clearly established. Being the third purpose of this paper, by virtue of the rotors constructed in the "common" even subalgebra of the two STAs, the general Lorentz boost with velocity in an arbitrary direction and the general spatial rotation in an arbitrary plane are handled in a signature invariant manner. How to study physics in curved spacetime based on STA is a fundamental problem. By following GA techniques for General Relativity formulated in Ref. 33 , the treatment of gyroscopic precession in this paper is able to be put on a solid theoretical footing. To generate the STAs of signatures (±, ∓, ∓, ∓) in a curved spacetime, one just needs to define a local orthonormal tetrad {γ α } by the orthonormalization of a coordinate frame (in either signature), and then, by applying these two STAs and their "common" even subalgebra, the relevant topics in spacetime physics can be dealt with. Relativistic dynamics of a massive particle in curved spacetime should be studied so as to describe the motion of a gyroscope moving around a gravitating source 34 . 
We assume that a collection of fiducial observers Scientific Reports | (2022) 12:3981 | https://doi.org/10.1038/s41598-022-06895-0 www.nature.com/scientificreports/ is distributed over space, and each fiducial observer is at rest in the coordinate system of the spacetime metric. For a massive particle, the spacetime splits of the velocity, acceleration, momentum, and force four-vectors with the normalized four-velocity γ 0 of the fiducial observer need to be derived, which is easy when spacetime is flat. However, in curved spacetime, some subtleties appear and ought to be seriously analyzed. For instance, the proper time of fiducial observers should be identified, and the contribution of the bivector connection ω(u) associated with {γ α } (cf. Ref. 33 ) should also be considered. In this paper, after overcoming these difficulties, the results are given, and with them, a three-dimensional analogue of Newton's second law for the particle in curved spacetime is achieved, which is the fourth purpose of the present paper. Besides, the Fermi-Walker derivatives presented in tensor language are recast in the STAs of signatures (±, ∓, ∓, ∓) so that the motion of the spin of a gyroscope can be depicted in these two STAs 21 . With the aid of the GA techniques constructed before, an efficient treatment of gyroscopic precession could be provided in the two STAs. Considering a gyroscope moving in the Lense-Thirring spacetime, some significant results like the three-dimensional generalized equation of motion for the gyroscope are first given on the basis of relativistic dynamics of a massive particle. Then, the rotor techniques are employed to handle the spin of the gyroscope, and the direct result shows that a bivector field Ω(τ ) along its worldline completely determines the motion of its spin, where τ is the proper time. 
The bivector field Ω(τ ) is dependent on the rotor L generating the pure Lorentz boost from the gyroscope's four-velocity u to the fiducial observer's four-velocity cγ 0 and the bivector connection ω(u) associated with {γ α } , where c is the velocity of light in vacuum. Just like the Faraday bivector, namely the electromagnetic field strength, the bivector field Ω(τ ) can also be decomposed into the electric part Ω (E) (τ ) and the magnetic part Ω (B) (τ ) . Let {γ β } be the reciprocal tetrad of {γ α } , and technically, if the condition L aLγ 0 = cΩ (E) (τ ) is fulfilled, the spin of the gyroscope always precesses relative to its comoving frame, determined by the pure Lorentz boost generated by the rotor L , with Ω (B) (τ ) as the precessional angular velocity. The key point is to write down signature invariant expression of the bivector field Ω(τ ) and the spacetime split of the gyroscope's four-acceleration a with the normalized four-velocity γ 0 of the fiducial observer based on the "common" even subalgebra of the two STAs. According to Refs. 33,35 , the bivector connection ω(u) associated with {γ α } can be directly derived, and then, by recasting it in terms of the relative vectors {σ k } , its signature invariant expression and those of its electric part ω (E) (u) and magnetic part ω (B) (u) are obtained. Moreover, by applying the rotor techniques, the pure Lorentz boost L from u to cγ 0 can also be derived. Thus, as noted before, the signature invariant expression of Ω(τ ) and those of Ω (E) (τ ) and Ω (B) (τ ) are completely determined. As to a, its spacetime split with γ 0 could be directly obtained from the relevant conclusion in relativistic dynamics of a massive particle. 
Thus, with a, L , and Ω (E) (τ ) , one is capable of verifying that the condition L aLγ 0 = cΩ (E) (τ ) holds by means of various operations in the "common" even subalgebra of the two STAs, and hence, the spin of the gyroscope indeed precesses in the comoving frame with Ω (B) (τ ) as the precessional angular velocity. After expanding Ω (B) (τ ) up to 1/c 3 order with 1/c as the WFSM parameter 36 , the gyroscope spin's angular velocities of the de Sitter precession, the Lense-Thirring precession, and the Thomas precession are able to be read out, and their expressions, in the form of geometric objects, are equivalent to their conventional ones in component form, respectively. The whole derivation implies that the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) does provide a signature invariant GA framework for spacetime physics, and the rotors, presented in a signature invariant form, can be used to generate Lorentz transformations in these two STAs. The treatment of relativistic dynamics of a massive particle and gyroscopic precession intuitively displays the basic method of dealing with specific topics in curved spacetime within the signature invariant GA framework, which suggests that the GA techniques established in this paper are efficient and reliable. No doubt, if these techniques are directly applied to gyroscopic precession in alternate theories of gravity, such as f(R) gravity 30,37-39 , f (R, G) gravity 40,41 , and f(X, Y, Z) gravity 42 , they will definitely facilitate the relevant studies, where G is the Gauss-Bonnet invariant, X := R is the Ricci scalar, Y := R µν R µν is the quadratic contraction of two Ricci tensors, and Z := R µνσρ R µνσρ is the quadratic contraction of two Riemann tensors. Furthermore, by developing other types of techniques, the method in this paper could also be applied to more fields, and in fact, some topics in classical mechanics and electrodynamics have been described in such a manner. 
The applications of this method will be expected to be extended to a wider range in the future, so that the study of spacetime physics in the language of GA could be greatly promoted. This paper is organized as follows. In "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra", the STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra are formulated. In "Rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime", rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime are constructed. In "A GA approach to gyroscopic precession in the Lense-Thirring spacetime", a GA approach to gyroscopic precession in the Lense-Thirring spacetime is given. In "Summary and discussions", some concluding remarks will be made. In Appendix A, operation rules of blades in the STAs of signatures (±, ∓, ∓, ∓) are summarized. In Appendix B, the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) is introduced in detail. In Appendix C, a local orthonormal tetrad {γ α } and the bivector connection ω(u) associated with it in the Lense-Thirring spacetime are derived. STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra STA, introduced in the classical literature Space-Time Algebra by David Hestenes (1966), can provide a synthetic framework for relativistic physics 17 , so it has attracted widespread attention in the physical community. Since the establishment of STA, the signature (+, −, −, −) has been widely used, and however, in relativistic physics, one of the main application fields of STA, the opposite signature (−, +, +, +) is often adopted 17,23 . Thus, when one intends to apply STA to relativistic physics, the change of signature from one to another will cause inconvenience even though these two signatures differ only by a minus sign. 
In fact, the STA of signature (−, +, +, +) www.nature.com/scientificreports/ was also used [24][25][26][27][28][29] , but a lack of long-term attention to it results in that the techniques related to relative vector and spacetime split have not been developed in this algebraic formalism so that its applications are quite limited. In this section, by following the original idea of David Hestenes, we will build up these techniques in the STA of signature (−, +, +, +) so that a more convenient approach to relativistic physics could be given in the language of GA. For the ease of writing, we will directly formulate the STAs of signatures (±, ∓, ∓, ∓) , and analyze the operation rules of multivectors. In spacetime, the STAs of signatures (±, ∓, ∓, ∓) can be generated by corresponding orthogonal vectors {γ ± α } satisfying respectively, where η ± αβ are the Minkowski metrics in the two signatures. With these vector generators {γ ± α } , explicit bases for both the STAs are defined, namely where, in either signature, one scalar, four vectors, six bivectors, four trivectors, and one pseudoscalar are contained. One can perform operations between any two multivectors in spacetime by expanding them in a basis, once operation rules of blades of different grades are given, where the term "blade" here denotes a multivector written as the outer product of a set of vectors (cf. Ref. 17 ). In Appendix A of this paper, a detail list of operation rules of blades in the two STAs is presented, and based on these rules, the "common" even subalgebra of these two STAs will be constructed in the following. According to Eqs. (A1) and (A7), the orthogonality between the vector generators {γ ± α } implies that the bases (2) can be rewritten as where the geometric products of {γ ± α } are obviously anticommutative, By making use of the anticommutation of {γ ± α } , the pseudoscalars I ± also have the expressions, with ǫ ijk as the three-dimensional Levi-Cività symbol. 
The generator vectors {γ^±_α} of the two STAs satisfy

(1) γ^±_α · γ^±_β = η^±_{αβ} = diag(±, ∓, ∓, ∓),

and the corresponding basis blades

(2) 1, γ^±_α, γ^±_μ ∧ γ^±_ν (μ < ν), γ^±_ρ ∧ γ^±_σ ∧ γ^±_λ (ρ < σ < λ), γ^±_0 ∧ γ^±_1 ∧ γ^±_2 ∧ γ^±_3

can, since the {γ^±_α} are orthogonal, equally be written in terms of geometric products as

(3) 1, γ^±_α, γ^±_μ γ^±_ν (μ < ν), γ^±_ρ γ^±_σ γ^±_λ (ρ < σ < λ), I^± := γ^±_0 γ^±_1 γ^±_2 γ^±_3,

with

(4) γ^±_μ γ^±_ν = −γ^±_ν γ^±_μ (μ ≠ ν),

(5) I^± = (1/3!) ε_ijk γ^±_0 γ^±_i γ^±_j γ^±_k.

Among the basis blades, those of even grade form bases for the even subalgebras of the two STAs:

(6) 1, γ^±_0 γ^±_k, γ^±_i γ^±_j (i < j), I^±.

We first discuss some properties of the bivectors {γ^±_0 γ^±_k}. With Eqs. (1), (4), and (A14), one can directly derive the equalities

(7) (γ^±_0 γ^±_i) · (γ^±_0 γ^±_j) = δ_ij,
(8) (γ^±_0 γ^±_i) × (γ^±_0 γ^±_j) = ∓ γ^±_i ∧ γ^±_j = ∓ ε_ijk γ^±_0 γ^±_k I^±,
(9) (γ^±_0 γ^±_1)(γ^±_0 γ^±_2)(γ^±_0 γ^±_3) = ∓ I^±,

where δ_ij is the Kronecker symbol, and in the second step of (8), Eqs. (5), (A5), and (A10) have been used. These equalities show that relative vectors, spanning the relative spaces orthogonal to the timelike vectors γ^±_0, can be defined as {σ^±_k := ∓ γ^±_0 γ^±_k = γ^±_k γ^0_±}, with {γ^α_±} the reciprocal frames of {γ^±_α}, so that they have algebraic properties similar to those of the Pauli matrices:

(10) σ^±_i · σ^±_j = δ_ij,
(11) σ^±_i × σ^±_j = ε_ijk σ^±_k I^±,
(12) σ^±_1 σ^±_2 σ^±_3 = I^±.

Furthermore, by inserting

(13) γ^±_i · γ^±_j = (γ^±_i γ^±_j + γ^±_j γ^±_i)/2,
(14) γ^±_i ∧ γ^±_j = (γ^±_i γ^±_j − γ^±_j γ^±_i)/2

into Eqs. (7) and (8), respectively, we get

(15) σ^±_i σ^±_j + σ^±_j σ^±_i = 2δ_ij,
(16) σ^±_i σ^±_j − σ^±_j σ^±_i = 2ε_ijk σ^±_k I^±,

and hence

(17) σ^±_i σ^±_j = δ_ij + ε_ijk σ^±_k I^±,

which proves once again that the algebraic properties of {σ^±_k} are similar to those of the Pauli matrices. In fact, as mentioned in Ref. 32, {σ^+_k} or {σ^−_k} provide a representation-free version of the Pauli matrices.

Equations (10) and (12) show that the relative spaces orthogonal to γ^±_0 are both Euclidean spaces of dimension 3, with {σ^±_k} as orthonormal bases and I^± as pseudoscalars. In relative space, a relative vector, although a bivector in STA, is treated as a multivector of grade 1, and in this sense the inner product and the cross product (denoted ×₃, to distinguish it from the commutator product ×) between two relative vectors can be defined. Let a^± = a^±_i σ^±_i and b^± = b^±_j σ^±_j be relative vectors; with the help of Eqs. (10) and (11), the inner products and the cross products between a^± and b^± are defined as

(18) a^± · b^± = ⟨a^± b^±⟩ = a^±_k b^±_k,
(19) a^± ×₃ b^± = −I^± (a^± × b^±) = ε_ijk a^±_i b^±_j σ^±_k,

where the commutator products between a^± and b^±,

(20) a^± × b^± = ⟨a^± b^±⟩₂ = a^±_i b^±_j ε_ijk σ^±_k I^±,

and the properties of the pseudoscalars,

(21) a^± I^± = I^± a^±, (I^±)² = −1,

have been used. Obviously, the above definitions of inner product and cross product are identical to the conventional ones. The cross products defined in Eq. (19) determine the handedness of {σ^±_k}, and by applying them one easily gets

(22) σ^±_i ×₃ σ^±_j = ε_ijk σ^±_k,

which clearly suggests that {σ^±_k} are both right-handed bases.

Next, we employ relative vectors to reconstruct bases of the even subalgebras of the STAs of signatures (±, ∓, ∓, ∓). The definitions of {σ^±_k} provide

(23) γ^±_0 γ^±_k = ∓ σ^±_k,

and then, by further using Eqs. (1) and (4), there are

(24) γ^±_i γ^±_j = ∓ σ^±_i σ^±_j (i < j).

After inserting Eqs. (23), (24), and (12) into (6), we know that bases of the even subalgebras of the two STAs can be reconstructed as

(25) 1, σ^±_k, σ^±_i σ^±_j (i < j), σ^±_1 σ^±_2 σ^±_3,

which indicates that {σ^±_k} are actually the vector generators of the two subalgebras. Eqs. (11) and (17) imply that the equalities

(26) σ^±_i σ^±_j = σ^±_i × σ^±_j (i ≠ j)

hold, and thus the anticommutation of {σ^±_k},

(27) σ^±_i σ^±_j = −σ^±_j σ^±_i (i ≠ j),

is explicitly obtained. As a consequence, there exist three types of basic homogeneous multivectors (cf. Ref. 17) in the even subalgebras of the two STAs, namely

(28) σ^±_k, σ^±_i × σ^±_j (i < j), I^±.

For relative vectors a^±, b^±, and c^± = c^±_k σ^±_k, one has the expansions

(29) a^± × b^± = a^±_i b^±_j σ^±_i × σ^±_j = Σ_{i<j} (a^±_i b^±_j − a^±_j b^±_i) σ^±_i σ^±_j,
(30) (a^± × b^±) ∧ c^± = det[(a^±_1, b^±_1, c^±_1), (a^±_2, b^±_2, c^±_2), (a^±_3, b^±_3, c^±_3)] σ^±_1 σ^±_2 σ^±_3.

In view of (12), (a^± × b^±) ∧ c^± in (30) can be written in the form of multiplications of the pseudoscalars I^± by real numbers, and in fact, from the bases (25), all multivectors of grade 4 can be expressed in such a form. Thus, Eq. (A5) states that the geometric product between any multivector and a pseudoscalar is equivalent to their inner product. Keeping this conclusion in mind, with the help of the formula

(31) (σ^±_i × σ^±_j) I^± = −ε_ijk σ^±_k ⇔ σ^±_k I^± = (1/2) ε_kij σ^±_i × σ^±_j,

one gets a convenient way to carry out operations involving multivectors of grade 4, where in the derivation of (31), Eqs. (20) and (21) have been used.

Eqs. (23) and (8) show that both σ^±_k and σ^±_i × σ^±_j (i ≠ j) are bivectors in the two STAs, where the former contain timelike components whereas the latter do not. The geometric products between them also need to be derived; according to Eq. (A2), we have

(32) σ^±_i σ^±_j = σ^±_i · σ^±_j + σ^±_i × σ^±_j,
(33) σ^±_k (σ^±_i × σ^±_j) = σ^±_k × (σ^±_i × σ^±_j) + σ^±_k ∧ (σ^±_i × σ^±_j),
(34) (σ^±_i × σ^±_j) σ^±_k = (σ^±_i × σ^±_j) × σ^±_k + (σ^±_i × σ^±_j) ∧ σ^±_k,
(35) (σ^±_i × σ^±_j)(σ^±_p × σ^±_q) = (σ^±_i × σ^±_j) · (σ^±_p × σ^±_q) + (σ^±_i × σ^±_j) × (σ^±_p × σ^±_q).

By further using Eqs. (1), (4), (A8), and (A15), the terms on the right-hand sides of Eqs. (33), (34), and (35) are achieved:

(36) σ^±_k × (σ^±_i × σ^±_j) = −(σ^±_i × σ^±_j) × σ^±_k = (σ^±_k · σ^±_i) σ^±_j − (σ^±_k · σ^±_j) σ^±_i,
(37) σ^±_k ∧ (σ^±_i × σ^±_j) = (σ^±_i × σ^±_j) ∧ σ^±_k = σ^±_i ∧ (σ^±_j × σ^±_k) = σ^±_j ∧ (σ^±_k × σ^±_i),
(38) (σ^±_i × σ^±_j) · (σ^±_p × σ^±_q) = (σ^±_j · σ^±_p)(σ^±_i · σ^±_q) − (σ^±_i · σ^±_p)(σ^±_j · σ^±_q),
(39) (σ^±_i × σ^±_j) × (σ^±_p × σ^±_q) = (σ^±_j · σ^±_p)(σ^±_i × σ^±_q) + (σ^±_i · σ^±_q)(σ^±_j × σ^±_p) − (σ^±_i · σ^±_p)(σ^±_j × σ^±_q) − (σ^±_j · σ^±_q)(σ^±_i × σ^±_p).

With the aid of the above operation rules of the basic homogeneous multivectors, namely Eqs. (31)-(39), one can carry out operations on any two multivectors in the even subalgebras of the STAs of signatures (±, ∓, ∓, ∓). Evidently, as shown in these formulas, the two even subalgebras share the same operation rules, and thus, when dealing with specific problems such as the relativistic dynamics of a massive particle and gyroscopic precession in the next two sections, we will no longer distinguish them strictly and will treat them as one algebraic formalism. In Appendix B of the present paper, a detailed presentation of this "common" even subalgebra of the two STAs is given. It will be shown that this algebraic formalism provides a signature invariant GA framework for spacetime physics. Clearly, {σ^+_k = γ^+_k γ^0_+ = γ^+_k γ^+_0} is exactly the set of relative vectors conventionally introduced in the STA of signature (+, −, −, −), since in that signature γ^0_+ = γ^+_0.

When STA is used to describe relativistic physics, the techniques of spacetime split are also of significance; in the STA of signature (+, −, −, −), these techniques provide an extremely efficient tool for comparing physical effects in different frames 2,17. Of course, these techniques can also be constructed in the STA of signature (−, +, +, +). Let b^± = b^α_± γ^±_α be vectors in spacetime; the spacetime splits of b^± with γ^±_0 are defined as

(40) b^± γ^0_± = b^0_± + b^±,

where b^± = b^i_± σ^±_i are called the relative vectors of b^±. Besides, for the operators ∂^± := γ^α_± ∂_α, the spacetime splits with γ^±_0 are given by

(41) γ^±_0 ∂^± = ∂_0 + ∇^±,

where ∂_μ := ∂/∂x^μ and ∇^± := σ^k_± ∂_k, with x^μ and {σ^k_± := γ^±_0 γ^k_±} as coordinates in spacetime and the reciprocal frames of {σ^±_k} in the relative spaces, respectively. As clearly shown, the spacetime splits of b^+ and ∂^+ are indeed the same as those introduced in the STA of signature (+, −, −, −) 2,17, and the spacetime splits of b^− and ∂^− are those defined in the STA of signature (−, +, +, +).

The timelike vectors cγ^±_0 can be recognized as the four-velocities of some observer, so the spacetime split introduced above is observer dependent; consequently, one of the most powerful applications of the techniques of spacetime split is that they greatly simplify the study of effects involving different observers 2,17. Technically, spacetime split encodes the crucial geometric relationship between STA and its even subalgebra 2: with these techniques, many calculations between vectors in spacetime can be transformed into calculations in the even subalgebra of STA. As a result, based on the various operations in this algebraic formalism, a large number of specific problems can be solved efficiently. Moreover, since the even subalgebras of the STAs of signatures (±, ∓, ∓, ∓) share the same operation rules, by resorting to the techniques of spacetime split one can acquire a signature invariant approach to these problems. We will see that these advantages of spacetime split play a key role in the following treatment of the relevant topics.

Rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime

It is well known that one of the remarkable advantages of STA is that Lorentz boost and spatial rotation can be handled with rotor techniques in an elegant and highly condensed manner 17,18,19,20.
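As a quick sanity check, the Pauli-like relative-vector algebra above, Eqs. (12), (15), (16), and (21), can be verified numerically in a 4×4 matrix representation of the STA of signature (+, −, −, −). The Dirac-style matrices below are an illustrative assumption of this sketch; the text itself works representation-free.

```python
import numpy as np

# Matrix representation of the (+,-,-,-) STA: the geometric product becomes
# the matrix product, gamma_mu gamma_nu + gamma_nu gamma_mu = 2 eta_mu_nu.
p = [np.array([[0, 1], [1, 0]], dtype=complex),      # Pauli matrices
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, O2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, O2], [O2, -I2]])                 # gamma_0 (= gamma^0 here)
g = [np.block([[O2, pk], [-pk, O2]]) for pk in p]    # gamma_k
sigma = [gk @ g0 for gk in g]                        # sigma_k = gamma_k gamma^0
I = g0 @ g[0] @ g[1] @ g[2]                          # pseudoscalar I

# Eq. (15): sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij
for i in range(3):
    for j in range(3):
        anti = sigma[i] @ sigma[j] + sigma[j] @ sigma[i]
        assert np.allclose(anti, 2 * (i == j) * np.eye(4))

# Eq. (12): sigma_1 sigma_2 sigma_3 = I, and Eq. (21): I^2 = -1
assert np.allclose(sigma[0] @ sigma[1] @ sigma[2], I)
assert np.allclose(I @ I, -np.eye(4))

# Eq. (16): sigma_1 sigma_2 - sigma_2 sigma_1 = 2 eps_123 sigma_3 I
assert np.allclose(sigma[0] @ sigma[1] - sigma[1] @ sigma[0], 2 * sigma[2] @ I)
print("relative-vector algebra verified")
```

The same script with the overall metric sign flipped would serve for the (−, +, +, +) signature, in line with the signature invariance stressed in the text.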
As shown in the classical literature 21,22, knowledge of Lorentz boost and spatial rotation is heavily involved in the description of gyroscopic precession, and hence it can be expected that a more efficient approach to this topic will be found in the language of STA. Besides, in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra", it is claimed that the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) provides a signature invariant GA framework for spacetime physics, and thus, when this framework is applied to gyroscopic precession, a signature invariant GA derivation of the precessional angular velocity of the gyroscope spin can be achieved. Therefore, as a preliminary attempt, making use of the two STAs and their "common" even subalgebra to study gyroscopic precession is one objective of the present paper, which, if successful, will become an application paradigm of STA. In view of the many relevant techniques that need to be constructed in this section, the detailed treatment of gyroscopic precession is left to the next section.

In the analysis of gyroscopic precession, rotor techniques on Lorentz boost and spatial rotation are widely used, and therefore these techniques need to be specifically addressed in the two STAs. Rotor techniques are available in the STA of signature (+, −, −, −) 17,18,19,20; however, since the STA of signature (−, +, +, +) is rarely employed, these techniques have not been fully developed in that algebraic formalism, where in particular the expressions of the rotors inducing Lorentz boost and spatial rotation should be clearly established. In this section, by constructing the rotors on the basis of the exponential function defined on the "common" even subalgebra of the two STAs, the general Lorentz boost with velocity in an arbitrary direction and the general spatial rotation in an arbitrary plane are handled in a signature invariant manner.

In addition, the relativistic dynamics of a massive particle in curved spacetime ought to be studied so as to describe the motion of a gyroscope moving around a gravitating source 34. To this end, for a massive particle, the spacetime splits of the velocity, acceleration, momentum, and force four-vectors with the normalized four-velocity of the fiducial observer, at rest in the coordinate system of the spacetime metric, are first derived, and then, with these results, a three-dimensional analogue of Newton's second law for this particle in curved spacetime is achieved. Furthermore, in order to describe the motion of the spin of a gyroscope, the Fermi-Walker derivative in the STA of signature (−, +, +, +) is also constructed by following the approach used in the (+, −, −, −) signature.

In Appendix B of this paper, the signs "±" associated with multivectors are omitted in the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓), so that all the formulas in this algebraic formalism are presented in a neat form. Inspired by this, when formulas in the two STAs are involved hereafter, the following convention is adopted for brevity: the signs "±" associated with multivectors and operators are suppressed, and for equalities like A = F(±B) and C = G(∓D), the signs "+" and "−" in the former equation correspond to the cases of the signatures (+, −, −, −) and (−, +, +, +), respectively, while in the latter equation the correspondence is reversed.

Rotor techniques on Lorentz boost and spatial rotation. In GA, a rotor R is defined as an even multivector satisfying RR̃ = 1 with the property that the map b → R̃bR transforms any vector into another vector 17. Rotors encode an important geometric object and provide an elegant scheme for performing orthogonal transformations in spaces of arbitrary signature; mathematically, the rotor group, formed by the set of rotors, provides a double-cover representation of the connected subgroup of the special orthogonal group. In the present paper, we are only interested in rotors in spacetime, and in this case the rotor group in spacetime is a representation of the group of proper orthochronous Lorentz transformations 17. In the STA of signature (+, −, −, −), rotor techniques on Lorentz boost and spatial rotation have been established 17,18,19,20, which greatly promotes the application of STA in spacetime physics. Of course, in order to complete the discussion of gyroscopic precession in a signature invariant manner, these techniques also need to be explicitly constructed in the STA of signature (−, +, +, +). To facilitate the writing, as in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra", we directly build up rotor techniques in the STAs of signatures (±, ∓, ∓, ∓).

In Appendix B of this paper, a simple method to construct rotors is presented, and it is shown that for a real number α and a unit 2-blade B, e^{αB} is a rotor. Here, we will make use of e^{αB} to handle Lorentz boost and spatial rotation in the two STAs. From Eqs. (8) and (23),

(42a) σ_k = ∓ γ_0 ∧ γ_k,
(42b) σ_i × σ_j = ∓ γ_i ∧ γ_j,

and

(43a) (σ_k)² = 1,
(43b) (σ_i × σ_j)² = −1 (i ≠ j).

Clearly, both σ_k and σ_i × σ_j (i ≠ j) are unit 2-blades, and the signs of their squares differ, which suggests that there are two types of unit 2-blades in spacetime. It is on the basis of the exponential functions of these two types of unit 2-blades that the rotors inducing Lorentz boost and spatial rotation can be constructed.

Let v = v^k σ_k, m = m^i σ_i, and n = n^j σ_j be three arbitrary relative vectors. Consider the bivectors v and m × n; the following results are easily obtained by means of Eqs. (42a)-(43b):

(44a) v = ∓ γ_0 ∧ (v^k γ_k),
(44b) m × n = ∓ (m^i γ_i) ∧ (n^j γ_j),

and

(45a) v² = v^k v^k,
(45b) (m × n)² = − Σ_{i<j} (m^i n^j − m^j n^i)².

The former two equations indicate that both v and m × n are 2-blades, and thus, with the latter two equations, two unit 2-blades are derived,

(46a) e_v := v/√(v²),
(46b) I₂ := (m × n)/√(−(m × n)²),

where a direct calculation verifies that

(47a) (e_v)² = 1,
(47b) (I₂)² = −1.

According to Ref. 17, a proper orthochronous Lorentz transformation can be generated by a rotor R in spacetime, under which a general multivector M transforms double-sidedly as M → R⁻¹MR. Let θ and φ be two real numbers, and construct the rotors associated with e_v and I₂ as e^{(θ/2)e_v} and e^{(φ/2)I₂}, respectively. When they act on vectors x and y, two new vectors x′ and y′ are obtained:

(48a) x′ = e^{−(θ/2)e_v} x e^{(θ/2)e_v},
(48b) y′ = e^{−(φ/2)I₂} y e^{(φ/2)I₂}.

In order to analyze the generated Lorentz transformations in the "common" even subalgebra of the two STAs, the techniques of spacetime split need to be applied. From Eqs. (44a)-(46b) and (A1), the orthogonality and anticommutation of {γ_α} imply

(49a) γ_0 e_v = −e_v γ_0,
(49b) γ_0 I₂ = I₂ γ_0,

and then, with the help of Eq. (B39), one gets

(50a) x′γ^0 = e^{−(θ/2)e_v} (xγ^0) e^{−(θ/2)e_v},
(50b) y′γ^0 = e^{−(φ/2)I₂} (yγ^0) e^{(φ/2)I₂}.

The spacetime splits of x, y, x′, and y′ with γ^0 are provided by applying Eq. (40),

(51a) xγ^0 = x^0 + x,
(51b) x′γ^0 = x′^0 + x′,
(51c) yγ^0 = y^0 + y,
(51d) y′γ^0 = y′^0 + y′,

so that

(52a) x′^0 + x′ = e^{−(θ/2)e_v} (x^0 + x) e^{−(θ/2)e_v},
(52b) y′^0 + y′ = e^{−(φ/2)I₂} (y^0 + y) e^{(φ/2)I₂}.

As stated in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra", the relative space is a Euclidean space of dimension 3, and a relative vector, although a spacetime bivector, can be treated as a multivector of grade 1; in terms of its three-dimensional geometric meaning, a relative vector is just a vector 17. Similarly, the commutator product of two relative vectors, referred to as a relative bivector, also has three-dimensional geometric meaning. Comparing Eq. (B8) with Eq. (A1), one finds that the commutator product of two relative vectors plays the role that the wedge product of two vectors plays in general finite dimensional GA, and thus, in the three-dimensional relative space, it encodes an oriented plane 3,17.

The relative vectors x, x′ and y, y′ can be decomposed with respect to e_v and I₂ as

(53a) x = (x·e_v) e_v + (x × e_v) e_v,
(53b) x′ = (x′·e_v) e_v + (x′ × e_v) e_v,
(53c) y = (y × I₂) I₂⁻¹ + (y ∧ I₂) I₂⁻¹,
(53d) y′ = (y′ × I₂) I₂⁻¹ + (y′ ∧ I₂) I₂⁻¹,

where

(54a) (x × e_v) ∧ e_v = 0,
(54b) (x′ × e_v) ∧ e_v = 0,
(54c) (y × I₂) ∧ I₂⁻¹ = 0,
(54d) (y′ × I₂) ∧ I₂⁻¹ = 0.

Defining

(55a) x_∥ := (x·e_v) e_v, x_⊥ := (x × e_v) × e_v = (x × e_v) e_v,
(55b) x′_∥ := (x′·e_v) e_v, x′_⊥ := (x′ × e_v) × e_v = (x′ × e_v) e_v,
(55c) y_∥ := (y × I₂) × I₂⁻¹ = (y × I₂) I₂⁻¹, y_⊥ := (y ∧ I₂) · I₂⁻¹ = (y ∧ I₂) I₂⁻¹,
(55d) y′_∥ := (y′ × I₂) × I₂⁻¹ = (y′ × I₂) I₂⁻¹, y′_⊥ := (y′ ∧ I₂) · I₂⁻¹ = (y′ ∧ I₂) I₂⁻¹,

we have

(56a) x = x_∥ + x_⊥,
(56b) x′ = x′_∥ + x′_⊥,
(56c) y = y_∥ + y_⊥,
(56d) y′ = y′_∥ + y′_⊥,

together with

(57a) x_∥ × e_v = ⟨(x·e_v) e_v e_v⟩₂ = 0, x_⊥ · e_v = ⟨(x × e_v) e_v e_v⟩₀ = 0,
(57b) x′_∥ × e_v = ⟨(x′·e_v) e_v e_v⟩₂ = 0, x′_⊥ · e_v = ⟨(x′ × e_v) e_v e_v⟩₀ = 0,
(57c) y_∥ ∧ I₂ = ⟨(y × I₂) I₂⁻¹ I₂⟩₄ = 0, y_⊥ × I₂ = ⟨(y ∧ I₂) I₂⁻¹ I₂⟩₂ = 0,
(57d) y′_∥ ∧ I₂ = ⟨(y′ × I₂) I₂⁻¹ I₂⟩₄ = 0, y′_⊥ × I₂ = ⟨(y′ ∧ I₂) I₂⁻¹ I₂⟩₂ = 0.

In this sense, Eqs. (57a) and (57b) indicate that x_∥ (x′_∥) and x_⊥ (x′_⊥) are, respectively, the components of x (x′) parallel and perpendicular to e_v. Of course, I₂ also defines an oriented plane in the relative space. Let

(58) l := (m ×₃ n)/√((m ×₃ n)²),

so that

(59) l = −I₂ I ⇔ I₂ = l I;

then, together with Eqs.

(60a) l² = 1,
(60b) m·l = 0,
(60c) n·l = 0,

and, for a relative vector a,

(61a) a × I₂ = ⟨a l I⟩₂ = (a × l) I,
(61b) a ∧ I₂ = ⟨a l I⟩₄ = (a·l) I,

one obtains

(62a) y_∥·l = 0, y_⊥ × l = 0,
(62b) y′_∥·l = 0, y′_⊥ × l = 0,

which explicitly show that y_∥ (y′_∥) and y_⊥ (y′_⊥) are, respectively, the components of y (y′) parallel and perpendicular to the plane defined by I₂.

When the relative vectors x, x′, y, and y′ in Eqs. (52a) and (52b) are replaced by their decompositions (56a)-(56d), a clear physical explanation of the Lorentz transformations induced by e_v and I₂ in Eqs. (48a) and (48b) can be achieved. To this end, the following properties of the components of x, x′, y, and y′ need first to be derived by combining the equations above:

(63a) x_∥ e_v = e_v x_∥, x_⊥ e_v = −e_v x_⊥,
(63b) y_∥ I₂ = −I₂ y_∥, y_⊥ I₂ = I₂ y_⊥.

Equations (52a) and (52b) then become

(64a) x′^0 + x′_∥ + x′_⊥ = e^{−θe_v} (x^0 + x_∥) + x_⊥,
(64b) y′^0 + y′_∥ + y′_⊥ = y^0 + e^{−φI₂} y_∥ + y_⊥.

In order to handle these two equations, e^{−θe_v} and e^{−φI₂} should be rewritten as

(65a) e^{−θe_v} = cosh θ − e_v sinh θ =: γ(1 − β),
(65b) e^{−φI₂} = cos φ − I₂ sin φ,

with

(66a) β := tanh θ,
(66b) β := β e_v,
(66c) γ := cosh θ = 1/√(1 − β²),

and then, by using the grade operator ⟨···⟩ and the orthogonal projection successively, we finally arrive at

(67a) x′^0 = γ(x^0 − β·x), x′_∥ = γ(x_∥ − x^0 β), x′_⊥ = x_⊥,
(67b) y′^0 = y^0, y′_∥ = e^{−φI₂} y_∥ = cos φ y_∥ − sin φ I₂ y_∥, y′_⊥ = y_⊥.

In the above derivation, Eqs. (57a) and (B8) have been employed; besides, one also needs to note that, in view of

(68a) I₂ y_∥ = I₂ × y_∥,
(68b) (I₂ y_∥)·l = ⟨I₂ y_∥ l⟩ = I₂·(y_∥ × l) = 0,

I₂ y_∥ is a relative vector lying in the plane defined by I₂. Here, in order to reasonably interpret the equations obtained in this subsection, the active view of Lorentz transformations needs to be adopted 5,6,7. For the Lorentz boost, relating the parameter θ to the velocity v by

(69a) tanh θ = √(v²)/c,
(69b) β = v/c,

Eqs. (67a), together with (56a) and (56b), can be recast as

(70) x′^0 = γ(x^0 − (v·x)/c), x′ = x + v((γ − 1)(x·v)/v² − γ x^0/c),

which is the general Lorentz boost with velocity v in an arbitrary direction. For the spatial rotation, Eqs. (67b) give

(71a) y′_∥·y′_∥ = ⟨y_∥ e^{φI₂} e^{−φI₂} y_∥⟩ = y_∥·y_∥,
(71b) y_∥ y′_∥ = (y_∥·y_∥) e^{φI₂} ⇒ y_∥·y′_∥ = (y_∥·y_∥) cos φ, y_∥ × y′_∥ = (y_∥·y_∥) sin φ I₂.

Moreover, it needs to be stressed that, for the spatial rotation, Eq. (71b) shows that if φ > 0, the relative bivector y_∥ × y′_∥ has the same orientation as I₂ in the three-dimensional geometry.

Let us recall that the relative vectors v, m, and n were chosen arbitrarily at the beginning; therefore, with the rotors e^{(θ/2)e_v} and e^{(φ/2)I₂}, the general Lorentz boost with velocity in an arbitrary direction and the general spatial rotation in an arbitrary plane can be handled. Furthermore, since Eqs. (67a), (67b), (70), (71a), and (71b) are derived in the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓), all of these equations are presented in a signature invariant form.

According to the previous discussion, the Lorentz boost and the spatial rotation are first generated in Eqs. (48a) and (48b); however, only after these two equations are transformed into the "common" even subalgebra of the two STAs are their physical explanations achieved in the three-dimensional geometry. In this process, the techniques of spacetime split have been employed, which implies that the intuitive pictures formed in the relative space are observer dependent. In addition, one may also have found that it is because the "common" even subalgebra of the two STAs is independent of the signatures that the original equation (48a) or (48b) has the same three-dimensional meaning in the two signatures, and thus a signature invariant method for handling Lorentz boost and spatial rotation is gained. In fact, many topics in spacetime physics can be dealt with in this manner, and inspired by this, we will apply this method to the study of gyroscopic precession in the next section, so that a signature invariant GA derivation of the precessional angular velocity of the gyroscope spin can be found.

As the final task of this subsection, the pure Lorentz boost (cf. Ref. 17) between two vectors of the same magnitude will be discussed based on the previous results. Assuming that x′ = cγ_0, one can directly verify that

(72a) ±x·x′ = γc²,
(72b) ±x∧x′ = γc² β,

and, together with

(73a) x² = x′² = ±c²,
(73b) e_v = ± (x∧x′)/√((x∧x′)²),

the rotor e^{(θ/2)e_v} can be rewritten as

(74) e^{(θ/2)e_v} = (1 + cosh θ + e_v sinh θ)/√(2(1 + cosh θ)) = (1 + γ + γβ)/√(2(1 + γ)) = (c² ± x x′)/√(2c²(c² ± x·x′)) = e^{±(θ/2)(x∧x′)/√((x∧x′)²)}.

Hence, with

(75) L := (c² ± x x′)/√(2c²(c² ± x·x′)) = e^{±(θ/2)(x∧x′)/√((x∧x′)²)},

x is mapped to x′ by

(76) x′ = L̃ x L.

According to Ref. 17, the above L in the (+, −, −, −) signature is exactly the rotor that determines the pure Lorentz boost between x and x′, and motivated by this, we claim that the above L in the (−, +, +, +) signature plays the same role. It should be noted that the validity of Eq. (76) can be verified directly from Eqs. (73a) and (75) alone, so it does not depend on the selection of the frame {γ_α}. In the treatment of gyroscopic precession in the next section, Eqs. (75) and (76) will be used to generate the pure Lorentz boost between a comoving orthonormal frame of the gyroscope and a local orthonormal tetrad at rest in the coordinate system of the spacetime metric, which greatly improves the computational efficiency.

Relativistic dynamics of a massive particle in curved spacetime. As mentioned previously, the description of the motion of a gyroscope requires that the relativistic dynamics of a massive particle in curved spacetime be studied 34; to this end, a brief introduction to the relevant GA techniques for General Relativity formulated in Ref. 33 needs to be given, so that the treatment of gyroscopic precession in the following can be put on a solid theoretical footing.
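The rotor-induced boost of Eqs. (48a), (67a), and (70) can be checked numerically in the same kind of 4×4 matrix representation of the (+, −, −, −) STA used earlier; the representation is an illustrative assumption of this sketch, since the derivation in the text is representation-free.

```python
import numpy as np

# Matrix representation of the (+,-,-,-) STA (geometric product = matrix product).
p = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, O2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, O2], [O2, -I2]])
g = [np.block([[O2, pk], [-pk, O2]]) for pk in p]
sigma = [gk @ g0 for gk in g]

theta = 0.73                              # rapidity, tanh(theta) = |v|/c
e_v = sigma[0]                            # unit relative vector along sigma_1
# Since e_v^2 = 1 (Eq. 43a), e^{a e_v} = cosh(a) + sinh(a) e_v
R = np.cosh(theta / 2) * np.eye(4) + np.sinh(theta / 2) * e_v
Rinv = np.cosh(theta / 2) * np.eye(4) - np.sinh(theta / 2) * e_v

x = 1.3 * g0 + 0.4 * g[0] + 0.2 * g[1]    # x = x^0 gamma_0 + x^1 gamma_1 + x^2 gamma_2
xp = Rinv @ x @ R                         # Eq. (48a)

# Extract components x'^mu = <x' gamma^mu> via the matrix trace
gup = [g0, -g[0], -g[1], -g[2]]           # reciprocal tetrad gamma^mu
comp = [np.trace(xp @ gm).real / 4 for gm in gup]

gam, b = np.cosh(theta), np.tanh(theta)
# Eqs. (67a)/(70): x'^0 = gamma(x^0 - beta x^1), x'^1 = gamma(x^1 - beta x^0),
# and the components perpendicular to e_v are unchanged
assert np.isclose(comp[0], gam * (1.3 - b * 0.4))
assert np.isclose(comp[1], gam * (0.4 - b * 1.3))
assert np.isclose(comp[2], 0.2) and np.isclose(comp[3], 0.0)
print("rotor boost reproduces the Lorentz transformation")
```

Replacing e_v by a relative bivector such as sigma[0] @ sigma[1] (whose square is −1, Eq. (43b)) and cosh/sinh by cos/sin yields the analogous check for the spatial rotation (48b).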
In order to develop a GA description of curved spacetime, one should define a local orthonormal tetrad {γ_α} by orthonormalization of a coordinate frame and then generate the corresponding STA. Let x^μ and {g_μ} be local coordinates in a curved spacetime and the associated coordinate frame, respectively. Assume that a collection of fiducial observers is distributed over space, each at rest in the coordinate system. Then the components of the metric with respect to the coordinate frame {g_μ},

(77) g_μν := g_μ · g_ν,

satisfy the conditions 44

(78a) ± g_0 · g_0 = ± g_00 > 0,
(78b) −(g_1 ∧ g_0) · (g_0 ∧ g_1) = −det[(g_00, g_01), (g_10, g_11)] > 0,
(78c) ±(g_2 ∧ g_1 ∧ g_0) · (g_0 ∧ g_1 ∧ g_2) = ± det[(g_00, g_01, g_02), (g_10, g_11, g_12), (g_20, g_21, g_22)] > 0,
(78d) −(g_3 ∧ g_2 ∧ g_1 ∧ g_0) · (g_0 ∧ g_1 ∧ g_2 ∧ g_3) = −det[(g_00, g_01, g_02, g_03), (g_10, g_11, g_12, g_13), (g_20, g_21, g_22, g_23), (g_30, g_31, g_32, g_33)] > 0,

and the local orthonormal tetrad {γ_α} can be constructed from {g_μ} by the orthonormalization

(79) γ_0 = g_0/√(±g_0·g_0),
 γ_1 = ± g_0 (g_0∧g_1) / (√(±g_0·g_0) √(−(g_1∧g_0)·(g_0∧g_1))),
 γ_2 = − (g_1∧g_0)(g_0∧g_1∧g_2) / (√(−(g_1∧g_0)·(g_0∧g_1)) √(±(g_2∧g_1∧g_0)·(g_0∧g_1∧g_2))),
 γ_3 = ± (g_2∧g_1∧g_0)(g_0∧g_1∧g_2∧g_3) / (√(±(g_2∧g_1∧g_0)·(g_0∧g_1∧g_2)) √(−(g_3∧g_2∧g_1∧g_0)·(g_0∧g_1∧g_2∧g_3))),

so that

(80) γ_α · γ_β = η_αβ = diag(±, ∓, ∓, ∓),

and

(81a) γ_0 ∧ γ_1 = (g_0 ∧ g_1)/√(−(g_1∧g_0)·(g_0∧g_1)),
(81b) γ_0 ∧ γ_1 ∧ γ_2 = (g_0 ∧ g_1 ∧ g_2)/√(±(g_2∧g_1∧g_0)·(g_0∧g_1∧g_2)),
(81c) γ_0 ∧ γ_1 ∧ γ_2 ∧ γ_3 = (g_0 ∧ g_1 ∧ g_2 ∧ g_3)/√(−(g_3∧g_2∧g_1∧g_0)·(g_0∧g_1∧g_2∧g_3)).

Associated with the metric there is a unique torsion-free and metric-compatible derivative operator ∇ 45, one of whose important properties is that it reduces to ∂ when acting on scalar functions. According to Ref. 33, the covariant derivative of a multivector A along a vector b is evaluated by the formula

(82) b·∇A = b·∂A + ω(b) × A.

Here, the operator b·∂ satisfies

(83a) b·∂γ_α = b·∂γ^β = 0,
(83b) b·∂φ = b·∇φ,

with {γ^β} and φ as the reciprocal tetrad of {γ_α} and a scalar field in spacetime, respectively. ω(b), the bivector connection associated with {γ_α}, is defined by

(84) b·∇γ_α = ω(b) × γ_α,

where, if b = b^μ g_μ, the expression of ω(b) is given by 33,35

(85) ω(b) = b^μ ω(g_μ),

with

(86) ω(g_μ) = (1/2) g^ρ ∧ g^σ (g_σ·∂ g_μρ) + (1/2) g^ρ ∧ (g_μ·∂ g_ρ).

With the aid of the corresponding GA technique 3, {g^ν}, the reciprocal frame of {g_μ}, is constructed as

(87) g^0 = (g_1∧g_2∧g_3)(g_0∧g_1∧g_2∧g_3)⁻¹,
 g^1 = −(g_0∧g_2∧g_3)(g_0∧g_1∧g_2∧g_3)⁻¹,
 g^2 = (g_0∧g_1∧g_3)(g_0∧g_1∧g_2∧g_3)⁻¹,
 g^3 = −(g_0∧g_1∧g_2)(g_0∧g_1∧g_2∧g_3)⁻¹,

and, from Eq. (79), the coordinate frame {g_μ} can be expanded in the local orthonormal tetrad {γ_α} as (88). Because only the knowledge of the covariant derivative and the bivector connection will be involved in the discussion of gyroscopic precession, other GA techniques for General Relativity are not covered here; the reader wishing to go into more detail may consult Ref. 33.

Next, for a massive particle, the spacetime splits of the velocity, acceleration, momentum, and force four-vectors with the normalized four-velocity of the fiducial observer will be derived, so that the relativistic dynamics of this particle in curved spacetime can be studied. Let us first identify the proper time of the fiducial observers. As indicated earlier, fiducial observers are at rest in the coordinate system x^μ, which means that their worldlines are the coordinate curves with x^i = const. (i = 1, 2, 3), namely the t := x^0/c coordinate curves. Consequently, if t_0 denotes the proper time of each fiducial observer, ±c²(dt_0)² = g_00 c²(dt)² holds along his worldline, which gives Eq. (89).

Assuming that x^μ(τ) is the worldline of a massive particle with τ as the proper time, the four-velocity of the particle can be written as (90) 22,30. We will prove that Eq. (91), γ_u = dt_0/dτ, holds. Consider an event P on the particle's worldline. The t coordinate curve with x^i = x^i(P) (i = 1, 2, 3) passes through P and is the worldline of a fiducial observer. Based on the orthonormal tetrad {γ_μ|_{x^i=x^i(P)}} carried by this fiducial observer, his proper reference frame can be defined, and thus a local coordinate system (y^0 =: ct_0, y^1, y^2, y^3) covering a finite domain near his worldline can also be defined. In this coordinate system, if the worldline of the particle is y^μ(τ), its four-velocity at the event P is given by (92). Comparing Eq. (92) with Eq. (90), we get

γ_u|_P = (dt_0/dτ)|_P.

P is an arbitrary event on the particle's worldline, and due to Eq. (89), the derivative (93) does not depend on the selection of the coordinate system y^μ, so Eq. (91) holds. By applying Eq. (40), the spacetime split of the four-velocity of the particle with γ^0 yields (94), where, because the product of uγ^0 with its reverse equals ±u² = c², one obtains γ_u = 1/√(1 − u²/c²). Since cγ_0 can be identified as the four-velocity of some fiducial observer, u is actually the relative velocity measured in his orthonormal tetrad, which can also be inferred from Eq. (92).
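In flat spacetime, the four-velocity split of Eq. (94) and the normalization u² = c² can be checked numerically in the illustrative 4×4 matrix representation of the (+, −, −, −) STA used before (an assumption of this sketch; the text works representation-free).

```python
import numpy as np

# Matrix representation of the (+,-,-,-) STA.
p = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, O2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, O2], [O2, -I2]])                 # gamma_0 = gamma^0
g = [np.block([[O2, pk], [-pk, O2]]) for pk in p]    # gamma_k
sigma = [gk @ g0 for gk in g]                        # sigma_k = gamma_k gamma^0

c = 1.0
ui = np.array([0.3, -0.2, 0.1])                      # relative velocity components u^i
gamma_u = 1.0 / np.sqrt(1.0 - ui @ ui / c**2)
u4 = gamma_u * (c * g0 + sum(ui[k] * g[k] for k in range(3)))   # Eq. (90)
rel = sum(ui[k] * sigma[k] for k in range(3))        # relative vector u = u^i sigma_i

assert np.allclose(u4 @ u4, c**2 * np.eye(4))        # timelike normalization u^2 = c^2
assert np.allclose(u4 @ g0, gamma_u * (c * np.eye(4) + rel))    # Eq. (94)
print("four-velocity split verified")
```

The scalar part of the split is γ_u c and the bivector part is the relative velocity scaled by γ_u, exactly as Eq. (94) states.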
(88) g_0 = √(±g_0·g_0) γ_0,
 g_1 = ± (g_01/√(±g_0·g_0)) γ_0 + (√(−(g_1∧g_0)·(g_0∧g_1))/√(±g_0·g_0)) γ_1,
 g_2 = ± (g_02/√(±g_0·g_0)) γ_0 − ((g_2∧g_0)·(g_0∧g_1))/(√(±g_0·g_0) √(−(g_1∧g_0)·(g_0∧g_1))) γ_1 + (√(±(g_2∧g_1∧g_0)·(g_0∧g_1∧g_2))/√(−(g_1∧g_0)·(g_0∧g_1))) γ_2,
 g_3 = ± (g_03/√(±g_0·g_0)) γ_0 − ((g_3∧g_0)·(g_0∧g_1))/(√(±g_0·g_0) √(−(g_1∧g_0)·(g_0∧g_1))) γ_1 ± ((g_3∧g_1∧g_0)·(g_0∧g_1∧g_2))/(√(−(g_1∧g_0)·(g_0∧g_1)) √(±(g_2∧g_1∧g_0)·(g_0∧g_1∧g_2))) γ_2 + (√(−(g_3∧g_2∧g_1∧g_0)·(g_0∧g_1∧g_2∧g_3))/√(±(g_2∧g_1∧g_0)·(g_0∧g_1∧g_2))) γ_3,

(89) dt_0/dt = √(±g_00),

(90) u = γ_u (cγ_0 + u^i γ_i),

(91) γ_u = dt_0/dτ,

(92) u|_P = (dt_0/dτ)|_P [c γ_0|_P + (dy^i/dt_0)|_P γ_i|_P],

(93) dt_0/dτ = (dt_0/dt)(dt/dτ) = √(±g_00) (dt/dτ),

(94) uγ^0 = γ_u (c + u), with u := u^i σ_i.

After clarifying these concepts, we are in a position to derive the spacetime split of the four-acceleration of the particle with γ^0, an essential ingredient in the formalism of relativistic dynamics. The four-acceleration of the particle, a = Du/dτ = u·∇u, follows immediately from Eq. (82), and then, by employing Eq. (40), its spacetime split with γ^0 is

(95) aγ^0 = (u·∂u)γ^0 + (ω(u)×u)γ^0.

The first term is

(97) (u·∂u)γ^0 = u·∂(cγ_u) + (u·∂(γ_u u^i)) σ_i = u·∇(cγ_u) + (u·∇(γ_u u^i)) σ_i = c dγ_u/dτ + (dγ_u/dτ) u + γ_u du/dτ = γ_u⁴ (u·a)/c + γ_u⁴ ((u·a)/c²) u + γ_u² a,

in which Eqs.

(98a) a := du/dt_0,
(98b) dγ_u/dt_0 = γ_u³ (u·a)/c²

have been used. The second term can be evaluated by splitting the bivector connection ω(u) into an electric part ω^(E)(u) and a magnetic part ω^(B)(u),

(100a) ω^(E)(u) := (ω(u)·(γ^k ∧ γ^0)) (γ_0 ∧ γ_k),
(100b) ω^(B)(u) := Σ_{i<j} (ω(u)·(γ^j ∧ γ^i)) (γ_i ∧ γ_j),
(100c) ω(u) = ω^(E)(u) + ω^(B)(u),

which satisfy

(101a) γ_0 ω(u) γ_0 = −ω^(E)(u) + ω^(B)(u),
(101b) ω^(E)(u) = (1/2)(ω(u) − γ_0 ω(u) γ_0),
(101c) ω^(B)(u) = (1/2)(ω(u) + γ_0 ω(u) γ_0).

One then finds

(99) (ω(u)×u)γ^0 = (ω(u)·u)·γ^0 + (ω(u)·u)∧γ^0 = γ_u u·ω^(E)(u) + γ_u u·ω^(B)(u) + γ^0∧(u·ω^(E)(u)) + γ^0∧(u·ω^(B)(u)),

with

(102a) γ_u u·ω^(B)(u) = γ_u u^i (γ_i∧γ_0)·ω^(B)(u) = 0,
(102b) γ^0∧(u·ω^(E)(u)) = γ^0∧(cγ_u γ_0·ω^(E)(u)) + γ^0∧(γ_u u^i γ_i·ω^(E)(u)) = cγ_u γ^0∧(γ_0·ω^(E)(u)) = cγ_u ω^(E)(u),
(102c) γ^0∧(u·ω^(B)(u)) = γ^0∧(γ_u u^i γ_i·ω^(B)(u)) + γ^0·(γ_u u^i γ_i∧ω^(B)(u)) = γ_u ⟨γ^0 u^i γ_i ω^(B)(u)⟩₂ = γ_u ω^(B)(u) × u,

so that

(103) aγ^0 = γ_u⁴ (u·a)/c + γ_u⁴ ((u·a)/c²) u + γ_u² a + γ_u u·ω^(E)(u) + cγ_u ω^(E)(u) − γ_u u×ω^(B)(u).

Let m be the rest mass of the particle. The spacetime splits of its four-momentum p = mu and of the four-force f = Dp/dτ = u·∇p acting on it also need to be evaluated, so that a three-dimensional analogue of Newton's second law in curved spacetime can be achieved. Starting from Eq. (94), the spacetime split of the particle's four-momentum p with γ^0 is

(104) pγ^0 = E/c + p,

where

(105a) E := γ_u mc² = c p·γ^0,
(105b) p := γ_u m u = p∧γ^0

are the energy and the relative momentum of the particle measured by the fiducial observer (cf. Ref. 17), respectively. The relationship between E and p can be obtained directly from the product of pγ^0 with its reverse, which equals m²c²:

(106) E² = p²c² + m²c⁴,

exactly as in Special Relativity.

Assuming that the particle's rest mass remains unchanged as it moves, namely dm/dτ = 0, the four-force f acting on it can be expressed as

(107) f = ma.

When the spacetime is flat and x^μ are coordinates in an inertial frame of reference with g_μν = η_μν, fiducial observers reduce, by definition, to inertial observers. In such a case, Eq. (89) implies dt_0 = dt, and the relative force f = f^i σ_i acting on the particle should be given by f^i = dp^i/dt 46. Thus, using σ_i = γ_i γ^0, one can recast f as

f = (dp^i/dt_0) σ_i = (dp/dt_0) ∧ γ^0.

In curved spacetime, we claim that the corresponding relative force f measured by the fiducial observer is related to Dp/dt_0 in the same way:

(108) f = (Dp/dt_0)∧γ^0 = (dτ/dt_0)(Dp/dτ)∧γ^0 = (1/γ_u) m a∧γ^0 = m (γ_u³ ((u·a)/c²) u + γ_u a + c ω^(E)(u) − u×ω^(B)(u)).

Equation (108) is a three-dimensional analogue of Newton's second law in curved spacetime, and it constitutes the core content of the relativistic dynamics of a massive particle. In addition,

(109) f·u = m (γ_u (u·a)(γ_u² u²/c² + 1) + c ω^(E)(u)·u − (u×ω^(B)(u))·u) = m (γ_u³ u·a + c u·ω^(E)(u)) = c (Dp/dt_0)·γ^0,

and the spacetime split of the four-force is

(110) fγ^0 = γ_u ((f·u)/c + f).

In the above discussion, the key point is that the relative velocity, relative acceleration, relative momentum, and relative force of the particle can be reasonably defined in the orthonormal tetrad carried by the fiducial observer. Evidently, in terms of their three-dimensional geometric meaning in the relative space, these relative vectors ought to be interpreted as the counterparts of the corresponding three-vectors in tensor language. When the spacetime is flat, the bivector connection ω(u) and its electric part ω^(E)(u) and magnetic part ω^(B)(u) vanish. In this case, by considering the components of these relative vectors in the rest frame of the fiducial observer, namely {σ_k}, one can verify that all the above results reduce to those of Special Relativity. Therefore, the formalism of relativistic dynamics of a massive particle constructed in this subsection is an elegant generalization of the classical one in flat spacetime.

In the tetrad formalism of General Relativity 47, the covariant derivative of a vector b = b^α γ_α along the coordinate frame vector g_μ is given by (111), where the quantities (112) are the spin connection coefficients; due to the metric compatibility condition, they satisfy (113) 48. Using Eqs. (84) and (A14), one obtains (114), which means that the bivector connection ω(g_μ) can be expressed as (115). This suggests that when the relative vectors in Eq. (108) are expanded in the frame {σ_k}, the corresponding generalization of Newton's second law in the tetrad formalism will also be acquired. Compared with the results in the tetrad formalism, the results in this paper are presented in the form of geometric objects, so they are endowed with a higher degree of clarity. Besides, as highlighted before, since the operations in the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) are independent of the signatures, the relevant results, like Eqs. (95), (106), (108), and (109), can be handled in a signature invariant manner. As a primary application of the signature invariant GA framework provided by the "common" even subalgebra of the two STAs, the treatment of relativistic dynamics of a massive particle in this subsection provides a paradigm for achieving a signature invariant approach to spacetime physics in curved spacetime.

In order to depict the motion of the spin of a gyroscope, the behavior of vector fields along the worldline of the particle also needs to be studied; here, we focus only on the Fermi-Walker derivatives in the (±, ∓, ∓, ∓) signatures. Their classical forms, written in tensor language, are available in Refs. 49,50, and recasting them in the STAs of the two signatures is a straightforward task.
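In the flat-spacetime limit, where the bivector connection vanishes, the energy and relative momentum of Eqs. (105a) and (105b) satisfy the dispersion relation (106); a minimal numerical sketch (the numbers are arbitrary test values):

```python
import numpy as np

# Flat-spacetime check of Eqs. (105a), (105b), and (106):
# E = gamma_u m c^2 and p = gamma_u m u give E^2 = p^2 c^2 + m^2 c^4.
c = 299792458.0
m = 0.3                                        # rest mass, kg (test value)
u = np.array([0.2 * c, -0.1 * c, 0.35 * c])    # relative velocity of the particle
gamma_u = 1.0 / np.sqrt(1.0 - u @ u / c**2)    # Lorentz factor
E = gamma_u * m * c**2                         # Eq. (105a)
p = gamma_u * m * u                            # Eq. (105b)
assert np.isclose(E**2, (p @ p) * c**2 + m**2 * c**4, rtol=1e-12)
print("E^2 = p^2 c^2 + m^2 c^4 verified")
```

The identity holds exactly, since E² − p²c² = γ_u² m²c⁴ (1 − u²/c²) = m²c⁴ for any subluminal u.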
Hence, the results are provided directly as follows: the Fermi-Walker derivative of a vector field p(τ ) along the particle's worldline in the STAs of signatures (±, ∓, ∓, ∓) is given by Eq. (116); if D F p(τ )/dτ = 0 , the vector field p(τ ) is said to be Fermi-Walker transported along the particle's worldline. For a torque-free gyroscope moving in spacetime, any nongravitational forces acting on it are applied at its center of mass, and in this case the spin of the gyroscope is Fermi-Walker transported along its worldline 21 . In the next section, we will take the transport equation satisfied by the gyroscope spin as the starting point for the discussion of gyroscopic precession. Interestingly, by means of the Leibniz rule and formula (117), with B a bivector in spacetime, the above form of the Fermi-Walker derivative can readily be extended to a multivector field A(τ ) along the worldline of the particle, namely Eq. (118); readers who are interested in this conclusion could attempt to prove it.

A GA approach to gyroscopic precession in the Lense-Thirring spacetime

According to the prediction of General Relativity, the spin of a gyroscope precesses relative to the asymptotic inertial frames as it moves around a rotating spherical source 22 . Conventionally, by following the standard method in tensor language 21,22 , the precessional angular velocity of the gyroscope spin can be evaluated under the WFSM approximation. In General Relativity, the time-dependent metric for the external gravitational field of a spatially compactly supported source, presented in the form of a multipole expansion, is derived under the WFSM approximation in Ref. 30 . Since in this paper we are interested only in uniformly rotating spherical sources such as the Earth, the spacetime is stationary, and only the leading pole moments of the source need to be considered.
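As a numerical illustration of Fermi-Walker transport (a flat-spacetime, tensor-component sketch rather than the paper's GA formalism; the signature (−, +, +, +), units with c = 1, and a uniformly accelerated worldline are assumptions made here for the example), one can integrate ds/dτ = u(a · s) − a(u · s) and watch the two invariants s · u and s² being preserved along the worldline:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])          # signature (-,+,+,+), c = 1
dot = lambda a, b: a @ eta @ b                # Minkowski inner product

g = 1.0                                       # proper acceleration along x
u_of = lambda tau: np.array([np.cosh(g*tau), np.sinh(g*tau), 0.0, 0.0])
a_of = lambda tau: np.array([g*np.sinh(g*tau), g*np.cosh(g*tau), 0.0, 0.0])

def rhs(tau, s):
    """Fermi-Walker transport: ds/dtau = u (a.s) - a (u.s)."""
    u, a = u_of(tau), a_of(tau)
    return u * dot(a, s) - a * dot(u, s)

s = np.array([0.0, 1.0, 0.0, 0.0])            # spatial vector with s.u = 0 at tau = 0
tau, h = 0.0, 1.0e-3
for _ in range(1000):                         # RK4 integration up to tau = 1
    k1 = rhs(tau, s)
    k2 = rhs(tau + h/2, s + h/2*k1)
    k3 = rhs(tau + h/2, s + h/2*k2)
    k4 = rhs(tau + h, s + h*k3)
    s = s + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
    tau += h

print(dot(s, u_of(tau)))   # ~0: orthogonality to the four-velocity is preserved
print(dot(s, s))           # ~1: the length of s is preserved
print(s[:2])               # ~ (sinh 1, cosh 1): the boosted spatial axis
```

For this hyperbolic worldline the transported vector has the closed form s(τ ) = (sinh gτ , cosh gτ , 0, 0), so the integration can be compared term by term with the exact solution.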
Consequently, in such a case, the metric reduces to the Lense-Thirring metric 30 , and the spacetime is accordingly known as the Lense-Thirring spacetime. When a torque-free gyroscope moves in this spacetime, its spin undergoes three types of precession, namely the de Sitter precession, the Lense-Thirring precession, and the Thomas precession, which result, respectively, from gyroscopic motion through the spacetime curved by the mass of the source, from the rotation of the source, and from gyroscopic non-geodesic motion 31 . Today, experiments designed around these precession effects have become an important method of testing gravitational theories.

(111) g µ · ∇b = D µ b α γ α = (∂ µ b α + ω µ α β b β )γ α ,
(112) ω µ α β := (g µ · ∇γ β ) · γ α ,
(113) ω µαβ = −ω µβα with ω µαβ = ω δ µβ η δα ,
(114) ω µαβ = (g µ · ∇γ β ) · γ α = (ω(g µ ) · γ β ) · γ α = ω(g µ ) · (γ β ∧ γ α ),
(115) ω(g µ ) = (1/2) ω µαβ γ α ∧ γ β ,
(116) D F p(τ )/dτ = Dp(τ )/dτ ± (1/c 2 )(u ∧ a) · p(τ ),
(117) B × (C ∧ D) = (B × C) ∧ D + C ∧ (B × D),
(118) D F A(τ )/dτ = DA(τ )/dτ ± (1/c 2 )(u ∧ a) × A(τ ).

In the traditional description of gyroscopic precession based on tensor language, one always needs to work with the components of some tensor in a chosen coordinate frame, so many equations have a low degree of clarity. In the language of STA, a physically clearer approach to this topic can be expected, since only geometric objects are involved in the calculation 32 .
In this section, as a comprehensive application of the STAs of signatures (±, ∓, ∓, ∓) formulated in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra" and the GA techniques constructed in "Rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime", a GA approach to gyroscopic precession is provided: for a gyroscope moving in the Lense-Thirring spacetime, the precessional angular velocity of its spin is derived in a signature invariant manner. The GA description of curved spacetime and the relevant GA techniques for General Relativity introduced at the beginning of "Relativistic dynamics of a massive particle in curved spacetime" are still adopted, and here we let x µ and {g µ } be local coordinates in the Lense-Thirring spacetime and the associated coordinate frame, respectively. In addition, it should be pointed out that some physical quantities in this section and in Appendix C are presented in the form of a 1/c expansion, where 1/c is used as the WFSM parameter 36 . Since the Lense-Thirring metric is expanded only up to 1/c 3 order, the framework of linearized General Relativity is sufficient to analyze gyroscopic precession 30,[37][38][39] , and in such a case the coordinates (x µ ) =: (ct, x i ) are treated as though they were Minkowski coordinates in flat space 51,52 . Consider a torque-free gyroscope moving in the Lense-Thirring spacetime, and denote by x µ (τ ) its worldline, with τ the proper time. Assuming that the four-force acting on the gyroscope is f, from Eq. (107) its four-acceleration a is determined by Eq. (119), f = ma, with m its rest mass. In fact, Eq. (119) should be derived from the Mathisson-Papapetrou-Tulczyjew-Dixon (MPTD) equations, where the term related to the curvature tensor has been omitted because the gyroscope scale is very much smaller than the characteristic dimensions of the gravitational field 34 . In accordance with Refs.
21,22 , the spin s of the gyroscope (i.e., its angular momentum vector) is always orthogonal to its four-velocity u and is Fermi-Walker transported along its worldline; see Eqs. (120) and (121). It will be seen that, starting from the above three equations, the precessional angular velocity of the gyroscope spin can be derived. Besides, gyroscopic precession can also be discussed on the basis of the MPTD equations, and interested readers may consult Refs. 53,54 . Since the four-velocity of the gyroscope satisfies Eq. (122), by use of Eqs. (A6) and (A7), Eq. (121) is equivalent to Eq. (123). Thus, Eqs. (120) and (123) directly yield Eq. (124), which means that s 2 remains fixed along the worldline of the gyroscope. As shown in Appendix C, the Lense-Thirring metric satisfies Eqs. (78a)-(78d), which implies that we may assume a collection of fiducial observers distributed over space and at rest in the coordinate system x µ ; as a consequence, a local orthonormal tetrad {γ α } in the Lense-Thirring spacetime can be defined directly by means of the corresponding formulas in "Relativistic dynamics of a massive particle in curved spacetime". Based on the detailed calculation in Appendix C, the tetrad {γ α } determined up to 1/c 3 order is given by Eq. (125), where the potentials U and V i are given in Eq. (126):

(119) f = ma,
(120) s · u = 0,
(121) D F s/dτ = Ds/dτ ± (1/c 2 )(u ∧ a) · s = 0,
(122) u 2 = ±c 2 ⇒ u · a = 0,
(123) u · ∇s = ∓(1/c 2 )(a · s)u,
(124) ds 2 /dτ = u · ∇(s 2 ) = 2s · (u · ∇s) = 0,
(125) γ 0 = (1 + (1/c 2 )U) g 0 , γ i = −(4/c 3 )V i g 0 + (1 − (1/c 2 )U) g i .

Here, G is the gravitational constant, M and J are the mass and the conserved angular momentum of the gravitating source, respectively, and r := √(x i x i ). Before analyzing the motion of the spin s of the gyroscope, its relativistic dynamics needs to be discussed. Let t 0 be the proper time of the fiducial observer, which is related to the coordinate time t by Eq. (89), and from Eq.
(C1), the expression of dt 0 /dt up to 1/c 3 order is given by Eq. (127). As in Eqs. (90) and (91), the four-velocity u of the gyroscope can be expanded in the tetrad {γ α } , and then Eq. (94) indicates that its spacetime split with γ 0 yields Eqs. (128) and (129), where u := u i σ i is the relative velocity measured in the orthonormal tetrad of the fiducial observer. Due to u 2 = ±c 2 , the Lorentz factor γ u has the expression (95), and thus, by expanding it up to 1/c 3 order, one gets Eq. (130). Furthermore, based on Eqs. (103) and (108)-(110), the spacetime splits of the four-acceleration of the gyroscope and of the four-force acting on it can be given; in view of Eq. (119), we only present the result for the four-force, Eq. (131). In the Lense-Thirring spacetime, after inserting Eqs. (C14) and (C15) into Eqs. (108) and (109), the expressions for the relative force f exerted on the gyroscope and the corresponding power f · u delivered by it, up to 1/c 3 order, are derived as Eqs. (132) and (133), with ∇ := σ k ∂ k and V := V i σ i . It can be verified that these two equations are compatible. By plugging the potential U into Eq. (132), one finds that −m∇U is the Newtonian gravitational force acting on the gyroscope, and hence, at leading order, Eqs. (132) and (133) reduce to the corresponding results in Newtonian gravity, which means that Eq. (132) is a three-dimensional analogue of Newton's second law for the gyroscope in the Lense-Thirring spacetime. Evidently, the terms at next-to-leading order fall into three classes, depending on U, V , and f , respectively; as implied by Eq. (126), they result from gyroscopic motion through the spacetime curved by the mass of the source, from the rotation of the source, and from gyroscopic non-geodesic motion. It will be seen that, for the same reasons, the spin of the gyroscope also experiences three types of precession. In Eqs.
(132) and (133), the corrections to the results of Newtonian gravity are presented in a very elegant way, which intuitively displays the powerful potential of the signature invariant GA framework formulated in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra" for applications in spacetime physics. Next, we review the basic procedure for evaluating the precessional angular velocity of the gyroscope spin in the language of STA. Let {γ (α) } be a local orthonormal frame comoving with the gyroscope; by definition, the timelike vector γ (0) is given by γ (0) = u/c . In order to determine the other three spacelike vectors γ (i) of {γ (α) } , the pure Lorentz boost between the gyroscope's four-velocity u and the fiducial observer's four-velocity cγ 0 needs to be presented. According to Eqs. (75) and (76), under the pure Lorentz boost generated by the rotor L of Eq. (134), with components given in Eq. (135), the vector u is mapped to cγ 0 :

(126) U = GM/r , V i = −GJǫ 3ij x j /(2r 3 ).
(127) dt 0 /dt = 1 − (1/c 2 )U.
(128) u = γ u (cγ 0 + u i γ i ) with γ u = dt 0 /dτ ,
(129) uγ 0 = γ u (c + u),
(130) γ u = 1 + (1/(2c 2 ))u 2 .
(131) f γ 0 = γ u ((f · u)/c + f ).
(132) f = m[ a − ∇U − (1/c 2 )u 2 ∇U − (1/c 2 )U∇U + (2/c 2 )(u · ∇U)u − (4/c 2 ) u × (∇ × V ) + (1/(mc 2 ))(u · f )u + (1/(2mc 2 ))u 2 f ],
(133) f · u = m[ u · (a − ∇U) + (1/c 2 )(u · ∇U)u 2 − (1/c 2 )(u · ∇U)U + (3/(2mc 2 ))(u · f )u 2 ],
(134) L := (c 2 ± u(cγ 0 )) / √( 2c 2 (c 2 ± u · (cγ 0 )) ),
(135) L 0 = (1 + γ u )/√(2(1 + γ u )) , L = γ u (u/c)/√(2(1 + γ u )).
(138) LL̃ = L̃L = 1.

This result indicates that the motion of the spin of the gyroscope relative to the comoving frame {γ (α) } is completely determined by the bivector field Ω(τ ) along its worldline, where Ω(τ ) depends on the rotor L generating the pure Lorentz boost from the gyroscope's four-velocity u to the fiducial observer's four-velocity cγ 0 and on the bivector connection ω(u) associated with the tetrad {γ α } . Like the bivector connection ω(u) in Eqs.
(100a)-(101c), the bivector Ω(τ ) is also able to be decomposed into the electric part Ω (E) (τ ) and the magnetic part Ω (B) (τ ) , which clearly suggests that −Ω (B) (τ )I , as a relative vector, is the precessional angular velocity of s ′ in the conventional sense, and because the cross product (denoted by × 3 ) is rarely employed in GA, the relative bivector Ω (B) (τ ) could be regarded as the precessional angular velocity of s ′ . That is to say, in the comoving frame {γ (α) } of the gyroscope, its spin always precesses with Ω (B) (τ ) as the precessional angular velocity. In addition, one should also note that since (164) or (166) has been represented in the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) , a signature invariant GA derivation of the precessional angular velocity of the gyroscope spin could be found. In order to make a further analysis on Eq. (164), we need to derive the expressions of Ω (E) (τ ) , Ω (B) (τ ) , and a ′ . Let us first evaluate the corresponding results of Ω (E) (τ ) and Ω (B) (τ ) , and as shown in Eqs. (161a)-(161c), their expressions can directly be read out from that of the bivector Ω(τ ) . By plugging Eqs. (136) and (100c) into Eq. (160), the bivector Ω(τ ) is able to be expressed as (139) γ (α) =Lγ αL , (140) γ (α) =Lγ αL , (141) s = s (α) γ (α) ,(142)s (0) = s · γ (0) = ± 1 c s · u = 0. (143) d s (i) s (i) dτ = ∓ d s (i) s (j) δ ij dτ = 0. (144) ds dτ = ∓ 1 c 2 (a · s)u − ω(u) · s,(145)ds dτ = u · ∂s = ds α dτ γ α . (146) s (α) = s · γ (α) = sLγ αL = L sLγ α = s ′ · γ α (147) s ′ :=LsL,(148)s ′ = s (i) γ i ⇒ ds ′ dτ = ds (i) dτ γ i ,(149)s ′ γ 0 = s ′ = s (i) σ i ,(150) ds ′ dτ γ 0 = ds ′ dτ = ds (i) dτ σ i .(151)ds ′ dτ = L ds dτL + dL dτ sL +Ls dL dτ γ 0 = ∓ 1 c a · s −L(ω(u) · s)Lγ 0 + dL dτL s ′ + s ′ L dL dτ γ 0 . (152) a ′ =LaL, (153) a ′ · γ 0 = a ′ γ 0 = aγ (0) = a · γ (0) = ± 1 c a · u = 0 (154) a ′ γ 0 = a ′ . 
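The rotor components in Eq. (135), the normalization (138), and the 1/c expansion of the Lorentz factor in Eq. (130) can all be checked symbolically. In the relative algebra the vector part of the rotor squares to +1 and reversion flips its sign, so LL̃ reduces to L 0 2 − |L| 2 ; the SymPy script below is only a verification sketch, not part of the paper's derivation:

```python
import sympy as sp

c, u, eps = sp.symbols('c u epsilon', positive=True)
gamma = 1 / sp.sqrt(1 - u**2 / c**2)

# Eq. (130): gamma = 1 + u^2/(2 c^2) up to 1/c^3 order (expand in eps = 1/c)
expansion = sp.series(gamma.subs(c, 1/eps), eps, 0, 4).removeO()
print(expansion)   # equals 1 + u^2/(2 c^2) with epsilon = 1/c

# Eq. (135): scalar part and magnitude of the relative-vector part of the rotor
L0 = (1 + gamma) / sp.sqrt(2*(1 + gamma))
Lv = gamma * (u/c) / sp.sqrt(2*(1 + gamma))

# Eq. (138): L Ltilde = (L0 + L)(L0 - L) = L0^2 - |L|^2, which should reduce to 1
print(sp.simplify(L0**2 - Lv**2))
```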
(155) a · s = �as� = a ′ s ′ = ± a ′ γ 0 γ 0 s ′ = ∓a ′ · s ′ , (156) −L(ω(u) · s)L =L(s · ω(u))L = s ′ · L ω(u)L , L ω(u)L = −Lω(u)L, (157) dL dτL = L dL dτ = −L dL dτ ,(158)dL dτL s ′ + s ′ L dL dτ = 2s ′ × L dL dτ = s ′ · 2L dL dτ . (159) ds ′ dτ = 1 c a ′ · s ′ + s ′ · Ω(τ ) γ 0 (160) Ω(τ ) =Lω(u)L + 2L dL dτ .(161b) Ω (B) (τ ) : = i<j Ω(τ ) · γ j ∧ γ i γ i ∧ γ j , (161c) Ω(τ ) = Ω (E) (τ ) + Ω (B) (τ ), (162a) γ 0 Ω(τ )γ 0 = − Ω (E) (τ ) + Ω (B) (τ ), (162b) Ω (E) (τ ) = 1 2 Ω(τ ) − γ 0 Ω(τ )γ 0 , (162c) Ω (B) (τ ) = 1 2 Ω(τ ) + γ 0 Ω(τ )γ 0 . (163) s ′ · Ω(τ ) γ 0 = s ′ · Ω(τ ) · γ 0 + s ′ · Ω(τ ) ∧ γ 0 = γ 0 ∧ s ′ · Ω(τ ) + s (i) γ i · Ω(τ ) ∧ γ 0 = − s ′ · Ω (E) (τ ) + s (i) γ i · Ω (B) (τ ) ∧ γ 0 + s (i) γ i ∧ Ω (B) (τ ) · γ 0 = − s ′ · Ω (E) (τ ) + s (i) γ i Ω (B) (τ )γ 0 2 = − s ′ · Ω (E) (τ ) + s ′ × Ω (B) (τ ), (164) ds ′ dτ = s ′ · a ′ c − Ω (E) (τ ) + s ′ × Ω (B) (τ ), (165) a ′ c = Ω (E) (τ ) (166) ds ′ dτ = s ′ × Ω (B) (τ ),(167)ds ′ 2 dτ = 2s ′ · s ′ × Ω (B) (τ ) = 2 s ′ 2 Ω (B) (τ ) = 2s ′ 2 Ω (B) (τ ) = 0,(168)Ω(τ ) = (L 0 − L) ω (E) (u) + ω (B) (u) (L 0 + L) 2 + 2(L 0 − L) dL 0 dτ + dL dτ 2 = L 2 0 ω (E) (u) + L 2 0 ω (B) (u) + L 2 ω (E) (u) + L 2 ω (B) (u) + 2L 0 ω (E) (u) × L + 2L 0 ω (B) (u) × L − 2 ω (E) (u) · L L − 2 ω (B) (u) ∧ L · L + 2L 0 dL dτ − 2 dL 0 dτ L − 2L × dL dτ ,(170)L 2 ω (B) (u) = ω (B) (u)LL 2 = ω (B) (u) × L × L + ω (B) (u) ∧ L · L (171a) Ω (E) (τ ) = L 2 0 ω (E) (u) + L 2 ω (E) (u) + 2L 0 ω (B) (u) × L − 2 ω (E) (u) · L L + 2L 0 dL dτ − 2 dL 0 dτ L, (171b) Ω (B) (τ ) = L 2 0 ω (B) (u) + L 2 ω (B) (u) + 2L 0 ω (E) (u) × L − 2 ω (B) (u) ∧ L · L − 2L × dL dτ . (172a) LT (τ ) are resulted from gyroscopic motion through the spacetime curved by the mass of the source and rotation of the source, respectively, and hence, they should be the de Sitter precession and the Lense-Thirring precession 22 . 
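The de Sitter term Ω (B) d (τ ) = (3/(2c 2 )) u × ∇U of Eq. (182) can be compared with the textbook geodetic angular velocity (3GM/(2c 2 r 3 )) r × u. The SymPy check below is not part of the paper; it only verifies that the two forms agree for U = GM/r:

```python
import sympy as sp

G, M, c = sp.symbols('G M c', positive=True)
x, y, z, ux, uy, uz = sp.symbols('x y z u_x u_y u_z', real=True)

rv = sp.Matrix([x, y, z])
r = sp.sqrt(x**2 + y**2 + z**2)
uv = sp.Matrix([ux, uy, uz])

U = G*M/r                                            # Newtonian potential, Eq. (126)
gradU = sp.Matrix([sp.diff(U, v) for v in (x, y, z)])

Omega_d = sp.Rational(3, 2)/c**2 * uv.cross(gradU)   # de Sitter term of Eq. (182)
geodetic = 3*G*M/(2*c**2*r**3) * rv.cross(uv)        # textbook geodetic angular velocity

diff_v = (Omega_d - geodetic).applyfunc(sp.simplify)
print(diff_v)  # zero vector
```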
Besides, Ω (B) T (τ ) , associated with the relative force f acting on the gyroscope, explicitly represents the Thomas precession of its spin, which is caused by gyroscopic non-geodesic motion. In the fine structure of atomic spectra, Thomas precession plays a significant role 21 . Ω (E) (τ ) = γ u ω (E) (u) + γ u c ω (B) (u) × u − γ 2 u c 2 (1 + γ u ) ω (E) (u) · u u + γ 4 u c 3 (1 + γ u ) (u · a)u + γ 2 u c a, (172b) Ω (B) (τ ) = γ u ω (B) (u) + γ u c ω (E) (u) × u − γ 2 u c 2 (1 + γ u ) ω (B) (u) ∧ u · u − γ 3 u c 2 (1 + γ u ) u × a. (173) a ′ = L aLγ 0 2 = L aγ 0L 2 = 1 m L f γ 0L 2 = γ u m 1 c (f · u) L 2 2 + L fL 2 . (174) e u = u √ u 2 , (175a) f � = f · e u e u = 1 u 2 f · u u, (175b) f ⊥ = f × e u × e u = f × e u e u = 1 u 2 f × u u,(176)f = f � + f ⊥ (177a) f � e u = e u f � , (177b) f ⊥ e u = − e u f ⊥ . (178a) f �L =Lf � ,(178b) Recall that the three-dimensional operator ∇ appearing in Eq. (182) is defined by ∇ = σ k ∂ k (cf. (41)), and on the basis of it, the expression of Ω where r := x i σ i is the relative position vector of the gyroscope, and due to r = √ x i x i , there is r = √ r 2 . In order to deduce the expression of Ω (B) LT (τ ) , some tricks need to be applied. In the language of GA, the relative angular momentum bivector J , is more convenient to describe the rotation of the source. Equations (126) and (C1) indicate that the source is rotating around the x 3 axis, so its relative angular momentum vector is and then, from Eqs. (B14), (B20), (B2), and (B6), its relative angular momentum bivector should be Thus, via Eqs. (126) and (B15), one is able to express V as Keeping in mind that the relative angular momentum bivector J of the source is conserved, the following identity holds, T (τ ) are capable of being directly transformed into their corresponding expressions in the conventional sense by multiplying −I , namely, Although these expressions presented here seem to be identical to those in Refs. 
30,55 , one still needs to note that since the relative velocity u in −Ω T (τ )I is measured in the orthonormal tetrad {γ α } of the fiducial observer instead of in the coordinate frame {g µ } , the above expressions differ slightly from those obtained in tensor language. Despite this, a straightforward calculation 22,30 shows that the difference between the gyroscope's velocities measured in {γ α } and in {g µ } is at least of 1/c 2 order, so the above −Ω T (τ )I are essentially equivalent to their conventional expressions. These computations in the final part of this section display in detail how to give a signature invariant GA derivation of the precessional angular velocity of the gyroscope spin within the framework provided by the "common" even subalgebra of the STAs of signatures (±, ∓, ∓, ∓) , which could stand as a successful paradigm for the application of this framework in spacetime physics.

(179) a ′ = γ u m 1 c (f · u) L 2 2 + L 2 f ∥ 2 + f ⊥ 2 = cγ u ω (E) (u) + γ u ω (B) (u) × u − γ 2 u c(1 + γ u ) ω (E) (u) · u u + γ 4 u c 2 (1 + γ u ) (u · a)u + γ 2 u a,
(180) Ω (B) (τ ) = γ u c(1 + γ u ) ω (E) (u) × u + ω (B) (u) − γ u mc 2 (1 + γ u ) u × f .
(181) Ω (B) (τ ) = Ω (B) d (τ ) + Ω (B) LT (τ ) + Ω (B) T (τ ),
(182) Ω (B) d (τ ) := (3/(2c 2 )) u × ∇U, Ω (B) LT (τ ) := (2/c 2 ) ∇ × V , Ω (B) T (τ ) := −(1/(2mc 2 )) u × f .
(185) J = J pseu I = Jσ 1 × σ 2 = (1/2) Jǫ 3ij σ i × σ j .
(186) V = V i σ i = − GJǫ 3ij x j 2r 3 σ i = GJǫ 3ij x k 4r 3 σ k × (σ i × σ j ) =

In this section, based on the STAs of signatures (±, ∓, ∓, ∓) formulated in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra" and the GA techniques constructed in "Rotor techniques on Lorentz transformation and relativistic dynamics of a massive particle in curved spacetime", an efficient treatment of gyroscopic precession is achieved.
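Similarly, the Lense-Thirring term Ω (B) LT (τ ) = (2/c 2 )∇ × V of Eq. (182), with the potential V i of Eq. (126), can be verified to coincide with the standard dipole-like form (G/(c 2 r 5 ))(3(J · r)r − r 2 J). The SymPy snippet below is only a cross-check, not part of the paper's derivation:

```python
import sympy as sp

G, c, J = sp.symbols('G c J', positive=True)
x, y, z = sp.symbols('x y z', real=True)

rv = sp.Matrix([x, y, z])
r = sp.sqrt(x**2 + y**2 + z**2)
Jv = sp.Matrix([0, 0, J])                  # source angular momentum along the x^3 axis

# V_i = -G J eps_{3ij} x_j / (2 r^3), Eq. (126), is the same as G (J x r)/(2 r^3)
V = (G/(2*r**3)) * Jv.cross(rv)

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

Omega_LT = (2/c**2) * curl(V)                              # Lense-Thirring term, Eq. (182)
closed = (G/(c**2*r**5)) * (3*Jv.dot(rv)*rv - r**2*Jv)     # dipole-like closed form

diff_v = (Omega_LT - closed).applyfunc(sp.simplify)
print(diff_v)  # zero vector
```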
One significant advantage of the GA approach is that only geometric objects are involved in the calculation, and thus many equations acquire a degree of clarity that is lost in tensor language. A typical example is that the relationship between the gyroscope spin s and its components s (i) in the comoving frame {γ (α) } is clearly shown by the equation s (i) = s · γ (i) , which helps the reader understand that it is the spin (s (1) , s (2) , s (3) ) in the frame {γ (α) } , rather than s itself, that experiences a spatial rotation. However, in the classical derivation with tensors, since one always needs to work with the components of some tensor, the role of s is usually played by its components in the coordinate frame {g µ } , and the above equation is replaced by the corresponding component equations 22,30 , from which the relationship between s and s (i) cannot be read off explicitly. It should also be noted that the application of rotor techniques is crucial in simplifying the derivation. At the outset, Eqs. (142)-(144) imply that in order to obtain the precessional angular velocity of the gyroscope spin (s (1) , s (2) , s (3) ) in the frame {γ (α) } , the expression for ds (i) /dτ needs to be given. Then, as in Eq. (146), by employing rotor techniques, the effect of the pure Lorentz boost generated by the rotor L is transferred from γ (i) to s ′ , and as a result one can deal with the geometric object ds ′ /dτ = (ds (i) /dτ )γ i rather than with ds (i) /dτ . Being a common trick in STA, such an approach is extremely useful for computations. The STAs of signatures (±, ∓, ∓, ∓) and the GA techniques for General Relativity formulated in Ref.
33 are organically integrated in "Relativistic dynamics of a massive particle in curved spacetime", so that physics in curved spacetime can be discussed within the signature invariant framework provided in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra", which is perhaps the most easily overlooked contribution of the present paper. It is on the basis of the results presented in "Relativistic dynamics of a massive particle in curved spacetime" that the relativistic dynamics of the gyroscope and the precession of its spin can be studied in the two STAs. In particular, within the framework provided by the "common" even subalgebra of the two STAs, the three-dimensional generalized equation of motion for the gyroscope and the precessional angular velocity of its spin can be derived in a signature invariant manner. The treatment of gyroscopic precession in this section intuitively displays the basic method of dealing with specific problems in curved spacetime within the signature invariant framework. If the applications of this method can be extended to a wider range in the future, the study of spacetime physics in the language of GA will be greatly promoted.

Summary and discussions

Since the establishment of STA by David Hestenes, the signature (+, −, −, −) has been widely used 2,17 , which may cause inconvenience in applying STA to relativistic physics, because much of the literature on relativity adopts the opposite signature (−, +, +, +) . Although the STA of signature (−, +, +, +) has also been used 24-29 , a lack of sustained attention to it has left its applications quite limited. In this paper, following the original idea of Hestenes, the techniques related to relative vectors and the spacetime split are built up in the STA of signature (−, +, +, +) , so that a more convenient approach to relativistic physics can be given in the language of GA.
Further investigation shows that the two even subalgebras of the STAs of signatures (±, ∓, ∓, ∓) share the same operation rules, so that they can be treated as one algebraic formalism. Consequently, many calculations between vectors arising in specific problems can be transformed into calculations in this "common" even subalgebra of the two STAs through the spacetime split techniques, and then solved efficiently in a signature invariant manner with the help of the various operations provided in Appendix B. Thus, the "common" even subalgebra of the two STAs provides a signature invariant GA framework for spacetime physics. When orthogonal transformations in spaces of arbitrary signature are performed, calculations with rotors are demonstrably more efficient than calculations with matrices, which is a remarkable advantage of GA. Therefore, the topic of rotor techniques for Lorentz transformations deserves to be addressed specifically in the STAs of signatures (±, ∓, ∓, ∓) ; since rotor techniques have not been fully developed in the STA of signature (−, +, +, +) , it is worthwhile to elaborate explicitly how to construct the rotors inducing Lorentz boosts and spatial rotations in this algebraic formalism. In the present paper, by constructing the rotors on the basis of the exponential function defined on the "common" even subalgebra of the two STAs, the general Lorentz boost with velocity in an arbitrary direction and the general spatial rotation in an arbitrary plane are handled in a signature invariant manner. Relativistic dynamics of a massive particle in curved spacetime is also studied so as to describe the motion of a gyroscope moving around a gravitating source 34 . To this end, the two STAs and their "common" even subalgebra are first generated by a local orthonormal tetrad, so that the corresponding signature invariant GA framework can be set up.
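The rotor picture of a Lorentz boost is easy to experiment with numerically. In the sketch below (an illustration, not the paper's construction) the relative vectors σ k of the even subalgebra are represented by the Pauli matrices, the spacetime split xγ 0 = x 0 + x becomes a 2 × 2 matrix, and the boost is the two-sided rotor product X′ = LXL with L = exp((α/2)σ 1 ); the boost direction and sign conventions used here need not match the paper's Eq. (48a):

```python
import numpy as np

# Pauli matrices play the role of the relative vectors sigma_k
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def split(x0, x):
    """Matrix image of the spacetime split x gamma_0 = x^0 + x."""
    return x0*I2 + x[0]*s1 + x[1]*s2 + x[2]*s3

def unsplit(X):
    """Recover (x^0, x) from the matrix image."""
    x0 = np.trace(X).real / 2
    return x0, np.array([np.trace(X @ s).real / 2 for s in (s1, s2, s3)])

alpha = 0.5                                    # rapidity of a boost along sigma_1
L = np.cosh(alpha/2)*I2 + np.sinh(alpha/2)*s1  # rotor exp((alpha/2) sigma_1)

X = split(1.0, np.zeros(3))                    # four-velocity of an observer at rest (c = 1)
x0, xv = unsplit(L @ X @ L)                    # rotor sandwich; L is Hermitian here

print(x0, xv)   # cosh(alpha), [sinh(alpha), 0, 0] -- the boosted four-velocity
```

Since the rotor satisfies LL̃ = 1, the Minkowski norm x 0 2 − |x| 2 of the split is automatically preserved, which the assertions below also confirm.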
Then, after organically integrating the STAs of signatures (±, ∓, ∓, ∓) and the GA techniques for General Relativity formulated in Ref. 33 , physics in curved spacetime can be discussed within the signature invariant framework provided in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra", which lays the foundation for dealing with gyroscopic precession hereafter. With these preparations, for a massive particle, the spacetime splits of the velocity, acceleration, momentum, and force four-vectors with the normalized four-velocity of the fiducial observer are derived, and as a consequence, a three-dimensional analogue of Newton's second law for this particle in curved spacetime is achieved. Since the result is derived in a comoving orthonormal tetrad of the fiducial observer and is presented in the form of geometric objects, it is an elegant generalization of the classical one in flat spacetime. As a comprehensive application of the GA techniques constructed before, the last task of this paper is to provide an efficient treatment of gyroscopic precession in the STAs of signatures (±, ∓, ∓, ∓) . For a gyroscope moving in the Lense-Thirring spacetime, its relativistic dynamics is first discussed, and significant results such as the three-dimensional generalized equation of motion for the gyroscope are given. Then, by applying the rotor techniques, the geometric object ds ′ /dτ = (ds (i) /dτ )γ i can be dealt with directly instead of ds (i) /dτ , which greatly simplifies the subsequent derivation. The result suggests that if Eq. (165) holds, the spin of the gyroscope always precesses relative to its comoving frame {γ (α) } with Ω (B) (τ ) as the precessional angular velocity. Within the framework provided by the "common" even subalgebra of the two STAs, signature invariant expressions of the relevant physical quantities involved in Eq. (164) are deduced, which clearly indicates that Eq.
(165) holds, and therefore the gyroscope spin indeed precesses in the frame {γ (α) } . After expanding Ω (B) (τ ) up to 1/c 3 order, the gyroscope spin's angular velocities of the de Sitter precession, the Lense-Thirring precession, and the Thomas precession can all be read out directly, and their expressions, in the form of geometric objects, are equivalent to their conventional ones in component form. All physical laws should be independent of the choice of signature, which implies that many significant techniques constructed in the STA of signature (+, −, −, −) can also be carried over to the STA of signature (−, +, +, +) ; starting from this motivation, we find that the "common" even subalgebra of the two STAs provides a signature invariant GA framework for spacetime physics. In order to pave the way for applications of these two STAs and their "common" even subalgebra, we elaborate in detail the rotor techniques for Lorentz transformations and the method of handling physics in curved spacetime within the signature invariant framework; they are of theoretical significance and of practical worth. As two successful paradigms, the treatments of relativistic dynamics of a massive particle and of gyroscopic precession clearly show that the GA techniques constructed in this paper are efficient and reliable. Being straightforward generalizations, these techniques could also be applied to gyroscopic precession in alternative theories of gravity, such as f(R) gravity 30,37-39 , f (R, G) gravity 40,41 , and f(X, Y, Z) gravity 42 . However, since these topics are usually explored by making use of complicated mathematical tools (e.g., the symmetric and trace-free formalism in terms of the irreducible Cartesian tensors 30 ), it is crucial to develop new techniques for applying these tools in STA.
In fact, by generalizing various GA techniques in STA of signature (+, −, −, −) [2][3][4]17 , the approach in this paper could also be applied to other fields, and it has been verified that some topics in classical mechanics and electrodynamics can be described in such a manner. We expect that the applications of this approach will be extended to a wider range in the future, so that the study of spacetime physics in the language of GA could be greatly promoted. the frame of relative vectors introduced in the STA of signature (+, −, −, −) 2,17,32 , whereas {σ − k = γ − k γ 0 − = −γ − k γ − 0 } is the one in the STA of signature (−, +, +, +) . Further properties of {σ ± k } can also be obtained. Eqs. (A18) and (A19) yield , σ k and σ i × σ j have the forms Scientific Reports | (2022) 12:3981 | https://doi.org/10.1038/s41598-022-06895-0 www.nature.com/scientificreports/ and their squares are deduced by applying Eqs. (B3) and (B17), y ′ γ 0 = e − ϕ 2 I 2 yγ 0 e them in Eqs. (50a) and (50b), Eqs. (48a) and (48b) are recast in a signature invariant form, For the relative vectors x, x ′ , y , and y ′ , Eqs. (46b), (B8), and (B9) yield the decompositions, Since one can directly check that by virtue of Eqs. (46b), (47b), (B15), and (B16), the following relative vectors can be defined with Eqs. (B8)- (46b), (B14), (B20), and (B23), one obtains By further applying Eq. (B26), one can verify that which mean that the relative vector l is a unit normal vector to the plane encoded by I 2 . Thus, for any relative vector a , hold, where Eqs. (59) and (B13) are used. With the aid of these results, Eqs. (57c) and (57d) can be transformed into (46b), (57a), (57c), and the relevant formulas in Appendix B, With these equalities and Eqs. (B39)-(B41), after substituting Eqs. (56a)-(56d) in Eqs. 
By Eqs. (52a) and (52b), the important intermediate results, Eqs. (46b), (57c), (60b), (60c), (B10), (B15), and (B17), hold, and hence I_2 y is indeed a relative vector parallel to the plane defined by I_2. Remember that θ is a free parameter; with the corresponding definition, because of Eqs. (46a), (66a), and (66b), the equivalent expression of Eq. (67a) is obtained by making use of Eqs. (55a), (56a), (56b), and (57a). As for Eq. (67b), by virtue of Eqs. (63b), (B8), (B39), and (B41), one can achieve the analogous result. Evidently, these results suggest that the Lorentz transformations induced by e_v and I_2 in Eqs. (48a) and (48b) are, respectively, a Lorentz boost with the velocity v (Refs. 5-7, 43) and a spatial rotation through an angle ϕ in the plane encoded by I_2.

Eqs. (51a), (51b), (66c), and (67a) yield x γ_0 = γ(c + v), and then, with Eq. (A1) and v = cβ, one obtains the corresponding split, where Eq. (73a) implies that the vectors x and x' could be thought of as the four-velocities of observers. In such a case, by means of Eqs. (B39) and (66a)-(66c), the rotor e^{θ e_v/2} can be expressed in closed form, and thus Eq. (48a) states how a vector transforms under the Lorentz boost generated by this rotor; here Eqs. (83a), (83b), (90), (91), (94), and σ_i = γ_i γ_0 have been used. Explicitly, the above a is the relative acceleration measured by the fiducial observer.

By virtue of Eqs. (A1), (A6), (A14), and (94), the second term of Eq. (96) is evaluated. Here, just like the Faraday bivector, namely the electromagnetic field strength, the bivector connection ω(u) has been decomposed into the electric part ω^(E)(u) and the magnetic part ω^(B)(u), and by making use of Eq. (A1) and the anticommutation of {γ_α}, the important equalities are obtained. Finally, the last three terms in Eq. (99) are handled with Eqs. (90), (94), (100a), (100b), and (96), in which Eqs. (91), (103), and (107) have been used. Furthermore, by employing Eq. (95), the power delivered by the relative force f is evaluated; thus, with Eqs. (103) and (107)-(109), one can verify the stated identity.

Based on Eq. (134), one can directly check that, being a rotor, L satisfies L L̃ = 1. As indicated in Ref. 21, the comoving orthonormal frame {γ_(α)} of the gyroscope is related to the tetrad {γ_α} via the rotor L, and thus the other three spacelike vectors γ_(i) of {γ_(α)} are fully determined. One consequence of Eq. (139) is a relation in which {γ^(α)} is the reciprocal frame of {γ_(α)}. Now, let us expand the spin s of the gyroscope in its comoving frame {γ_(α)}; by virtue of Eq. (120) and u = c γ_(0), we obtain the expansion, and in this case Eq. (124) states the accompanying constraint. The above two equations suggest that, in the comoving frame {γ_(α)} of the gyroscope, its spin (s^(1), s^(2), s^(3)) is a purely spatial vector with constant length, and therefore, from the viewpoint of the observer comoving with it, the spin (s^(1), s^(2), s^(3)) experiences a spatial rotation. That is to say, the spin of the gyroscope always precesses relative to its comoving frame {γ_(α)}. The objective of the derivation in this section is to first write down the equation satisfied by (s^(1), s^(2), s^(3)), and then derive the expression for the precessional angular velocity of the gyroscope spin up to 1/c^3 order within the signature-invariant GA framework formulated in "STAs of signatures (±, ∓, ∓, ∓) and their "common" even subalgebra".

From Eqs. (82), (123), and (A14), the differential equation satisfied by the spin s of the gyroscope is written down, where the ingredients follow from Eqs. (83a) and (83b). Motivated by this, we consider the derivative of s^(i) with respect to τ, namely ds^(i)/dτ, so as to obtain the equation fulfilled by (s^(1), s^(2), s^(3)). On the basis of Eq. (140) and ⟨AB⟩ = ⟨BA⟩, the corresponding relation holds, and as a result, the effect of the pure Lorentz boost generated by the rotor L can be seen by taking s to s'. By means of Eqs. (142), (145), and (146), one finds the relation from which the spacetime splits of s' and ds'/dτ with γ_0 are read off, and with Eq. (150), ds'/dτ is evaluated. Define a' accordingly; because of Eqs. (122), (138), (140), and u = c γ_(0), the stated relation holds, which yields the spacetime split of a' with γ_0. Via this result, a · s can be written in a form where Eqs. (147)-(149) and γ_0 γ_i = −γ_i γ_0 are used. By further using Eqs. (A6) and (A18), one gets the next expression; here, because L̃ ω(u) L is an even multivector satisfying the reversion condition, it is a bivector. In addition, Eq. (138) provides a relation which means that, like L̃ ω(u) L, L̃ (dL/dτ) is also a bivector. Thus, by employing Eqs. (A6) and (A14), the term inside the square brackets in Eq. (151) is evaluated. After inserting Eqs. (155), (156), and (158) into Eq. (151), ds'/dτ is rewritten in terms of the bivector Ω(τ) defined in Eq. (161a), with electric part Ω^(E)(τ) := (Ω(τ) · (γ_k ∧ γ_0)) γ^0 ∧ γ^k. Based on Eqs. (148), (149), and the relevant formulas in Appendix A, the second term in Eq. (159) can be recast, where in the third and fifth steps γ_0 γ_i = −γ_i γ_0 has been used. Finally, by substituting this result back into Eq. (159), we arrive at the differential equation describing the motion of the spin of the gyroscope relative to its comoving frame {γ_(α)}. As analyzed previously, in this frame the gyroscope spin (s^(1), s^(2), s^(3)) experiences a spatial rotation, and therefore Eq. (164) should depict the precession of s' = s^(i) σ_i. If the condition (165) holds, Eq. (164) reduces to Eq. (166); in terms of the three-dimensional meaning in the relative space, the conservation of s'^2 along the worldline of the gyroscope means that Eq. (166) is the equation depicting the precession of s'. In this case, based on Eqs. (B13), (B14), and (B20), Eq. (166) can be transformed into the cross-product form (ds'/dτ) I = −(s' × Ω^(B)(τ)) I = −(Ω^(B)(τ) I) ×_3 s', in which the relevant identities have been used.
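The rotor sandwich that recurs throughout this derivation, for instance taking the spin s to s' = L s L̃, can be checked numerically in a matrix representation of the spacetime algebra. The sketch below is illustrative only and is not code from the paper: it uses the Dirac representation of the gamma matrices with signature (+, -, -, -), builds the boost rotor exp(α γ_1 γ_0 / 2), and verifies that the sandwich product boosts a vector in the γ_0-γ_1 plane with rapidity α.

```python
import numpy as np

# Dirac representation of the gamma matrices, signature (+, -, -, -)
s1 = np.array([[0, 1], [1, 0]])
g0 = np.diag([1.0, 1.0, -1.0, -1.0])
g1 = np.block([[np.zeros((2, 2)), s1], [-s1, np.zeros((2, 2))]]).astype(float)

B = g1 @ g0                          # boost generator sigma_1 = gamma_1 gamma_0; B @ B = identity
alpha = 0.7                          # rapidity: cosh(alpha) is the Lorentz factor
ch, sh = np.cosh(alpha / 2), np.sinh(alpha / 2)
L = ch * np.eye(4) + sh * B          # rotor exp(alpha B / 2), expanded using B^2 = I
Lrev = ch * np.eye(4) - sh * B       # reverse of L (here also its inverse)

t, x = 2.0, 1.0
X = t * g0 + x * g1                  # the vector t gamma_0 + x gamma_1
Xp = L @ X @ Lrev                    # rotor sandwich L X L~

# read off components: x^0 = tr(g0 X)/4 and x^1 = -tr(g1 X)/4
tp = np.trace(g0 @ Xp) / 4
xp = -np.trace(g1 @ Xp) / 4
print(tp, xp)  # cosh(a)*t + sinh(a)*x and sinh(a)*t + cosh(a)*x, i.e. a boost
```

Up to sign and convention choices, this mirrors the statement above that generators of the e_v type induce Lorentz boosts, while purely spatial bivectors of the I_2 type generate rotations.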
Equations (161a)-(161c) suggest that the timelike vector γ_0 (or γ^0) only appears in the electric part Ω^(E)(τ) of the bivector Ω(τ), and thus, from Eq. (169), after substituting Eq. (137) in the above results, the expressions for Ω^(E)(τ) and Ω^(B)(τ) are derived.

We turn now to the evaluation of a'. Due to γ_i γ_0 = −γ_0 γ_i and u = u^i σ_i = u^i γ_i γ_0, one gets u γ_0 = −γ_0 u, which leads to L γ_0 = γ_0 L̃ via Eqs. (136) and (137). As a consequence, a' follows by means of Eqs. (152), (154), (119), and (131). Define the unit relative vector along the force; the components of the relative force f parallel and perpendicular to it can be determined by following the method presented in "Rotor techniques on Lorentz boost and spatial rotation", where they satisfy the corresponding commutation relations. Thus, together with Eqs. (136), (137), and (174), and based on these two results, (175a), and (176), a' can be rewritten accordingly.

This is the general formula for the precessional angular velocity of the spin of a gyroscope moving in curved spacetime. In the Lense-Thirring spacetime, one only needs to insert Eqs. (C14), (C15), and (130) into the above result, and the expression for Ω^(B)(τ) up to 1/c^3 order is then obtained. In the three-dimensional relative space, the resulting contributions describe three types of precession of the gyroscope spin in complete generality under the WFSM approximation. If the gyroscope does not experience any force, namely f = 0, Ω^(B)(τ) is the precessional angular velocity of its spin brought about by the curved spacetime in General Relativity. As implied from Eqs. (126) and (182), this can be readily derived from the potential U in Eq. (126). With J_pseu := J σ_3 (where, in the third and fifth steps, J = J_pseu I and Eq. (A5) have been used), one arrives, by use of the above equations, at an expression involving (r^2 J − 3(r ∧ J)r)/r^5.
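The qualitative content of the precession equation, namely that ds'/dτ is a cross product of a precessional angular velocity with s' and that |s'| is conserved along the worldline, can be illustrated with a short numerical integration. In this sketch the constant vector Omega is an arbitrary stand-in for Ω^(B)(τ), not the Lense-Thirring expression, and the step size and duration are likewise arbitrary choices of ours.

```python
import numpy as np

# Toy integration of a precession equation ds'/dtau = Omega x s'
Omega = np.array([0.0, 0.0, 0.3])      # stand-in precessional angular velocity (rad per unit tau)
s = np.array([1.0, 0.0, 0.0])          # initial spin, unit length

def rhs(spin):
    return np.cross(Omega, spin)

dtau, steps = 1e-3, 10_000             # integrate up to tau = 10
for _ in range(steps):                 # classical fourth-order Runge-Kutta step
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dtau * k1)
    k3 = rhs(s + 0.5 * dtau * k2)
    k4 = rhs(s + dtau * k3)
    s = s + (dtau / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

print(np.linalg.norm(s))               # stays ~1: the length |s'| is conserved
print(np.arctan2(s[1], s[0]))          # precession angle ~ |Omega| * tau = 3.0 rad
```

With Ω = 0.3 ẑ and τ = 10, the spin stays at unit length and sweeps an angle |Ω| τ = 3 rad about the z-axis, which is exactly the behavior attributed above to the gyroscope spin in its comoving frame.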
Within the framework of General Relativity, the covariant derivative ∇ on the spacetime manifold can be defined in the standard way, with g_22 > 0, where in the last three equations, Eqs. (A8), (A11), and (A13) are used. By means of the GA technique for the Gram-Schmidt orthogonalization procedure provided in Ref. 3, the coordinate frame {g_µ} can be conveniently orthonormalized. With the relevant formulas in Appendix A, one can immediately verify that {γ_α}, as a local orthonormal tetrad, satisfies the required relations. Thus, instead of ds^(i)/dτ, the expression for ds'/dτ is deduced hereafter; since ds'/dτ is more convenient to handle in GA, working with s' greatly facilitates the calculations, with the aid of Eqs. (135) and (144).

Moreover, f_⊥ L = L f_⊥, where Eqs. (136) and (137) have been used again. Comparing Eq. (179) with Eq. (172a), it is easy to verify that Eq. (165) holds, and as noted before, the spin of the gyroscope always precesses relative to its comoving frame {γ_(α)} with Ω^(B)(τ) as the precessional angular velocity. Here, by making use of Eqs. (95) and (108), the expression for Ω^(B)(τ) in Eq. (172b) can be recast in terms of I_2.

Scientific Reports | (2022) 12:3981 | https://doi.org/10.1038/s41598-022-06895-0

Received: 16 December 2021; Accepted: 8 February 2022. © The Author(s) 2022.

Author contributions: All the content was completed by B.W.

Competing interests: The author declares no competing interests.

Additional information: Supplementary Information for this article is available online. Correspondence and requests for materials should be addressed to B.W. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

References

1. Clifford, W. K. Mathematical Papers (Macmillan, 1882).
2. Hestenes, D. Space-Time Algebra (Gordon and Breach, 1966).
3. Hestenes, D. & Sobczyk, G. Clifford Algebra to Geometric Calculus (Reidel, 1984).
4. Hestenes, D. New Foundations for Classical Mechanics (Kluwer Academic Publishers, 1999).
5. Jancewicz, B. Multivectors and Clifford Algebra in Electrodynamics (World Scientific, 1989).
6. Hestenes, D. Primer on Geometric Algebra for Introductory Mathematics and Physics. http://geocalc.clas.asu.edu/pdf/PrimerGeometricAlgebra.pdf
7. Dressel, J., Bliokh, K. Y. & Nori, F. Spacetime algebra as a powerful tool for electromagnetism. Phys. Rep. 589, 1 (2015).
8. Hestenes, D. Curvature calculations with spacetime algebra. Int. J. Theor. Phys. 25, 581 (1986).
9. Lasenby, A. N., Doran, C. J. L. & Gull, S. F. Gravity, gauge theories and geometric algebra. Philos. Trans. R. Soc. Lond. A 356, 487 (1998).
10. Lewis, A. M., Doran, C. J. L. & Lasenby, A. N. Quadratic Lagrangians and topology in gauge theory gravity. Gen. Relat. Grav. 32, 161 (2000).
11. Pavšič, M. Towards the unification of gravity and other interactions: What has been missed? J. Phys. Conf. Ser. 222, 012017 (2010).
12. Lasenby, A. N. Geometric algebra, gravity and gravitational waves. Adv. Appl. Clifford Algebras 29, 79 (2019).
13. Doran, C. J. L., Lasenby, A. N. & Gull, S. F. States and operators in the spacetime algebra. Found. Phys. 23, 1239 (1993).
14. Doran, C. J. L., Lasenby, A. N., Gull, S. F., Somaroo, S. & Challinor, A. D. Spacetime algebra and electron physics. Adv. Imaging Electron Phys. 95, 271 (1996).
15. Lasenby, A. N., Doran, C. J. L. & Gull, S. F. A multivector derivative approach to Lagrangian field theory. Found. Phys. 23, 1295 (1993).
16. Lewis, A. M., Doran, C. J. L. & Lasenby, A. N. Electron scattering without spin sums. Int. J. Theor. Phys. 40, 363 (2001).
17. Doran, C. J. L. & Lasenby, A. N. Geometric Algebra for Physicists (Cambridge University Press, 2003).
18. Hestenes, D. Proper particle mechanics. J. Math. Phys. 15, 1768 (1974).
19. Hestenes, D. Proper dynamics of a rigid point particle. J. Math. Phys. 15, 1778 (1974).
20. de Sabbata, V. & Datta, B. K. Geometric Algebra and Applications to Physics (Taylor & Francis Group, 2007).
21. Misner, C. W., Thorne, K. S. & Wheeler, J. A. Gravitation (W. H. Freeman and Company, 1973).
22. Ciufolini, I. & Wheeler, J. A. Gravitation and Inertia (Princeton University Press, 1995).
23. Gourgoulhon, É. Special Relativity in General Frames: From Particles to Astrophysics (Springer, 2013).
24. Greider, K. Relativistic quantum theory with correct conservation laws. Phys. Rev. Lett. 44, 1718 (1980).
25. Greider, K. R. A unifying Clifford algebra formalism for relativistic fields. Found. Phys. 14, 467 (1984).
26. Pezzaglia, W. Clifford algebra geometric-multispinor particles and multivector-current gauge fields. Found. Phys. Lett. 5, 57 (1992).
27. Pezzaglia, W. M., Jr. & Adams, J. J. Should Metric Signature Matter in Clifford Algebra Formulations of Physical Theories? e-Print archive: gr-qc/9704048.
28. Pavšič, M. The Landscape of Theoretical Physics: A Global View from Point Particles to the Brane World and Beyond, in Search of a Unifying Principle (Kluwer Academic, 2001).
29. Vaz, J. The Clifford algebra of physical space and Elko spinors. Int. J. Theor. Phys. 57, 582 (2018).
30. Wu, B. & Zhang, X. Multipole analysis on gyroscopic precession in f(R) gravity with irreducible Cartesian tensors. Phys. Rev. D 104, 024052 (2021).
31. Everitt, C. W. F. et al. Gravity Probe B: Final results of a space experiment to test general relativity. Phys. Rev. Lett. 106, 221101 (2011).
32. Lasenby, A. N. Geometric algebra as a unifying language for physics and engineering and its use in the study of gravity. Adv. Appl. Clifford Algebras 27, 733 (2017).
33. Francis, M. R. & Kosowsky, A. Geometric algebra techniques for general relativity. Ann. Phys. 311, 459 (2004).
34. Weinberg, S. Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity (Wiley, 2014).
35. Snygg, J. Clifford Algebra (Oxford University Press, 1997).
36. Blanchet, L. Gravitational radiation from post-Newtonian sources and inspiralling compact binaries. Living Rev. Relat. 17, 2 (2014).
37. Näf, J. & Jetzer, P. On the 1/c expansion of f(R) gravity. Phys. Rev. D 81, 104003 (2010).
38. Castel-Branco, N., Páramos, J. & March, R. Perturbation of the metric around a spherical body from a nonminimal coupling between matter and curvature. Phys. Lett. B 735, 25 (2014).
39. Dass, A. & Liberati, S. The gyroscopic frequency of metric f(R) and generalised Brans-Dicke theories: Constraints from Gravity Probe-B. Gen. Relat. Grav. 51, 108 (2019).
40. Shamir, M. F. & Komal, A. Energy bounds in f(R, G) gravity with anisotropic background. Int. J. Geometr. Methods Mod. Phys. 14, 1750169 (2017).
41. Odintsov, S. D., Oikonomou, V. K. & Banerjee, S. Dynamics of inflation and dark energy from F(R, G) gravity. Nucl. Phys. B 938, 935 (2019).
42. Stabile, A. The most general fourth order theory of gravity at low energy. Phys. Rev. D 82, 124026 (2010).
43. Jackson, J. D. Classical Electrodynamics (Wiley, 1999).
44. Landau, L. D. & Lifshitz, E. M. The Classical Theory of Fields (Butterworth-Heinemann, 1980).
45. Wald, R. M. General Relativity (The University of Chicago Press, 1984).
46. Tsamparlis, M. Special Relativity: An Introduction with 200 Problems and Solutions (Springer, 2010).
47. Yepez, J. Einstein's Vierbein Field Theory of Curved Space. e-Print archive: gr-qc/1106.2037.
48. Gasperini, M. Theory of Gravitational Interactions (Springer, 2013).
49. Hoyng, P. Relativistic Astrophysics and Cosmology: A Primer (Springer, 2006).
50. Hawking, S. W. & Ellis, G. F. R. The Large Scale Structure of Space-Time (Cambridge University Press, 1973).
51. Thorne, K. S. Multipole expansions of gravitational radiation. Rev. Mod. Phys. 52, 299 (1980).
52. Blanchet, L. & Damour, T. Radiative gravitational fields in general relativity I. General structure of the field outside the source. Philos. Trans. R. Soc. Lond. A 320, 379 (1986).
53. Ramírez, W. G. & Deriglazov, A. A. Relativistic effects due to gravimagnetic moment of a rotating body. Phys. Rev. D 96, 124013 (2017).
54. Deriglazov, A. A. & Ramírez, W. G. Recent progress on the description of relativistic spin: Vector model of spinning particle and rotating body with gravimagnetic moment in general relativity. Adv. Math. Phys. 2017, 7397159 (2017).
55. Poisson, E. & Will, C. M. Gravity: Newtonian, Post-Newtonian, Relativistic (Cambridge University Press, 2014).
Formalizing Data Deletion in the Context of the Right to Be Forgotten

Sanjam Garg (Department of Electrical Engineering and Computer Sciences, University of California Berkeley, Berkeley, USA), Shafi Goldwasser (Simons Institute for the Theory of Computing, University of California Berkeley, Berkeley, USA), Prashant Nalini Vasudevan (Department of Electrical Engineering and Computer Sciences, University of California Berkeley, Berkeley, USA)

Venue: LNCS. DOI: 10.1007/978-3-030-45724-2_13. arXiv: 2002.10635. Corpus ID: 211296633.

Abstract: The right of an individual to request the deletion of their personal data by an entity that might be storing it - referred to as the right to be forgotten - has been explicitly recognized, legislated, and exercised in several jurisdictions across the world, including the European Union, Argentina, and California. However, much of the discussion surrounding this right offers only an intuitive notion of what it means for it to be fulfilled - of what it means for such personal data to be deleted.

In this work, we provide a formal definitional framework for the right to be forgotten using tools and paradigms from cryptography. In particular, we provide a precise definition of what could be (or should be) expected from an entity that collects individuals' data when a request is made of it to delete some of this data. Our framework captures most, though not all, relevant aspects of typical systems involved in data processing. While it cannot be viewed as expressing the statements of current laws (especially since these are rather vague in this respect), our work offers technically precise definitions that represent possibilities for what the law could reasonably expect, and alternatives for what future versions of the law could explicitly require.

Finally, with the goal of demonstrating the applicability of our framework and definitions, we consider various natural and simple scenarios where the right to be forgotten comes up. For each of these scenarios, we highlight the pitfalls that arise even in genuine attempts at implementing systems offering deletion guarantees, and also describe technological solutions that provably satisfy our definitions. These solutions bring together techniques built by various communities.
Formalizing Data Deletion in the Context of the Right to Be Forgotten. LNCS 12106 (2020). DOI: 10.1007/978-3-030-45724-2_13.

(CLTC, UC Berkeley). The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies. S. Goldwasser: supported in part by the C. Lester Hogan Chair in EECS, UC Berkeley, and Fintech@CSAIL.

Introduction

Everything we do in our lives leaves (or will soon leave) a digital trace, which can be analyzed. Recent advances in capturing and analyzing big data help us reduce traffic congestion, accurately predict human behavior and needs in various situations, and much more. However, this mass collection of data can be used against people as well. Simple examples of this would be to charge individuals higher auto insurance premiums or decline mortgages and jobs based on an individual's profile as presented by the collected data. In the worst case, this wealth of information could be used by totalitarian governments to persecute their citizens years after the data was collected. In such ways, vast collection of personal data has the potential to present a serious infringement of personal liberty. Individuals could perpetually or periodically face stigmatization as a consequence of a specific past action, even one that has already been adequately penalized. This, in turn, threatens democracy as a whole, as it can force individuals to self-censor personal opinions and actions for fear of later retaliation.
One alternative for individuals wanting to keep personal information secret is to simply stay offline, or at least keep such information hidden from entities that are likely to collect it. Yet, this is not always desirable or possible. These individuals might want to share such information with others over an internet-based platform, or obtain a service based on their personal information, such as personalized movie recommendations based on previous movie watching history, or simply driving directions to their destination based on where they want to go. In such cases, it is reasonable to expect that an individual might later change their mind about having this data available to the service provider they sent it to. In order to provide useful functionality while keeping in mind the aforementioned perils of perennial persistence of data, an individual's ability to withdraw previously shared personal information is very important. For example, one might want to request deletion of all personal data contained in one's Facebook account.

However, in many cases, an individual's desire to request deletion of their private data may be in conflict with a data collector's interests. In particular, the data collector may want to preserve the data because of financial incentives or simply because fulfilling these requests is expensive. It would seem that, in most cases, the data collector has nothing to gain from fulfilling such requests. Thus, it seems imperative to have in place legal or regulatory means to grant individuals control over what information about them is possessed by different entities, how it is used, and, in particular, provide individuals the rights to request deletion of any (or all) of their personal data.
And indeed, the legitimacy of this desire to request deletion of personal data is being increasingly widely discussed, codified in law, and put into practice (in various forms) in, for instance, the European Union (EU) [GDP16], Argentina [Car13], and California [CCP18]. The following are illustrative examples:

- The General Data Protection Regulation (GDPR) [GDP16], adopted in 2016, is a regulation in the EU aimed at protecting the data and privacy of individuals in the EU. Article 6 of the GDPR lists conditions under which an entity may lawfully process personal data. The first of these conditions is when "the data subject has given consent to the processing of his or her personal data for one or more specific purposes". And Article 7 states that, "The data subject shall have the right to withdraw his or her consent at any time". Further, Article 17 states that, "The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay" under certain conditions listed there.

- The California Consumer Privacy Act (CCPA), passed in 2018, is a law with similar purposes protecting residents of California. Section 1798.105 of the CCPA states, "A consumer shall have the right to request that a business delete any personal information about the consumer which the business has collected from the consumer", and that "A business that receives a verifiable request from a consumer . . . shall delete the consumer's personal information from its records."

Thus, if a data collector (that operates within the jurisdictions of these laws) wishes to process its consumers' data based on their consent, and wishes to do so lawfully, it would also need to have in place a mechanism to stop using any of its consumers' data. Only then can it guarantee the consumers' right to be forgotten as the above laws require.
However, it is not straightforward to nail down precisely what this means and involves.

Defining Deletion: More than Meets the Eye. Our understanding of what it means to forget a user's data or honor a user deletion request is rather rudimentary, and consequently, the law does not precisely define what it means to delete something. Further, this lack of understanding is reflected in certain inconsistencies between the law and what would naturally seem desirable. For example, Article 7 of the GDPR, while describing the right of the data subject to withdraw consent for processing of personal data, also states, "the withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal." This seems to suggest that it is reasonable to preserve the result of processing performed on user data even if the data itself is requested to be deleted. However, processed versions of user data may encode all or most of the original data, perhaps even inadvertently. For instance, it is known that certain machine learning models end up memorizing the data they were trained on [SRS17,VBE18]. Thus, capturing the intuitive notion of what it means to truly delete something turns out to be quite tricky. In our quest to do so, we ask the following question: How does an honest data collector know whether it is in compliance with the right to be forgotten? Here, by honest we mean a data collector that does in fact intend to guarantee its users' right to be forgotten in the intuitive sense - it wishes to truly forget all personal data it has about them. Our question is about how it can tell whether the algorithms and mechanisms it has in place to handle deletion requests are in fact working correctly.

Honest Data-Collectors. In this work, we focus on the simple case where the data-collector is assumed to be honest. In other words, we are only interested in the data-collectors that aim to faithfully honor all legitimate deletion requests.
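The memorization phenomenon mentioned above can be made concrete with a deliberately extreme toy model (this example is ours, not the paper's, and all names and data in it are hypothetical): a nearest-neighbour "model" is nothing but a verbatim copy of its training set, so erasing the raw database while retaining the trained model deletes nothing at all.

```python
# Toy illustration: a 1-nearest-neighbour "model" memorizes its training
# set verbatim, so deleting the raw database while keeping the model does
# not actually erase the user's record.
records = [
    ("alice", [5.1, 3.5]),   # hypothetical user data
    ("bob",   [4.9, 3.0]),
]

def train(data):
    # the "trained model" is just a closure over a copy of the data
    stored = list(data)
    def predict(x):
        return min(stored, key=lambda r: sum((a - b) ** 2 for a, b in zip(r[1], x)))
    return predict

model = train(records)
del records                   # "delete" the raw database ...
leaked = model([5.1, 3.5])    # ... yet the model reproduces alice's record exactly
print(leaked)                 # ('alice', [5.1, 3.5])
```

Real models are rarely this blatant, but the sketch shows why a notion of deletion that covers only raw records, and not artifacts derived from them, can fail to match the intuitive meaning of forgetting.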
Thus, we have no adversaries in our setting. This deviates from many cryptographic applications where an adversary typically attempts to deviate from honest execution. Note that even in the case of semi-honest adversaries in multiparty computation, the adversary attempts to learn more than what it is supposed to learn while following the protocol specification. In our case, we expect the data-collector to itself follow the prescribed procedures, including deleting any stored information that it is directed to delete.

With the above view, we do not attempt to develop methods by which a data collector could prove to a user that it did indeed delete the user's data. As a remark, we note here that this is in fact impossible in general, as a malicious data collector could always make additional secret copies of user data. Finally, we note that even for this case of law-abiding data-collectors, the problem of defining what it means to delete data correctly is relevant. The goal of our definitions is to provide such data-collectors guidance in designing systems that handle data deletion, and a mechanism to check that any existing systems are designed correctly and are following the law (or some reasonable interpretation of it).

When is it Okay to Delete? Another challenge a data-collector faces in handling deletion requests is in establishing whether a particular deletion request should be honored. Indeed, in some cases a data collector may be legally required to preserve certain information to satisfy legal or archival needs, e.g. a data collector may be required to preserve some payment information that is evidence in a case in trial. This raises the very interesting question of how to determine whether a particular deletion request should indeed be honored, or even what factors should be taken into consideration while making this decision. However, this is not the focus of this work.
Instead, we are only interested in cases where the data-collector does intend (or has already decided) to honor a received deletion request, after having somehow found it legitimate. In such cases, we aim to specify the requirements this places on the data-collector. Our Contributions. In this work, we provide the first precise general notions of what is required of an honest data-collector trying to faithfully honor deletion requests. We say that a data-collector is deletion-compliant if it satisfies our requirements. Our notions are intended to capture the intuitive expectations a user may have when issuing deletion requests. Furthermore, they seem to satisfy the requirements demanded, at least intuitively, by the GDPR and CCPA. However, we note that our definition should not be seen as being equivalent to the relevant parts of these laws -for one, the laws themselves are somewhat vague about what exactly they require in this respect, and also there are certain aspects of data-processing systems that are not captured by our framework (see Sect. 2.2 for a discussion). Instead, our work offers technically precise definitions for data deletion that represent possibilities for interpretations of what the law could reasonably expect, and alternatives for what future versions of the law could explicitly require. Next, armed with these notions of deletion-compliance, we consider various natural scenarios where the right to be forgotten comes up. For each of these scenarios, we highlight the pitfalls that arise even in genuine attempts at writing laws or honest efforts in implementing systems with these considerations. Our definitions provide guidance towards avoiding these pitfalls by, for one, making them explicit as violations of the definitions. In particular, for each of the considered scenarios, we describe technological solutions that provably satisfy our definitions. These solutions bring together techniques built by various communities.
Our Notions In this subsection, we explain our notions of deletion-compliance at a high level, building them up incrementally so as to give deeper insights. The formal definitions are in terms of building blocks from the UC framework [Can01], and details are provided in Sect. 2.1. The Starting Challenge. We start with the observation that a deletion request almost always involves much more than the process of just erasing something from memory. In fact, this issue comes up even in the most seemingly benign deletion requests. For example, consider the very simple case where a user requests deletion of one of her files stored with a data-collector. In this setting, even if the server were to erase the file from its memory, it may be the case that not all information about the file has been deleted. For example, if the files are stored contiguously in memory, it might be possible to recover the size of the file that was deleted. Furthermore, if the files of a user are kept on contiguous parts of the memory, it might be possible to pin-point the owner of the deleted file as well, or in most cases at least be able to tell that there was a file that was deleted. Our Approach: Leave No Trace. In order to account for the aforementioned issues, we take the leave-no-trace approach to deletion in our definitions. In particular, a central idea of our definition is that execution of the deletion request should leave the data collector and the rest of the system in a state that is equivalent (or at least very similar) to one it would have been in if the data that is being deleted was never provided to the data-collector in the first place. The requirement of leave-no-trace places several constraints on the data-collector. First, and obviously, the data that is requested to be deleted should no longer persist in the memory of the data-collector after the request is processed.
Second, as alluded to earlier, the data-collector must also remove the dependencies that other data could have on the data that is requested for deletion. Or at least, the data-collector should erase the other stored information which depends on this data. We note that we diverge from the GDPR in this sense, as it only requires deletion of data rather than what may have been derived from it via processing. Third, less obvious but clearly necessary demands are placed on the data-collector in terms of what it is allowed to do with the data it collects. In particular, the data-collector cannot reveal any data it collects to any external entity. This is because sharing of user data by the data-collector to external entities precludes it from honoring future deletion requests for the shared data. More specifically, on sharing user data with an external entity, the data-collector loses the ability to ensure that the data can be deleted from everywhere where it is responsible for the data being present or known. That is, if this data were never shared with the data collector, then it would not have found its way to the external entity, and thus in order for the system to be returned to such a state after a deletion request, the collector should not reveal this data to the entity. A more concrete consequence of the third requirement above is that the data-collector cannot share or sell user data to third parties. Looking ahead, in some settings this sharing or selling of user data is functionally beneficial and legally permitted as long as the collector takes care to inform the recipients of such data of any deletion requests. For instance, Article 17 of the GDPR says, "Where the controller has made the personal data public and is obliged . . . to erase the personal data, the controller . . .
shall take reasonable steps, including technical measures, to inform controllers which are processing the personal data that the data subject has requested the erasure by such controllers of any links to, or copy or replication of, those personal data." We later see (in Sect. 2.3) how our definition can be modified to handle such cases and extended to cover data collectors that share data with external entities but make reasonable efforts to honor and forward deletion requests. The Basic Structure of the Definition. In light of the above discussion, the basic form of the definition can be phrased as follows. Consider a user Y that shares certain data with a data-collector and later requests for the shared data to be deleted. We refer to this execution as a real world execution. In addition to this user, the data-collector might interact with other third parties. In this case, we are interested in the memory state of the data-collector post-deletion and the communication between the data-collector and the third parties. Next, we define the ideal world execution, which is the same as the real world execution except that the user Y does not share anything with the data-collector and does not issue any deletion requests. Here again we are interested in the memory state of the data-collector and the communication between the data-collector and the third parties. More specifically, we require that the joint distribution of the memory state of the data-collector and the communication between the data-collector and the third parties is the same in the two worlds (or at least very close). Further, this property needs to hold not just for a specific user, but for every user that might interact with the data-collector as part of its routine operation where it is interacting with any number of other users and processing their data and deletion requests as well.
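The real-versus-ideal comparison just described can be illustrated with a deliberately simple toy in Python (our own stand-ins for the collector and the two executions, not the paper's ITM formalism). Notably, even this natural-looking implementation fails the comparison: its id counter leaks that a record was inserted and deleted.

```python
class SimpleCollector:
    """Stand-in for a data-collector: stores records keyed by a
    counter-based id and deletes them on request."""
    def __init__(self):
        self.state = {}
        self.next_id = 0

    def submit(self, datum) -> int:
        rid, self.next_id = self.next_id, self.next_id + 1
        self.state[rid] = datum
        return rid

    def delete(self, rid) -> None:
        del self.state[rid]

# Real execution: third-party data and Y's data arrive; Y's is deleted.
real = SimpleCollector()
real.submit("z-data-1")
y_id = real.submit("y-secret")
real.submit("z-data-2")
real.delete(y_id)

# Ideal execution: Y stays silent; only the third-party data arrives.
ideal = SimpleCollector()
ideal.submit("z-data-1")
ideal.submit("z-data-2")

# The final states differ: the surviving ids (0 and 2 versus 0 and 1)
# betray that something was stored and deleted in between, so this
# collector would not satisfy the comparison.
assert real.state != ideal.state
```

The gap here is exactly the kind of metadata trace the leave-no-trace requirement rules out: the deleted datum is gone, but its former presence is still observable.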
Note that the data-collector does not a priori know when and for what data it will receive deletion requests. A More Formal Notion. Hereon, we refer to the data-collector as X, and the deletion requester as Y. In addition to these two entities, we model all other parties in the system using Z, which we also refer to as the environment. Thus, in the real execution, the data-collector X interacts arbitrarily with the environment Z. Furthermore, in addition to interactions with Z, X at some point receives some data from Y which Y at a later point also requests to be deleted. In contrast, in the ideal execution, Y is replaced by a silent Y_0 that does not communicate with X at all. In both of these executions, the environment Z represents both the rest of the users in the system under consideration and an adversarial entity that possibly instructs Y on what to do and when. Finally, our definition requires that the state of X and the view of Z in the real execution and the ideal execution are similar. Thus, our definition requires that the deletion essentially has the same effect as if the deleted data was never sent to X to begin with. The two executions are illustrated in Fig. 1.

Fig. 1. The real and ideal world executions. In the real world, the deletion-requester talks to the data collector, but not in the ideal world. In the real world, π_1 and π_2 are interactions that contain data that is asked to be deleted by the deletion-requester through the interactions π_D,1 and π_D,2, respectively.

While Y above is represented as a single user sending some data and a corresponding deletion request, we can use the same framework for a more general modeling. In particular, Y can be used to model just the part of a user that contains the data to be deleted, or of multiple users, all of whom want some or all of their data to be deleted. Dependencies in Data.
While the above definition makes intuitive sense, certain user behaviors can introduce dependencies that make it impossible for the data-collector to track and thus delete properly. Consider a data-collector that assigns a pseudonym to each user, which is computed as the output of a pseudo-random permutation P (with the seed kept secret by the data-collector) on the user identity. Imagine a user who registers in the system with his real identity id and is assigned the pseudonym pd. Next, the user re-registers a fresh account using pd as his identity. Finally, the user requests deletion of the first account which used his real identity id. In this case, even after the data-collector deletes the requested account entirely, information about the real identity id is still preserved in its memory, i.e. P^{-1}(pd) = id. Thus, the actions of the user can make it impossible to keep track of and properly delete user data. In our definition, we resolve this problem by limiting the communication between Y and Z. We do not allow Y to send any messages to the environment Z, and require that Y ask for all (and only) the data it sent to be deleted. This implicitly means that the data that is requested to be deleted cannot influence other information that is stored with the data-collector, unless that is also explicitly deleted by the user. Requirement that the Data-Collector Be Diligent. Our definitions of deletion compliance place explicit requirements on the data collector only when a deletion request is received. Nonetheless, these explicit requirements implicitly require the data-collector to organize (or keep track of) the collected data in a way that ensures that deletion requests can be properly handled. For example, our definitions implicitly require the data-collector to keep track of how it is using each user's data. In fact, this book-keeping is essential for deletion-compliance.
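The pseudonym example above can be made concrete with a toy Python sketch (all names are our own, and a truncated HMAC stands in for the keyed pseudo-random permutation P): after the real-identity account is deleted, recomputing P still links the surviving account back to the deleted identity.

```python
import hashlib
import hmac

class ToyCollector:
    """Toy data-collector that pseudonymizes account identities.
    A keyed HMAC stands in for the pseudo-random permutation P."""

    def __init__(self, seed: bytes):
        self.seed = seed
        self.accounts = {}  # pseudonym -> identity used at registration

    def P(self, identity: str) -> str:
        # keyed pseudonymization function (stand-in for the PRP)
        return hmac.new(self.seed, identity.encode(), hashlib.sha256).hexdigest()[:16]

    def register(self, identity: str) -> str:
        pd = self.P(identity)
        self.accounts[pd] = identity
        return pd

    def delete(self, pseudonym: str) -> None:
        del self.accounts[pseudonym]

c = ToyCollector(b"server-secret-seed")
pd = c.register("alice-real-id")  # first account, under the real identity
c.register(pd)                    # second account, re-registered under pd
c.delete(pd)                      # delete the first (real-identity) account

# The first account is gone, yet the surviving account still encodes the
# real identity: recomputing P over candidate identities re-links it.
recovered = [idn for idn in ["alice-real-id", "bob-real-id"]
             if c.P(idn) in c.accounts.values()]
```

Here `recovered` contains the supposedly deleted identity, which is why the definition restricts what the deletion-requester may do with data derived from the collector.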
After all, how can a data-collector delete a user's data if it does not even know where that particular user's data is stored? Thus, a data-collector that follows these implicit book-keeping requirements can be viewed as being diligent. Furthermore, it would be hard (if not impossible) for a data-collector to be deletion-compliant if it is not diligent. As we discuss later, our definition also implies a requirement on the data-collector to have in place authentication mechanisms that ensure that it is sharing information only with the legitimate parties, and that only the user who submitted a piece of data can ask for it to be deleted. Composition Properties. Finally, we also show, roughly, that under an assumption that different users operate independently of each other, a data collector that is deletion-compliant under our definition for a deletion request from a single user is also deletion-compliant for requests from (polynomially) many users (or polynomially many independent messages from a single user). This makes our definition easier to use in the analysis of certain data collectors, as demonstrated in our examples in Sect. 3. Lessons from Our Definitions Our formalization of the notion of data deletion enables us to design and analyze mechanisms that handle data obtained from others and process deletion requests, as demonstrated in Sect. 3. This process of designing systems that satisfy our definition has brought to light a number of properties such a mechanism needs to have in order to be deletion-compliant that may be seen as general principles in this respect. To start with, satisfying our definition even while providing very simple functionalities requires a non-trivial authentication mechanism that uses randomness generated by the server. Otherwise many simple attacks can be staged that lead to observable differences based on whether some specific data was stored and deleted or never stored.
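A minimal sketch of the kind of server-generated randomness this calls for might look as follows (our own illustrative names; Python's `secrets` supplies the server-side randomness). A capability token drawn by the server, rather than anything derived from user-chosen state, is what prevents other parties from reading or deleting a record and observing the difference.

```python
import secrets

class TokenStore:
    """Toy collector that authenticates retrieval and deletion with a
    capability token drawn from server-side randomness. If the token
    were instead derived from user-chosen state (say, a password), an
    adversary knowing that state could act on the user's behalf."""

    def __init__(self):
        self.records = {}  # token -> stored datum

    def store(self, datum) -> str:
        token = secrets.token_hex(16)  # unguessable, server-generated
        self.records[token] = datum
        return token

    def retrieve(self, token):
        return self.records.get(token)

    def delete(self, token) -> None:
        self.records.pop(token, None)

c = TokenStore()
t = c.store("some user data")
assert c.retrieve("not-the-token") is None  # others learn nothing
c.delete(t)
assert c.retrieve(t) is None and not c.records  # no trace remains
```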
The easier case to observe is when, as part of its functionality, the data collector provides a way for users to retrieve data stored with it. In this case, clearly if there is no good authentication mechanism, then one user can look at another user's data and be able to remember it even after the latter user has asked the collector to delete it. More broadly, our definition implicitly requires the data collector to provide certain privacy guarantees -that one user's data is not revealed to others. But even if such an interface is not provided by the collector, one user may store data in another user's name, and then if the latter user ever asks for its data to be deleted, this stored data will also be deleted, and looking at the memory of the collector after the fact would indicate that such a request was indeed received. If whatever authentication mechanism the collector employs does not use any randomness from the collector's side, such an attack may be performed by any adversary that knows the initial state (say the user name and the password) of the user it targets. Another requirement that our definition places on data collectors is that they handle metadata carefully. For instance, care has to be taken to use implementations of data structures that do not inadvertently preserve information about deleted data in their metadata. This follows from our definition as it talks about the state of the memory, and not just the contents of the data structure. Such requirements may be satisfied, for instance, by the use of "history-independent" implementations of data structures [Mic97,NT01], which have these properties. Further, this kind of history-independence in other domains can also be used to provide other functionalities while satisfying our definition. 
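The metadata point can be made concrete with a toy contrast between an arrival-order store and a canonical, history-independent one (our own simplified illustration, not the constructions of [Mic97,NT01]): in the canonical store, the memory state depends only on the current set of items, so a delete leaves it exactly as if the item had never arrived.

```python
class ArrivalOrderStore:
    """Keeps items in insertion order; deleting leaves a hole whose
    position betrays that, and roughly when, something was removed."""
    def __init__(self):
        self.cells = []
    def insert(self, item):
        self.cells.append(item)
    def delete(self, item):
        self.cells[self.cells.index(item)] = None  # visible gap

class CanonicalStore:
    """History-independent: memory is a function of the current set
    alone, not of the order of insertions and deletions."""
    def __init__(self):
        self.cells = []
    def insert(self, item):
        self.cells.append(item)
        self.cells.sort()
    def delete(self, item):
        self.cells.remove(item)

leaky = ArrivalOrderStore()
for x in ["carol", "alice", "bob"]:
    leaky.insert(x)
leaky.delete("alice")
# leaky.cells is ["carol", None, "bob"]: the hole reveals the deletion.

real, ideal = CanonicalStore(), CanonicalStore()
for x in ["carol", "alice", "bob"]:
    real.insert(x)
real.delete("alice")          # alice inserted, then deleted
for x in ["carol", "bob"]:
    ideal.insert(x)           # alice never inserted
assert real.cells == ideal.cells  # identical states: no trace left
```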
For instance, recent work [CY15,GGVZ19,Sch20,BCC+19,BSZ20] has investigated the question of data deletion in machine learning models, and this can be used to construct a data collector that learns such a model based on data given to it, and can later delete some of this data not just from its database, but also from the model itself. Finally, we observe that certain notions of privacy, such as differential privacy [DMNS06], can sometimes be used to satisfy deletion requirements without requiring any additional action from the data collector at all. Very roughly, a differentially private algorithm guarantees that the distribution of its output does not change by much if a small part of its input is changed. We show that if a data collector runs a differentially private algorithm on data that it is given, and is later asked to delete some of the data, it need not worry about updating the output of the algorithm that it may have stored (as long as not too much data is asked to be deleted). Following the guarantee of differential privacy, whether the deleted data was used or not in the input to this algorithm essentially does not matter. Related Work Cryptographic treatment of legal terms and concepts has been undertaken in the past. Prominent examples are the work of Cohen and Nissim [CN19] that formalizes and studies the notion of singling-out that is specified in the GDPR as a means to violate privacy in certain settings, and the work of Nissim et al. [NBW+17] that models the privacy requirements of FERPA using a game-based definition. Recently, the notion of data deletion in machine learning models has been studied by various groups [CY15,GGVZ19,Sch20,BCC+19,BSZ20]. Closest to our work is the paper of Ginart et al. [GGVZ19], which gives a definition for what it means to retract some training data from a learned model, and shows efficient procedures to do so in certain settings like k-means clustering. 
We discuss the crucial differences between our definitions and theirs in terms of scope and modelling in Sect. 2.2. There has been considerable past work on notions of privacy like differential privacy [DMNS06] that are related to our study, but very different in their considerations. Roughly, in differential privacy, the concern is to protect the privacy of each piece of data in a database -it asks that the output of an algorithm running on this database is roughly the same whether or not any particular piece of data is present. We, in our notion of deletion-compliance, ask for something quite different -unless any piece of data is requested to be deleted, the state of the data collector could depend arbitrarily on it; only after this deletion request is processed by the collector do the requirements of our definition come in. In this manner, while differential privacy could serve as a means to satisfy our definition, our setting and considerations in general are quite different from those there. For similar reasons, our definitions are able to require bounds on statistical distance without precluding all utility (and in some cases even perfect deletion-compliance is possible), whereas differential privacy has to work with a different notion of distance between distributions (see [Vad17, Section 1.6] for a discussion). While ours is the first formal definition of data deletion in a general setting, there has been considerable work on studying this question in specific contexts, and in engineering systems that attempt to satisfy intuitive notions of data deletion, with some of it being specifically intended to support the right to be forgotten. We refer the reader to the comprehensive review article by Politou et al. [PAP18] for relevant references and discussion of such work. 
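To illustrate the earlier observation that differential privacy can sometimes satisfy deletion requirements without further action, here is a hedged sketch of a standard Laplace-mechanism count (function names are ours). A stored ε-DP output need not be recomputed when one record is later deleted: by the DP guarantee, its distribution is within an e^ε factor of what it would have been had that record never been present.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # inverse-CDF sampling of a Laplace(0, scale) random variable
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """epsilon-DP count: a counting query has sensitivity 1, so
    Laplace noise of scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

data = ["a", "b", "a", "c"]
published = dp_count(data, lambda r: r == "a", epsilon=1.0)
# If one "a" is later deleted from `data`, the stored `published`
# value remains acceptable as-is: on the smaller database, its
# distribution differs by at most an e^epsilon factor.
```

As the text notes, this reasoning only covers a bounded number of deletions; deleting too many records degrades the guarantee through group privacy.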
Our Framework and Definitions In this section we describe our framework for describing and analyzing data collectors, and our definitions for what it means for a data collector to be deletion-compliant. Our modeling uses building blocks that were developed for the Universal Composability (UC) framework of Canetti [Can01]. First, we present the formal description of this framework and our definitions. Explanations of the framework and definitions, and how we intend for them to be used, are given in Sect. 2.1. In Sect. 2.2, we discuss the various choices made in our modelling and the implicit assumptions and restrictions involved. In Sect. 2.3, we present a weakening of our definition that covers data collectors that share data with external entities, and in Sect. 2.4 we demonstrate some composition properties that our definition has. The Model of Execution. Looking ahead, our approach towards defining deletion-compliance of a data collector will be to execute it and have it interact with certain other parties, and at the end of the execution ask for certain properties of what it stores and its communication with these parties. Following [GMR89,Gol01,Can01], both the data collector and these other parties in our framework are modelled as Interactive Turing Machines (ITMs), which represent the program to be run within each party. Our definition of an ITM is very similar to the one in [CCL15], but adapted for our purposes. Definition 1 (Interactive Turing Machine). An Interactive Turing Machine (ITM) is a (possibly randomized) Turing Machine M with the following tapes: (i) a read-only identifier tape; (ii) a read-only input tape; (iii) a write-only output tape; (iv) a read-write work tape; (v) a single-read-only incoming tape; (vi) a single-write-only outgoing tape; (vii) a read-only randomness tape; and (viii) a read-only control tape. The state of an ITM M at any given point in its execution, denoted by state_M, consists of the content of its work tape at that point.
Its view, denoted by view_M, consists of the contents of its input, output, incoming, outgoing, randomness, and control tapes at that point. The execution of the system consists of several instances of such ITMs running and reading and writing on their own and each other's tapes, and sometimes instances of ITMs being created anew, according to the rules described in this subsection. We distinguish between ITMs (which represent static objects, or programs) and instances of ITMs, or ITIs, that represent instantiations of that ITM. Specifically, an ITI is an ITM along with an identifier that distinguishes it from other ITIs in the same system. This identifier is written on the ITI's identifier tape at the point when the ITI is created, and its semantics will be described in more detail later. In addition to having the above access to its own tapes, each ITI, in certain cases, could also have access to read from or write on certain tapes of other ITIs. The first such case is when an ITI M controls another ITI M′. M is said to control the ITIs whose identifiers are written on its control tape, and for each ITI M′ on this tape, M can read M′'s output tape and write on its input tape. This list is updated whenever, in the course of the execution of the system, a new ITI is created under the control of M. The second case where ITIs have access to each other's tapes is when they are engaged in a protocol. A protocol is described by a set of ITMs that are allowed to write on each other's incoming tapes. Further, any "message" that any ITM writes on any other ITM's incoming tape is also written on its own outgoing tape. As with ITMs, a protocol is just a description of the ITMs involved in it and their prescribed actions and interactions; and an instance of a protocol, also referred to as a session, consists of ITIs interacting with each other (where indeed some of the ITIs may deviate from the prescribed behavior).
Each such session has a unique session identifier (sId), and within each session each participating ITI is identified by a unique party identifier (pId). The identifier corresponding to an ITI participating in a session of a protocol with session identifier sId and party identifier pId is the unique tuple (sId, pId). There will be a small number of special ITIs in our system, as defined below, whose identifiers are assigned differently from the above. Unless otherwise specified, all ITMs in our system are probabilistic polynomial time (PPT) -an ITM M is PPT if there exists a constant c > 0 such that, at any point during its run, the overall number of steps taken by M is at most n^c, where n is the overall number of bits written on the input tape of M during its execution. The Data Collector. We require the behavior of the data collector and its interactions with other parties to be specified by a tuple (X, π, π_D), where X specifies the algorithm run by the data collector, and π, π_D are protocols by means of which the data collector interacts with other entities. Here, π could be an arbitrary protocol (in the simplest case, a single message followed by local processing), and π_D is the corresponding deletion protocol -namely, a protocol to undo/reverse a previous execution of the protocol π. For simplicity, in this work, we restrict the protocols π, π_D to the natural case of the two-party setting. 3 Specifically, each instance of the protocol π that is executed has specifications for a server-side ITM and a client-side ITM. The data collector will be represented in our system by a special ITI that we will also refer to as X. When another ITI in the system, call it W for now, wishes to interact with X, it does so by initiating an instance (or session) of one of the protocols π or π_D. This initiation creates a pair of ITIs -the client and the server of this session -where W controls the client ITI and X the server ITI.
W and X then interact by means of writing to and reading from the input and output tapes of these ITIs that they control. Further details are to be found below. The only assumption we will place on the syntax of these protocols is the following interface between π and π_D. We require that at the end of any particular execution of π, a deletion token is defined that is a function solely of the sId of the execution and its transcript, and that π should specify how this token is computed. The intended interpretation is that a request to delete this instance of π consists of an instance of π_D where the client-side ITI is given this deletion token as input. As we will see later, this assumption does not lose much generality in applications. Recipe for Describing Deletion-Compliance. Analogous to how security is defined in the UC framework, we define deletion-compliance in three steps as follows. First, we define a real execution where certain other entities interact with the data collector ITI X by means of instances of the protocols π and π_D. This is similar to the description of the "real world" in the UC framework. In this setting, we identify certain deletion requests (that is, executions of π_D) that are of special interest for us -namely, the requests that we will be requiring to be satisfied. Next, we define an ideal execution, where the instances of π that are asked to be deleted by these identified deletion requests are never executed in the first place. The "ideal execution" in our setting is different from the "ideal world" in the UC framework in the sense that we do not have an "ideal functionality". Finally, we say that (X, π, π_D) is deletion-compliant if the two execution processes are essentially the same in certain respects. Below, we explain the model of the real execution, the ideal execution, and the notion of deletion-compliance. Real Execution.
The real execution involves the data collector ITI X, and two other special ITIs: the environment Z and the deletion requester Y. By intention, Y represents the part of the system whose deletion requests we focus on and will eventually ask to be respected by X, and Z corresponds to the rest of the world -the (possibly adversarial) environment that interacts with X. Both of these interact with X via instances of π and π_D, with X controlling the server-side of these instances and Z or Y the client-side. The environment Z, which is taken to be adversarial, is allowed to use arbitrary ITMs (ones that may deviate from the protocol) as the client-side ITIs of any instances of π or π_D it initiates. The deletion-requester Y, on the other hand, is the party we are notionally providing the guarantees for, and is required to use honest ITIs of the ITMs prescribed by π and π_D in the instances it initiates, though, unless otherwise specified, it may provide them with any inputs as long as they are of the format required by the protocol. 4 In addition, we require that any instance of π_D run by Y is for an instance of π already initiated by Y. 5 Finally, in our modeling, while Z can send arbitrary messages to Y (thereby influencing its executions), we do not allow any communication from Y back to Z. This is crucial for ensuring that X does not get any "to be deleted" information from other sources. At any point, there is at most one ITI in the system that is activated, meaning that it is running and can read from or write to any tapes that it has access to. Each ITI, while it is activated, has access to a number of tapes that it can write to and read from. Over the course of the execution, various ITIs are activated and deactivated following rules described below. When an ITI is activated, it picks up execution from the point in its "code" where it was last deactivated. Now we provide a formal description of the real execution.
We assume that all parties have a computational/statistical security parameter λ ∈ N that is written on their input tape as 1^λ the first time they are activated. 6 The execution consists of a sequence of activations, where in each activation a single participant (either Z, Y, X or some ITM) is activated, and runs until it writes on the incoming tape of another (at most one other) machine, or on its own output tape. Once this write happens, the writing participant is deactivated (its execution is paused), and another party is activated next -namely, the one on whose incoming tape the message was written; or alternatively, if the message was written to the output tape then the party controlling the writing ITI is activated. If no message is written to the incoming tape (and its own output tape) of any party, then Z is activated. The real execution proceeds in two phases: (i) the alive phase, and (ii) the termination phase.

Alive Phase: This phase starts with an activation of the environment Z, and Z is again activated if any other ITI halts without writing on a tape. The various ITIs run according to their code, and are allowed to act as follows:
- The environment Z when active is allowed to read the tapes it has access to, run, and perform any of the following actions:
  • Write an arbitrary message on the incoming tape of Y.
  • Write on the input tape of any ITI that it controls (from protocol instances initiated in the past).
  • Initiate a new protocol instance of π or π_D with X, whereupon the required ITIs are created and Z is given control of the client-side ITI of the instance and may write on its input tape. At the same time, X is given control of the corresponding server-side ITI that is created.
  • Pass on activation to X or Y.
  • Declare the end of the Alive Phase, upon which the execution moves to the Terminate Phase. This also happens if Z halts.
- The deletion-requester Y on activation can read the tapes it has access to, run, and perform any of the following actions:
  • Write on the input tape of any ITI that it controls.
  • Initiate a new instance of π or π_D with X, and write on the input tape of the created client-side ITI.
- The data collector X on activation can read the tapes it has access to, run, and write on the input tape of any ITI that it controls.
- Any other ITI that is activated is allowed to read any of the tapes that it has access to, and write to either the incoming tape of another ITI in the protocol instance it is a part of, or on its own output tape.

Terminate Phase: In this phase, the various ITIs are allowed the same actions as in the Alive phase. The activation in this phase proceeds as follows:
1. First, each client-side ITI for π that was initiated by Y in the Alive phase is sequentially activated enough times until each one of them halts.
2. For any instance of π for which a client-side ITI was initiated by Y and which was executed to completion, an instance of π_D is initiated with input the deletion token for that instance of π (except if such an instance of π_D was already initiated).
3. Each client-side ITI for instances of π_D that were initiated by Y in the Alive phase or in the previous step is sequentially activated enough times until each one of them halts.

We denote by EXEC^{X,π,π_D}_{Z,Y}(λ) the tuple (state_X, view_X, state_Z, view_Z) resulting at the end of the above-described real execution with security parameter λ. Ideal Execution. Denote by Y_0 the special Y that is completely silent -whenever it is activated, it simply halts. In particular, it does not initiate any ITIs and does not write on the incoming tape of any other machine. A real execution using such a Y_0 as the deletion-requester is called an ideal execution.
We denote by EXEC X ,π,πD Z,Y0 (λ) the tuple (state X , view X , state Z , view Z ) resulting at the end of an ideal execution with data collector X and environment Z, and with security parameter λ. We are now ready to present our definition for the deletion-compliance of data collectors, which is as follows.

Definition 2 (Statistical Deletion-Compliance). Given a data-collector (X , π, π D ), an environment Z, and a deletion-requester Y, let (state R,λ X , view R,λ Z ) denote the corresponding parts of the real execution EXEC X ,π,πD Z,Y (λ), and let (state I,λ X , view I,λ Z ) represent those of the ideal execution EXEC X ,π,πD Z,Y0 (λ). We say that (X , π, π D ) is statistically deletion-compliant if, for any PPT environment Z, any PPT deletion-requester Y, and for all unbounded distinguishers D, there is a negligible function ε such that for all λ ∈ N:

| Pr[D(state R,λ X , view R,λ Z ) = 1] − Pr[D(state I,λ X , view I,λ Z ) = 1] | ≤ ε(λ)

In other words, the statistical distance between the two distributions above is at most ε(λ). If D above is required to be computationally bounded (allowed to run only in time polynomial in λ), then we get the weaker notion of computational deletion-compliance. Analogously, if ε(λ) is required to be 0, then we get the stronger notion of perfect deletion-compliance.

Explanation of the Definition

As indicated earlier, the central idea our definition is built around is that the processing of a deletion request should leave the data collector and the rest of the system in a state that is similar to one it would have been in if the data that was deleted was never given to the collector in the first place. This ensures that there is no trace left of deleted data, even in metadata maintained by some of the entities, etc. The first question that arises here is which parts of the system to ask this of. It is clear that the deleted data should no longer persist in the memory of the data collector.
A less obvious but clearly necessary demand is that the data collector also not reveal this data to any user other than the one it belongs to. Otherwise, unless whomever this data is revealed to provides certain guarantees for its later deletion, the data collector loses the ability to really delete this data from locations it reached due to actions of the data collector itself, which is clearly undesirable. Once so much is recognized, the basic form of the definition is clear from a cryptographic standpoint. We fix any user, let the user send the collector some data and then request for it to be deleted, and look at the state of the collector at this point together with its communication with the rest of the system so far. We also look at the same in a world where this user did not send this data at all. And we ask that these are distributed similarly. We then note that this property needs to hold not just when the collector is interacting solely with this user, but is doing so as part of its routine operation where it is interacting with any number of other users and processing their data and deletion requests as well. The UC Framework. In order to make this definition formal, we first need to model all entities in a formal framework that allows us to clearly talk about the "state" or the essential memory of the entities, while also being expressive enough to capture all, or at least most, data collectors. We chose the UC framework for this purpose as it satisfies both of these properties and is also simple enough to describe clearly and succinctly. In this framework, the programs that run are represented by Interactive Turing Machines, and communication is modelled as one machine writing on another's tape. The state of an entity is then captured by the contents of the work tape of the machine representing it, and its view by whatever was written on its tapes by other machines.
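To make the real-vs-ideal comparison in Definition 2 concrete: since the supremum over all distinguishers D of the advantage |Pr[D(real) = 1] − Pr[D(ideal) = 1]| equals the statistical (total variation) distance, deletion-compliance can be illustrated by computing that distance directly for small, finite distributions. The following Python sketch is purely illustrative; the outcome names and probabilities are invented for the example and are not part of the formalism.

```python
def statistical_distance(p, q):
    """Total variation distance between two distributions,
    each given as an outcome -> probability dictionary."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# Toy distributions over (state_X, view_Z) outcomes: in the "real"
# execution a residual trace of deleted data survives with
# probability 0.1; in the "ideal" execution it never existed.
real = {("empty", "ack"): 0.9, ("trace", "ack"): 0.1}
ideal = {("empty", "ack"): 1.0}

advantage = statistical_distance(real, ideal)  # 0.1
```

A deletion-compliant collector must drive this quantity below every inverse polynomial in λ; here the residual "trace" outcome alone contributes distance 0.1.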
This framework does impose certain restrictions on the kind of executions that it captures, though, and this is discussed later, in Sect. 2.2. Protocols and Interaction. Another choice of formality motivated by its usefulness in our definition is to have all communication with the data collector X be represented by instances of a protocol π. It should be noted that the term "protocol" here might belie the simplicity of π, which could just involve the sending of a piece of data by a user of the system to the data collector X . This compartmentalisation of communication into instances of π is to let us (and the users) refer directly to specific instances later and request their deletion using instances of the deletion protocol π D . As a reference to instances of π, we use a "deletion token" that is computable from the transcript of that instance; this is precise enough to enable us to refer to specific pieces of data that are asked to be deleted, and loose enough to capture many natural systems that might be implemented in reality for this purpose. The Deletion-Requester Y and the Environment Z. The role of the user in the above rudimentary description is played by the deletion-requester Y in our framework. In the "real" execution, Y interacts with the data collector X over some instances of π, and then asks for all information contained in these instances to be deleted. In the "ideal" execution, Y is replaced by a silent Y 0 that does not communicate with X at all. And both of these happen in the presence of an environment Z that interacts arbitrarily with X (through instances of π and π D ); this Z is supposed to represent both the rest of the users in the system that X interacts with, as well as an adversarial entity that, in a sense, attempts to catch X if it is not handling deletions properly.
By asking that the state of X and the view of Z in both these executions be similar, we are asking that the deletion essentially have the same effect on the world as the data never being sent. It is to be noted that while Y here is represented as a single entity, it does not necessarily represent just a single "user" of the system or an entire or single source of data. It could represent just a part of a user that contains the data to be deleted, or represent multiple users, all of whom want their data to be deleted. In other words, if a data collector X is deletion-compliant under our definition, and at some point in time has processed a certain set of deletion requests, then as long as the execution of the entire world at this point can be separated into Z and Y that follow our rules of execution, the deletion-compliance of X promises that all data that was sent to X from Y will disappear from the rest of the world. Using the Definition. Our framework and definition may be used for two purposes: (i) to guide the design of data collectors X that are originally described within our framework (along with protocols π and π D ) and wish to handle deletion requests well, and (ii) to analyse the guarantees provided by existing systems that were not designed with our framework in mind and which handle data deletion requests. In order to use Definition 2 to analyze the deletion-compliance of pre-existing systems, the first step is to rewrite the algorithm of the data collector to fit within our framework. This involves defining the protocols π and π D representing the communication between "users" in the system and the data collector. This part of the process involves some subjectivity, and care has to be taken to not lose crucial but non-obvious parts of the data collector, such as metadata and memory allocation procedures, in this process. The examples of some simple systems presented in Sect. 
3 illustrate this process (though they do not talk about modelling lower-level implementation details). Once the data collector and the protocols are described in our framework, the rest of the work in seeing whether they satisfy our definition of deletion-compliance is well-defined. Discussion A number of choices were made in the modelling and the definition above, the reasons for some of which are not immediately apparent. Below, we go through a few of these and discuss their place in our framework and definition. Modelling Interactions. The first such choice is to include in the model the entire communication process between the data collector and its users rather than look just at what goes on internally in the data collector. For comparison, a natural and simpler definition of data deletion would be to consider a data collector that has a database, and maintains the result of some computation on this database. It then receives requests to delete specific rows in the database, and it is required to modify both the database and the processed information that it maintains so as to make it look like the deleted row was never present. The definition of data deletion in machine learning by Ginart et al. [GGVZ19], for instance, is of this form. The first and primary reason for this choice is that the intended scope of our definitions is larger than just the part of the data collector that maintains the data. We intend to analyze the behavior of the data collector as a whole, including the memory used to implement the collector's algorithm and the mechanisms in place for interpreting and processing its interactions with external agents. For instance, as we discuss in Sect. 3, it turns out that any data collector that wishes to provide reasonable guarantees to users deleting their data needs to have in place a non-trivial authentication mechanism.
This requirement follows easily from the requirements of our definition, but would not be apparent if only the part of the collector that directly manages the data is considered. The second reason is that while the simpler kind of definition works well when the intention is to apply it to collectors that do indeed have such a static database that is given to them, it fails to capture crucial issues that arise in a more dynamic setting. Our inclusion of the interactions between parties in our definition enables us to take into account dependencies among the data in the system, which in turn enables us to keep our demands on the data collector more reasonable. Consider, for example, a user who sends its name to a data collector that responds with a hash of it under some secret hash function. And then the user asks the same collector to store a piece of data that is actually the same hash, but there is no indication given to the collector that this is the case. At some later time, the user asks the collector to delete its name. To a definition that only looks at the internal data storage of the collector, the natural expectation after this deletion request is processed would be that the collector's state should look as though it never learnt the user's name. However, this is an unreasonable demand: since the collector has no idea that the hash of the name was also given to it, it is not reasonable to expect that it also find the hash (which contains information about the name) and delete it. And indeed, under our definition, the collector is forgiven for not doing so unless the user explicitly asks for the hash also to be deleted. If our modelling had not kept track of the interactions between the collector and the user, we would not have been able to make this relaxation. Restrictions on Y. Another conspicuous choice is not allowing the deletion-requester Y in our framework to send messages to the environment Z.
This is, in fact, how we handle cases like the one just described where there are dependencies between the messages that the collector receives that are introduced on the users' side. By requiring that Y does not send messages to Z and that all interaction between Y and X are asked to be deleted over the course of the execution, we ensure that any data that depends on X 's responses to Y's messages is also asked to be deleted. This admits the case above where both the name and the hash are requested to be deleted, and requires X to comply with such a request; but it excludes the case where only the name is asked to be deleted (as then the hash would have to be sent by Z, which has no way of learning it), thus excusing X for not deleting it. Also note that this restriction does not lose any generality outside of excluding the above kind of dependency. Take any world in which a user (or users) asks for some of its messages to be deleted, and where the above perverse dependency does not exist between these and messages not being asked to be deleted. Then, there is a pair of environment Z and deletion-requester Y that simulates that world exactly, and the deletion-compliance guarantees of X have the expected implications for such a deletion request. The same is true of the restriction that all of the messages sent by Y have to be requested to be deleted rather than just some of them -it does not actually lose generality. And also of the fact that Y is a single party that is asking for deletion rather than a collection -a set of users asking for deletion can be simulated by just one Y that does all their work. The Ideal Deletion-Requester. An interesting variant of our definition would be one in which the Y is not replaced by a silent Y 0 in the ideal world, but by another Y that sends essentially the same kinds of messages to X , but with different contents. 
Currently, our definition says that, after a deletion request, the collector does not even remember that it had some data that was deleted. This might be unnecessarily strong for certain applications, and this modification would relax the requirement to saying that it is fine for the collector to remember that it had some data that was deleted, just not what the data was. The modification is not trivial, though, as in general the number and kinds of messages that Y sends could depend on the contents of its messages and the responses from X , which could change if the contents are changed. Nevertheless, under the assumption that Y behaves nicely in this sense, such an alternative definition could be stated and would be useful in simple applications. Choices that Lose Generality. There are certain assumptions in our modelling that do break from reality. One of these is that all machines running in the system are sequential. Due to this, our definition does not address, for instance, the effects of race conditions in the data collector's implementation. This assumption, however, makes our definition much simpler and easier to work with, while still keeping it meaningful. We leave it as an open question to come up with a reasonable generalization of our definition (or an alternative to it) that accounts for parallel processing. Another such assumption is that, due to the order of activations and the fact that activation is passed on in the execution by ITIs writing on tapes, we do not give Z the freedom to interlace its messages freely with those being sent by Y to X . It could happen, for instance, that X is implemented poorly and simply fails to function if it does not receive all messages belonging to a particular protocol instance consecutively. 
This failure is not captured by our definition as is, but this is easily remedied by changing the activation rules in the execution to pass activation back to Z after each message from (an ITI controlled by) Y to X is sent and responded to. We do not do this for the sake of simplicity. Finally, our modelling of the data collector's algorithm being the entire ITM corresponds to the implicit assumption of reality that the process running this algorithm is the only one running on the system. Or, at least, that the distinguisher between the real and ideal worlds does not get to see how memory for this process is allocated among all the available memory in the system, does not learn about scheduling in the system, etc. Side-channel attacks involving such information and definitions that provide protection against these would also be interesting for future study, though even more exacting than our definition. Conditional Deletion-Compliance As noted in earlier sections, any data collector that wishes to be deletioncompliant under Definition 2 cannot reveal the data that is given to it by a user to any other entity. There are several situations, however, where such an action is desirable and even safe for the purposes of deletion. And rules for how the collector should act when it is in fact revealing data in this way is even specified in some laws -Article 17 of the GDPR, for instance, says, "Where the controller has made the personal data public and is obliged . . . to erase the personal data, the controller, taking account of available technology and the cost of implementation, shall take reasonable steps, including technical measures, to inform controllers which are processing the personal data that the data subject has requested the erasure by such controllers of any links to, or copy or replication of, those personal data." Consider, for instance, a small company X that offers storage services using space it has rented from a larger company W. 
X merely stores indexing information on its end and stores all of its consumers' data with W, and when a user asks for its data to be deleted, it forwards (an appropriately modified version of) this request to W. Now, if W is deletion-compliant and deletes whatever data X asks it to, it could be possible for X to act in a way that ensures that the state of the entire system composed of X and W has no information about the deleted data. In other words, conditioned on some deletion-compliance properties of the environment (that includes W here), it is reasonable to expect deletion guarantees even from collectors that reveal some collected data. In this subsection, we present a definition of conditional deletion-compliance that captures this. Specifically, we consider the case where the environment Z itself is deletion-compliant, though in a slightly different sense than Definition 2. In order to define this, we consider the deletion-compliance of a data collector X running its protocols (π, π D ) in the presence of other interaction going on in the system. So far, in our executions involving (X , π, π D ), we essentially required that Y and Z only interact with X by means of the protocols π and π D . Now we relax this requirement and, in both phases of execution, allow an additional set of protocols Φ = {φ 1 , . . .} that can be initiated by X to be run between X and Z (but not Y) during the execution. We denote an execution involving X , Z and Y under these rules by EXEC X ,π,πD Z,Y,Φ . Finally, we also consider executions where, additionally, we also let X write on the incoming tape of Y. We call such an execution an auxiliary execution, and denote it by AEXEC X ,π,πD Z,Y,Φ . We define the following notion of auxiliary deletion-compliance that will be the condition we place on the environment in our eventual definition of conditional deletion-compliance.
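Before turning to the formal definitions, the storage-forwarding setup just described can be sketched concretely. The following Python sketch is only an illustrative model under stated assumptions: the class names are invented, an in-memory object stands in for W, and random handles are one possible way for X to avoid retaining metadata (such as a counter) about records that have been deleted.

```python
import secrets

class RemoteStore:
    """Stand-in for the larger company W, assumed deletion-compliant."""
    def __init__(self):
        self._data = {}

    def put(self, handle, value):
        self._data[handle] = value

    def delete(self, handle):
        self._data.pop(handle, None)

class ForwardingCollector:
    """Stand-in for X: keeps only an index locally, stores the
    consumers' data with W, and forwards deletion requests to W."""
    def __init__(self, remote):
        self._remote = remote
        self._index = {}  # user key -> remote handle

    def insert(self, key, value):
        # A random handle (rather than a sequential one) avoids leaving
        # a counter behind that would betray how many records once existed.
        handle = secrets.token_hex(16)
        self._index[key] = handle
        self._remote.put(handle, value)

    def delete(self, key):
        handle = self._index.pop(key, None)
        if handle is not None:
            self._remote.delete(handle)  # the forwarded deletion request

w = RemoteStore()
x = ForwardingCollector(w)
x.insert("alice", "alice's data")
x.delete("alice")
```

If W honours the forwarded request, the combined state of X and W after the deletion matches one in which Alice's data was never inserted; this is exactly the kind of guarantee that holds only conditioned on the environment's own deletion behaviour.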
Definition 3 (Auxiliary Deletion-Compliance). Given a data-collector (X , π, π D ), an environment Z, a deletion-requester Y, and a set of protocols Φ, let (state R,λ X , view R,λ Z ) denote the corresponding parts of the auxiliary execution AEXEC X ,π,πD Z,Y,Φ (λ), and (state I,λ X , view I,λ Z ) the corresponding parts of the ideal auxiliary execution AEXEC X ,π,πD Z,Y0,Φ (λ). We say that (X , π, π D ) is statistically auxiliary-deletion-compliant in the presence of Φ if, for any PPT environment Z, any PPT deletion-requester Y, and for all unbounded distinguishers D, there is a negligible function ε such that for all λ ∈ N:

| Pr[D(state R,λ X , view R,λ Z ) = 1] − Pr[D(state I,λ X , view I,λ Z ) = 1] | ≤ ε(λ)

Note that we do not ask X for any guarantees on being able to delete executions of the protocols in Φ. It may be seen that any data collector (X , π, π D ) that is deletion-compliant is also auxiliary deletion-compliant in the presence of any Φ, since it never runs any of the protocols in Φ. We say that a data collector X is conditionally deletion-compliant if, whenever it is interacting with an environment that is auxiliary-deletion-compliant, it provides meaningful deletion guarantees.

Definition 4 (Conditional Deletion-Compliance). Given a data-collector (X , π, π D ), an environment Z, a deletion-requester Y, and a pair of protocols Φ = (φ, φ D ), let (state R,λ X , state R,λ Z ) denote the corresponding parts of the real execution EXEC X ,π,πD Z,Y,Φ (λ), and (state I,λ X , state I,λ Z ) the corresponding parts of the ideal execution EXEC X ,π,πD Z,Y0,Φ (λ).
We say that (X , π, π D ) is conditionally statistically deletion-compliant in the presence of Φ if, for any PPT environment Z such that (Z, φ, φ D ) is statistically auxiliary-deletion-compliant in the presence of (π, π D ), any PPT deletion-requester Y, and for all unbounded distinguishers D, there is a negligible function ε such that for all λ ∈ N:

| Pr[D(state R,λ X , state R,λ Z ) = 1] − Pr[D(state I,λ X , state I,λ Z ) = 1] | ≤ ε(λ)

One implication of X being conditionally deletion-compliant is that if, in some execution, it is found that data that was requested of X to be deleted is still present in the system in some form, then this is not due to a failure on the part of X , but was because the environment Z was not auxiliary-deletion-compliant and hence failed to handle deletions correctly. A setup like the one described at the beginning of this subsection is studied as an example of a conditionally deletion-compliant data collector in Sect. 3.1.

Properties of Our Definitions

In this section, we demonstrate a few properties of our definition of deletion-compliance that are meaningful to know on their own and will also make analyses of data collectors we design in later sections simpler. In order to describe them, we first define certain special classes of deletion-requesters. The first is one where we limit the number of protocol instances the deletion-requester Y is allowed to initiate.

Definition 5. For k ∈ N, a deletion-requester Y is said to be k-representative if, when interacting with a data collector X running (π, π D ), it initiates at most k instances of π with X .

The other is a class of deletion-requesters intended to represent the collected actions of several 1-representative deletion-requesters operating independently of each other. In other terms, the following represents, say, a collection of users that interact with a data collector by sending it a single message each, and further never interact with each other.
This is a natural circumstance that arises in several situations of interest, such as when people respond to a survey or submit their medical records to a hospital, for example. Hence, even deletion-compliance guarantees that hold only in the presence of such deletion-requesters are already meaningful and interesting.

Definition 6. A deletion-requester Y is said to be oblivious if, when interacting with a data collector X running (π, π D ), for any instance of π that it initiates, it never accesses the output tape of the corresponding client-side ITI except when running π D to delete this instance, whereupon it merely computes the deletion token and provides it as input to π D .

Note that the deletion-requester Y not accessing the output tapes does not necessarily mean that the entities or users that it notionally represents similarly do not look at the responses they receive from the data collector: as long as each user in a collection of users does not communicate anything about such responses to another user, the collection may be faithfully represented by an oblivious Y. Similarly, an oblivious Y could also represent a single user who sends multiple messages to the data collector, under the condition that the content of these messages, and whether and when the user sends them, does not depend on any information it receives from the data collector.

We also quantify the error that is incurred by a data collector in its deletion-compliance as follows. In our definition of deletion-compliance (Definition 2), we required this error to be negligible in the security parameter.

Definition 7 (Deletion-Compliance Error). Let k ∈ N. Given a data-collector (X , π, π D ), an environment Z and a deletion-requester Y, denote by (state R,λ X , view R,λ Z ) the corresponding parts of EXEC X ,π,πD Z,Y (λ), and denote by (state I,λ X , view I,λ Z ) the corresponding parts of EXEC X ,π,πD Z,Y0 (λ).
The (statistical) deletion-compliance error of (X , π, π D ) is a function ε : N → [0, 1] where for λ ∈ N, the function value ε(λ) is set to be the supremum, over all PPT environments Z, all PPT deletion-requesters Y, and all unbounded distinguishers D, of the following quantity when all parties are given λ as the security parameter:

| Pr[D(state R,λ X , view R,λ Z ) = 1] − Pr[D(state I,λ X , view I,λ Z ) = 1] |

The oblivious deletion-compliance error is defined similarly, but only quantifying over all oblivious PPT deletion-requesters Y. And the k-representative deletion-compliance error is defined similarly by quantifying over all k-representative PPT Y's.

We show that, for oblivious deletion-requesters, the deletion-compliance error of any data collector (X , π, π D ) grows at most linearly with the number of instances of π that are requested to be deleted. In other words, if k different users of X ask for their information to be deleted, and they all operate independently in the sense that none of them looks at the responses from X to any of the others, then the error that X incurs in processing all these requests is at most k times the error it incurs in processing one deletion request. Apart from being interesting on its own, our reason for proving this theorem is that in the case of some data collectors that we construct in Sect. 3, it turns out to be much simpler to analyze the 1-representative deletion-compliance error than the error for a generic deletion-requester. The following theorem then lets us go from the 1-representative error to the error for oblivious deletion-requesters that make more deletion requests.

Theorem 1. For k ∈ N and any data collector (X , π, π D ), the k-representative oblivious deletion-compliance error is no more than k times its 1-representative deletion-compliance error.

We defer the proof of the above theorem to the full version.
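Theorem 1 is established by a hybrid argument that is deferred to the full version; the toy computation below only illustrates, in Python, the measure-theoretic fact such arguments rely on: statistical distance is subadditive across independent components, so k independent per-request errors of at most ε accumulate to a joint error of at most kε. The distributions used here are invented for the example.

```python
def statistical_distance(p, q):
    """Total variation distance between two finite distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def independent_joint(p, q):
    """Joint distribution of two independent components."""
    return {(x, y): px * qy for x, px in p.items() for y, qy in q.items()}

# Two real/ideal pairs with per-component errors 0.2 and 0.1.
p1, q1 = {"a": 0.8, "b": 0.2}, {"a": 1.0}
p2, q2 = {"c": 0.9, "d": 0.1}, {"c": 1.0}

joint_error = statistical_distance(independent_joint(p1, p2),
                                   independent_joint(q1, q2))
# joint_error is 0.28, which is at most the sum 0.2 + 0.1
```

The bound 0.2 + 0.1 is not tight here (the joint error is 0.28), which matches the one-sided nature of the theorem: k times the per-request error is an upper bound.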
We also show that, given two data collectors that are each deletion-compliant, their combination is also deletion-compliant, assuming obliviousness of deletion-requesters. To be more precise, given a pair of data collectors (X 1 , π 1 , π 1,D ) and (X 2 , π 2 , π 2,D ), consider the "composite" data collector ((X 1 , X 2 ), (π 1 , π 2 ), (π 1,D , π 2,D )) that works as follows:
- An instance of (π 1 , π 2 ) is either an instance of π 1 or of π 2 . Similarly, an instance of (π 1,D , π 2,D ) is either an instance of π 1,D or of π 2,D .
- The collector (X 1 , X 2 ) consists of a simulation of X 1 and of X 2 , each running independently of the other.
- When processing an instance of π 1 or π 1,D , it forwards the messages to and from its simulation of X 1 , and similarly to X 2 for π 2 or π 2,D .
- The state of (X 1 , X 2 ) consists of the states of its simulations of X 1 and X 2 .

Such an X would represent, for instance, two data collectors that operate separately but deal with the same set of users. We show that, if the constituent data collectors are deletion-compliant, then under the condition of the deletion-requester being oblivious, the composite data collector is also deletion-compliant.

Theorem 2. If (X 1 , π 1 , π 1,D ) and (X 2 , π 2 , π 2,D ) are both statistically deletion-compliant, then the composite data collector ((X 1 , X 2 ), (π 1 , π 2 ), (π 1,D , π 2,D )) is statistically deletion-compliant for oblivious deletion-requesters.

We give the full proof of Theorem 2 in the full version. The above theorem extends to the composition of any k data collectors in this manner, where there is a loss of a factor of k in the oblivious deletion-compliance error (this will be evident from the proof below).

Proof of Theorem 2. The theorem follows by first showing that the composite collector is deletion-compliant for 1-representative deletion-requesters, and then applying Theorem 1. Any 1-representative deletion-requester Y interacts either only with (the simulation of) X 1 or with X 2 .
And since both of these are deletion-compliant, the state of (X 1 , X 2 ) and the view of the environment are similarly distributed in both real and ideal executions. Thus, ((X 1 , X 2 ), (π 1 , π 2 ), (π 1,D , π 2,D )) is 1-representative deletion-compliant. Applying Theorem 1 now gives us the theorem.

Scenarios

In this section, we present examples of data collectors that satisfy our definitions of deletion-compliance with a view to illustrate both the modelling of collectors in our framework, and the aspects of the design of such collectors that are necessitated by the requirement of such deletion-compliance. In the interest of space, we only present two of our data collectors here, and defer discussion of the ones employing differential privacy and data deletion in machine learning to the full version.

Data Storage and History-Independence

Consider the following ostensibly simple version of data storage. A company wishes to provide the following functionality to its users. A user can ask the company to store a single piece of data, say their date-of-birth or a password. At a later point, the user can ask the company to retrieve this data, whence the company sends this stored data back to the user. And finally, the user can ask for this data to be deleted, at which point the company deletes any data the user has asked to be stored. While a simple task, it is still not trivial to implement the deletion here correctly. The natural way to implement these functionalities is to use a dictionary data structure that stores key-value pairs and supports insertion, deletion and lookup operations. The collector could then store the data a user sends as the value and use a key that is somehow tied to the user, say the user's name or some other identifier. Unless care is taken, however, such data structures could prove insufficient: data that has been deleted could still leave a trace in the memory implementing the data structure.
A pathological example is a dictionary that, to indicate that a certain key-value pair has been deleted, simply appends the string "deleted" to the value; note that such a dictionary can still provide valid insertion, deletion and lookup. While actual implementations of dictionaries do not explicitly maintain "deleted" data in this manner, no special care is usually taken to ensure that information about such data does not persist, for instance, in the metadata. The simplest solution to this problem is to use an implementation of such a data structure that explicitly ensures that the above issue does not occur. History independent data structures, introduced by Micciancio [Mic97], are implementations of data structures such that their representation in memory at any point in time reveals only the "content" of the data structure at that point, and not the history of the operations (insertion, deletion, etc.) performed that resulted in this content. In particular, this implies that an insertion of some data into such a data structure followed by a deletion of the same data would essentially have the same effect on memory as not having done either in the first place. More formally, these are described as follows by Naor and Teague [NT01]. Any abstract data structure supports a set of operations, each of which, without loss of generality, returns a result (which may be null). Two sequences of operations S 1 and S 2 are said to produce the same content if for any sequence T , the results returned by T with the prefix S 1 are the same as the results with the prefix S 2 . An implementation of a data structure takes descriptions of operations and returns the corresponding results, storing what it needs to in its memory. Naor and Teague then define history independence as a property of how this memory is managed by the implementation. Definition 8.
An implementation of a data structure is history independent if any two sequences of operations that produce the same content also induce the same distribution on the memory representation under the implementation.

If data is stored by the data collector in a history independent data structure that supports deletion, then being deletion-compliant becomes a lot simpler, as the property of history independence helps satisfy much of the requirements. In our case, we will make use of a history-independent dictionary, a data structure defined as follows. History-independent dictionaries were studied and constructed by Naor and Teague [NT01].

Definition 9. A dictionary is a data structure that stores key-value pairs, denoted by (key, value), and supports insertion, lookup, and deletion operations (spelled out below).

Our current approach, then, is to implement the data storage using a history-independent dictionary as follows. When a user sends a (key, value) pair to be stored, we insert it into the dictionary. When a user asks for the value stored under a key key, we look it up in the dictionary and return it. When a user asks to delete whatever is stored under the key key, we delete this from the dictionary. And the deletion, due to history-independence, would remove all traces of anything that was deleted.

There is, however, still an issue that arises from the fact that the channels in our model are not authenticated. Without authentication, any entity that knows a user's key could use it to learn from the data collector whether this user has any data stored with it. And later if the user asks for deletion, the data might be deleted from the memory of the collector, but the other entity has already learnt it, which it could not have done in an ideal execution.
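Definition 8 can be made concrete with a toy example: a dictionary whose memory layout is a deterministic, canonical function of its current content — here, a list of (key, value) pairs kept sorted by key — is history independent by construction, since any two operation sequences that produce the same content yield the identical representation. This is only an illustrative sketch under our own naming (`HIDict`), not the Naor-Teague construction:

```python
import bisect

class HIDict:
    """Toy history-independent dictionary: memory is a list of (key, value)
    pairs kept sorted by key, so the representation depends only on the
    current content, never on the order of past operations."""

    def __init__(self):
        self._entries = []  # canonical representation: sorted (key, value) pairs

    def insert(self, key, value):
        i = bisect.bisect_left(self._entries, (key,))
        if i < len(self._entries) and self._entries[i][0] == key:
            return  # key already in use: do nothing
        self._entries.insert(i, (key, value))

    def lookup(self, key):
        i = bisect.bisect_left(self._entries, (key,))
        if i < len(self._entries) and self._entries[i][0] == key:
            return self._entries[i][1]
        return None  # stands in for the "no such key" result ⊥

    def delete(self, key):
        i = bisect.bisect_left(self._entries, (key,))
        if i < len(self._entries) and self._entries[i][0] == key:
            del self._entries[i]  # leaves no trace of the deleted pair
```

Inserting a pair and then deleting it leaves exactly the same memory representation as never having inserted it, which is the property the data collector below relies on.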
In order to deal with this, the data collector has to implement some form of authentication; and further, this authentication, as seen by the above example, has to use some randomness (or perhaps pseudorandomness) generated on the data collector's side. We implement the simplest form of authentication that suffices for this, and the resulting data collector H is described informally as follows.

The data collector H maintains a history-independent dictionary Dict. Below, any information that is not required explicitly to be stored is erased as soon as each message is processed. It waits to receive a message from a user that is parsed as (instruction, auth, key, value), where either of auth or value could be ⊥, and processed as follows:

- If instruction = insert,
  • it samples a new random authentication string auth.
  • it runs Dict.Insert((key, auth), value) to add value to the dictionary under the key (key, auth).
  • it responds to the message with the string auth.
- If instruction = lookup,
  • it recovers the value stored under the key (key, auth) by running the lookup algorithm Dict.Lookup((key, auth)), and responds with value (if the key is not in use, value will be ⊥).
- If instruction = delete,
  • it deletes any entry under the key (key, auth) by running the deletion algorithm Dict.Delete((key, auth)).

The formal description of the above data collector in our framework, along with the associated protocols π and π D , is presented in the full version. We show that this collector is indeed statistically deletion-compliant.

Informal Theorem 1. The data collector H presented above is statistically deletion-compliant.

We present the formal version of the above theorem and its proof in the full version. The approach is to first observe that, due to the authentication mechanism, the probability that the environment Z will ever see any data that was stored by the deletion-requester Y is negligible in the security parameter.
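To make the message handling concrete, here is a minimal Python sketch of a collector following these rules. It is illustrative only: `CollectorH` and its method names are our own, a plain dict stands in for the history-independent dictionary Dict (a real deletion-compliant collector must use a history-independent implementation), and the `secrets` module supplies the collector-side random authentication strings:

```python
import secrets

class CollectorH:
    """Sketch of the data collector H: each value is stored under the
    composite key (key, auth), where auth is a random string sampled by
    the collector and returned to the user at insertion time."""

    def __init__(self):
        # Stand-in only: H requires a history-independent dictionary here.
        self.dict = {}

    def handle(self, instruction, auth, key, value=None):
        if instruction == "insert":
            auth = secrets.token_hex(16)       # fresh random authentication string
            self.dict[(key, auth)] = value     # Dict.Insert((key, auth), value)
            return auth                        # respond with auth
        if instruction == "lookup":
            # Dict.Lookup((key, auth)); None plays the role of ⊥
            return self.dict.get((key, auth))
        if instruction == "delete":
            self.dict.pop((key, auth), None)   # Dict.Delete((key, auth))
            return None
```

Without the correct auth string, a lookup (or deletion) by another entity fails, which is the property the proof of Informal Theorem 1 conditions on.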
If this never happens, then the view of Z in the real and ideal executions (where Y does not store anything) is identical. And when the view is identical, the sequence of operations performed by Z in the two executions is also identical. Thus, since whatever Y asks to store it also asks to delete, the state of X at the end of the execution, due to its use of a history-independent dictionary, depends only on the operations of Z, which are now the same in the real and ideal executions.

In summary, the lessons we learn from this process of constructing a deletion-compliant data collector for data storage are as follows:

1. Attention has to be paid to the implementation of the data structures used, which needs to satisfy some notion of independence from deleted data.
2. Authentication that involves some form of hardness or randomness from the data collector's side has to be employed even to support simple operations.

Outsourcing Data Storage. Next, we present a data collector that outsources its storage to an external system, maintaining only bookkeeping information in its own memory. As it actively reveals users' data to this external system, such a data collector cannot be deletion-compliant. However, we show that history-independence can be used to make it conditionally deletion-compliant. Again, it turns out to be crucial to ensure that an authentication mechanism is used, for reasons similar to those for the previously constructed data collector. This data collector H 2 is informally described as follows, and is quite similar to H.

The data collector H 2 maintains a history-independent dictionary Dict, and interacts with another collector W that uses the same syntax for messages as the collector H from earlier in this section.
It waits to receive a message that is parsed as (instruction, auth, key, value), where either of auth or value could be ⊥, and processed as follows:

- If instruction = insert,
  • It samples a new authentication string auth and a new "external key" exkey at random.
  • It sends the message (insert, exkey, value) to W and waits to receive a response exauth.
  • It runs Dict.Insert((key, auth), (exkey, exauth)) to add (exkey, exauth) to the dictionary under the key (key, auth).
  • It responds to the initial message with the string auth.
- If instruction = lookup,
  • It recovers the (exkey, exauth) stored under the key (key, auth) by running Dict.Lookup((key, auth)). If the lookup fails, it responds with ⊥.
  • It sends the message (lookup, exkey, exauth) to W and waits to receive a response value.
  • It responds to the initial message with value.
- If instruction = delete,
  • It recovers the (exkey, exauth) stored under the key (key, auth) by running Dict.Lookup((key, auth)). If the lookup fails, it halts.

The formal description of the above data collector in our framework, along with the associated protocols π and π D , is presented in the full version. We show that this collector is conditionally deletion-compliant.

Informal Theorem 2. The data collector H 2 described above is conditionally statistically deletion-compliant.

The formal version of this theorem and its proof is presented in the full version. The approach is again to first condition on Z not being able to guess any of the authentication strings given to Y, an event that happens with overwhelming probability. After this, we show that the history-independence of the dictionary used by X can be used to effectively split X into two parts - one that handles protocols with Y, and the other that handles protocols with Z - without affecting what essentially happens in the execution.
At this point, we switch to looking at the execution as an auxiliary execution with Z as the data collector, the first part of X as the deletion-requester, and the second part as the environment, and apply the auxiliary deletion-compliance of Z to show that the states of Z and X are unchanged if Y is replaced with a silent Y 0 .

The delete instruction of H 2 continues:
• If not, it sends the message (delete, exkey, exauth) to W.
• It deletes any entry under the key (key, auth) by running the deletion algorithm Dict.Delete((key, auth)).

The operations of the dictionary in Definition 9:
- Insert(key, value): stores the value value under the key key. If the key is already in use, does nothing.
- Lookup(key): returns the value previously stored under the key key. If there is no such key, returns ⊥.
- Delete(key): deletes the key-value pair stored under the key key. If there is no such key, does nothing.

Footnotes: Throughout this paper, we refer to any entity collecting individuals' data as a "data collector", and often refer to such individuals whose data is collected as "users". Certifying deletion could be possible in specific settings though, such as under assumptions on the amount of storage available to the data collector [PT10, DKW11, KK14], or in the presence of quantum computers and data [CW19, BI19]. However, our model naturally generalizes to protocols with more parties.
Note that it is essential that Y follow the honest protocol specifications to ensure that the deletion requests are successful. This corresponds to providing guarantees only for entities that do not (maliciously or otherwise) ask for others' data to be deleted. We remark that this is done merely for convenience and is not essential for the model to make sense. In particular, in the perfect security case, no security parameter is needed. Of course, if the entity this data is revealed to does provide some guarantees for later deletion, then we may reasonably expect the data collector to provide deletion guarantees even while revealing data to this entity. In Sect. 2.3, we present a weaker definition of deletion-compliance that captures this. This weakens the definition of deletion-compliance, as it allows X to send to Y anything it wants, since the view or state of Y is not scrutinized by the requirements of deletion-compliance. And though as a definition of deletion-compliance this is not meaningful on its own, it is a property that, if the environment Z possesses it, seems necessary and sufficient to allow a data collector X to safely reveal data to Z that it may wish to delete later.

References

Bourtoule, L., et al.: Machine unlearning. CoRR, abs/1912.03817 (2019)
Broadbent, A., Islam, R.: Quantum encryption with certified deletion. arXiv preprint arXiv:1910.03551 (2019)
Baumhauer, T., Schöttle, P., Zeppelzauer, M.: Machine unlearning: linear filtration for logit-based classifiers. CoRR, abs/2002.02730 (2020)
Canetti, R.: Universally composable security: a new paradigm for cryptographic protocols. In: 42nd Annual Symposium on Foundations of Computer Science, pp. 136-145. IEEE Computer Society Press, October 2001
Carter, E.L.: Argentina's right to be forgotten. Emory Int'l L. Rev. 27, 23 (2013)
Canetti, R., Cohen, A., Lindell, Y.: A simpler variant of universally composable security for standard multiparty computation. In: Gennaro, R., Robshaw, M. (eds.) CRYPTO 2015, Part II. LNCS, vol. 9216, pp. 3-22. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-48000-7_1
California Consumer Privacy Act (CCPA) (2018). https://oag.ca.gov/privacy/ccpa
Cohen, A., Nissim, K.: Towards formalizing the GDPR's notion of singling out. CoRR, abs/1904.06009 (2019)
Coiteux-Roy, X., Wolf, S.: Proving erasure. In: IEEE International Symposium on Information Theory, ISIT 2019, Paris, France, 7-12 July 2019, pp. 832-836. IEEE (2019)
Cao, Y., Yang, J.: Towards making systems forget with machine unlearning. In: 2015 IEEE Symposium on Security and Privacy, SP 2015, San Jose, CA, USA, 17-21 May 2015, pp. 463-480. IEEE Computer Society (2015)
Dziembowski, S., Kazana, T., Wichs, D.: One-time computable self-erasing functions. In: Ishai, Y. (ed.) TCC 2011. LNCS, vol. 6597, pp. 125-143. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-19571-6_9
Dwork, C., McSherry, F., Nissim, K., Smith, A.D.: Calibrating noise to sensitivity in private data analysis. In: Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 265-284. Springer, Heidelberg (2006). https://doi.org/10.1007/11681878_14
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing directive 95/46 (general data protection regulation). Official J. Eur. Union (OJ) 59(1-88), 294 (2016)
Ginart, A., Guan, M.Y., Valiant, G., Zou, J.: Making AI forget you: data deletion in machine learning. CoRR, abs/1907.05012 (2019)
Goldwasser, S., Micali, S., Rackoff, C.: The knowledge complexity of interactive proof systems. SIAM J. Comput. 18(1), 186-208 (1989)
Goldreich, O.: Foundations of Cryptography: Basic Tools, vol. 1. Cambridge University Press, Cambridge (2001)
Karvelas, N.P., Kiayias, A.: Efficient proofs of secure erasure. In: Abdalla, M., De Prisco, R. (eds.) SCN 2014. LNCS, vol. 8642, pp. 520-537. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10879-7_30
Micciancio, D.: Oblivious data structures: applications to cryptography. In: Leighton, F.T., Shor, P.W. (eds.) Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of Computing, El Paso, Texas, USA, 4-6 May 1997, pp. 456-464. ACM (1997)
Nissim, K., et al.: Bridging the gap between computer science and legal approaches to privacy. Harv. JL Tech. 31, 687 (2017)
Naor, M., Teague, V.: Anti-persistence: history independent data structures. In: Vitter, J.S., Spirakis, P.G., Yannakakis, M. (eds.) Proceedings on 33rd Annual ACM Symposium on Theory of Computing, Heraklion, Crete, Greece, 6-8 July 2001, pp. 492-501. ACM (2001)
Politou, E.A., Alepis, E., Patsakis, C.: Forgetting personal data and revoking consent under the GDPR: challenges and proposed solutions. J. Cybersecur. 4(1), tyy001 (2018)
Perito, D., Tsudik, G.: Secure code update for embedded devices via proofs of secure erasure. In: Gritzalis, D., Preneel, B., Theoharidou, M. (eds.) ESORICS 2010. LNCS, vol. 6345, pp. 643-662. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15497-3_39
Schelter, S.: "Amnesia" - machine learning models that can forget user data very fast. In: 10th Conference on Innovative Data Systems Research, CIDR 2020, Amsterdam, The Netherlands, 12-15 January 2020. Online Proceedings, www.cidrdb.org (2020)
Song, C., Ristenpart, T., Shmatikov, V.: Machine learning models that remember too much. In: Thuraisingham, B.M., Evans, D., Malkin, T., Xu, D. (eds.) Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, 30 October-03 November 2017, pp. 587-601. ACM (2017)
Vadhan, S.P.: The complexity of differential privacy. In: Lindell, Y. (ed.) Tutorials on the Foundations of Cryptography. ISC, pp. 347-450. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-57048-8_7
Veale, M., Binns, R., Edwards, L.: Algorithms that remember: model inversion attacks and data protection law. CoRR, abs/1807.04644 (2018)
[]
[ "Prompting through Prototype: A Prototype-based Prompt Learning on Pretrained Vision-Language Models", "Prompting through Prototype: A Prototype-based Prompt Learning on Pretrained Vision-Language Models" ]
[ "Yue Zhang \nCognitive Computing Lab Baidu Research\n10900 NE 8th St. Bellevue98004WAUSA\n", "Hongliang Fei \nCognitive Computing Lab Baidu Research\n10900 NE 8th St. Bellevue98004WAUSA\n", "Dingcheng Li \nCognitive Computing Lab Baidu Research\n10900 NE 8th St. Bellevue98004WAUSA\n", "Tan Yu \nCognitive Computing Lab Baidu Research\n10900 NE 8th St. Bellevue98004WAUSA\n", "Ping Li [email protected] \nCognitive Computing Lab Baidu Research\n10900 NE 8th St. Bellevue98004WAUSA\n" ]
[ "Cognitive Computing Lab Baidu Research\n10900 NE 8th St. Bellevue98004WAUSA", "Cognitive Computing Lab Baidu Research\n10900 NE 8th St. Bellevue98004WAUSA", "Cognitive Computing Lab Baidu Research\n10900 NE 8th St. Bellevue98004WAUSA", "Cognitive Computing Lab Baidu Research\n10900 NE 8th St. Bellevue98004WAUSA", "Cognitive Computing Lab Baidu Research\n10900 NE 8th St. Bellevue98004WAUSA" ]
[]
Prompt learning is a new learning paradigm which reformulates downstream tasks as similar pretraining tasks on pretrained models by leveraging textual prompts. Recent works have demonstrated that prompt learning is particularly useful for few-shot learning, where there is limited training data. Depending on the granularity of prompts, those methods can be roughly divided into task-level prompting and instance-level prompting. Task-level prompting methods learn one universal prompt for all input samples, which is efficient but ineffective to capture subtle differences among different classes. Instance-level prompting methods learn a specific prompt for each input, though effective but inefficient. In this work, we develop a novel prototype-based prompt learning method to overcome the above limitations. In particular, we focus on few-shot image recognition tasks on pretrained vision-language models (PVLMs) and develop a method of prompting through prototype (PTP), where we define K image prototypes and K prompt prototypes. In PTP, the image prototype represents a centroid of a certain image cluster in the latent space and a prompt prototype is defined as a soft prompt in the continuous space. The similarity between a query image and an image prototype determines how much this prediction relies on the corresponding prompt prototype. Hence, in PTP, similar images will utilize similar prompting ways. Through extensive experiments on seven real-world benchmarks, we show that PTP is an effective method to leverage the latent knowledge and adaptive to various PVLMs. Moreover, through detailed analysis, we discuss pros and cons for prompt learning and parameter-efficient fine-tuning under the context of few-shot learning.
10.48550/arxiv.2210.10841
[ "https://export.arxiv.org/pdf/2210.10841v1.pdf" ]
253,018,489
2210.10841
af1fb33a4eeffe87ffef95d5b8f157c1d8b3f5c0
Prompting through Prototype: A Prototype-based Prompt Learning on Pretrained Vision-Language Models Yue Zhang Cognitive Computing Lab Baidu Research 10900 NE 8th St. Bellevue98004WAUSA Hongliang Fei Cognitive Computing Lab Baidu Research 10900 NE 8th St. Bellevue98004WAUSA Dingcheng Li Cognitive Computing Lab Baidu Research 10900 NE 8th St. Bellevue98004WAUSA Tan Yu Cognitive Computing Lab Baidu Research 10900 NE 8th St. Bellevue98004WAUSA Ping Li [email protected] Cognitive Computing Lab Baidu Research 10900 NE 8th St. Bellevue98004WAUSA Prompting through Prototype: A Prototype-based Prompt Learning on Pretrained Vision-Language Models Prompt learning is a new learning paradigm which reformulates downstream tasks as similar pretraining tasks on pretrained models by leveraging textual prompts. Recent works have demonstrated that prompt learning is particularly useful for few-shot learning, where there is limited training data. Depending on the granularity of prompts, those methods can be roughly divided into task-level prompting and instance-level prompting. Task-level prompting methods learn one universal prompt for all input samples, which is efficient but ineffective to capture subtle differences among different classes. Instance-level prompting methods learn a specific prompt for each input, though effective but inefficient. In this work, we develop a novel prototype-based prompt learning method to overcome the above limitations. In particular, we focus on few-shot image recognition tasks on pretrained vision-language models (PVLMs) and develop a method of prompting through prototype (PTP), where we define K image prototypes and K prompt prototypes. In PTP, the image prototype represents a centroid of a certain image cluster in the latent space and a prompt prototype is defined as a soft prompt in the continuous space. The similarity between a query image and an image prototype determines how much this prediction relies on the corresponding prompt prototype. 
Hence, in PTP, similar images will utilize similar prompting ways. Through extensive experiments on seven real-world benchmarks, we show that PTP is an effective method to leverage the latent knowledge and is adaptive to various PVLMs. Moreover, through detailed analysis, we discuss pros and cons for prompt learning and parameter-efficient fine-tuning in the context of few-shot learning.

Introduction

Prompt learning (Li and Liang, 2021;Gao et al., 2021b;Sanh et al., 2022) is a new paradigm to reformulate downstream tasks as similar pretraining tasks on pretrained language models (PLMs) with the help of a textual prompt. Compared with the conventional "pre-train, fine-tuning" paradigm, prompt learning is particularly useful for few-shot learning, where there is insufficient training data to fine-tune the whole pre-trained model. Recently, light-weight but effective prompt learning methods have been developed for various few-shot learning tasks (Schick and Schütze, 2021;Gao et al., 2021b;Shin et al., 2020) in natural language processing (NLP), such as few-shot sentiment analysis and natural language inference. With the success of prompt learning in NLP, it is natural to generalize prompt learning to pretrained vision-language models (PVLMs) (Radford et al., 2021;Jin et al., 2022b;Zhou et al., 2022b;Tsimpoukelli et al., 2021;Liang et al., 2022;Sanh et al., 2022) for vision-language tasks. In this work, we especially focus on exploring few-shot image recognition tasks in the prompt learning paradigm, which have not been fully explored in the prompt learning research area. The motivation originates from the fact that PVLMs, such as CLIP (Radford et al., 2021) and ViLT, are pre-trained with image-text matching and masked language modeling (MLM) style tasks on images and their aligned descriptions. For the image recognition task, where class labels have a textual form (e.g. "faces", "Hummer SUV"), it can be converted into an image-text matching task.
For example, one simple manual-craft prompt template could be "a photo of a [CLASS]", where [CLASS] will be replaced by any candidate category name. The PVLM matches the query image with all the prompted candidate category names, and chooses the one with the highest matching score. Similar to NLP, the essence of prompt learning for PVLMs is designing the most appropriate prompts for the downstream tasks. The latest methods to construct prompts include: i) manual-craft prompts (Jin et al., 2022b), where researchers manually create intuitive templates based on human introspection; ii) automatically searched prompts (Shin et al., 2020;Zhong et al., 2021;Zhou et al., 2022b), where researchers search over the discrete input token space or continuous embedding space for prompts that elicit correct predictions in the training set; iii) instance-level prompt learning (Zhou et al., 2022a;Rao et al., 2022;Jin et al., 2022a), where instead of learning one universal prompt that works for all the input, they learn instance-level prompts conditional on the given input. Although manually written prompts are interpretable, they are limited by the manual effort, and might not be optimal for eliciting correct predictions. The automated approaches overcome the limitations of manual prompts by training a statistical model, but they learn one universal prompt for each task, which may result in sub-optimal prompts. Instance-level prompt learning methods learn different prompts conditional on the given inputs; however, they usually need to maintain a complex neural module mapping the inputs into prompts, which makes them work poorly in few-shot learning settings.
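As a concrete illustration of the prompt-then-match recipe described above (fill each candidate class name into a template such as "a photo of a [CLASS]" and pick the highest-scoring match), the following hedged sketch scores an image against prompted class names via cosine similarity between embeddings. `zero_shot_classify` and `text_encoder` are our own illustrative names, standing in for the text side of some PVLM, not any model's actual API:

```python
import numpy as np

def zero_shot_classify(image_emb, class_names, text_encoder,
                       template="a photo of a {}"):
    """Prompt every class name, embed the prompts, and return the class
    whose prompted text best matches the image embedding."""
    prompts = [template.format(c) for c in class_names]
    text_embs = np.stack([text_encoder(p) for p in prompts])  # (C, d)
    # cosine similarity = dot product of L2-normalised embeddings
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    scores = text_embs @ image_emb                            # (C,)
    return class_names[int(np.argmax(scores))], scores
```

In a real setting the image and text embeddings would come from the PVLM's two encoders; here they are simply supplied as vectors.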
Meanwhile, besides prompt learning on PVLMs, researchers are also exploring parameter-efficient fine-tuning methods for few-shot learning, such as linear probing (Tian et al., 2020), Adapter (Houlsby et al., 2019), BitFit (Zaken et al., 2022) and Calibration (Zhao et al., 2021), where they only fine-tune a small set of parameters of pre-trained models. Those works have demonstrated superior performance when training samples are not very scarce. Our experimental study, however, shows that the accuracy significantly decreases when #shots ≤ 4, as the limited training samples restrict the capability of learning and generalization of fine-tuning. There are two considerations when designing an elegant prompt learning method on PVLMs for few-shot learning. Firstly, the method should be generic and easily adaptable to different architectures, such as the bi-encoder CLIP (Radford et al., 2021) and the single-encoder ViLT. Secondly, the prompt learning method should be lightweight and competitive with, or even outperform, parameter-efficient fine-tuning methods. In this work, we propose our model: Prompting through Prototype (PTP), which is a prototype-based prompt learning method on PVLMs to effectively solve downstream few-shot image recognition tasks. Based on the observations that 1) aligned image-text pairs have high matching scores, and 2) similar images are close to each other in the embedding space of PVLMs, we hypothesize that similar images should use similar prompts in prompt learning. Observation 1) holds because image-text matching is one of the objectives of vision-language model pre-training; hence, pre-trained VL models have remarkable zero-shot performance on image-text matching, and similar images and aligned text-image pairs naturally receive high matching scores from PVLMs. Observation 2) will be shown in the experiments.
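The prototype-based scoring that the abstract describes — the similarity between a query image and each image prototype determines how much the prediction relies on the corresponding prompt prototype — can be sketched numerically. Everything below is an illustrative assumption (the function names, and in particular the softmax weighting over prototype similarities), not necessarily the paper's exact formulation; the per-prototype class scores stand in for PVLM matching scores under each prompt prototype:

```python
import numpy as np

def ptp_predict(image_emb, image_protos, proto_scores, temperature=1.0):
    """image_emb:    (d,)   latent embedding f(x) of the query image
       image_protos: (K, d) image prototypes P_k
       proto_scores: (K, C) matching score of the query image against each of
                     the C class names when prompted with prompt prototype T_k
       Returns (C,) combined scores: a similarity-weighted sum over the K
       prototype components."""
    sims = image_protos @ image_emb        # (K,) similarity to each P_k
    s = sims / temperature
    w = np.exp(s - s.max())                # numerically stable softmax
    w = w / w.sum()                        # weights over prototype components
    return w @ proto_scores                # weighted summation of scores
```

A query image close to prototype k thus mostly inherits the class scores produced under prompt prototype T_k, which is exactly the "similar images use similar prompts" behaviour.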
Intuitively, if training images can be coarsely divided into K clusters based on the similarity between their latent embedding vectors, then each cluster can have its own textual prompt used for category name (label words) prompting. Furthermore, based on our hypothesis, we define K prototype components, where each prototype component contains an image prototype and a prompt prototype. In our context, the image prototype means a point in the image latent space representing a centroid of a certain cluster. The similarity between a query image and an image prototype determines how much this query image's category prediction relies on the corresponding prompt prototype. The final prediction is the weighted summation of all the prediction scores using different prompt prototypes. We summarize our contributions as follows.

• We propose a novel prompt learning method PTP on PVLMs, to overcome the drawbacks of task-level (manual/auto-searched prompts) and instance-level prompting. Instead of designing a universal prompt regardless of instances (Shin et al., 2020;Zhou et al., 2022b,a) or an instance-specific prompt for each instance (Zhou et al., 2022a;Rao et al., 2022), we develop a prototype-based prompting method, wherein similar query images utilize similar prompting ways. During training, we only update parameters related to prompting while freezing the weights of the PVLM to ensure a lightweight and efficient model.
• We conduct extensive experiments on 7 real-world benchmarks across 2 types of PVLMs and show that our PTP is an effective method for the full use of the pre-trained knowledge for the downstream few-shot image recognition tasks. The absolute improvements on average accuracy compared to auto-searched prompts (Zhou et al., 2022a) over all experiments are around: 4% for 1/2-shot, 5% for 4-shot, 6% for 8-shot, 7% for 16-shot.
• We made empirical analyses between prompting and fine-tuning and revealed that both methods have their advantages and limitations.
In particular, a good prompt learning performance highly relies on the knowledge acquired during pre-training. A prompt learning method will have difficulty triggering the correct answers if the PVLM itself lacks such visual or textual knowledge. Through detailed hyper-parameter analysis, we show how to choose the number of prototypes based on performance and parameter-efficiency. We also show the importance of our novel regularizers for learning the image prototypes.

2 Related Work

Pretrained Vision-and-Language Models

Recently, many vision-language models have been proposed. The large-scale pre-training allows PVLMs to zero-shot transfer to various downstream classification tasks. They can be coarsely divided into two groups based on their architecture: the bi-encoder model (Radford et al., 2021;Jia et al., 2021), and the single-encoder model (Lu et al., 2019). Bi-encoder models, such as CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021), consist of two encoders, one for images and the other for text. This work uses CLIP as a representative for the bi-encoder model, which has remarkable zero-shot performance on image-text retrieval. By default, CLIP uses "a photo of [CLASS]" on the text side for image recognition tasks. Single-encoder models, such as ViLBERT (Lu et al., 2019) and ViLT, concatenate the object features from the image and word features from the sentence into a long sequence, so the two modalities interact with each other in self-attention layers. This work uses ViLT as a representative for single-encoder models.

Few-shot Learning

Parameter-Efficient Fine-tuning.
Parameter-efficient fine-tuning methods mainly include: i) Adapters (Houlsby et al., 2019; Gao et al., 2021a; Zhang et al., 2021), where neural network layers are inserted between the feed-forward portions of the Transformer architecture; ii) BitFit (Zaken et al., 2022; IV et al., 2022), where only the bias terms inside the Transformer are updated; iii) Calibration (Zhao et al., 2021), where an affine transformation is learned on top of the logits output by the Transformer; iv) Linear probe (Tian et al., 2020), where a linear classifier is trained on top of the pre-trained model's features. Prompt Learning Methods. Recently, multiple prompt learning works on PVLMs have been proposed (Jin et al., 2022b; Zhou et al., 2022b; Tsimpoukelli et al., 2021; Liang et al., 2022; Rao et al., 2022). Jin et al. (2022b) first pre-train a prompt-aware vision-language model, then transfer it to downstream tasks, such as VQA, with the help of hand-crafted prompts. Zhou et al. (2022b) learn universal soft prompts for solving downstream few-shot image classification tasks. Tsimpoukelli et al. (2021) develop an image-to-text generation model with a dynamic prefix to control the generation. Liang et al. (2022) learn soft prompts to align the different modalities. Rao et al. (2022) learn instance-aware prompts for dense prediction. In this work, we focus on designing an efficient and effective prompt learning method on PVLMs for downstream few-shot image classification tasks, leveraging prototype-based prompting. Our image prototypes have a concept and usage similar to the "this looks like that" idea in previous works (Li et al., 2018; Chen et al., 2019), which learn and utilize prototypes to make interpretable predictions. Methodology Problem Setup We define a few-shot image recognition training dataset as D = {(x_i, y_i, c_i)}_{i=1}^{N}, where x_i is the image input, y_i is the corresponding discrete label, and c_i is the corresponding category name, e.g., "faces", "Hummer SUV".
We define the candidate pool of category names as C = {c_j}_{j=1}^{C}, where C is the total number of categories. Given a pre-trained vision-language model (PVLM) and a few-shot training dataset D, our task is to solve the downstream few-shot image classification task via the prompt learning paradigm. Figure 1: The overall architecture of our model PTP. PTP mainly consists of: i) an image encoder, which is a part of the PVLM; ii) K prototype components, where each component consists of an image prototype P_k and a prompt prototype T_k; iii) a fixed PVLM. During training, we learn the lightweight parameters related to prompting, i.e., the image prototypes and prompt prototypes, and keep the pre-trained vision-language model frozen. Model Architecture The overall architecture of our model is shown in Figure 1. Our PTP model consists of three major parts: i) a pre-trained and fixed image encoder f(.), which is a part of a PVLM; ii) K prototype components, where each prototype component consists of an image prototype P_k and a prompt prototype T_k, k ∈ {1, 2, ..., K}; iii) a fixed PVLM, which takes an image and a prompted category name as input and outputs their matching score. Pre-trained Image Encoder The image encoder f(.) takes an image x as input and outputs the image latent representation f(x) ∈ R^d. Bi-encoder PVLMs, such as CLIP, incorporate two encoders, one for the image and the other for the text, so for bi-encoder PVLMs we can directly use the pre-trained image encoder. Single-encoder PVLMs, such as ViLT, do not have a standalone image encoder by default. For single-encoder PVLMs, we calculate the image encoding by feeding the query image and an empty text into the single-encoder PVLM: f(x) = PVLM_pooler(x, [CLS][SEP]), where PVLM is a single-encoder model, f(x) is the pooler output of the PVLM, and the text side contains only the special tokens [CLS] and [SEP], which are pre-defined in the PVLM vocabulary. Since f(.)
is a part of a PVLM, through large-scale pre-training it has the ability to map similar images to nearby latent vectors. During training, we keep f(.) frozen. The encoded image representation f(x) is then used to calculate the similarity score with each image prototype P_k: sim(x, P_k) = exp(f(x)^T · P_k) / Σ_{i=1}^{K} exp(f(x)^T · P_i). Prototype Component In our few-shot setting, we define K prototype components, where K is much smaller than the total number of classes C. Each prototype component consists of an image prototype and a prompt prototype; in total, we have K image prototypes and K prompt prototypes. Image Prototype. The image prototypes are defined in the image latent space, P_k ∈ R^d. During training, we define two regularizers to encourage the learned image prototype vectors to correspond to meaningful points in the latent space. In our setting, we hypothesize that the image prototype vectors are the centroids of certain clusters of the training image data. The two regularizers are formulated as follows: R_1(P_1, ..., P_K, D) = (1/K) Σ_{j=1}^{K} min_{i∈[1,N]} ||P_j − f(x_i)||_2^2, R_2(P_1, ..., P_K, D) = (1/N) Σ_{i=1}^{N} min_{j∈[1,K]} ||f(x_i) − P_j||_2^2. Both terms are averages of minimum squared distances. Minimizing R_1 requires each image prototype to be close to at least one of the training examples in the latent space, pushing each image prototype to be the centroid of a certain cluster. Minimizing R_2 requires every encoded training example to be close to one of the image prototype vectors, so R_2 clusters the training examples around the image prototypes in the latent space. We relax the R_1 minimization to be over only the random minibatch during stochastic gradient descent. Prompt Prototype. Each image prototype has its corresponding prompt prototype. Our prompt prototypes are defined in continuous space in the form T_k = [V]_{k,1} [V]_{k,2} ...
[V]_{k,m} {CLASS}, where each [V]_{k,i} ∈ R^d is a dense vector with the same dimension as the PVLM's input embedding, and the number of [V] vectors is set to a pre-defined number m. Compared with using discrete vocabulary tokens as prompts, continuous prompts can be viewed as adding new tokens to the pre-defined vocabulary, with each [V]_{k,i} being the word embedding of a new token. In PTP, we define K prompt prototypes, and each prompt prototype contains m new tokens; in total, we add m·K new tokens to the pre-defined vocabulary and only update the word embeddings of these new tokens. The values of [V]_{k,i} are randomly initialized and updated through gradient descent. The {CLASS} placeholder represents any candidate category name. For bi-encoder PVLMs, such as CLIP, our prompts only affect the textual input, keeping the image side unchanged. For single-encoder PVLMs, such as ViLT, which concatenate and fuse image and text information, our prompts have the ability to change the image-text pair input. PVLM for Classification A PVLM takes an image-text pair as input and outputs their matching score. An image classification problem can thus be converted into an image-text matching problem, where the image side is the query image and the text side is a category name with prompting; we then obtain matching scores for all candidate categories. Specifically, given a query image x, a prompt T_k, and a concrete category name c, the matching score under prompt T_k is: match_{T_k}(x, c) = PVLM(x, T_k(c)), where T_k(c) means we replace {CLASS} in prompt T_k with the category name c. Bi-encoder PVLM. For a bi-encoder PVLM, such as CLIP, the matching score can be further decomposed as: match_{T_k}(x, c) = f(x)^T · g(T_k(c)), where g(.) denotes the text encoder.
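As a concrete illustration, the two image-prototype regularizers R_1 and R_2 defined above can be sketched in NumPy. This is a minimal sketch under our own naming conventions (the function name and array shapes are not from the paper); in training, both terms are computed over the current minibatch, matching the relaxation described for R_1.

```python
import numpy as np

def prototype_regularizers(P, F):
    """R1/R2 for image prototypes P of shape (K, d) and
    encoded training images F of shape (N, d)."""
    # pairwise squared Euclidean distances, shape (K, N)
    d2 = ((P[:, None, :] - F[None, :, :]) ** 2).sum(axis=-1)
    r1 = d2.min(axis=1).mean()  # each prototype close to some training image
    r2 = d2.min(axis=0).mean()  # each training image close to some prototype
    return r1, r2
```

When every prototype sits exactly on a training embedding, R_1 vanishes; R_2 then measures only how tightly the remaining images cluster around the prototypes.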
The prediction scores over all candidate categories under prompt T_k are then computed as: Prob_{T_k}(y = i | x) = exp(match_{T_k}(x, c_i)/τ) / Σ_{j=1}^{C} exp(match_{T_k}(x, c_j)/τ), where τ is a temperature parameter learned by CLIP. Single-encoder PVLM. For a single-encoder PVLM, such as ViLT, the prediction score for each category under prompt T_k is computed as: Prob_{T_k}(y = i | x) = Sigmoid(match_{T_k}(x, c_i)). We use different functions to calculate the prediction scores for CLIP and ViLT because, during pre-training, CLIP uses a contrastive loss for the image-text matching objective while ViLT uses a binary cross-entropy loss; we follow their respective conventions. Considering all K prompts together, our final prediction score for each category equals the weighted summation of the prediction scores under the different prompts: Prob(x, c) = Σ_{k=1}^{K} sim(x, P_k) · Prob_{T_k}(x, c). At inference time, we choose the category with the highest score as the final classification result. If an input image x is similar to an image prototype P_k in the latent space, then its classification result depends more on the prompt T_k. Objective Function Our first cost function reflects the classification error. For a bi-encoder PVLM, we compute the cross-entropy loss to penalize misclassification. The cross-entropy loss on the training data D is denoted by E and is given by: E(θ, D) = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{C} 1[y_i = j] log(Prob(x_i, c_j)). For a single-encoder PVLM, we compute the binary cross-entropy loss to penalize misclassification: E(θ, D) = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{C} [ 1[y_i = j] · log(Prob(x_i, c_j)) + (1 − 1[y_i = j]) · log(1 − Prob(x_i, c_j)) ], where θ represents all the parameters related to prompting, i.e., θ = {P_1, ..., P_K, T_1, ..., T_K}.
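The weighted combination of per-prompt predictions described above can be sketched as follows for the bi-encoder case. This is an illustrative NumPy sketch, not the authors' code: the function names, array shapes, and the default temperature value are our own assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ptp_predict(fx, P, match, tau=0.01):
    """fx: (d,) encoded query image; P: (K, d) image prototypes;
    match: (K, C) matching scores match_{T_k}(x, c_j) under each prompt."""
    sim = softmax(P @ fx)                 # (K,) prototype weights sim(x, P_k)
    probs = softmax(match / tau, axis=1)  # (K, C) per-prompt class probabilities
    return sim @ probs                    # (C,) final weighted prediction Prob(x, c)
```

The predicted class is then the argmax over the C entries; because both the prototype weights and each per-prompt row are probability distributions, the weighted output is again a valid distribution.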
Putting everything together, the cost function on the training data D, denoted by L, is given by: L(θ, D) = E(θ, D) + λ · [R_1(P_1, ..., P_K, D) + R_2(P_1, ..., P_K, D)], where λ ∈ [0, 1] is a hyper-parameter that adjusts the weight of the regularizers. Our model PTP is trained end-to-end. Experiment Datasets and Experiment Setting We evaluate PTP on 7 publicly available image classification datasets: 1) Caltech101, which contains 100 classes, such as "faces", "leopards", "laptop", etc., and 2,465 test images. 2) StanfordCars (Krause et al., 2013), which contains 196 classes of cars, such as "2000 AM General Hummer SUV", "2012 Acura RL Sedan", etc., and 8,041 test images. 3) OxfordPets (Parkhi et al., 2012), which contains 37 classes of pets, such as "beagle", "chihuahua", etc., and 3,669 test images. 4) UCF101 (Soomro et al., 2012), where the middle frame of each video is used as the image input; it contains 101 classes of human activities, such as "baby crawling", "biking", "blow dry hair", etc., and 3,783 test images. 5) Food101 (Bossard et al., 2014), which contains 101 classes of food, such as "apple pie", "crab cakes", etc., and 30,300 test images. 6) SUN397 (Xiao et al., 2010), which contains 397 classes, such as "barn", "campus", etc., and 19,850 test images. 7) FGVCAircraft (Maji et al., 2013), which contains 100 classes of aircraft models, such as "Boeing 717", "DC-10", "EMB-120", etc., and 3,333 test images. For PVLM models, we consider the bi-encoder CLIP (Radford et al., 2021) and the single-encoder ViLT (Kim et al., 2021). For both CLIP and ViLT, we use ViT-B/32 as the image encoder backbone; compared with a ResNet-based backbone, ViT-B/32 is more powerful at image encoding and has better zero-shot transfer ability (Radford et al., 2021). The parameter size of PTP equals m·d·K + K·d = (m + 1)·d·K. We set the prototype number K considering both performance and parameter-efficiency.
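The parameter count above is easy to verify with a quick sanity check; the values below are taken from the paper's CLIP setting (m = 16, d = 512), with K = 5 as used for several datasets.

```python
def ptp_param_count(m, d, K):
    # m*K prompt-token embeddings of dimension d, plus K image prototypes of dimension d
    return m * d * K + K * d  # equals (m + 1) * d * K

print(ptp_param_count(16, 512, 5))  # 43520 trainable parameters for CLIP with K = 5
```

This is orders of magnitude below fine-tuning the PVLM itself, which is why PTP stays lightweight in the few-shot regime.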
Table 1 shows the hyper-parameter settings of PTP. Since a bi-encoder is much more computationally efficient than a single-encoder, we set a smaller prompt token number m for ViLT. We follow the few-shot evaluation protocol adopted in previous work (Radford et al., 2021; Zhou et al., 2022b), using 1, 2, 4, 8, and 16 shots for training, respectively, and evaluating the models on the full test sets. Here, n-shot means n labeled training examples per class. In our few-shot setting, no hold-out validation dataset is available. More specifically, for experiments on CLIP, the maximum number of epochs is set to 200 for 16/8 shots and 100 for 4/2/1 shots. For experiments on ViLT, the maximum number of epochs is set to 1000 for 16/8 shots and 500 for 4/2/1 shots. For the baseline CoCoOp (Zhou et al., 2022a), which easily overfits on the few-shot dataset, the maximum number of epochs is set to 50 for all experiments, except for the StanfordCars and SUN397 datasets, where it is set to 10. The average accuracy on the test set over three runs is reported for comparison. Baselines First, we compare with zero-shot baselines: 1) Manual-crafted prompt (MCP) (Radford et al., 2021); 2) Vision matching (VM). Secondly, we compare with parameter-efficient fine-tuning methods: 3) Linear probe (LP) (Tian et al., 2020); 4) Bitfit fine-tuning (Zaken et al., 2022). Thirdly, we compare with other prompt learning baselines: 5) Soft prompt (SP) (Zhong et al., 2021; Zhou et al., 2022b); 6) CoCoOp (Zhou et al., 2022a). Table 2 gives the details of each baseline, including its parameter size, where C is the total number of classes, d is the latent space dimension of the PVLM, m is the number of prompt vectors [V], and h is the hidden layer size of the neural nets. In practice, a possible value of h is 32 and of m is 16. For CLIP and ViLT, d equals 512.
Bitfit only updates the bias terms in the PVLM and also uses "a photo of a {CLASS}" on the text side; its parameter size is ≈ 10^3. Soft prompt (SP) (Zhong et al., 2021; Zhou et al., 2022b) learns a universal prompt with the template [V]_1 ... [V]_m {CLASS}; its parameter size is m·d. CoCoOp (Zhou et al., 2022a) is an instance-level prompt learning method designed on CLIP that maintains a two-layer neural net; its parameter size is (2h + m)·d. Image Recognition Result using CLIP We set the prototype number K = 3 for SUN397; K = 5 for Caltech101, UCF101, Food101, and FGVCAircraft; and K = 7 for StanfordCars and OxfordPets. We report the comparison results on the 7 datasets using CLIP in Tables 3 to 9, respectively. From Tables 3-9, we first see that the zero-shot manual-crafted prompt (MCP) achieves overall acceptable classification accuracy; in particular, in Table 4 at 1-shot, MCP achieves the best performance. Secondly, we see that as the training data increases (i.e., from 1-shot to 16-shot), the performance of vision matching (VM) increases. On some datasets, at 8/16 shots, VM can even outperform MCP, as in Tables 3 and 6. In the vision matching (VM) baseline, given n-shot training data per class, we first calculate the mean of the image latent vectors in each category as the representation of that class, and then match query images with each class representation in the image latent space without fine-tuning. This VM baseline achieves overall good performance in the 16-shot scenario, supporting our hypothesis that similar images are close to each other. For the parameter-efficient fine-tuning methods linear probe (LP) and Bitfit, we see that with little training data (i.e., 1/2-shot) they cannot perform as well as the other baselines, and are sometimes even much worse than the zero-shot baselines, as at 1/2-shot in Tables 4, 5, and 7. In Table 4 at 1-shot, MCP achieves an accuracy of 60.30 while LP achieves only 29.95, a huge performance gap.
As the training data increases (i.e., 8/16-shot), however, LP and Bitfit become very strong baselines. Overall, the prompt learning baselines soft prompt (SP) and CoCoOp perform better at 1/2-shot than LP and Bitfit. CoCoOp is not as effective as SP in many cases because it is not as lightweight as SP or our PTP, which makes it prone to overfitting in few-shot scenarios. We observe that, except in Table 9, our PTP outperforms all zero-shot baselines, prompt learning baselines, and parameter-efficient fine-tuning baselines. Even at 1-shot, PTP achieves superior performance, and as the training data increases, PTP still outperforms the very strong fine-tuning baselines. Now, let's take a closer look at Table 9. We observe that VM outperforms MCP, which means that for the FGVCAircraft dataset image-image matching works better than image-text matching. PTP achieves the best performance at 1-shot; however, at 2/4/8/16-shot, LP and Bitfit achieve the overall best performance. Through detailed analysis, we find that many category names provided by the FGVCAircraft dataset carry no semantic meaning, such as "707-320", "A321", and "BAE-125". These semantically meaningless category names make prompt learning methods inferior, although our PTP still achieves the best performance among the prompt learning methods given these challenging category names. On the other hand, LP does not rely on category names, and Bitfit can update the word embeddings and learn the semantics of "707-320", "A321", "BAE-125", etc., through fine-tuning. Image Recognition Result using ViLT ViLT itself is not as powerful as CLIP at image-text matching because it is pre-trained on less data, so it cannot zero-shot or few-shot transfer to very fine-grained classification tasks. For ViLT, we therefore only conduct experiments on three datasets: Caltech101, UCF101, and Food101.
For all three datasets, we set K = 5. We report the comparison results on these three datasets using ViLT in Tables 10, 11, and 12, respectively. In Table 10, PTP significantly outperforms the zero-shot manual prompt method, the Bitfit fine-tuning method, and the prompt learning baseline SP. In Tables 11 and 12, PTP outperforms all other baselines at 1/2/4-shot; at 8/16-shot, however, the Bitfit fine-tuning method becomes better than PTP. With more training data, Bitfit updates the parameters of the PVLM toward the optimum. Still, PTP outperforms the prompt learning baseline SP in all cases. Discussion and Analysis 4.5.1 Analysis on Prototype Number K First, we analyze the model performance under different prototype numbers K. We conduct this analysis using CLIP, setting K = 3, 5, 7, respectively, and testing on three datasets: Caltech101, UCF101, and OxfordPets. The results are reported in Figure 2. At 1/2-shot, a higher K does not necessarily lead to higher performance; for example, in Figure 2 (a) the best performance at 1-shot comes from K = 3. At 4, 8, and 16-shot, the general trend is that a higher K leads to higher performance. However, from Figure 2 (a) and (b) we see that when K increases from 5 to 7 the performance does not improve significantly. In practice, we choose K considering both performance and parameter-efficiency. Analysis on λ Secondly, we analyze the hyper-parameter λ, which adjusts the weight of the regularizers. To assess the effectiveness of our regularizers, we set λ = 0.0, 0.5, 1.0, and 2.0; setting λ = 0 means training PTP without the regularizers. We conduct experiments using CLIP on the Caltech101, UCF101, and OxfordPets datasets, respectively, and report the results in Figure 3. The λ = 0.0 setting serves as an ablation study of our regularizers, since the regularizers are the only components of PTP we can ablate. From Figure 3, we see that the best performance comes from λ = 1.0.
These results confirm the significance of our regularizers, which push the image prototypes toward meaningful points in the latent space that act as centroids of image clusters. PTP vs. Prompt Learning Baselines Generally, the prompt learning baselines SP and CoCoOp are designed around only one property of a PVLM: aligned images and text (i.e., prompted category names) should have high matching scores. Our PTP design additionally builds on a second property of a PVLM: similar images are close to each other in the latent space. Leveraging both properties, PTP hypothesizes that similar images should use similar prompts. Through its prototype-based design, PTP learns K prompts, overcoming the drawbacks of the task-level method SP and the instance-level method CoCoOp: task-level prompting learns only one prompt per task, which is suboptimal, while instance-level prompting learns dynamic prompts conditioned on instances, which is not lightweight for a few-shot setting. Our PTP outperforms the prompt learning baselines on all shots; the average accuracy gaps between PTP and SP across all datasets and both PVLMs are shown in Figure 4 (red bar). As the shot number increases, the gap becomes larger. Prompt Learning vs. Parameter-efficient Fine-tuning Prompt learning and parameter-efficient fine-tuning methods both have their own applicable scenarios. Good prompt learning performance relies on the quality of the pre-defined category names: semantically meaningless category names, such as "707-320" and "A321", make prompt learning methods inferior. On the other hand, the linear probe (LP) does not rely on category names, and Bitfit can update the word embeddings and learn semantics through fine-tuning. Prompt learning methods also depend heavily on the PVLM.
Prompt learning methods cannot elicit the correct matching if the PVLM itself has limited pre-trained visual and textual knowledge, since prompting only perturbs the data input. With moderate training data, by contrast, fine-tuning methods can update the image and textual encodings toward the optimum. Generally, given a well-trained PVLM and meaningful category names, our PTP is a superior method for few-shot learning compared with the fine-tuning baselines. The average accuracy gaps between PTP and Bitfit fine-tuning are shown in Figure 4 (green bar); as the shot number increases, the gap becomes smaller but remains significant. Limitations We summarize two limitations of our model PTP: i) good prompt learning performance relies on the quality of the pre-defined category names, so semantically meaningless category names, such as "707-320", "A321", and "BAE-125", make prompt learning methods inferior; and ii) prompt learning methods depend heavily on the PVLM, and cannot elicit the correct matching if the PVLM itself acquired limited visual and textual knowledge during pre-training, since prompting only perturbs the data input. Conclusion In this work, we propose a prototype-based prompt learning method, PTP, to overcome the limitations of task-level prompting and instance-level prompting. In PTP, an image prototype represents the centroid of a certain image cluster in the latent space, and a prompt prototype is defined as a soft prompt in continuous space. The similarity between a query image and an image prototype determines how much the prediction relies on the corresponding prompt prototype; hence, in PTP, similar images utilize similar prompts. We conduct extensive experiments on seven real-world benchmarks for the few-shot image recognition task and show that PTP is highly adaptive to various PVLMs, with performance superior to other prompt learning methods and parameter-efficient fine-tuning baselines.
Moreover, through detailed analysis, we discuss the pros and cons of prompt learning vs. parameter-efficient fine-tuning for few-shot learning.
Figure 2: Prototype number K analysis on three datasets using CLIP. Considering both performance and parameter-efficiency, we choose K = 5 for Figure 2 (a) and (b), and K = 7 for Figure 2 (c).
Figure 3: Hyper-parameter λ analysis on three datasets using CLIP.
Figure 4: The average absolute accuracy improvement. The improvement from SP to PTP is shown in the red bar, from Bitfit to PTP in the green bar. A smaller value means a smaller gap and better performance.
Table 1: Hyper-parameter settings of our model PTP.
Hyper-parameter / Value: m for CLIP: 16; m for ViLT: 5; λ: 1.0; optimizer: AdamW; learning rate: 3e-3; warm-up rate: 0.1; batch size: 32; image encoder: ViT-B/32.
Table 2: Details of comparison baselines (baseline / description / parameter size).
Manual-crafted prompt (MCP) (Radford et al., 2021): uses the default prompt "a photo of a {CLASS}"; for the UCF101 dataset, we use the prompt "a photo of a person doing {CLASS}". Parameter size: 0.
Vision matching (VM): for CLIP, given n-shot training data per class, we calculate the mean of the image latent vectors in each category as the representation of that class, then match query images with each class representation in the image latent space. Parameter size: 0.
Linear probe (LP) (Tian et al., 2020): a linear classifier on top of CLIP's image encoder.
Parameter size: d·C. Bitfit fine-tuning (Zaken et al., 2022): see description above.
Table 3: Accuracy on Caltech101 dataset using CLIP.
Method 1 2 4 8 16
MCP 88.52 88.52 88.52 88.52 88.52
VM 74.72 81.50 86.09 89.09 90.02
LP 77.32 86.41 90.51 94.36 95.21
Bitfit 88.60 90.02 92.25 94.44 95.17
SP 89.12 91.44 92.57 92.86 93.34
CoCoOp 90.87 89.94 91.85 92.70 93.75
PTP 91.93 93.31 94.81 95.29 96.11
Table 4: Accuracy on StanfordCars dataset using CLIP.
Method 1 2 4 8 16
MCP 60.30 60.30 60.30 60.30 60.30
VM 28.06 34.27 41.29 49.01 53.65
LP 29.95 42.59 56.85 66.35 74.11
Bitfit 42.59 52.23 58.68 63.08 69.25
SP 54.70 56.74 58.02 59.88 60.58
CoCoOp 48.44 59.06 62.16 63.26 65.02
PTP 59.72 61.67 66.10 71.99 75.71
Table 5: Accuracy on Oxford Pets dataset using CLIP.
Method 1 2 4 8 16
MCP 79.97 79.97 79.97 79.97 79.97
VM 36.82 42.71 50.80 57.51 64.51
LP 38.51 55.93 66.37 75.66 81.66
Bitfit 76.15 81.74 87.44 87.82 88.44
SP 77.84 82.12 86.70 87.84 89.45
CoCoOp 81.28 82.69 81.25 79.94 87.60
PTP 83.40 86.56 89.39 90.97 91.66
Table 6: Accuracy on UCF101 dataset using CLIP.
Method 1 2 4 8 16
MCP 62.09 62.09 62.09 62.09 62.09
VM 46.54 56.20 60.68 64.24 67.29
LP 47.80 63.21 70.69 75.62 79.85
Bitfit 61.71 65.78 71.70 74.28 78.74
SP 61.85 64.10 66.72 69.31 71.38
CoCoOp 58.08 60.80 62.38 72.67 74.20
PTP 65.98 68.99 72.93 77.66 80.02
Table 7: Accuracy on Food101 dataset using CLIP.
Method 1 2 4 8 16
MCP 82.34 82.34 82.34 82.34 82.34
VM 38.07 45.78 55.24 63.18 68.04
LP 41.69 55.66 67.99 75.49 78.92
Bitfit 68.86 71.26 76.76 75.82 76.85
SP 81.86 82.19 82.90 83.21 83.49
CoCoOp 67.62 69.18 69.90 72.85 75.39
PTP 83.33 83.63 84.26 84.44 84.91
Table 8: Accuracy on SUN397 dataset using CLIP.
Method 1 2 4 8 16
MCP 60.16 60.16 60.16 60.16 60.16
VM 34.85 44.42 50.52 55.69 59.34
LP 36.88 51.88 61.72 66.21 70.19
Bitfit 58.20 62.39 64.52 64.53 67.23
SP 62.32 63.94 65.13 66.55 67.32
CoCoOp 62.85 64.80 66.19 66.45 67.57
PTP 63.51 65.72 68.76 70.93 72.42
Table 9: Accuracy on FGVCAircraft dataset using CLIP.
Method 1 2 4 8 16
MCP 17.52 17.52 17.52 17.52 17.52
VM 17.52 18.96 21.72 24.15 25.41
LP 17.22 20.73 27.24 33.33 38.61
Bitfit 14.19 17.07 23.16 33.15 40.56
SP 18.09 17.07 16.98 19.77 22.23
CoCoOp 14.73 16.56 18.90 25.53 29.22
PTP 19.62 19.83 23.70 26.86 33.21
Table 10: Accuracy on Caltech101 dataset using ViLT.
Method 1 2 4 8 16
MCP 58.58 58.58 58.58 58.58 58.58
Bitfit 49.04 49.70 54.92 72.53 75.21
SP 54.93 64.30 68.32 74.32 77.28
PTP 67.42 69.53 71.40 77.12 80.73
Table 11: Accuracy on UCF101 dataset using ViLT.
Method 1 2 4 8 16
MCP 29.37 29.37 29.37 29.37 29.37
Bitfit 31.95 33.17 41.79 55.16 63.39
SP 31.64 33.25 37.11 44.65 54.93
PTP 34.44 36.37 41.98 53.90 58.84
Table 12: Accuracy on Food101 dataset using ViLT.
Method 1 2 4 8 16
MCP 15.41 15.41 15.41 15.41 15.41
Bitfit 18.93 20.02 22.60 27.39 35.57
SP 16.67 16.95 20.53 22.79 24.78
PTP 19.01 20.50 22.67 27.16 32.06
References
Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 - mining discriminative components with random forests. In Proceedings of the 13th European Conference on Computer Vision (ECCV), Part VI, pages 446-461, Zurich, Switzerland, 2014.
Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan Su. This looks like that: Deep learning for interpretable image recognition. In Advances in Neural Information Processing Systems (NeurIPS), pages 8928-8939, Vancouver, Canada, 2019.
Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. Clip-adapter: Better vision-language models with feature adapters. arXiv preprint arXiv:2110.04544, 2021a.
Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL/IJCNLP), pages 3816-3830, Virtual Event, 2021b.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning (ICML), pages 2790-2799, Long Beach, CA, 2019.
Robert L. Logan IV, Ivana Balazevic, Eric Wallace, Fabio Petroni, Sameer Singh, and Sebastian Riedel. Cutting down on prompts and parameters: Simple few-shot learning with language models. In Findings of the Association for Computational Linguistics (ACL Findings), pages 2824-2835, Dublin, Ireland, 2022.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In Proceedings of the 38th International Conference on Machine Learning (ICML), pages 4904-4916, Virtual Event, 2021.
Feihu Jin, Jinliang Lu, Jiajun Zhang, and Chengqing Zong. Instance-aware prompt learning for language understanding and generation. arXiv preprint arXiv:2201.07126, 2022a.
Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, and Xiang Ren. A good prompt is worth millions of parameters: Low-resource prompt-based learning for vision-language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2763-2775, Dublin, Ireland, 2022b.
Wonjae Kim, Bokyung Son, and Ildoo Kim. ViLT: Vision-and-language transformer without convolution or region supervision. In Proceedings of the 38th International Conference on Machine Learning (ICML), pages 5583-5594, Virtual Event, 2021.
Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 554-561, 2013.
Oscar Li, Hao Liu, Chaofan Chen, and Cynthia Rudin. Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), pages 3530-3537, New Orleans, LA, 2018.
Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL/IJCNLP), pages 4582-4597, Virtual Event, 2021.
Sheng Liang, Mengjie Zhao, and Hinrich Schütze. Modular and parameter-efficient multimodal fusion with prompting. In Findings of the Association for Computational Linguistics (ACL Findings), pages 2976-2985, Dublin, Ireland, 2022.
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems (NeurIPS), pages 13-23, Vancouver, Canada, 2019.
Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013.
Cats and dogs.
M Omkar, Andrea Parkhi, Andrew Vedaldi, C V Zisserman, Jawahar, Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)Providence, RIOmkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Cats and dogs. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3498-3505, Providence, RI, 2012. Language models as knowledge bases?. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, S H Patrick, Anton Lewis, Yuxiang Bakhtin, Alexander H Wu, Miller, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaFabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Learning transferable visual models from natural language supervision. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever, Proceedings of the 38th International Conference on Machine Learning (ICML). the 38th International Conference on Machine Learning (ICML)Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. 
In Proceedings of the 38th International Conference on Machine Learning (ICML), pages 8748-8763, Virtual Event, 2021. DenseCLIP: Language-guided dense prediction with context-aware prompting. Yongming Rao, Wenliang Zhao, Guangyi Chen, Yansong Tang, Zheng Zhu, Guan Huang, Jie Zhou, Jiwen Lu, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)New Orleans, LA2022Yongming Rao, Wenliang Zhao, Guangyi Chen, Yansong Tang, Zheng Zhu, Guan Huang, Jie Zhou, and Jiwen Lu. DenseCLIP: Language-guided dense prediction with context-aware prompting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18061-18070, New Orleans, LA, 2022. . Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, Canwen Bari, Urmish Xu, Shanya Thakker, Eliza Sharma Sharma, Taewoon Szczechla, Gunjan Kim, Chhablani, V Nihal, Debajyoti Nayak, Jonathan Datta, Mike Chang, Tian-Jian, Han Jiang, Matteo Wang, Sheng Manica, Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Teven Le ScaoStella Biderman, Leo Gao, Thomas Wolf, andVictor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. 
Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Multitask prompted training enables zero-shot task generalization. Alexander M Rush, Proceedings of the Tenth International Conference on Learning Representations (ICLR), Virtual Event. the Tenth International Conference on Learning Representations (ICLR), Virtual EventAlexander M. Rush. Multitask prompted training enables zero-shot task generalization. In Proceedings of the Tenth International Conference on Learning Representations (ICLR), Virtual Event, 2022. Few-shot text generation with natural language instructions. Timo Schick, Hinrich Schütze, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)Dominican RepublicVirtual Event / Punta CanaTimo Schick and Hinrich Schütze. Few-shot text generation with natural language instructions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 390-402, Virtual Event / Punta Cana, Dominican Republic, 2021. AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. Taylor Shin, Yasaman Razeghi, Robert L Logan, I V , Eric Wallace, Sameer Singh, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineTaylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222-4235, Online, 2020. UCF101: A dataset of 101 human actions classes from videos in the wild. Khurram Soomro, Mubarak Amir Roshan Zamir, Shah, arXiv:1212.0402arXiv preprintKhurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. Rethinking few-shot image classification: A good embedding is all you need?. Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B Tenenbaum, Phillip Isola, Proceedings of the 16th European Conference on Computer Vision (ECCV), Part XIV. the 16th European Conference on Computer Vision (ECCV), Part XIVGlasgow, UKYonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, and Phillip Isola. Rethinking few-shot image classification: A good embedding is all you need? In Proceedings of the 16th European Conference on Computer Vision (ECCV), Part XIV, pages 266-282, Glasgow, UK, 2020. Multimodal few-shot learning with frozen language models. Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S M Ali Eslami, Oriol Vinyals, Felix Hill, Advances in Neural Information Processing Systems (NeurIPS). 2021Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, and Felix Hill. Multimodal few-shot learning with frozen language models. In Advances in Neural Information Processing Systems (NeurIPS), pages 200-212, virtual, 2021. SUN database: Largescale scene recognition from abbey to zoo. Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, Antonio Torralba, Proceedings of the Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition (CVPR). the Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition (CVPR)San Francisco, CAJianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. SUN database: Large- scale scene recognition from abbey to zoo. 
In Proceedings of the Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3485-3492, San Francisco, CA, 2010. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. Yoav Elad Ben Zaken, Shauli Goldberg, Ravfogel, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL). the 60th Annual Meeting of the Association for Computational Linguistics (ACL)Dublin, Ireland2022Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1-9, Dublin, Ireland, 2022. Renrui Zhang, Rongyao Fang, Peng Gao, Wei Zhang, Kunchang Li, Jifeng Dai, Yu Qiao, Hongsheng Li, arXiv:2111.03930Tip-adapter: Training-free clip-adapter for better vision-language modeling. arXiv preprintRenrui Zhang, Rongyao Fang, Peng Gao, Wei Zhang, Kunchang Li, Jifeng Dai, Yu Qiao, and Hong- sheng Li. Tip-adapter: Training-free clip-adapter for better vision-language modeling. arXiv preprint arXiv:2111.03930, 2021. Calibrate before use: Improving few-shot performance of language models. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, Sameer Singh, Proceedings of the 38th International Conference on Machine Learning (ICML). the 38th International Conference on Machine Learning (ICML)Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In Proceedings of the 38th International Conference on Machine Learning (ICML), pages 12697-12706, Virtual Event, 2021. Factual probing is [MASK]: learning vs. learning to recall. Zexuan Zhong, Dan Friedman, Danqi Chen, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). 
the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)Online2021Zexuan Zhong, Dan Friedman, and Danqi Chen. Factual probing is [MASK]: learning vs. learning to recall. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 5017-5033, Online, 2021. Conditional prompt learning for visionlanguage models. Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)New Orleans, LAKaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision- language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16795-16804, New Orleans, LA, 2022a. Learning to prompt for vision-language models. Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu, Int. J. Comput. Vis. 1309Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. Int. J. Comput. Vis., 130(9):2337-2348, 2022b.
[]
[ "REVISITING WEYL'S CALCULATION OF THE GRAVITATIONAL PULL IN BACH'S TWO-BODY SOLUTION" ]
[ "Salvatore Antoci", "Dierck-Ekkehard Liebscher", "Luigi Mihich" ]
[]
[]
When the mass of one of the two bodies tends to zero, Weyl's definition of the gravitational force in an axially symmetric, static two-body solution can be given an invariant formulation in terms of a force four-vector. The norm of this force is calculated for Bach's two-body solution, that is known to be in one-to-one correspondence with Schwarzschild's original solution when one of the two masses l, l′ is made to vanish. In the limit when, say, l′ → 0, the norm of the force divided by l′ and calculated at the position of the vanishing mass is found to coincide with the norm of the acceleration of a test body kept at rest in Schwarzschild's field. Both norms happen thus to grow without limit when the test body (respectively the vanishing mass l′) is kept at rest in a position closer and closer to Schwarzschild's two-surface.
10.1088/0264-9381/18/17/307
[ "https://export.arxiv.org/pdf/gr-qc/0104035v2.pdf" ]
9,055,607
gr-qc/0104035
8f3f41093a0ac477e68f2bf31b294dde103b5fc1
REVISITING WEYL'S CALCULATION OF THE GRAVITATIONAL PULL IN BACH'S TWO-BODY SOLUTION

Salvatore Antoci, Dierck-Ekkehard Liebscher and Luigi Mihich

25 Jul 2001

1. Introduction

It has long been known (see e.g. [1]) that a test body kept at rest in Schwarzschild's gravitational field undergoes a four-acceleration whose norm tends to infinity as the position of the body comes closer and closer to the Schwarzschild two-surface. However, the existence of this singular behaviour of the gravitational pull, despite the fact that it can be given an invariant description, has not generally aroused much concern. Far greater attention has been directed to the features of the world line followed by a test body in free motion, and already in 1950 it was shown [2] that, once the singularity in the components of the metric is removed through a coordinate transformation that is aptly nonregular at the Schwarzschild surface, a radial timelike geodesic may reach the Schwarzschild singularity and "cross it without a bump".
This early finding by Synge was the turning point for the program of analytic extension [3], [4]. Of course, geodesic motion has a fundamental role in general relativity. Nevertheless, it is quite reassuring that the very notion of the force exerted on a test body kept at rest, which has played so fundamental a role in the development of physical knowledge, does find a meaningful, i.e. invariant, definition in general relativity. It is less reassuring that the norm of that force may be allowed to grow without limit at some surface in the interior of a manifold meant to be a realistic model of physical occurrences, without a justification for this allowance being provided in physical terms.

The definition of the gravitational force felt by a test body of unit mass kept at rest in a static field is connected to the geometric definition of the four-acceleration by way of hypothesis. It would be interesting to calculate this force in an invariant way without availing of this hypothesis: Einstein's equations alone should suffice for the task. In the present paper the norm of the force exerted on a test body in Schwarzschild's field is obtained by starting, in the footsteps of Weyl [5], [6], from a particular two-body solution of Einstein's equations calculated in 1922 by R. Bach [6].

2. Bach's solution for two "point masses"

While the spherically symmetric field of a "Massenpunkt" was determined by K. Schwarzschild [7] soon after the discovery of the field equations of general relativity [8], [9],¹ the class of axially symmetric, static solutions, which could provide some indication about the gravitational pull in a two-body system, was found later [11], [12] by Weyl and by Levi-Civita. Despite the nonlinear structure of the field equations, Weyl succeeded in reducing the problem to quadratures through the introduction of his "canonical cylindrical coordinates".
Let x⁰ = t be the time coordinate, while x¹ = z, x² = r are the coordinates in a meridian half-plane, and x³ = ϕ is the azimuth of such a half-plane; then the line element of a static, axially symmetric field in vacuo can be tentatively written as

$$ ds^2 = e^{2\psi}\,dt^2 - d\sigma^2, \qquad e^{2\psi}\,d\sigma^2 = r^2\,d\varphi^2 + e^{2\gamma}(dr^2 + dz^2); \tag{2.1} $$

the two functions ψ and γ depend only on z and r. Remarkably enough, in the "Bildraum" introduced by Weyl, ψ fulfils the potential equation

$$ \Delta\psi = \frac{1}{r}\left[\frac{\partial(r\psi_z)}{\partial z} + \frac{\partial(r\psi_r)}{\partial r}\right] = 0 \tag{2.2} $$

(ψ_z, ψ_r are the derivatives with respect to z and to r respectively), while γ is obtained by solving the system

$$ \gamma_z = 2r\psi_z\psi_r, \qquad \gamma_r = r(\psi_r^2 - \psi_z^2); \tag{2.3} $$

due to the potential equation (2.2),

$$ d\gamma = 2r\psi_z\psi_r\,dz + r(\psi_r^2 - \psi_z^2)\,dr \tag{2.4} $$

happens to be an exact differential. Schwarzschild's original "Massenpunkt" solution [7] is recovered exactly when ψ is that solution of (2.2) corresponding to the Newtonian potential that one obtains if one segment of the z-axis is covered by matter with constant mass density [11]. Let 2l be the coordinate length of this segment, and let r₁, r₂ be the "distances", calculated in the Euclidean way, of a point P with canonical coordinates z, r from the end points P₁ and P₂ of the segment, which lie on the symmetry axis at z = z₁ and at z = z₂ = z₁ − 2l respectively. One finds

$$ \psi = \frac{1}{2}\ln\frac{r_1 + r_2 - 2l}{r_1 + r_2 + 2l}; \qquad \gamma = \frac{1}{2}\ln\frac{(r_1 + r_2)^2 - 4l^2}{4r_1 r_2}. \tag{2.5} $$

Through a coordinate transformation in the meridian half-plane one proves the agreement with Schwarzschild's result, if m is substituted for l. We draw the attention of the reader to the fact that the agreement occurs with Schwarzschild's original solution [7], not with what is called the "Schwarzschild solution" in all the textbooks, which is in fact the spherically symmetric solution found by Hilbert through his peculiar choice of the radial coordinate [13].
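As a sanity check (not part of the original paper), the claim that the potentials (2.5) satisfy the axisymmetric Laplace equation (2.2) and the quadrature system (2.3) can be verified with sympy; the sample point and the parameter values below are arbitrary:

```python
import sympy as sp

z, r, l, z1 = sp.symbols('z r l z1', positive=True)
z2 = z1 - 2*l
r1 = sp.sqrt((z - z1)**2 + r**2)   # Euclidean distances from the rod ends
r2 = sp.sqrt((z - z2)**2 + r**2)

# potentials (2.5)
psi = sp.Rational(1, 2)*sp.log((r1 + r2 - 2*l)/(r1 + r2 + 2*l))
gamma = sp.Rational(1, 2)*sp.log(((r1 + r2)**2 - 4*l**2)/(4*r1*r2))

# axisymmetric Laplace equation (2.2) and residuals of the system (2.3)
lap = sp.diff(psi, z, 2) + sp.diff(psi, r, 2) + sp.diff(psi, r)/r
res_z = sp.diff(gamma, z) - 2*r*sp.diff(psi, z)*sp.diff(psi, r)
res_r = sp.diff(gamma, r) - r*(sp.diff(psi, r)**2 - sp.diff(psi, z)**2)

pt = {z: 0.7, r: 1.3, l: 0.2, z1: 0.5}     # arbitrary off-axis point
for expr in (lap, res_z, res_r):
    assert abs(sp.N(expr.subs(pt))) < 1e-8
```

The residuals vanish to numerical precision at any off-axis point; a fully symbolic `sp.simplify` would establish the same identities exactly, at a higher computational cost.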
In fact, by setting

$$ z - z_2 = l(1 - \cos\vartheta), \tag{2.6} $$

one finds that the spatial part dσ² of the square of the line element on the segment P₂P₁ becomes

$$ d\sigma^2 = 4l^2(d\vartheta^2 + \sin^2\vartheta\,d\varphi^2), \tag{2.7} $$

i.e. it coincides with the square of the line element of the spherical two-surface that is present at r = 0 in Schwarzschild's true and original solution [7] of Einstein's equations. The solution given by (2.5) is in one-to-one correspondence with that solution, and it does not contain the "interior region" that Hilbert could not help finding [13], due to his particular way of fixing the radial coordinate.

A static two-body solution is instead obtained if one assumes, as Bach did [6], that the "Newtonian potential" ψ is generated by matter that is present with constant mass density on two segments of the symmetry axis, like the segments P₄P₃ and P₂P₁ of Figure 1. We know already that the particular choice

$$ \psi = \frac{1}{2}\ln\frac{r_1 + r_2 - 2l}{r_1 + r_2 + 2l} + \frac{1}{2}\ln\frac{r_3 + r_4 - 2l'}{r_3 + r_4 + 2l'} \tag{2.8} $$

will produce a vacuum solution to Einstein's field equations that reduces to Schwarzschild's original solution if one sets either l = 0 or l′ = 0. Of course, due to the nonlinearity of (2.3), one cannot expect that γ will contain only the sum of the contributions

$$ \gamma_{11} = \frac{1}{2}\ln\frac{(r_1 + r_2)^2 - 4l^2}{4r_1 r_2}, \qquad \gamma_{22} = \frac{1}{2}\ln\frac{(r_3 + r_4)^2 - 4l'^2}{4r_3 r_4}, \tag{2.9} $$

corresponding to the individual terms of the potential (2.8); a further term is present, which Bach called γ₁₂, and reads

$$ \gamma_{12} = \ln\frac{l r_4 - (l' + d)r_1 - (l + l' + d)r_2}{l r_3 - d r_1 - (l + d)r_2} + c, \tag{2.10} $$

where c is a constant. Since γ must vanish at the spatial infinity, it must be c = ln[d/(l′ + d)]. With this choice of the constant one eventually finds [6] that the line element of the two-body solution is defined by the functions

$$ e^{2\psi} = \frac{r_1 + r_2 - 2l}{r_1 + r_2 + 2l}\cdot\frac{r_3 + r_4 - 2l'}{r_3 + r_4 + 2l'}, $$

$$ e^{2\gamma} = \frac{(r_1 + r_2)^2 - 4l^2}{4r_1 r_2}\cdot\frac{(r_3 + r_4)^2 - 4l'^2}{4r_3 r_4}\cdot\left[\frac{d(l' + d)r_1 + d(l + l' + d)r_2 - l d\,r_4}{d(l' + d)r_1 + (l + d)(l' + d)r_2 - l(l' + d)r_3}\right]^2. \tag{2.11} $$

[Figure 1. Representation in the canonical z, r half-plane of the mass sources for Bach's two-body solution. r₄, r₃ and r₂, r₁ are the "distances", calculated in the Euclidean way, of a point P from the end points of the two segments endowed with mass. P₄P₃ = 2l′, P₃P₂ = 2d, P₂P₁ = 2l, again in coordinate lengths.]

With these definitions of ψ and γ the line element (2.1) behaves properly at the spatial infinity and is regular everywhere, except for the two segments P₄P₃, P₂P₁ of the symmetry axis, where the sources of ψ are located, and also for the segment P₃P₂, because there γ does not vanish as required, but takes the constant value

$$ \Gamma = \ln\frac{d(l + l' + d)}{(l + d)(l' + d)}, \tag{2.12} $$

thus giving rise to the well known conical singularity.

3. Weyl's analysis of the static two-body solutions

Due to this lack of elementary flatness occurring on the segment P₃P₂, the solution is not a true two-body solution; nevertheless Weyl showed [6] that a regular solution could be obtained from it, provided that a nonvanishing energy tensor density T^k_i be allowed for in the space between the two bodies. In this way an axial force K is introduced, with the evident function of keeping the two bodies at rest² despite their mutual gravitational attraction. By providing a measure for K, Weyl provided a measure of the gravitational pull. Let us recall here Weyl's analysis [5], [6] of the axially symmetric, static two-body problem. In writing Einstein's field equations, we adopt henceforth Weyl's convention for the energy tensor:

$$ R_{ik} - \tfrac{1}{2} g_{ik} R = -T_{ik}. \tag{3.1} $$
Einstein's equations teach that, when the line element has the expression (2.1), T^k_i shall have the form

$$ \begin{pmatrix} T^0_0 & 0 & 0 & 0 \\ 0 & T^1_1 & T^1_2 & 0 \\ 0 & T^2_1 & T^2_2 & 0 \\ 0 & 0 & 0 & T^3_3 \end{pmatrix} \tag{3.2} $$

where

$$ T^1_1 + T^2_2 = 0. \tag{3.3} $$

By introducing the notation

$$ T^3_3 = r\varrho', \qquad T^0_0 = r(\varrho + \varrho'), \tag{3.4} $$

Einstein's equations can be written as

$$ \Delta\psi = \frac{1}{2}\varrho, \qquad \frac{\partial^2\gamma}{\partial z^2} + \frac{\partial^2\gamma}{\partial r^2} + \left(\frac{\partial\psi}{\partial z}\right)^2 + \left(\frac{\partial\psi}{\partial r}\right)^2 = -\varrho'; \tag{3.5} $$

$$ T^1_1 = -T^2_2 = \gamma_r - r(\psi_r^2 - \psi_z^2), \qquad -T^2_1 = -T^1_2 = \gamma_z - 2r\psi_r\psi_z. \tag{3.6} $$

Weyl shows that ϱ must be interpreted as mass density in the canonical space. To this end he considers the mass density distribution sketched in Figure 2, where ϱ is assumed to be nonvanishing only in the shaded regions labeled 1 and 2. According to (3.5), the potential ψ corresponding to this mass distribution can be uniquely split into two terms ψ₁ and ψ₂, such that ψ₁ is a potential function that vanishes at infinity and is everywhere regular outside the region 1, while ψ₂ behaves in the same way outside the region 2. The asymptotic forms of ψ₁ and ψ₂ are such that

$$ e^{2\psi_1} = 1 - \frac{m_1}{R} + \dots, \qquad e^{2\psi_2} = 1 - \frac{m_2}{R} + \dots, \tag{3.7} $$

where the mass coefficients m₁ and m₂ are given by the integral ∫ϱ dV = 2π∫∫ϱ r dr dz, performed in the canonical space and extended to the appropriate shaded region. Outside the shaded regions one has ϱ = 0, but there shall be some region between the bodies, let us call it L′, where ϱ = 0 but T^k_i ≠ 0, since in a static solution of general relativity the gravitational pull shall be counteracted in some way. Weyl's procedure for determining T^k_i in L′ is the following. Suppose that T^k_i = 0 outside a simply connected region L that includes both material bodies. Since ψ is known there, we can avail of (2.4), together with the injunction that γ vanish at infinity, to determine γ uniquely outside L.
Within L′ we can choose γ arbitrarily, provided that we ensure the regular connection with the vacuum region and the regular behaviour on the axis, i.e. γ vanishing there like r². Since ψ is known in L′ and γ has been chosen as just shown, we can use equations (3.5) and (3.6) to determine T^k_i there. If the material bodies each include a segment of the axis, just as occurs in Fig. 2, the force K directed along the z axis, with which the stresses in L′ contrast the gravitational pull, can be written as

$$ K = 2\pi\int_C \left(T^2_1\,dz - T^1_1\,dr\right); \tag{3.8} $$

the integration path runs along a curve C, like the one drawn in Fig. 2, that separates the two bodies in the meridian half-plane; the value of the integral does not depend on the precise position of C because, as one gathers from the definitions (3.5), (3.6),

$$ T^1_{1,1} + T^2_{1,2} = 0 \tag{3.9} $$

in the region L′. Since the region of the meridian half-plane where ϱ = 0 is simply connected, by starting from ψ and from the vacuum equation (2.4), now rewritten as

$$ d\gamma^* = 2r\psi_z\psi_r\,dz + r(\psi_r^2 - \psi_z^2)\,dr, \tag{3.10} $$

one can uniquely define there the function γ* that vanishes at the spatial infinity. In all the parts of the z axis where ϱ = 0 it must be γ*_z = 0, γ*_r = 0, hence γ* = const. In particular, in the parts of the axis that go to infinity one shall have γ* = 0; let us call Γ* the constant value assumed instead by γ* on the segment of the axis lying between the two bodies. The definitions (3.6) can now be rewritten as

$$ T^1_1 = -T^2_2 = \gamma_r - \gamma^*_r, \qquad -T^2_1 = -T^1_2 = \gamma_z - \gamma^*_z, \tag{3.11} $$

and the integral of (3.8) becomes

$$ \int_C \left(T^2_1\,dz - T^1_1\,dr\right) = \int_C \left[(\gamma^*_z - \gamma_z)\,dz + (\gamma^*_r - \gamma_r)\,dr\right] = \int_C d(\gamma^* - \gamma). \tag{3.12} $$

Since γ vanishes on the parts of the z axis where ϱ = 0, the force K that holds the bodies at rest despite the gravitational pull shall be

$$ K = -2\pi\Gamma^* \tag{3.13} $$

with Weyl's definition (3.1) of the energy tensor.
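Two of the claims above lend themselves to a quick symbolic check (a sketch with sympy, not part of the original paper; parameter names follow the text). First, the definitions (3.6) imply ∂_z T¹₁ + ∂_r T²₁ = 2rψ_z Δψ, so that the divergence identity (3.9) holds wherever Δψ = 0, which is what makes the integral (3.8) path-independent. Second, expanding K = −2πΓ* with Γ* given by (2.12) for small rod lengths gives the leading term 2π l l′/d², bilinear in the two masses and falling off as the inverse square of d, in keeping with the Newtonian law:

```python
import sympy as sp

z, r = sp.symbols('z r', positive=True)
psi = sp.Function('psi')(z, r)
gamma = sp.Function('gamma')(z, r)

# definitions (3.6), with x^1 = z and x^2 = r
T11 = sp.diff(gamma, r) - r*(sp.diff(psi, r)**2 - sp.diff(psi, z)**2)
T21 = 2*r*sp.diff(psi, r)*sp.diff(psi, z) - sp.diff(gamma, z)

div = sp.diff(T11, z) + sp.diff(T21, r)
lap = sp.diff(psi, z, 2) + sp.diff(psi, r, 2) + sp.diff(psi, r)/r
# the divergence reduces to 2*r*psi_z*Delta(psi): it vanishes wherever Delta(psi) = 0
assert sp.simplify(div - 2*r*sp.diff(psi, z)*lap) == 0

# Newtonian limit of K = -2*pi*Gamma*, with Gamma* from (2.12); lp stands for l'
l, lp, d, eps = sp.symbols('l lp d epsilon', positive=True)
K = -2*sp.pi*sp.log(d*(l + lp + d)/((l + d)*(lp + d)))
# scale both rod lengths by eps and expand about eps = 0
expansion = sp.series(K.subs({l: eps*l, lp: eps*lp}), eps, 0, 3).removeO()
assert sp.simplify(expansion - 2*sp.pi*l*lp*eps**2/d**2) == 0
```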
When the mass density ϱ has, in the canonical space, the particular distribution considered by Bach and drawn in Fig. 1, Γ* is equal to Γ as defined by (2.12). The measure of the gravitational pull with which the two "material bodies" of this particular solution attract each other therefore turns out to be

$$ K = 2\pi\ln\frac{(d + l)(d + l')}{d(d + l + l')} \tag{3.14} $$

in Weyl's units. This expression agrees with the Newtonian value when l and l′ are small compared with d, as expected.

4. From Weyl's K to a "quasi" force four-vector k_i

Despite its mathematical beauty, Weyl's definition of the gravitational pull for an axially symmetric, static two-body solution appears tied without remedy to the adoption of the canonical coordinate system. It is however possible to obtain through Weyl's definition of K, given by (3.8), a "quasi" four-vector k_i. In fact that expression can be rewritten as

$$ K = \int_\Sigma T^l_1\,df^*_{0l} \equiv \frac{1}{2}\int_\Sigma T^l_1\,\epsilon_{0lmn}\,df^{mn}, \tag{4.1} $$

where ε_{klmn} is Levi-Civita's totally antisymmetric tensor and df^{mn} is the element of the two-surface Σ generated by the curve C through rotation around the symmetry axis. Since the metric is static, it is possible to define invariantly a timelike Killing vector ξ^k_{(t)} that corresponds, in Weyl's canonical coordinates, to a unit coordinate time translation. Therefore (4.1) can be rewritten as

$$ K = \frac{1}{2}\int_\Sigma \xi^k_{(t)}\,T^l_1\,\epsilon_{klmn}\,df^{mn}, \tag{4.2} $$

still using the canonical coordinates. Now the integrand is written as the "1" component of the infinitesimal covariant four-vector

$$ \xi^k_{(t)}\,T^l_i\,\epsilon_{klmn}\,df^{mn}, \tag{4.3} $$

but of course in general the expression

$$ k_i = \frac{1}{2}\int_\Sigma \xi^k_{(t)}\,T^l_i\,\epsilon_{klmn}\,df^{mn} \tag{4.4} $$

will not be a four-vector, because the integration over Σ spoils the covariance. When evaluated in canonical coordinates, the nonvanishing components of k_i are k₁ = K and

$$ k_2 = 2\pi\int_C \left(T^2_2\,dz - T^2_1\,dr\right) = 2\pi\int_C \left[(\gamma^*_r - \gamma_r)\,dz - (\gamma^*_z - \gamma_z)\,dr\right], \tag{4.5} $$

which, however, must vanish too if k_i is to become a four-vector defined on the symmetry axis.
But, as one sees from Weyl's analysis, we are at liberty to choose T^k_i in L′ as nonvanishing only in a tube with a very small³ yet finite coordinate radius that encloses in its interior the segment of the symmetry axis lying between the bodies; moreover, we can freely set γ_z = γ*_z within the tube. Under these conditions the second term of the integral (4.5) just vanishes, while the first one shall be very small, since the regularity of the surface Σ requires that the curve C approach the symmetry axis at a straight angle in canonical coordinates. By properly choosing T^k_i we thus succeed in providing through equation (4.4) a quasi four-vector k_i whose components, written in Weyl's canonical coordinates, reduce in approximation to (K, 0, 0, 0).

5. The norm of the force in Bach's solution when 2l′ → 0

Having defined, with the above caveats, the quasi four-vector k_i along the segment of the symmetry axis between the two bodies, we can use its "quasi" norm to provide a measure of the force that opposes the gravitational pull. In the case of Bach's two-body solution, whose line element is defined in canonical coordinates by (2.1) and (2.11), that quasi norm reads

$$ k \equiv (-k_i k^i)^{1/2} = 2\pi\ln\frac{(d + l)(d + l')}{d(d + l + l')}\cdot\left[\frac{r_1 - 2l}{r_1}\cdot\frac{r_4 - 2l'}{r_4}\right]^{1/2} \tag{5.1} $$

when measured in Weyl's units at a point of the symmetry axis for which z₃ < z < z₂. At variance with the behaviour of K, the quasi norm k depends on z, through the term of (5.1) enclosed within the square brackets, which comes from e^{2ψ}. Let us evaluate this quasi norm divided by l′ when l′ → 0, namely the coefficient of the linear term in the Maclaurin series expansion of k with respect to l′. Since Γ*, now given by the right-hand side of (2.12), tends to zero when l′ → 0, while performing this limit one can also send to zero the radius of the very narrow tube considered in the previous section.
Therefore k_i can become a true four-vector and k can become a true norm in the above mentioned limit. With this proviso one finds the invariant result

$$\lim_{l'\to 0}\frac{k}{l'} = \left[\frac{\partial k}{\partial l'}\right]_{l'=0} = \frac{2\pi l}{d(d+l)}\left[\frac{r_1-2l}{r_1}\right]^{1/2}. \tag{5.2}$$

When l' → 0 the line element of Bach's solution with two bodies tends to the line element defined by (2.1) and (2.5), which is in one-to-one correspondence with the line element of Schwarzschild's original solution [7]. Therefore the scalar quantity [∂k/∂l']_{l'=0} evaluated at P_3 shall be the norm of the force per unit mass exerted by Schwarzschild's gravitational field on a test particle kept at rest at P_3. Its value is obtained by substituting 2d + 2l for r_1 in (5.2). One finds

$$\lim_{l'\to 0}\frac{k}{l'}\bigg|_{z=z_3} = \frac{8\pi l}{(2d+2l)^{3/2}(2d)^{1/2}}. \tag{5.3}$$

If one solves Schwarzschild's problem in spherical polar coordinates r, ϑ, ϕ, t with three unknown functions of r, i.e. without fixing the radial coordinate, like Combridge and Janne did long ago [16], [17], one ends up writing de Sitter's line element [18]

$$\mathrm{d}s^2 = -e^{\lambda}\,\mathrm{d}r^2 - e^{\mu}\left[r^2(\mathrm{d}\vartheta^2 + \sin^2\vartheta\,\mathrm{d}\varphi^2)\right] + e^{\nu}\,\mathrm{d}t^2 \tag{5.4}$$

in terms of one unknown function f(r). In fact λ, μ, ν are defined through this arbitrary function f(r) and through its derivative f'(r) as follows:

$$e^{\lambda} = \frac{f'^2}{1-2m/f}, \tag{5.5}$$

$$e^{\mu} = \frac{f^2}{r^2}, \tag{5.6}$$

$$e^{\nu} = 1-2m/f. \tag{5.7}$$

Here m is the mass constant; of course the arbitrary function f must have the appropriate behaviour as r → ∞. Schwarzschild's original solution [7] is eventually recovered [19], [20] by requiring that f be a monotonic function of r and that f(0) = 2m. Let us imagine that a test body be kept at rest in this field. With our symmetry-adapted coordinates, its world line shall be invariantly specified by requiring that the spatial coordinates r, ϑ, ϕ of the test body be constant in time.
If

$$\alpha = (-a_i a^i)^{1/2} \tag{5.8}$$

is the norm of the acceleration four-vector

$$a^i \equiv \frac{\mathrm{d}u^i}{\mathrm{d}s} + \Gamma^i_{kl}\,u^k u^l \tag{5.9}$$

along the world line of the test body, one finds:

$$\alpha = \frac{m}{f^{3/2}(f-2m)^{1/2}}. \tag{5.10}$$

This norm is assumed by way of hypothesis to be equal to the norm of the force per unit mass needed for constraining the test particle to follow a world line of rest despite the gravitational pull of the Schwarzschild field [1]. The consistency of the hypothesis with Einstein's theory requires that α be equal to the scalar quantity [∂k/∂l']_{l'=0, z=z_3} that provides the norm of the force per unit mass for Bach's solution in the test particle limit l' → 0. This is indeed the case, since the functional dependence of (5.3) on the mass parameter l and on the coordinate distance 2d + 2l is the same as the functional dependence of (5.10) on the mass parameter m and on the function f(r) with f(0) = 2m introduced above. The extra constant 8π appearing in (5.3) is just due to Weyl's adoption of the definition (3.1) of the energy tensor. For Schwarzschild's field, the definition of the norm of the force exerted on a test particle at rest obtained through the acceleration four-vector and the independent definition through the force that, in Bach's two-body solution, T^k_i must exert to keep the masses at rest when l' → 0 lead to one and the same result. In particular, both definitions show that the norm of the force per unit mass grows without limit as the test particle is kept at rest in a position closer and closer to Schwarzschild's two-surface.

Figure 2. Representation in the canonical z, r half-plane of extended mass sources of a two-body solution.

Schwarzschild actually worked with the next-to-last version of the theory [10], whose covariance was limited to unimodular transformations. As we shall see later, this fortuitous circumstance had momentous consequences.
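The value (5.10) can be checked directly from the metric functions (5.5)-(5.7). The following is a sketch of the computation for a static test body, using only the definitions above and the standard Christoffel symbol Γ^r_{tt} of the line element (5.4).

```latex
% Static world line: only u^t is nonvanishing, and normalization gives
\[
u^i = (0,0,0,u^t), \qquad e^{\nu}(u^t)^2 = 1 \;\Rightarrow\; u^t = e^{-\nu/2}.
\]
% Only the radial component of the acceleration (5.9) survives:
\[
a^r = \Gamma^r_{tt}\,(u^t)^2
    = \tfrac{1}{2}\,e^{-\lambda}\,(e^{\nu})'\,e^{-\nu}
    = \tfrac{1}{2}\,e^{-\lambda}\,\nu' .
\]
% Hence, from (5.8) and g_{rr} = -e^{\lambda},
\[
\alpha = (-a_i a^i)^{1/2} = e^{\lambda/2}\,a^r = \tfrac{1}{2}\,e^{-\lambda/2}\,\nu'
       = \frac{(1-2m/f)^{1/2}}{2 f'}\cdot\frac{2m f'/f^2}{1-2m/f}
       = \frac{m}{f^{3/2}(f-2m)^{1/2}},
\]
% in agreement with (5.10); the result is independent of the choice of f(r).
```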
If a metric is static the definition of rest with respect to that metric can be given in invariant form through the Killing vectors. This kind of procedure has been used to derive the equations of motion even for structured particles by Einstein, Infeld and Hoffmann [14] and by Fock and Papapetrou (see [15]).

[1] Rindler, W., Phys. Rev. 119 (1960) 2082.
[2] Synge, J.L., Proc. R. Irish Acad. 53A (1950) 83.
[3] Kruskal, M.D., Phys. Rev. 119 (1960) 1743.
[4] Szekeres, G., Publ. Math. Debrecen 7 (1960) 285.
[5] Weyl, H., Ann. d. Phys. 59 (1919) 185.
[6] Bach, R. and Weyl, H., Math. Zeitschrift 13 (1922) 134.
[7] Schwarzschild, K., Sitzungsber. Preuss. Akad. Wiss., Phys. Math. Kl. 1916, 189 (communicated 13 Jan. 1916).
[8] Einstein, A., Sitzungsber. Preuss. Akad. Wiss., Phys. Math. Kl. 1915, 844 (communicated 25 Nov. 1915).
[9] Hilbert, D., Nachr. Ges. Wiss. Göttingen, Math. Phys. Kl. 1915, 395 (communicated 20 Nov. 1915).
[10] Einstein, A., Sitzungsber. Preuss. Akad. Wiss., Phys. Math. Kl. 1915, 778 (communicated 11 Nov. 1915).
[11] Weyl, H., Ann. Phys. (Leipzig) 54 (1917) 117.
[12] Levi-Civita, T., Rend. Acc. dei Lincei 28 (1919) 3.
[13] Hilbert, D., Nachr. Ges. Wiss. Göttingen, Math. Phys. Kl. 1917, 53.
[14] Einstein, A., Infeld, L. and Hoffmann, B., Ann. Math. 39 (1938) 65.
[15] Papapetrou, A., Proc. Phys. Soc. A64 (1951) 57.
[16] Combridge, J.T., Phil. Mag. 45 (1923) 726.
[17] Janne, H., Bull. Acad. R. Belg. 9 (1923) 484.
[18] de Sitter, W., Month. Not. R. Astr. Soc. 76 (1916) 699.
[19] Abrams, L.S., Phys. Rev. D 20 (1979) 2474.
[20] Abrams, L.S., Can. J. Phys. 67 (1989) 919.
Title: Meta Learning for Few-Shot Medical Text Classification
Authors: Pankaj Sharma, Minh Tran, Imran Qureshi
Affiliation: Stanford Center for Professional Development, December 6, 2022
DOI: 10.48550/arxiv.2212.01552
arXiv: 2212.01552
PDF: https://export.arxiv.org/pdf/2212.01552v1.pdf

Abstract: Medical professionals frequently work in a data constrained setting to provide insights across a unique demographic. A few medical observations, for instance, inform the diagnosis and treatment of a patient. This suggests a unique setting for meta-learning, a method to learn models quickly on new tasks, to provide insights unattainable by other methods. We investigate the use of meta-learning and robustness techniques on a broad corpus of benchmark text and medical data. To do this, we developed new data pipelines, combined language models with meta-learning approaches, and extended existing meta-learning algorithms to minimize worst case loss. We find that meta-learning on text is a suitable framework for text-based data, providing better data efficiency and comparable performance to few-shot language models, and can be successfully applied to medical note data. Furthermore, meta-learning models coupled with DRO can improve worst case loss across disease codes.
Extended Abstract

Medical professionals frequently work in a data constrained setting to provide insights across a diverse demographic. A few medical observations, for instance, inform the diagnosis and treatment of a patient. This suggests a unique setting for meta-learning to learn models that can quickly adapt to new medical tasks and provide insights unattainable by other methods. We investigate the use of meta-learning and robustness techniques on a broad corpus of benchmark text and medical data. To do this, we developed new data pipelines, combined language models with meta-learning approaches, and extended existing meta-learning algorithms to minimize worst case loss.
We find that meta-learning on text is a suitable framework for text-based data, providing better data efficiency and comparable performance to few-shot language models and can be successfully applied to medical note data. Furthermore, meta-learning models coupled with DRO can improve worst case loss across disease codes. The first challenge was to validate the effectiveness of meta-learning on natural language. We did this by validating several approaches on the CLINC150 dataset, which is a corpus of intent snippets (e.g. "where is the phone" labelled as find phone). After trying approaches between both text encoders (e.g. RNN) and meta-learning algorithms (e.g. MANN), we validated that a BERT model to generate text embeddings combined with MAML/Prototypical Network provides near 100% accuracy for the entire dataset (on 150-class, 3-shots). Applying this approach to medical data, we used MIMIC-III, a critical care database with over 2 million notes. However, most notes were far longer than what any text encoder could process, and had many errors and redundancies so we created an end to end data pipeline that can extract data from the MIMIC III corpus to be used in meta-learning. Pre-processing of data required text processing steps (lower case, remove stop words, remove special characters, lemmatize) and using a summarizer to satisfy the 512 token requirement by BERT. Furthermore, we developed a dataloading strategy to construct tasks automatically from the embedded dataset. We experimented with semi-rare, popular and random disease codes that were fed into a baseline ProtoNet with many positive results. For instance, across random disease codes, the models achieved 73% accuracy for 10-way, 5-shot classification tasks which represents better accuracy and memory efficiency over a fine-tuned language model. 
Comparing the performance of ProtoNets on semi-rare, random and popular disease codes we found that the random codes performed better than the popular and semi-rare disease codes. This can be attributed to the skewed distribution of the number of notes per disease code. Finally, we investigated approaches with distributionally robust optimization, a strategy to minimize worst case loss across specific groups. We modified the loss functions for both ProtoNet and MAML for our experiments. Our model variations included implementing DRO with or without group adjustments and l 2 regularization. We found that DRO combined with MAML does improve prediction and does account for distribution shift, with the conclusion that Meta-learning models coupled with distributionally robust optimization (with some variations) can yield fairer models across disease codes. Introduction Medical professionals rely on limited information to derive insights for varied patient demographics. Often medical professionals rely on notes to document history of care, diagnose medical issues, and hand off cases to other doctors. As a result, medical notes contain valuable information such as clinical observations, medical history, treatment plans, and demographic information. Furthermore, notes are used in a variety of medical billing use cases for hospital revenue management. Recent advances in natural language processing (NLP) have made strides in creating language models with near human level performance on language tasks Brown et al. (2020). These advances have also been applied to medical tasks based on medical notes Biseda et al. (2020). However, current language model techniques require a large sample to fine tune even for a single task (e.g. 1000) and have a large memory and computational footprint. This is especially striking for emerging illnesses (e.g. COVID-19 and variants), where medical note data is scarce and the need for accurate classifications is pressing. 
Meta-learning provides an excellent framework for tackling these low-resource classification problems. In this framework, a model representation is learned across multiple tasks. Tasks are defined via a Support set of examples (K-shot) where the representation can quickly adapt to the examples via updates to its hidden state. Afterwards, the model can predict on a "query" set of examples. As a result, meta-learning on medical note tasks provides a significant opportunity to create robust, yet flexible models that can be deployed in data constrained healthcare settings. In this study, we focus on few shot classification of medical texts to investigate whether metalearning strategies can accurately learn across rare disease classification tasks. The learned representations should adapt well to new disease types and provide accurate classification with a limited number of examples. We use a large pre-trained language model (e.g. ClinicalBERT) to generate representations to train a MAML model and Prototypical Network. Our note corpus comes from MIMIC-III ('Medical Information Mart for Intensive Care') which is based on patients records from Beth Israel Deaconess Medical Center in Boston, Massachusetts and are classified according to 9th version of the International Classification of Diseases. See Dataset Discussion for more. There is a also the potential for bias in these models et al. (2021). Medical notes can be stratified by both the scribe's and the patient demographics (e.g. ethnicity, disease type, specialty, etc.). Given the critical decisions that rely on the outputs of these models and the potential harm that can result from biases, we also investigate meta-learning with Distributionally Robust Optimization (DRO) to understand performance bias towards any specific disease classification. See Methods for more. 
Methods

BERT Meta-Learning Model Architecture

Our primary approach was to first tackle the general NLP meta-learning problem by meta-learning on textual representations of the CLINC150 dataset, an intent classification corpus, and then apply the successful modelling strategies from CLINC150 to the MIMIC-III medical notes dataset. Our validated architecture consists of a large pre-trained language model, which generates textual representations, or embeddings, from raw text. With these representations, we automatically construct tasks for a MAML model and a Prototypical Network to learn across.

Our embeddings are retrieved using BERT (Bidirectional Encoder Representations from Transformers) and its variants Devlin et al. (2018), pre-trained language models which jointly condition on both left and right context in all layers. These models achieved state of the art on NLP benchmarks in 2019 and, more importantly, are effective in representing text as word vectors (in 768 dimensions) that can be used for downstream tasks. As an example, we can input a CLINC150 meaning of life entry "what do you think life is really about" into BERT and retrieve a vector e_1 ∈ R^768. This vector would be closer to e_2 ∈ R^768 generated from another meaning of life entry "why are humans on earth", but far from e_3 ∈ R^768, the find phone entry "help me find my phone please".

These embeddings are then constructed into tasks. Each task consists of a support set D^tr with K examples across N classes and a query set D^ts with Q query examples. See Figure 1. The tasks are used to train two meta-learning models: a Model Agnostic Meta Learning model and a Prototypical Network. Model Agnostic Meta Learning (MAML) Finn et al. (2017) reformulates the training procedure into two steps: (1) an inner loop, which is a traditional optimization procedure within each task, and (2) an outer loop to train across tasks.
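The task construction just described (a support set of K shots and a query set of Q examples over N sampled classes, as in Figure 1) can be sketched as follows. This is an illustrative sketch, not the authors' code; `embeddings_by_class` is an assumed mapping from class label to an array of precomputed BERT embeddings.

```python
import numpy as np

def sample_task(embeddings_by_class, n_way, k_shot, q_query, rng):
    """Sample one episode: N classes, each with K support shots and
    Q query examples, drawn without overlap from that class's pool."""
    classes = rng.choice(sorted(embeddings_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, cls in enumerate(classes):
        pool = embeddings_by_class[cls]
        # Draw K + Q distinct examples, then split into support and query.
        idx = rng.choice(len(pool), size=k_shot + q_query, replace=False)
        support += [(pool[i], label) for i in idx[:k_shot]]
        query += [(pool[i], label) for i in idx[k_shot:]]
    return support, query
```

Within each episode the sampled class identities are relabelled 0..N-1, so the meta-learner must rely on the support set rather than memorizing global labels.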
Combined, these methods create a trained network that can easily adapt to multiple tasks. Prototypical Networks (ProtoNets) Snell et al. (2017) also learn a representation across tasks, but do so by learning a metric space in which an encoder function f_θ projects raw examples into an embedding space. Instead of a set of weights learned via MAML, ProtoNets calculate prototypes c_n which can be used to classify query examples via the l_2 distance from the query example to each class prototype:

$$y^{ts} = \arg\min_n \|f_\theta(x^{ts}) - c_n\|^2,$$

equivalently, the class with the largest softmax probability over negative squared distances. This approach provides a more memory efficient representation with an easier inductive bias, as it is limited to classification problems, making ProtoNets an ideal model for our problem of classifying ICD codes from medical notes.

Distributionally Robust Optimization

Finally, we add a distributionally robust loss function to our architecture. In Distributionally Robust Optimization (DRO), models are trained to minimize the worst case loss over a set of pre-defined groups. Sagawa et al. (2019) showed that strongly regularizing group DRO with l_2 regularization can give substantially higher worst-group accuracies. In our experiments we combined DRO and regularization with meta-learning to study the impact on worst case group predictions (in this case, disease codes) during testing. The sampling was modified to include group codes (disease codes) in D^ts. These codes were used in the outer loop adaptation for MAML to optimize for the worst case loss and track the worst performing groups.

Datasets and Pre-Processing

Datasets

Two datasets were used for experiments. The first, CLINC150, was used to establish a baseline to validate the meta-learning model. CLINC150 is used for intent classification and out-of-scope prediction and is relatively simple Li and Zhang (2021).
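The prototype computation and nearest-prototype rule above can be sketched in a few lines. This is an illustrative sketch that treats the encoder f_θ as the identity; in the actual architecture it would be applied to the BERT embeddings first.

```python
import numpy as np

def prototypes(support_x, support_y, n_way):
    """Class prototype c_n = mean embedding of the support shots of class n."""
    return np.stack([support_x[support_y == n].mean(axis=0) for n in range(n_way)])

def classify(query_x, protos):
    """Assign each query to the class whose prototype is nearest in squared l2 distance."""
    d2 = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```

During meta-training the same distances feed a softmax cross-entropy loss, so that the encoder is pushed to cluster same-class embeddings around their prototype.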
The CLINC150 dataset was already processed and split into 6 sets including in/out of domain data for train, test, validation accordingly. For our baseline model, we used in domain data only, the data includes 150 "in-scope" intent classes, each with 100 records, 20 validations, and 30 test samples. Each record include a piece of text and its label (Figure 3). The table below (Figure 4) shows the distribution of train and test set which is uniform distribution. The diversity of classes indicates that CLINC150 is a good dataset to establish a baseline, however the distribution of intent classes is much more constrained than the distribution of disease codes in MIMIC-III, where there are thousands of diseases and the most challenging to classify are the ones that are rare and often obscured by more common diseases. After establishing the baseline, we performed experiments using the MIMIC-III dataset, which is a large, freely-available database comprising de-identified health-related data associated with over 40,000 patients who stayed in critical care units of the Beth Israel Deaconess Medical Center between 2001 and 2012. For our scope, we extracted only specific data required for disease classification, which included patient IDs, associated medical notes across time, and associated diagnosis disease codes (or ICD codes). After selection, the dataset ended up containing around 2 millions records of more than 46,000 patients with 6,984 distinct diseases. See 5. The distribution of codes is largely skewed towards heart-or respiratory-related diseases, with a long tail of rare disease codes. Given time and processing constraints, we decided to take sampled 1000 records from each of the top 10 ICDs. We chose to equalize the ICD code samples so that the variance in distribution is due to properties of the medical notes themselves rather than the number of data samples. The selected data distribution is shown in (Figure 6). 
Data Pre-Processing

No pre-processing was performed on the CLINC150 dataset as the data is clean and well organized. Unlike CLINC150, MIMIC-III medical notes contain many typos, errors, and abbreviations. Furthermore, many notes could be associated with a single patient across time, creating redundant information with too many tokens for our text encoder (BERT) to process into embeddings. As a result, we designed an end-to-end pipeline to extract and clean the MIMIC data. There were earlier efforts to process MIMIC-III text data for classification, Wang et al. (2020) and Nuthakki et al. (2019); we used these pipelines as references and built our own procedure.

For data selection, as mentioned above, our pipeline extracts the most popular and most rare ICD codes from schema tables depending on the experiment settings; an example of the extracted data distribution for the top 10 ICD codes is shown in Figure 6. For each disease, we sampled 1000 medical notes while limiting the number of records from the same patient ID. By doing this, we increase the variance of the data and minimize redundant information, since notes for a patient in a specific period usually contain the same content with small updates. For each medical note, we applied the standard text processing steps (lower case, remove stop words, remove special characters, lemmatize) and finally solved the input length problem by summarizing the text into a string of maximum 512 tokens using the BERT and BART summarizers Devlin et al. (2018) Lewis et al. (2019). The example in Figure 7 demonstrates the processing steps for one specific note; the final text output is used as input data for BERT to get the embedding vector.
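A minimal sketch of the per-note cleaning steps (lowercasing, special-character removal, stop-word removal, 512-token cap). Lemmatization and the BERT/BART summarization step are omitted here, and the stop-word list is a tiny illustrative stand-in for a full one.

```python
import re

# Tiny illustrative stop-word list; the real pipeline would use a full one.
STOPWORDS = {"the", "a", "an", "of", "and", "is", "was", "to", "in"}

def clean_note(text, max_tokens=512):
    """Lowercase, strip special characters, drop stop words, cap token count."""
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())
    tokens = [t for t in text.split() if t not in STOPWORDS]
    return " ".join(tokens[:max_tokens])  # enforce BERT's 512-token input limit
```

In the actual pipeline the cap is reached by summarization rather than truncation, so that late sections of a long note still contribute to the embedding.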
We also extended MAML model by implementing a Distributionally Robust Optimization (DRO) to minimize the worst-case loss during adaptation. The results of the baseline with CLINC150 using 512 token BERT embedding using the ProtoNet model is shown in Figure 8. For a 10 class 5 shot model, we achieved a meta accuracy of 98% and for a 150 class 3 shot model, we achieved an accuracy of 96.7% Meta Learning with Distributionally Robust Optimization (DRO) Our purpose was to study the impact of predictions by meta learning models once they are optimized for worst case expected loss due to atypical groups of data. We used four variations of two meta-learning models each to compare test time performance. For our experiments disease codes were considered groupings with the assumption that there are distribution shifts due to temporal (written over 10 years) and spatial (written by different individuals within each group) shifts in the medical notes. We introduce the notation G which is a set of all the disease codes and g for the codes that are being used in the adaptation loop. When sampling tasks we sample the triplet of (BERT Embeddings, labels, disease codes) as compared to (BERT Embeddings, labels) with a baseline model. The disease codes are only used in the outer adaptation loop and ignored in the inner loop loss calculation. Batch sizes were limited to 16 which results in high variance in the graphs. We also used batch size of 64 with less variance but achieved similar results. For MAML we apply DRO in the outer loop adaptation only. We perform gradient descent based on the max group loss. We did not implement DRO for the inner loop, hence we do not use g notation for D tr . We wanted to focus on how well does the model "adapt" to distribution shifts. We also track the losses and counts per group globally and use it to determine the worst case and best case disease codes and determine the performance during test time. The count is also used for group adjusted DRO. 
We first established a baseline with CLINC150 (Figure 10) and then used the MIMIC-III dataset for final results (Figure 11).

The following is the DRO objective without group adjustment (not count based) used with MAML (Finn et al., 2017; Sagawa et al., 2019):

$$\min_\theta \max_{g \in G} \sum_{\text{task } i} L\big(\theta - \nabla_\theta L(\theta, D^{tr}),\; D^{ts}_g\big) \tag{1}$$

The following is the DRO objective with group adjustment (count based) used with MAML, where the count term n_g acts as a regularizer (Finn et al., 2017):

$$\min_\theta \max_{g \in G} \sum_{\text{task } i} L\big(\theta - \nabla_\theta L(\theta, D^{tr}),\; D^{ts}_g\big) + \frac{1}{\sqrt{n_g}} \tag{2}$$

The analogous DRO objectives, without and with the group adjustment term n_g, are used with ProtoNet (Snell et al., 2017).

Results

1. MAML with DRO using CLINC150. During test time, with CLINC150 we saw improved performance on worst case groups but slightly decreased accuracy for best case groups. This is expected, as DRO also acts as a regularizer that forces the model to pay attention to worst case groups. See Figure 10 for a summary of results.

Figure 10: DRO Baseline with MAML using CLINC150 for a 150 class 3 shot model.

2. MAML with DRO using MIMIC-III. Please see Table 1 and Figure 11 for a summary of results. Baseline MAML showed signs of overfitting with the MIMIC-III dataset and did not perform well on the worst case groups. When DRO was enabled, there was no overfitting even when the l_2 regularizer was not used. With DRO there is a drop in the best case accuracy but a perceptible increase or comparable results in the worst case scenarios. MAML with count based DRO (without l_2) performs well in comparison to the other DROs, as it shows the most improvement on worst case and middle case groups. MAML with count based DRO (with l_2) performs best among the DROs in best case scenarios.
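The worst-group selection in the outer loop of objectives (1) and (2) can be sketched as follows. This is an illustrative sketch, not the authors' code; `losses` are assumed to be the per-query-example adapted losses and `groups` their disease codes.

```python
import numpy as np

def worst_group_loss(losses, groups, counts=None):
    """Mean loss per group (disease code), optionally with the 1/sqrt(n_g)
    group adjustment of objective (2); return the worst (max) group loss."""
    group_losses = []
    for g in np.unique(groups):
        mean_g = losses[groups == g].mean()
        if counts is not None:
            mean_g += 1.0 / np.sqrt(counts[g])  # group adjustment regularizer
        group_losses.append(mean_g)
    return max(group_losses)
```

The outer MAML step would then take a gradient step on this worst-group loss instead of the task-averaged loss, so rare disease codes with high loss dominate the update.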
3. ProtoNet with DRO using MIMIC-III. Please see Figure 12 for a summary of results. As with MAML, baseline ProtoNet overfits the MIMIC-III data. DRO does regularize and reduce overfitting but does not eliminate it. Count based or group adjusted DRO with l_2 regularization performs the best in all cases. For our experiments we passed the BERT embeddings through fully connected layers, since we did not train BERT.

Prototypical Networks and Rare and Popular Disease Codes

We compare the performance of ProtoNets on Clinical BERT embeddings on random, semi-rare, and popular disease classifications (Figure 9):

• Random: charts sampled from the full distribution of codes with greater than 10 notes per code
• Semi-rare: charts sampled from the bottom 50 codes with greater than 10 notes per code
• Popular: charts among the top 50 codes (see Figure 5)

Results. Surprisingly, the randomly distributed codes performed better than the popular and semi-rare codes. The popular codes likely performed worse due to the skewed distribution of medical codes within the top codes, which are focused on heart and lung diseases. This introduces lower variation in the embeddings, making it difficult for the ProtoNet prototypes to be distinguished via minimum l_2 distance. Still, the meta-learning approach on rare codes has shown positive results. In contrast, fine-tuning a BERT model for similar performance required >500 samples, indicating that meta-learning on this dataset is significantly more memory efficient and useful for rare disease settings.

Discussion and Future Work

Our purpose was to understand the performance of meta-learning in tackling low-resource classification, such as disease code classification using medical notes. We validated our approach on an NLP benchmark and devised two experiments on a medical dataset by coupling DRO with meta-learning algorithms and using ProtoNet to classify rare and popular disease codes.
In our experiments, we did not exploit temporal data such as time-series data of disease change or all notes of a patient over each time period. This temporal data would be very useful to track the progress of each patient and gain insight into how conditions develop. In the future, processing each patient's records as a whole observation would capture the time factor and would allow us to cover a broader spectrum of ICD codes. Another useful improvement could be the relationship between diseases, as many diseases are correlated or even belong to a branch of a broader category. Exploiting the temporal data and the hierarchical structure of ICD codes, and integrating this information into our current baseline, would improve model performance: besides comparing popular and rare diseases, we could also use a top-down approach to compare each branch of disease.

We conclude that DRO combined with MAML does improve prediction and does account for distribution shift. The choice between DROs with or without l_2 may end up being domain specific. DRO combined with ProtoNet gives mixed results. We equalized the distribution of medical notes by choosing 1000 notes per disease code for sampling, so that the distribution shift is limited to the content of the medical notes. Future experiments would involve removing that constraint (equalized distribution) and evaluating few shot learning on rare disease codes. In our architecture, BERT was used as an embedding generator and was not trained using gradient descent. Future work could involve integrating the BERT loss function with meta-learning algorithms. Furthermore, we can investigate this approach on partial snippets of medical text to understand the limits of NLP meta-learning on these datasets. The results for ProtoNet predictions for random, popular and semi-rare codes are also encouraging. We believe that more experimentation and fine-tuning of our encoder will give improved results.

6 Related Works

Zhang et al. (2019) used meta-learning for predicting risk using limited Electronic Health Records. They developed a model agnostic gradient framework to train a meta learner on a set of prediction tasks for relevant high risk clinical tasks. We wanted to adapt this concept using Prototypical Networks (ProtoNets) (Snell et al., 2017) and Model Agnostic Meta Learning (MAML) (Finn et al., 2017). To account for distribution shifts in test data, Sagawa et al. (2019) achieved higher worst case accuracy by coupling DRO models with increased regularization. This forms the basis of our experiments with meta-learning, where we combine DRO with MAML and ProtoNets to see if we can get similar results for Electronic Health Records using the MIMIC-III dataset. We also used publicly available BERT embeddings (Alsentzer et al., 2019) to generate embeddings for the medical notes in the MIMIC-III dataset. For data processing, we used the ideas of a data pipeline from MIMIC-Extract (Wang et al., 2020) and NLP of MIMIC-III clinical notes (Nuthakki et al., 2019) to build our own procedure for extracting and processing related data before embedding generation.

Team Contributions

Our team's original breakdown of work was for one individual to write the paper (Imran), one individual to manage the experiment data centrally (Pankaj), and one individual to research promising experiments (Minh). However, as we went through the research process, we found that more effort was spent on designing and validating experiments, so we each contributed equal amounts to writing, designing, and managing data.

Acknowledgements

We are grateful to the CS330 teaching staff for providing an engaging course with material that represents a frontier of machine learning!
Figure 1: Data loading strategy: create support and query sets from the word embeddings with N classes, K shots, and Q query size.

Figure 2: Experiment architectures: embeddings for clinical notes using BERT (bidirectional encoder representations from transformers), trained using either MAML (including MAML with Distributionally Robust Optimization (DRO) in the adaptation loop) or ProtoNet.

Figures 3-4: CLINC150 intent class distribution of the CLINC150 training set.

Figure 5: Original data distribution of all ICD codes.

Figure 6: Top 10 most popular diseases.

Figure 7: Step-by-step data processing.

Models evaluated:
1. Baseline meta-learning model: MAML and ProtoNet
2. Model with DRO
3. Model with count-based or group-adjusted DRO
4. Model with count-based or group-adjusted DRO and l2 regularization

Figure 8: Baseline with ProtoNet using CLINC150 for a 150-class, 3-shot model.

Figure 9: ProtoNet performance for random and semi-rare codes.

Figure 11: DRO in the outer loop with MAML, 2-class 5-shot model using the top 10 MIMIC-III disease codes.

Figure 12: DRO with ProtoNet, 2-class 5-shot model using the top 10 MIMIC-III disease codes.

Table 2: Meta-test accuracy and standard deviation on popular, semi-rare, and random ICD disease codes.

References

T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, "Language models are few-shot learners," 2020. [Online]. Available: https://arxiv.org/abs/2005.14165

B. Biseda, G. Desai, H. Lin, and A. Philip, "Prediction of ICD codes with clinical BERT embeddings and text augmentation with label balancing using MIMIC-III," 2020. [Online]. Available: https://arxiv.org/abs/2008.10492

V. et al., "Mitigating bias in machine learning for medicine," Nature, 2021. [Online]. Available: https://www.nature.com/articles/s43856-021-00028-w

J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," 2018. [Online]. Available: https://arxiv.org/abs/1810.04805

C. Finn, P. Abbeel, and S. Levine, "Model-agnostic meta-learning for fast adaptation of deep networks," 2017. [Online]. Available: https://arxiv.org/abs/1703.03400

J. Snell, K. Swersky, and R. S. Zemel, "Prototypical networks for few-shot learning," 2017. [Online]. Available: https://arxiv.org/abs/1703.05175

S. Sagawa, P. W. Koh, T. B. Hashimoto, and P. Liang, "Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization," 2019.

J. Y. Li and J. Zhang, "Semi-supervised meta-learning for cross-domain few-shot intent classification," Proceedings of the 1st Workshop on Meta Learning and Its Applications to Natural Language Processing, 2021.

S. Wang, M. B. A. McDermott, G. Chauhan, M. Ghassemi, M. C. Hughes, and T. Naumann, "MIMIC-Extract," Proceedings of the ACM Conference on Health, Inference, and Learning, Apr. 2020. [Online]. Available: https://arxiv.org/pdf/1907.08322.pdf

S. Nuthakki, S. Neela, J. W. Gichoya, and S. Purkayastha, "Natural language processing of MIMIC-III clinical notes for identifying diagnosis and procedures with neural networks," 2019.

M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," 2019. [Online]. Available: https://arxiv.org/abs/1910.13461

X. S. Zhang, F. Tang, H. Dodge, J. Zhou, and F. Wang, "MetaPred: Meta-learning for clinical risk prediction with limited patient electronic health records," 2019. [Online]. Available: https://arxiv.org/pdf/1905.03218.pdf

E. Alsentzer, J. R. Murphy, W. Boag, W.-H. Weng, D. Jin, T. Naumann, and M. B. A. McDermott, "Publicly available clinical BERT embeddings," 2019. [Online]. Available: https://arxiv.org/pdf/1904.03323.pdf
ALP-KD: Attention-Based Layer Projection for Knowledge Distillation

Peyman Passban ([email protected]), Yimeng Wu ([email protected]), Mehdi Rezagholizadeh ([email protected]), Qun Liu ([email protected])
Huawei Noah's Ark Lab

Abstract: Knowledge distillation is considered as a training and compression strategy in which two neural networks, namely a teacher and a student, are coupled together during training. The teacher network is supposed to be a trustworthy predictor and the student tries to mimic its predictions. Usually, a student with a lighter architecture is selected so we can achieve compression and yet deliver high-quality results. In such a setting, distillation only happens for final predictions whereas the student could also benefit from teacher's supervision for internal components. Motivated by this, we studied the problem of distillation for intermediate layers. Since there might not be a one-to-one alignment between student and teacher layers, existing techniques skip some teacher layers and only distill from a subset of them. This shortcoming directly impacts quality, so we instead propose a combinatorial technique which relies on attention. Our model fuses teacher-side information and takes each layer's significance into consideration, then performs distillation between combined teacher layers and those of the student. Using our technique, we distilled a 12-layer BERT (Devlin et al. 2019) into 6-, 4-, and 2-layer counterparts and evaluated them on GLUE tasks (Wang et al. 2018). Experimental results show that our combinatorial approach is able to outperform other existing techniques.

DOI: 10.1609/aaai.v35i15.17610
arXiv: 2012.14022. Available: https://arxiv.org/pdf/2012.14022v1.pdf
Introduction

Knowledge distillation (KD) (Buciluǎ, Caruana, and Niculescu-Mizil 2006; Hinton, Vinyals, and Dean 2015) is a commonly-used technique to reduce the size of large neural networks (Sanh et al. 2019).
Apart from this, we also consider it as a complementary and generic add-on to enrich the training process of any neural model (Furlanello et al. 2018). In KD, a student network (S) is glued to a powerful teacher (T) during training. These two networks can be trained simultaneously or T can be a pre-trained model. Usually, T uses more parameters than S for the same task, therefore it has a higher learning capacity and is expected to provide reliable predictions. On the other side, S follows its teacher with a simpler architecture. For a given input, both models provide predictions where those of the student are penalized by an ordinary loss function (using hard labels) as well as predictions received from T (also known as soft labels). Training a (student) model for a natural language processing (NLP) task can be formalized as a multi-class classification problem to minimize a cross-entropy (ce) loss function, as shown in Equation 1:

$$\mathcal{L}_{ce} = -\sum_{i=1}^{N} \sum_{w \in V} \left[ \mathbb{1}(y_i = w) \times \log p_S(y_i = w \mid x_i, \theta_S) \right] \quad (1)$$

where 1(.) is the indicator function, V is a vocabulary set (or different classes in a multi-class problem), N is the number of tokens in an input sequence, and y is a prediction of the network S with a parameter set θ_S given an input x. To incorporate teacher's supervision, KD accompanies L_ce with an auxiliary loss term, L_KD, as shown in Equation 2:

$$\mathcal{L}_{KD} = -\sum_{i=1}^{N} \sum_{w \in V} \left[ p_T(y_i = w \mid x_i, \theta_T) \times \log p_S(y_i = w \mid x_i, \theta_S) \right] \quad (2)$$

Since S is trained to behave identically to T, model compression can be achieved if it uses a simpler architecture than its teacher. However, if these two models are the same size, KD would still be beneficial. What L_KD proposes is an ensemble technique by which the student is informed about teacher's predictions. The teacher has better judgements and this helps the student learn how much it deviates from true labels.
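As a concrete illustration, the two loss terms above can be sketched in plain Python for a single token. This is a minimal sketch, not the paper's implementation: logits are plain lists over the vocabulary and no temperature scaling is applied.

```python
import math

def softmax(logits):
    """Convert a list of logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def ce_loss(student_logits, gold_index):
    """Hard-label cross-entropy of Equation 1 for one token:
    only the gold class survives the indicator function."""
    return -math.log(softmax(student_logits)[gold_index])

def kd_loss(student_logits, teacher_logits):
    """Soft-label KD term of Equation 2: the teacher's distribution
    over the vocabulary weights the student's log-probabilities."""
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    return -sum(t * math.log(s) for t, s in zip(p_t, p_s))
```

Summing these terms over the N tokens of an input sequence recovers the losses above; when the student's distribution matches the teacher's exactly, the KD term reduces to the entropy of the shared distribution, its minimum.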
This form of KD, which is referred to as Regular KD (RKD) throughout this paper, only provides S with external supervision for final predictions, but this can be extended to other components such as intermediate layers too. The student needs to be aware of the information flow inside teacher's layers, and this becomes even more crucial when distilling from deep teachers. Different alternatives have been proposed to this end, which compare networks' internal layers in addition to final predictions (Jiao et al. 2019; Sun et al. 2019, 2020), but they suffer from other types of problems. The main goal in this paper is to study such models and address their shortcomings.

Problem Definition

To utilize intermediate layers' information (and other components in general), a family of models exists that defines a dedicated loss function to measure how much a student diverges from its teacher in terms of internal representations. In particular, if the goal is to distill from an n-layer teacher into an m-layer student, a subset of m (out of n) teacher layers is selected whose outputs are compared to those of student layers (see Equation 3 for more details). Figure 1 illustrates this concept.

Figure 1: Student and teacher models have m and n layers, respectively. Each node is an intermediate layer and links are cross-model connections. In this example, every other layer of the teacher is skipped in order to match the size of the student. The outputs of nodes connected to each other are compared via a loss function (shown with ↔) to ensure that the student model has similar internal representations as its teacher.

As the figure shows, each student layer is connected to a single, dedicated peer on the teacher side, e.g. the n-th teacher layer corresponds to the m-th student layer. Since the outputs of these two layers are compared to each other, we hope that both models generate as similar outputs as possible at points n and m.
With this simple technique, teacher's knowledge can be used to supervise student's intermediate layers. Experimental results show that intermediate layer matching could be quite effective, but in our study we realized that it may suffer from two shortcomings:

• If n ≫ m, multiple layers in T have to be ignored for distillation, but we know that those layers consist of precious information for which we spend expensive resources to learn. This issue is referred to as the skip problem in this paper.

• Moreover, it seems the way teacher layers are kept/skipped is somewhat arbitrary, as there is no particular strategy behind it. Before training, we lack enough knowledge to judge which subset of teacher layers contributes more to the distillation process, so there is a good chance of skipping significant layers if we pick them in an arbitrary fashion. Finding the best subset of layers to distill from requires an exhaustive search or an expert in the field to signify connections. We refer to this issue as the search problem.

In order to resolve the aforementioned issues we propose an alternative, which is the main contribution of this paper. Our solution does not skip any layer but utilizes all information stored inside T. Furthermore, it combines teacher layers through an attention mechanism, so there is no need to deal with the search problem. We believe that the new notion of combination defined in this paper is as important as our novel KD architecture and can be adapted to other tasks too.

The remainder of this paper is organized as follows: First, we briefly review KD techniques used in similar NLP applications, then we introduce our methodology and explain how it addresses existing shortcomings. We accompany our methodology with experimental results to show whether the proposed technique is useful. Finally, we conclude the paper and discuss future directions.
Related Work

KD was originally proposed for tasks other than NLP (Buciluǎ, Caruana, and Niculescu-Mizil 2006; Hinton, Vinyals, and Dean 2015). Kim and Rush (2016) adapted the idea and proposed a sequence-level extension for machine translation. Freitag, Al-Onaizan, and Sankaran (2017) took a step further and expanded it to a multi-task scenario. Recently, with the emergence of large NLP and language understanding (NLU) models such as ELMO (Peters et al. 2018) and BERT (Devlin et al. 2019), KD has gained extra attention. Deep models can be trained in a better fashion and compressed via KD, which is favorable in many ways. Therefore, a large body of work in the field, such as Patient KD (PKD) (Sun et al. 2019), has been devoted to compressing/distilling BERT (and similar) models. PKD is directly related to this work, so we discuss it in more detail. It proposes a mechanism to match teacher and student models' intermediate layers by defining a third loss function, L_P, in addition to L_ce and L_KD, as shown in Equation 3:

$$\mathcal{L}_{P} = \sum_{i=1}^{N} \sum_{j=1}^{m} \left\| \frac{h_S^{i,j}}{\|h_S^{i,j}\|_2} - \frac{A(j)^i}{\|A(j)^i\|_2} \right\|_2^2 \quad (3)$$

where h_S^{i,j} is the output of the j-th student layer for the i-th input. A subset of teacher layers selected for distillation is denoted with an alignment function A, e.g. A(j) = h_T^l implies that the output of the j-th student layer should be compared to the output of the l-th teacher layer (h_S^{i,j} ↔ h_T^{i,l}). PKD is not the only model that utilizes internal layers' information. Other models such as TinyBERT (Jiao et al. 2019) and MobileBERT (Sun et al. 2020) also found it crucial for training competitive student models. However, as Equation 3 shows, in these models only m teacher layers (the number of teacher layers returned by A) can contribute to distillation. In the presence of deep teachers and small students, this limitation can introduce a significant amount of information loss. Furthermore, what is denoted by A directly impacts quality.
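The normalized matching in Equation 3 can be sketched for one input as follows. This is a simplification for illustration, not PKD's code: hidden states are plain Python lists and the alignment A is given as an index map.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit L2 norm (zero vectors pass through)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n > 0 else list(v)

def pkd_loss(student_hiddens, teacher_hiddens, alignment):
    """Equation 3 for one input: squared distance between each
    L2-normalized student layer and its aligned teacher layer.
    alignment[j] is the teacher layer index assigned to student layer j."""
    total = 0.0
    for j, h_s in enumerate(student_hiddens):
        s = l2_normalize(h_s)
        t = l2_normalize(teacher_hiddens[alignment[j]])
        total += sum((a - b) ** 2 for a, b in zip(s, t))
    return total
```

Note that the normalization makes the loss invariant to the scale of each hidden state: a teacher layer and a student layer that point in the same direction incur zero penalty regardless of magnitude.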
If A skips an important layer, the student model may fail to provide high-quality results. To tackle this problem, Wu et al. (2020) proposed a combinatorial technique, called CKD. In their model, A(j) returns a subset of teacher layers instead of a single layer. Those layers are combined together and distillation happens between the combination result and the j-th student layer, as shown in Equation 4:

$$\hat{C}_j = F_c(h_T^k); \; h_T^k \in A(j) \qquad C_j = F_r(\hat{C}_j) \qquad \bigcup_{j=1}^{m} A(j) = \{h_T^1, ..., h_T^n\} \quad (4)$$

where Ĉ_j is the result of a combination produced by the function F_c given a subset of teacher layers indicated by A(j). In Wu et al. (2020), F_c is implemented via a simple concatenation. Depending on the form of combination used in Equation 4, there might be a dimension mismatch between Ĉ_j and the student layer h_S^j. Accordingly, there is another function, F_r, to reform the combination result into a comparable shape to the student layer. CKD uses a single projection layer to control the dimension mismatch. With the combination technique (concatenation+projection), CKD could solve the skip problem but the search problem still remains unanswered. Similar to PKD, CKD also requires a search process, but it looks for the best subset of teacher layers instead of the best single layer. These two models are directly related to this research so we consider them as baselines in our experiments.

The application of KD in NLP and NLU is not limited to the aforementioned models. Aguilar et al. (2020) followed the same architecture as PKD but they introduced a new training regime, called progressive training. In their method, lower layers are trained first and training is progressively shifted to upper layers. They claim that the way internal layers are trained during KD can play a significant role. Liu et al. (2019) investigated KD from another perspective.
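The CKD combination of Equation 4 above can be sketched with plain lists: F_c concatenates a bucket of teacher layers and F_r is a single linear projection. The shapes and the row-wise weight layout are our illustration choices, not CKD's actual implementation.

```python
def ckd_combine(teacher_layers, bucket, proj_rows):
    """Equation 4 sketch: F_c concatenates the teacher layers of one
    bucket; F_r is a single linear projection (proj_rows holds one
    weight row per output dimension) mapping the concatenation back
    to the student's hidden size."""
    concat = [x for k in bucket for x in teacher_layers[k]]  # F_c
    return [sum(w * x for w, x in zip(row, concat)) for row in proj_rows]  # F_r
```

For instance, with two 2-dimensional teacher layers in one bucket, the concatenation is 4-dimensional and the projection brings it back to 2 dimensions, the assumed student hidden size.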
Instead of focusing on the compression aspect, they kept the size of student models equal to their teachers and showed how KD could be treated as a complementary training ingredient. Tan et al. (2019) squeezed multiple translation engines into one transformer (Vaswani et al. 2017) and showed that knowledge can be distilled from multiple teachers. Wei et al. (2019) introduced a novel training procedure where there is no need for an external teacher. A student model can learn from its own checkpoints. At each validation step, if the current checkpoint is better than the best existing checkpoint, the student learns from it; otherwise the best stored checkpoint is considered as a teacher.

Methodology

For a given student model S and a teacher model T we show all intermediate layers with sets H_S = {h_S^1, ..., h_S^m} and H_T = {h_T^1, ..., h_T^n}, respectively. Based on the pipeline designed by current models for intermediate layer KD, there must be a connection between H_S and H_T during training, and each student layer can only correspond to a single peer on the teacher side. As previously mentioned, layer connections are denoted by A. A common heuristic to devise A is to divide teacher layers into m buckets with approximately the same sizes and pick only one layer from each (Jiao et al. 2019; Sun et al. 2019). Therefore, for the j-th layer of the student model, A(j) returns a single teacher layer among those that reside in the j-th bucket. Figure 2a illustrates this setting. Clearly, this is not the best way of connecting layers, because they are picked in a relatively arbitrary manner. More importantly, no matter what heuristic is used, there still remain n − m layers in this approach whose information is not used in distillation. To address this issue, we simply propose a combinatorial alternative whereby all layers inside buckets are taken into consideration. Our technique is formulated in Equation 5:

$$C_j = \sum_{h_T^k \in A(j)} \alpha_{jk} h_T^k, \qquad \alpha_{jk} = \frac{\exp(h_S^j \cdot h_T^k)}{\sum_{h_T^k \in A(j)} \exp(h_S^j \cdot h_T^k)}, \qquad \bigcup_{j=1}^{m} A(j) = H_T = \{h_T^1, ..., h_T^n\} \quad (5)$$

This idea is similar to that of CKD, but we use an attention mechanism (Bahdanau, Cho, and Bengio 2014) instead of a concatenation for layer combination. Experimental results demonstrate that this form of combination is more useful. We refer to this idea as Attention-based Layer Projection for KD, or ALP-KD in short. According to the equation, if a student layer associates with a particular bucket, all layers inside that bucket are combined/used for distillation, and C_j is a vector representation of such a combination. Our model benefits from all n teacher layers and skips none, as there is a dedicated C vector for each student layer. Figure 2b visualizes this setting.

Weights (α values) assigned to teacher layers are learnable parameters whose values are optimized during training. They show the contribution of each layer to the distillation process. They also reflect the correlation between student and teacher layers, i.e. if a student layer correlates more with a set of teacher layers, the weights connecting them should receive higher values. In other words, that specific layer is playing the role of its teacher peers on the student side. To measure the correlation, we use the dot product in our experiments, but any other function for similarity estimation could be used in this regard.

Equation 5 addresses the skip problem with a better combination mechanism and is able to provide state-of-the-art results. However, it still suffers from the search problem as it relies on buckets and we are not sure which bucketing strategy works better. For example, in Figure 2b the first bucket consists of the first three layers of the teacher, but it does not mean that we cannot append a fourth layer. In fact, a bucket with four layers might perform better. Buckets can also share layers; namely, a teacher layer can belong to multiple buckets and can be used numerous times in distillation.
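The attention-based combination of Equation 5 can be sketched as follows; vectors are plain Python lists and the softmax is computed with the usual max-subtraction trick for numerical stability (an implementation detail of this sketch, not the paper's code).

```python
import math

def alp_combine(h_s, teacher_layers):
    """Equation 5 sketch: dot-product scores between a student layer
    h_s and each teacher layer in its bucket, softmax-normalized into
    attention weights alpha, then the weighted average C_j of those
    teacher layers. Returns (C_j, alphas)."""
    scores = [sum(a * b for a, b in zip(h_s, h_t)) for h_t in teacher_layers]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]
    dim = len(teacher_layers[0])
    c = [sum(alphas[k] * teacher_layers[k][d] for k in range(len(teacher_layers)))
         for d in range(dim)]
    return c, alphas
```

Passing all of H_T instead of a single bucket as `teacher_layers` gives the bucket-free variant discussed next, where every teacher layer contributes to every student layer.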
These constraints make it challenging to decide about buckets and their boundaries, but it is possible to resolve this dilemma through a simple modification in our proposed model. To avoid bucketing, we span the attention mask over all teacher layers rather than over buckets. To implement this extension, A(j) needs to be replaced with H_T in Equation 5. Therefore, for any student layer such as h_S^j there would be a unique set of n attention weights and C_j would be a weighted average of all teacher layers, as shown in Equation 6:

$$C_j = \sum_{h_T^k \in A(j)} \alpha_{jk} h_T^k, \qquad A(j) = H_T \quad \forall j \in \{1, 2, ..., m\} \quad (6)$$

Figure 2: In Figure 2a, teacher layers are divided into 3 buckets and only one layer from each bucket is connected to the student side, e.g. h_T^5 is the source of distillation for h_S^2 (h_T^5 ↔ h_S^2). In Figure 2b, a weighted average of teacher layers from each bucket is considered for distillation, e.g. A(2) = {h_T^4, h_T^5} and C_2 = α_24 h_T^4 + α_25 h_T^5 (C_2 ↔ h_S^2). In Figure 2c, there is no bucketing and all teacher layers are considered for projection. Links with higher color intensities have higher attention weights.

This new configuration, which is illustrated in Figure 2c, proposes a straightforward way of combining teacher layers and addresses both skip and search problems at the same time. To train our student models, we use a loss function which is composed of L_ce, L_KD, and a dedicated loss defined for ALP-KD, as shown in Equation 7:

$$\mathcal{L} = \beta \mathcal{L}_{ce} + \eta \mathcal{L}_{KD} + \lambda \mathcal{L}_{ALP}, \qquad \mathcal{L}_{ALP} = \sum_{i=1}^{N} \sum_{j=1}^{m} \mathrm{MSE}(h_S^{i,j}, C^{i,j}) \quad (7)$$

where MSE() is the mean-square error and C^{i,j} shows the value of C_j when the teacher is fed with the i-th input. β, η, and λ are hyper-parameters of our model to minimize the final loss.

Experimental Study

A common practice in our field to evaluate the quality of a KD technique is to feed T and S models with instances of standard datasets and measure how they perform. We followed the same tradition in this paper and selected a set of eight GLUE tasks (Wang et al. 2018), including the CoLA, MNLI, MRPC, QNLI, QQP, RTE, SST-2, and STS-B datasets, to benchmark our models. Detailed information about the datasets is available in the appendix section.

In NLP/NLU settings, T is usually a pre-trained model whose parameters are only fine-tuned during training. On the other side, S can be connected to T to be trained thoroughly or can alternatively be initialized with T's parameters to be fine-tuned similar to its teacher. This helps the student network generate better results and converge faster. Fine-tuning is more common than training in our context and we thus fine-tune our models rather than training. This concept is comprehensively discussed by Devlin et al. (2019), so we skip its details and refer the reader to their paper. We have the same fine-tuning pipeline in this work.

In our experiments, we chose the original BERT model (also known as BERT_Base) as our teacher. We are faithful to the configuration proposed by Devlin et al. (2019) for it. Therefore, our in-house version also has 12 layers with 12 attention heads, and the hidden and feed-forward dimensions are 768 and 3072, respectively. Our students are also BERT models, only with fewer layers (|H_S| = m; m < 12). We use the teacher BERT to initialize students, but because the numbers of layers are different (12 ≠ m) we only consider its first m layers. We borrowed this idea from PKD (Sun et al. 2019) in the interest of fair comparisons.

In order to maximize each student's performance we need to decide about the learning rate, batch size, the number of fine-tuning iterations, and β, η, and λ. To this end, we run a grid search similar to Sun et al. (2019) and Wu et al. (2020). In our setting, the batch size is set to 32 and the learning rate is selected from {1e−5, 2e−5, 5e−5}. η and λ take values from {0, 0.2, 0.5, 0.7} and β = 1 − η − λ. Details of the grid search and the values of all hyper-parameters are reported in the appendix section.
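A compact sketch tying the objective of Equation 7 to the weight grid just described; β = 1 − η − λ comes from the text, while dropping grid points whose implied β would be negative is our assumption, since the paper does not say how such pairs are handled.

```python
from itertools import product

def alp_objective(l_ce, l_kd, student_hiddens, combined, eta, lam):
    """Equation 7: beta*L_ce + eta*L_KD + lambda*L_ALP with
    beta = 1 - eta - lam; L_ALP is the mean-square error between
    each student layer and its combined teacher representation C_j."""
    l_alp = sum(
        sum((a - b) ** 2 for a, b in zip(h_s, c)) / len(h_s)
        for h_s, c in zip(student_hiddens, combined)
    )
    return (1.0 - eta - lam) * l_ce + eta * l_kd + lam * l_alp

def weight_grid():
    """Candidate (eta, lambda) pairs from {0, 0.2, 0.5, 0.7};
    pairs whose implied beta is negative are dropped (assumption)."""
    vals = [0.0, 0.2, 0.5, 0.7]
    return [(e, l) for e, l in product(vals, vals) if 1.0 - e - l >= 0.0]
```

With η = λ = 0 the objective collapses to plain cross-entropy training, which makes the no-KD baseline a special case of the same code path.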
We trained multiple models with different configurations and compared our results to RKD- and PKD-based students. To the best of our knowledge, these are the only alternatives that use BERT as a teacher and whose students' architecture relies on ordinary Transformer blocks (Vaswani et al. 2017) of the same size as ours, so any comparison to any other model with different settings would not be fair. Due to CKD's similarity to our approach we also re-implemented it in our experiments. The original CKD model was proposed for machine translation, and for the first time we evaluate it on NLU tasks.

To bridge the performance gap between the teacher and S_NKD, we involve KD in the training process and train new models, S_RKD and S_PKD, with the RKD and PKD techniques, respectively. S_RKD is equivalent to a configuration known as DistilBERT in the literature (Sanh et al. 2019). To have precise results and a better comparison, we trained/fine-tuned all models in the same experimental environment. Accordingly, we do not borrow any result from the literature but reproduce them. This is the reason we use the term equivalent for these two models. Furthermore, DistilBERT has an extra cosine embedding loss in addition to those of S_RKD. When investigating the impact of intermediate layers in the context of KD, we wanted L_P to be the only difference between RKD and PKD, since incorporating any other factor could hurt our investigation, and we thus avoided the cosine embedding loss in our implementation.

PKD outperforms RKD with an acceptable margin in Table 1, and that is because of the engagement of intermediate layers. For S_PKD, we divided teacher layers into 3 buckets (4 layers in each) and picked the first layer of each bucket to connect to student layers, i.e. A(1) = h_T^1, A(2) = h_T^5, and A(3) = h_T^9. There is no teacher layer assigned to the last layer of the student. This form of mapping maximizes PKD's performance, which we determined via an empirical study.
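The bucket-and-pick alignment used for S_PKD above can be written as a small helper; the "first layer of each bucket" rule is the one described in the text, and the function name is ours.

```python
def bucket_first_alignment(n_teacher, m_buckets):
    """Divide n teacher layers into m equal buckets and return the
    first (1-indexed) layer of each bucket, e.g. 12 teacher layers in
    3 buckets gives layers 1, 5, and 9, matching the S_PKD mapping."""
    size = n_teacher // m_buckets
    return [b * size + 1 for b in range(m_buckets)]
```

Picking the middle or last layer of each bucket instead would be a one-line change, which is exactly the kind of arbitrary choice the search problem refers to.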
Results discussed so far demonstrate that cross-model layer mapping is effective, but it can be improved even more if the skip issue is settled. Therefore, we trained two other students using CKD. The setting for these models is identical to PKD, namely teacher layers are divided into 3 buckets. The first 4 teacher layers reside in the first bucket, the fifth to eighth layers are in the second bucket, and the rest are covered by the third bucket. Layers inside the first bucket are concatenated and passed through a projection layer to match the student layers' dimension. The combination result for the first bucket is assigned to the first student layer (C_1 ↔ h_S^1). The same procedure is repeated with the second and third buckets for h_S^2 and h_S^3. Similar to PKD, there is no teacher layer connected to the last student layer. This configuration is referred to as No Overlap (NO), which indicates that buckets share no layers with each other.

In addition to NO we designed a second configuration, PO, which stands for Partial Overlap. In PO, each bucket shares its first layer with the preceding bucket, so the first bucket includes the first to fifth layers, the second bucket includes the fifth to ninth layers, and the layers from the ninth onward reside in the third bucket. We explored this additional configuration to see the impact of different bucketing strategies in CKD. Comparing S_CKD to S_PKD shows that the combination (concatenation+projection) idea is useful in some cases, but for others the simple skip idea is still better. Even defining different bucketing strategies did not change it drastically, and this leads us to believe that a better form of combination, such as an attention-based model, is required. In the S_ALP extensions, we replace CKD's concatenation with attention and results improve. ALP-KD is consistently better than all other RKD, PKD, and CKD variations, and this justifies the necessity of using attention for combination.
S_ALP-NO and S_ALP-PO also directly support this claim. In S_ALP, we followed Equation 6 and spanned the attention mask over all teacher layers. This setting provides a model that requires no engineering adjustment to deal with the skip and search problems and yet delivers the best result on average.

Training Deeper/Shallower Models Than 4-Layer Students

So far we compared 4-layer ALP-KD models to others and observed superior results. In this section, we design additional experiments to study our technique's behaviour from the size perspective. The original idea of PKD was proposed to distill from a 12-layer BERT into a 6-layer student (Sun et al. 2019). In such a scenario, only every other layer of the teacher is skipped, and it seems the student model should not suffer from the skip problem dramatically. We repeated this experiment to understand whether our combination idea is still useful or its impact diminishes when student and teacher models have closer architectures. Table 2 summarizes the findings of this experiment.

Among 6-layer students, S_ALP-NO has the best average score, which demonstrates that the combinatorial approach is still useful. Moreover, the supremacy of attention-based combination over simple concatenation holds for this setting too. S_ALP is the second best and yet our favorite model, as it requires no layer alignment before training.

The gap between PKD and ALP-KD is narrowed in 6-layer models compared to 4-layer students, and this might be due to an implicit relation between model size and the need for combining intermediate layers. We focused on this hypothesis in another experiment and this time used the same teacher to train 2-layer students. In this scenario, student models are considerably smaller with only 39M parameters. Results of this experiment are reported in Table 3. For CKD and ALP-KD, we combine all teacher layers and distill into the first layer of the student.
Similar to previous experiments, there is no connection between the last layer of 2-layer students and the teacher model, and KD happens between h^1_S and H_T. For PKD, we need to decide which teacher layers should be involved in distillation, for which we assessed three configurations with the first (h^1_S ↔ h^1_T), sixth (h^1_S ↔ h^6_T), and twelfth (h^1_S ↔ h^12_T) layers. S_ALP outperforms other students in this case too, and this time the gap between PKD and ALP-KD is even more visible. This result points to the fact that when teacher and student models differ significantly, intermediate layer combination becomes crucial.

Qualitative Analysis

We tried to visualize attention weights to understand what happens during training and why ALP-KD leads to better performance. Figure 3 illustrates the results of this experiment. From the SST-2 dataset, we randomly selected 10 examples and stimulated both teacher and student models to emit attention weights between the first layer of the student (h^1_S) and all teacher layers (H_T). We carried out this experiment with 2-, 4-, and 6-layer S_ALP models. The x and y axes in the figure show the attention weights and the 10 examples, respectively. As seen in Figure 3a, the first half of the teacher model is more active, which is expected since we distill into the first layer of the student. However, h^1_S receives strong signals from layers in the second half too; e.g., in Example-10 there is a strong connection between h^11_T and h^1_S. This visualization demonstrates that all teacher layers participate in distillation and that defining buckets or skipping layers might not be the best approach. A similar situation arises when distilling into the 4-layer model in Figure 3b, as the first half is still more active.

Figure 3: Visualizing attention weights between the first layer of the student model and all teacher layers for 10 samples from SST-2. Weights belong to S_ALP with 2 (a), 4 (b), and 6 (c) layers.
For the 6-layer model, we see a different pattern: the attention weights concentrate around the middle layers of the teacher, and h^1_S is mainly fed by layers h^4_T to h^7_T. Considering the distribution of attention weights, any skip- or even concatenation-based approach would fail to reveal the maximum capacity of KD. Such approaches assume that a single teacher layer or a subset of adjacent layers affects the student model, whereas almost all of them participate in the process. Apart from the previously reported results, this visualization again justifies the need for an attention-based combination in KD. Our technique emphasizes intermediate layers and the necessity of having similar internal representations between student and teacher models, so in addition to attention weights we also visualized the outputs of intermediate layers. The main idea behind this analysis is to show the information flow inside student models and how ALP-KD helps them mimic their teacher. Figures 4a and 4b illustrate this experiment. We randomly selected 100 samples from the SST-2 dataset and visualized what the hidden representations of the S_ALP, S_PKD, and T models (from Table 1) look like when stimulated with these inputs. Student models have 4 layers, but due to space limitations we only show a subset of their layers.

Table 3: The teacher model T_BERT has 12 layers and all other student models have 2 layers. S_PKD-l indicates that h^l_T is used for distillation.

The output of each intermediate layer is a 768-dimensional vector, but for visualization purposes we consider the first two principal components extracted via PCA (Wold, Esbensen, and Geladi 1987). During training, h^5_T and h^9_T are connected to h^2_S and h^3_S as the sources of distillation in PKD, so we also include those teacher layers' outputs in our visualization. As the figure shows, ALP-KD's representations are closer to the teacher's, which demonstrates that our technique helps train better students with characteristics closer to their teachers.
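The two-component projection used for this visualization can be reproduced with a little NumPy. This sketch is a plain stand-in for a library PCA (it assumes samples are rows), not the authors' plotting code.

```python
import numpy as np

def pca_2d(hidden_states):
    """First two principal components of a set of 768-d layer outputs."""
    X = np.asarray(hidden_states, dtype=float)
    X = X - X.mean(axis=0)                      # center the data
    # Rows of Vt are the principal axes, ordered by singular value.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                         # (n_samples, 2)

rng = np.random.default_rng(2)
outputs = rng.standard_normal((100, 768))       # e.g. h^2_S for 100 SST-2 samples
coords = pca_2d(outputs)                        # 2-D coordinates to plot
```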
We conducted another complementary analysis in which we used the outputs of the same teacher and student layers from the previous experiment and measured their distance for all 100 examples. Results of this experiment are illustrated in Figures 4c and 4d for the second and third student layers, respectively. Internal representations generated by PKD are more distant from those of the teacher than ALP-KD's representations; e.g., the distance between h^{20,2}_PKD (the output of the second PKD layer for the 20-th example in Figure 4c) and h^{20,5}_T is around 0.20, whereas this number is only 0.05 for ALP-KD. This is an indication that the ALP-KD student follows its teacher better than the PKD student. To measure distance, we used cosine similarity in this experiment.

Conclusion and Future Work

In this paper, we discussed the importance of distilling from intermediate layers and proposed an attention-based technique to combine teacher layers without skipping them. Experimental results show that the combination idea is effective. Our findings in this research can be summarized as follows:
• To distill from deep teachers with multiple internal components, combination seems to be essential.
• The more teacher and student models differ in terms of the number of layers, the more intermediate layer combination becomes crucial.
• Although a simple concatenation of layers is still better than skipping in many cases, an attention-based combination is required to obtain competitive results.
• ALP-KD can be tuned to combine layers inside buckets, and this approach is likely to yield state-of-the-art results; but if there is not enough knowledge to decide about buckets, a simple attention mask over all teacher layers should solve the problem.
As our future direction, we are interested in applying ALP-KD to other tasks to distill from extremely deep teachers into compact students. Moreover, we will work on designing better attention modules.
Techniques that are able to handle sparse structures could be more useful in our architecture. Finally, we would like to adapt our model to combine other internal components, such as attention heads.
• MNLI: A multi-genre natural language inference corpus including sentence pairs with textual entailment annotations (Williams, Nangia, and Bowman 2018).
• QNLI: A dataset built for a binary classification task to assess whether a sentence contains the correct answer to a given query (Rajpurkar et al. 2016).
• SST-2: A sentiment analysis dataset with sentence-level (positive/negative) labels. The training and validation sets include 67,349 and 872 sentences, respectively (Socher et al. 2013).
• STS-B: A collection of sentence pairs used for semantic similarity estimation. Each pair has a similarity score from 1 to 5. This dataset has 5,749 training and 1,500 validation examples (Cer et al. 2017).

Different Attention Models

The attention mechanism is the main reason that our architecture works better than PKD and CKD. The default module designed in ALP-KD relies on a simple dot product, but we studied whether a better attention technique can boost performance even more. Accordingly, we adapted the idea of Vaswani et al. (2017) and carried out a new experiment, which is reported in Table 4. To measure the correlation between h^j_S and teacher layers, we consider h^j_S as a query vector, teacher layers as key vectors, and train a dedicated value vector for each key. We compute attention weights using key, query, and value vectors as described in Vaswani et al. (2017). For this extension, we implemented single- and multi-head attention modules. According to Table 4, performance improves as the number of heads increases, but neither the single-head nor the 4-head model could outperform the simple dot-product-based technique, which was unexpected.
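A single-head version of this key-query attention variant can be sketched as follows. The projection matrices and the 1/√d_k scaling are standard Transformer choices that we assume here; the paper's exact parameterization (which also trains value vectors) may differ.

```python
import numpy as np

def kqv_weights(h_student, teacher_states, W_q, W_k):
    """Single-head key-query attention of a student layer over teacher layers."""
    q = W_q @ h_student                               # (d_k,)
    K = np.stack([W_k @ h for h in teacher_states])   # (n_layers, d_k)
    scores = K @ q / np.sqrt(q.shape[0])              # scaled dot products
    scores -= scores.max()                            # numerical stability
    w = np.exp(scores)
    return w / w.sum()

rng = np.random.default_rng(3)
d, d_k = 768, 64
W_q = rng.standard_normal((d_k, d)) / np.sqrt(d)
W_k = rng.standard_normal((d_k, d)) / np.sqrt(d)
H_T = [rng.standard_normal(d) for _ in range(12)]
w = kqv_weights(rng.standard_normal(d), H_T, W_q, W_k)
```

A multi-head version would repeat this with separate W_q/W_k per head and merge the resulting combinations.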
Hyper-parameters

We set the batch size to 32 for all models. The maximum sequence length is 64 for single-sentence tasks and 128 for sentence-pair tasks. For STS-B, SST-2, and QNLI we run 10 epochs of fine-tuning. For datasets with more than 300K training instances we only run 5 epochs. For MRPC and RTE we fine-tune the model for 20 epochs, and CoLA is the only dataset which needs 50 epochs to provide high-quality results. For these numbers we tried to follow the literature and use the same experimental settings as others. We first initialize student models and then fine-tune both T_BERT and S_NKD with a learning rate chosen from {1e-5, 2e-5, 5e-5}. For S_RKD, η takes values from {0.2, 0.5, 0.7} and β is set to 1 − η. λ is 0 since there is no intermediate-layer distillation. For the Softmax function we consider 1, 5, 10, or 20 as potential values for the temperature T (Hinton, Vinyals, and Dean 2015). For each task, a grid search is performed over the learning-rate set, η, and T. For S_PKD, the process is almost the same with a single difference: there is an additional hyper-parameter λ, which incorporates the effect of the intermediate-layer loss and takes values from {0.2, 0.5, 0.7}. Unlike the previous setting, this time β is 1 − η − λ. In S_PKD, we conduct a grid search over λ as well as all other hyper-parameters. For the S_CKD and S_ALP models we use the same grid search as in S_PKD. Tables 5 to 10 list the exact values of hyper-parameters for all experiments.

Hardware

Each model is fine-tuned on a single NVIDIA 32GB V100 GPU. The fine-tuning time, depending on the dataset size, can vary from a few hours to one day on a single GPU.

Table 5: Hyper-parameters of T_BERT.
              CoLA  MNLI  MRPC  QNLI  QQP   RTE   SST-2 STS-B
learning rate 2e-5  5e-5  2e-5  2e-5  2e-5  2e-5  2e-5  2e-5

Table 6: Hyper-parameters of S_NKD.
              CoLA  MNLI  MRPC  QNLI  QQP   RTE   SST-2 STS-B
learning rate 2e-5  5e-5  2e-5  2e-5  5e-5  2e-5  5e-5  5e-5
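The loss weighting described above can be summarized in a short sketch. The term names are ours; the actual loss terms (hard-label cross-entropy, soft-target loss on temperature-softened logits, and the intermediate-layer distance) live in the training code.

```python
import math

def soft_targets(logits, T):
    """Temperature-softened softmax used for the soft-label term."""
    m = max(l / T for l in logits)            # subtract max for stability
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def total_loss(ce, soft, inter, eta, lam=0.0):
    """Weighted objective implied by the text: eta on the hard-label CE,
    beta = 1 - eta - lam on the soft-target term, and lam on the
    intermediate-layer term (lam = 0 recovers the S_RKD setting)."""
    beta = 1.0 - eta - lam
    return eta * ce + beta * soft + lam * inter

p = soft_targets([2.0, 1.0, 0.0], T=10)       # a high T flattens the distribution
loss = total_loss(ce=0.4, soft=0.6, inter=0.2, eta=0.5, lam=0.2)
```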
Table 7: Hyper-parameters of S_RKD.
              CoLA  MNLI  MRPC  QNLI  QQP   RTE   SST-2 STS-B
learning rate 2e-5  5e-5  2e-5  5e-5  5e-5  5e-5  5e-5  5e-5
T             10    20    10    20    20    5     10    5
η             0.5   0.7   0.5   0.7   0.7   0.7   0.7   0.2

Figure 2: Three pairs of S and T networks with different forms of layer connections.

Figure 4: Visualizing intermediate layers' outputs and their distance from the teacher in ALP-KD and PKD students. Teacher-, ALP-KD-, and PKD-related information is visualized with green, red, and blue colors, respectively. Figures 4a and 4c provide information about h^2_ALP, h^2_PKD, and h^5_T, and Figures 4b and 4d report information about h^3_ALP, h^3_PKD, and h^9_T. In the bottom figures, the x axis shows samples and the y axis is the cosine distance from the teacher.

Due to space limitations we only show the middle layers' outputs, namely h^2_S (Figure 4a) and h^3_S (Figure 4b); h^1_S and h^4_S also expressed very similar patterns. Table 1 summarizes our experiments. The teacher model with 12 layers and 109M parameters has the best performance for all datasets. This model can be

Table 1: Except the teacher (T_BERT), which is a 12-layer model, all other models have 4 layers. Apart from the number of layers, all students have the same architecture as the teacher. The first column shows what sort of problems each model suffers from. NKD stands for No KD, which means no KD technique is involved during training of this student model. NO and PO are different configurations for mapping internal layers. Boldfaced numbers show the best student score for each column over the validation set. Scores in the first column are Matthews correlations.
STS-B scores are Pearson correlations and the rest are accuracy scores.

Problem       Model      CoLA   MNLI   MRPC   QNLI   QQP    RTE    SST-2  STS-B  Average
N/A           T_BERT     57.31  83.39  86.76  91.25  90.96  68.23  92.67  88.82  82.42
N/A           S_NKD      31.05  76.83  77.70  85.13  88.97  61.73  88.19  87.29  74.61
skip, search  S_RKD      29.22  79.31  79.41  86.77  90.25  65.34  90.37  87.45  76.02
skip, search  S_PKD      32.13  79.26  80.15  86.64  90.23  65.70  90.14  87.26  76.44
search        S_CKD-NO   31.23  79.42  80.64  86.93  88.70  66.06  90.37  87.62  76.37
search        S_CKD-PO   31.95  79.53  80.39  86.75  89.89  67.51  90.25  87.55  76.73
search        S_ALP-NO   34.21  79.26  79.66  87.11  90.72  65.70  90.37  87.52  76.82
search        S_ALP-PO   33.86  79.74  79.90  86.95  90.25  66.43  90.48  87.52  76.89
none          S_ALP      33.07  79.62  80.72  87.02  90.54  67.15  90.37  87.62  77.01

compressed, so we reduce the number of layers to 4 and train another model (S_NKD). The rest of the configuration (attention heads, hidden dimension, etc.) remains untouched. There is no connection between the teacher and S_NKD, and it is trained separately with no KD technique. Because of the reduced number of layers, performance drops in this case, but we still gain a lot in terms of memory as this new model only has 53M parameters.

Table 2: The teacher model T_BERT has 12 layers and all other student models have 6 layers.

The task defined on the MNLI dataset is to predict whether the premise entails the hypothesis, contradicts it, or neither, given a premise sentence and a hypothesis. The dataset has two versions, matched (test and training examples are from the same domain) and mismatched; we use the matched version. This dataset has 392,702 training and 9,815 validation examples.
• MRPC: A corpus of sentence pairs with human annotations. The task is to decide whether the sentences are semantically equivalent (Dolan and Brockett 2005). The training and validation sets have 3,668 and 408 examples, respectively.
The QNLI set has 104,743 training and 5,463 validation examples.
• QQP: A set of question pairs with 363,849 training and 40,430 validation instances collected from the well-known question answering website Quora. The task is to determine whether a given pair of questions is semantically equivalent (Iyer, Dandekar, and Csernai 2017).
• RTE: A combined set of 2,490 training and 277 validation examples collected from four sources for a series of textual entailment challenges (Dagan, Glickman, and Magnini 2005; Bar-Haim et al. 2006; Giampiccolo et al. 2007; Bentivogli et al. 2009).

Table 4: The impact of different attention techniques. All models use the key-query-value architecture to compute weights. Digits appended to subscripts indicate the number of attention heads.
Model       CoLA   MNLI   MRPC   QNLI   QQP    RTE    SST-2  STS-B  Average
S_ALP-NO-1  32.84  78.94  79.03  86.31  90.31  65.06  88.53  87.26  76.04
S_ALP-PO-1  31.56  78.37  79.66  85.39  89.97  65.32  89.04  87.45  75.85
S_ALP-1     31.75  79.31  80.64  86.75  90.11  66.06  89.95  87.33  76.49
S_ALP-NO-4  33.37  79.62  80.64  86.84  90.27  66.06  89.91  87.67  76.80
S_ALP-PO-4  33.62  79.50  79.90  86.69  90.14  65.34  90.25  87.52  76.62
S_ALP-4     32.61  79.34  79.90  87.00  90.26  67.31  90.71  87.46  76.82

Table 8: Hyper-parameters of S_PKD.
              CoLA  MNLI  MRPC  QNLI  QQP   RTE   SST-2 STS-B
learning rate 2e-5  5e-5  5e-5  5e-5  5e-5  2e-5  5e-5  5e-5
T             5     20    5     20    20    5     5     5
η             0.5   0.2   0.2   0.2   0.5   0.2   0.2   0.2
λ             0.2   0.7   0.7   0.7   0.2   0.7   0.7   0.2

Table 9: Hyper-parameters of S_CKD.
              CoLA  MNLI  MRPC  QNLI  QQP   RTE   SST-2 STS-B
learning rate 5e-5  5e-5  2e-5  5e-5  5e-5  5e-5  5e-5  5e-5
T             20    10    10    20    5     10    20    20
η             0.2   0.5   0.7   0.7   0.2   0.2   0.5   0.2
λ             0.2   0.2   0.2   0.2   0.2   0.2   0.2   0.2

Table 10: Hyper-parameters of S_ALP.
              CoLA  MNLI  MRPC  QNLI  QQP   RTE   SST-2 STS-B
learning rate 5e-5  5e-5  5e-5  5e-5  5e-5  5e-5  5e-5  5e-5
T             5     20    10    20    20    10    20    5
η             0.2   0.7   0.7   0.7   0.7   0.2   0.2   0.2
λ             0.5   0.2   0.2   0.2   0.2   0.2   0.7   0.5

By the output, we mean the output of the layer for the CLS token. For more details about CLS see Devlin et al. (2019).
https://github.com/google-research/bert

Similar to other papers, we evaluate our models on validation sets. Test-set labels of GLUE datasets are not publicly available and researchers need to participate in leaderboard competitions to evaluate their models on test sets.

Acknowledgement
We would like to thank our anonymous reviewers as well as Chao Xing and David Alfonso Hermelo from Huawei Noah's Ark Lab for their valuable feedback.

Appendix

GLUE Datasets
Datasets used in our experiments are as follows:
• CoLA: A corpus of English sentences drawn from books and journal articles with 8,551 training and 1,043 validation instances. Each example is a sequence of words with a label indicating whether it is a grammatical sentence (Warstadt, Singh, and Bowman 2018).

References
Aguilar, G.; Ling, Y.; Zhang, Y.; Yao, B.; Fan, X.; and Guo, C. 2020. Knowledge Distillation from Internal Representations. In AAAI, 7350-7357.
Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473.
Bar-Haim, R.; Dagan, I.; Dolan, B.; Ferro, L.; Giampiccolo, D.; Magnini, B.; and Szpektor, I. 2006. The Second PASCAL Recognising Textual Entailment Challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, volume 6, 6-4. Venice.
Bentivogli, L.; Clark, P.; Dagan, I.; and Giampiccolo, D. 2009. The Fifth PASCAL Recognizing Textual Entailment Challenge. In TAC.
Buciluǎ, C.; Caruana, R.; and Niculescu-Mizil, A. 2006. Model Compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 535-541.
Cer, D.; Diab, M.; Agirre, E.; Lopez-Gazpio, I.; and Specia, L. 2017. SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Cross-lingual Focused Evaluation. arXiv preprint arXiv:1708.00055.
Dagan, I.; Glickman, O.; and Magnini, B. 2005. The PASCAL Recognising Textual Entailment Challenge. In Machine Learning Challenges Workshop, 177-190. Springer.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT.
Dolan, W. B.; and Brockett, C. 2005. Automatically Constructing a Corpus of Sentential Paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
Freitag, M.; Al-Onaizan, Y.; and Sankaran, B. 2017. Ensemble Distillation for Neural Machine Translation. arXiv preprint arXiv:1702.01802.
Furlanello, T.; Lipton, Z. C.; Tschannen, M.; Itti, L.; and Anandkumar, A. 2018. Born Again Neural Networks.
Giampiccolo, D.; Magnini, B.; Dagan, I.; and Dolan, B. 2007. The Third PASCAL Recognizing Textual Entailment Challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, 1-9. Association for Computational Linguistics.
Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the Knowledge in a Neural Network. arXiv preprint arXiv:1503.02531.
Iyer, S.; Dandekar, N.; and Csernai, K. 2017. First Quora Dataset Release: Question Pairs. URL https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs.
Jiao, X.; Yin, Y.; Shang, L.; Jiang, X.; Chen, X.; Li, L.; Wang, F.; and Liu, Q. 2019. TinyBERT: Distilling BERT for Natural Language Understanding. arXiv preprint arXiv:1909.10351.
Kim, Y.; and Rush, A. M. 2016. Sequence-Level Knowledge Distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 1317-1327.
Liu, X.; He, P.; Chen, W.; and Gao, J. 2019. Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding. arXiv abs/1904.09482.
Peters, M. E.; Neumann, M.; Iyyer, M.; Gardner, M.; Clark, C.; Lee, K.; and Zettlemoyer, L. 2018. Deep Contextualized Word Representations. In Proc. of NAACL.
Rajpurkar, P.; Zhang, J.; Lopyrev, K.; and Liang, P. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. arXiv preprint arXiv:1606.05250.
Sanh, V.; Debut, L.; Chaumond, J.; and Wolf, T. 2019. DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter. arXiv abs/1910.01108.
Socher, R.; Perelygin, A.; Wu, J.; Chuang, J.; Manning, C. D.; Ng, A.; and Potts, C. 2013. Recursive Deep Models for Semantic Compositionality over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 1631-1642.
Sun, S.; Cheng, Y.; Gan, Z.; and Liu, J. 2019. Patient Knowledge Distillation for BERT Model Compression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 4314-4323.
Sun, Z.; Yu, H.; Song, X.; Liu, R.; Yang, Y.; and Zhou, D. 2020. MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices. arXiv preprint arXiv:2004.02984.
Tan, X.; Ren, Y.; He, D.; Qin, T.; and Liu, T.-Y. 2019. Multilingual Neural Machine Translation with Knowledge Distillation. In International Conference on Learning Representations. URL https://openreview.net/forum?id=S1gUsoR9YX.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention Is All You Need. In Advances in Neural Information Processing Systems, 5998-6008.
Wang, A.; Singh, A.; Michael, J.; Hill, F.; Levy, O.; and Bowman, S. R. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. arXiv abs/1804.07461.
Warstadt, A.; Singh, A.; and Bowman, S. R. 2018. Neural Network Acceptability Judgments. arXiv preprint arXiv:1805.12471.
Wei, H.-R.; Huang, S.; Wang, R.; Dai, X.; and Chen, J. 2019. Online Distilling from Checkpoints for Neural Machine Translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 1932-1941.
Williams, A.; Nangia, N.; and Bowman, S. 2018. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 1112-1122.
Wold, S.; Esbensen, K.; and Geladi, P. 1987. Principal Component Analysis. Chemometrics and Intelligent Laboratory Systems 2(1-3): 37-52.
Wu, Y.; Passban, P.; Rezagholizadeh, M.; and Liu, Q. 2020. Why Skip If You Can Combine: A Simple Knowledge Distillation Technique for Intermediate Layers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
[ "https://github.com/google-research/bert" ]
[ "HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions", "HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions" ]
[ "Shaobo Li [email protected] \nHarbin Institute of Technology\n\n", "Xiaoguang Li \nHuawei Noah's Ark Lab\n\n", "Lifeng Shang \nHuawei Noah's Ark Lab\n\n", "Xin Jiang \nHuawei Noah's Ark Lab\n\n", "Qun Liu [email protected] \nHuawei Noah's Ark Lab\n\n", "Chengjie Sun \nHarbin Institute of Technology\n\n", "Zhenzhou Ji \nHarbin Institute of Technology\n\n", "Bingquan Liu \nHarbin Institute of Technology\n\n" ]
[ "Harbin Institute of Technology\n", "Huawei Noah's Ark Lab\n", "Huawei Noah's Ark Lab\n", "Huawei Noah's Ark Lab\n", "Huawei Noah's Ark Lab\n", "Harbin Institute of Technology\n", "Harbin Institute of Technology\n", "Harbin Institute of Technology\n" ]
[]
Collecting supporting evidence from large corpora of text (e.g., Wikipedia) is a great challenge for open-domain Question Answering (QA). Especially for multi-hop open-domain QA, scattered evidence pieces are required to be gathered together to support the answer extraction. In this paper, we propose a new retrieval target, hop, to collect the hidden reasoning evidence from Wikipedia for complex question answering. Specifically, the hop in this paper is defined as the combination of a hyperlink and the corresponding outbound link document. The hyperlink is encoded as the mention embedding, which models the structured knowledge of how the outbound link entity is mentioned in the textual context, and the corresponding outbound link document is encoded as the document embedding representing the unstructured knowledge within it. Accordingly, we build HopRetriever, which retrieves hops over Wikipedia to answer complex questions. Experiments on the HotpotQA dataset demonstrate that HopRetriever outperforms previously published evidence retrieval methods by large margins. Moreover, our approach also yields quantifiable interpretations of the evidence collection process.
10.1609/aaai.v35i15.17568
[ "https://arxiv.org/pdf/2012.15534v1.pdf" ]
229,923,812
2012.15534
64435711f6542aa6b53e95c6e084a0ccd2ec1c16
HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions Shaobo Li [email protected] Harbin Institute of Technology Xiaoguang Li Huawei Noah's Ark Lab Lifeng Shang Huawei Noah's Ark Lab Xin Jiang Huawei Noah's Ark Lab Qun Liu [email protected] Huawei Noah's Ark Lab Chengjie Sun Harbin Institute of Technology Zhenzhou Ji Harbin Institute of Technology Bingquan Liu Harbin Institute of Technology

HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions

Collecting supporting evidence from large corpora of text (e.g., Wikipedia) is a great challenge for open-domain Question Answering (QA). Especially for multi-hop open-domain QA, scattered evidence pieces are required to be gathered together to support the answer extraction. In this paper, we propose a new retrieval target, hop, to collect the hidden reasoning evidence from Wikipedia for complex question answering. Specifically, the hop in this paper is defined as the combination of a hyperlink and the corresponding outbound link document. The hyperlink is encoded as the mention embedding, which models the structured knowledge of how the outbound link entity is mentioned in the textual context, and the corresponding outbound link document is encoded as the document embedding representing the unstructured knowledge within it. Accordingly, we build HopRetriever, which retrieves hops over Wikipedia to answer complex questions. Experiments on the HotpotQA dataset demonstrate that HopRetriever outperforms previously published evidence retrieval methods by large margins. Moreover, our approach also yields quantifiable interpretations of the evidence collection process.

Introduction

Multi-hop QA (Yang et al. 2018) is the Question Answering (QA) task that requires reasoning over multiple supporting documents to extract the final answer. For the open-domain setting, a key part of Multi-hop QA is to retrieve an evidence path from the whole knowledge source (e.g., Wikipedia).
Most of the recent works view multi-hop evidence collection as an iterative document retrieval problem (Asai et al. 2020; Feldman and El-Yaniv 2019; Das et al. 2019a), which can be decomposed into several single-step document retrievals. In contrast, some others (Dhingra et al. 2020; Ding et al. 2019) focus on mentioned entities and try to traverse textual data like a virtual structured Knowledge Base (KB). These two lines of work leverage two different kinds of knowledge for evidence collection, respectively: (i) informative but unstructured facts inside the introductory documents of entities; (ii) the structured and implicit relations between entities themselves.¹ The two examples in Figure 1 show that both kinds of knowledge are needed for complex question answering. We consider the problem of based on what evidence one can jump to the second document for further retrieval. For question 1, the structured relation "directed by" implied by "...directed by Adriana Trigiani" in the first document matches the relation "director of" in the question, hence providing sufficient and convincing evidence that one can hop to the introductory document of Adriana Trigiani for further retrieval, even without pre-reading it. However, things become complicated for question 2, for three entities share the same relation "song of": On My Mind, Army, and Something in the Way You Move. In fact, only the entity On My Mind satisfies the condition "works with other writers" in the question, which makes the relation itself insufficient and indistinctive for choosing among the three entities. The truth is that only if the unstructured facts about the entity On My Mind are browsed through can one find the conclusive evidence. As shown above, to collect sufficient supporting evidence within Wikipedia, it is necessary to consider both the relational structure between entities and the unstructured knowledge hidden inside the introductory documents.
When the answering process follows the pattern of "following the vine to get the melon", the implicit entity-level relation makes retrieval efficient and effective. However, when the relation chain fails, the unstructured facts in the document take the stage. In this paper, we study how the structured and unstructured knowledge can be combined and made to contribute collaboratively to evidence collection. Accordingly, we define a hop as the combination of a hyperlink and the corresponding outbound link document. A hyperlink in Wikipedia implies how the introductory document of an entity mentions another one, while the outbound link document stores all the unstructured facts and events, which makes a hop contain both relational and factoid evidence for future retrieval. One challenge is how to transform the binary (link or not) hyperlinks in Wikipedia into distributed representations implying the implicit and complicated entity relations. One step towards this is the recent work on distributed relation learning (Soares et al. 2019), in which the relation representations are learned solely from entity-linked text in an unsupervised way. With the powerful ability of BERT (Devlin et al. 2019) for text encoding, Ding et al. (2019) and Dhingra et al. (2020) encode entity spans into node representations to conduct relation-based reasoning. In this paper, we represent each hyperlink with the corresponding entity mention, with the currently described entity as the mention subject and the outbound link entity as the mention object.

Our contributions. To be more specific, this paper introduces HopRetriever, a method to automatically and adaptively leverage both the structured entity relations and the unstructured introductory facts for evidence collection. For each entity mention within Wikipedia documents, we encode the textual context around it into a mention embedding to represent the implicit structured knowledge.
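Concretely, a hop as defined above can be thought of as a small record pairing the structured and the unstructured evidence. The following sketch is purely illustrative; all field and class names are ours, not the paper's:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Mention:
    """m_{i,j}: how entity e_i mentions entity e_j inside its introductory document d_i."""
    subject: str   # currently described entity e_i (the mention subject)
    target: str    # outbound-link entity e_j (the mention object)
    context: str   # text surrounding the hyperlink anchor in d_i

@dataclass(frozen=True)
class Hop:
    """A hop = hyperlink (structured relation) + outbound-link document (unstructured facts)."""
    mention: Optional[Mention]  # None when e_j is not mentioned in d_i
    document: str               # introductory document d_j of the target entity

# Example hop from the first question in Figure 1:
m = Mention(subject="Big Stone Gap (film)",
            target="Adriana Trigiani",
            context="... written and directed by Adriana Trigiani ...")
hop = Hop(mention=m,
          document="Adriana Trigiani is an Italian American best-selling author ...")
```

A hop with `mention=None` models the case where the target entity is never mentioned by the current document, so only the unstructured document evidence is available.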
As for the representation of unstructured knowledge in documents, we use BERT to encode the document text conditioned on the original question, as previous works do. At each retrieval step, the hop from one document (entity) to another one can gather evidence from two perspectives: (i) how the current document mentions the other one; (ii) what facts are hidden in the introductory document of the other entity. Experiments show that our retrieval method outperforms both entity-centric retrieval methods and document-wise ones. Our prime contributions are as follows:

• We propose to retrieve hops over Wikipedia to answer complex questions, which adaptively and selectively collects evidence from both structured entity relations and unstructured facts within documents.

• We propose to represent hyperlinks in Wikipedia with mention embeddings, which we show can precisely capture the implicit relation between entities.

• Evaluated on HotpotQA (Yang et al. 2018), the proposed approach significantly outperforms previously published evidence retrieval methods. Additionally, we conduct further experimental analysis and demonstrate the good interpretability of our method.

Related Work

Iterative document retrieval. Feldman and El-Yaniv (2019) and Das et al. (2019a) introduce multi-step retrievers to explore multiple evidence documents iteratively. Most recently, Asai et al. (2020) proposes PathRetriever, which retrieves document paths along the outbound links of a text graph. With the graph structure of the documents, PathRetriever reduces the search space of documents at each retrieval step, which is much smaller than that of previous iterative retrievers. The biggest difference between PathRetriever and our method is that we additionally consider the structured and multi-valued relation between entities, while PathRetriever uses hyperlinks in a binary way: link or not link. Entity-centric reasoning.
Considering that most factoid QA problems are entity-centric, some other works focus on entity mentions to collect reasoning evidence. Cognitive Graph (Ding et al. 2019) trains a reading comprehension model to predict the next-hop spans, aiming to find the most evidential mentioned entity. Similarly, DrKIT (Dhingra et al. 2020) constructs large amounts of entity mentions from the corpus and proposes a method to reason over these mentions, softly following paths of latent relations. We have shown in Figure 1 that when the question is not a case of "following the vine to get the melon", the mention itself fails to provide sufficient reasoning evidence for which entity to hop to. Inspired by the idea of pseudo-relevance feedback (Xu and Croft 2017), Das et al. (2019b) also leverages entity links to find more supporting evidence. However, this method is still document-level, for the entity links are used not for relation representation but for document expansion. We empirically show significant improvement over the above methods.

Question decomposition. Wolfson et al. (2020), Perez et al. (2020), and Min et al. (2019) propose to decompose a complicated question into several simpler sub-questions and conduct single-hop QA at each step. The challenge for question decomposition is to ensure each sub-question collects the truly necessary evidence. As we know from example 2 in Figure 1, when the structured relation fails, one cannot ask a reasonable sub-question without exploring enough introductory documents at the next step.

Method

Overview

Task definition. Our task is to obtain the answer a for an open-domain multi-hop question q. A retriever model Retriever is used to collect the multiple evidence pieces over a large-scale knowledge source K:

$$D_q = \mathrm{Retriever}(q, K). \quad (1)$$

D_q should contain multiple documents that are necessary for answering the multi-hop question. All textual facts in D_q and q are concatenated together and fed into an answer extraction model Reader to obtain the answer a:

$$a = \mathrm{Reader}(q, D_q). \quad (2)$$

Our approach.
In this paper, we propose HopRetriever to take the place of the retriever model Retriever, while keeping the answer extraction model Reader standard (Devlin et al. 2019). The knowledge source K is constructed from Wikipedia.² Each Wikipedia page corresponds to an entity e_i, accompanied by an introductory document d_i. Moreover, if there exists an anchor text in d_i linked to e_j, we denote it as a mention m_{i,j}: e_i → e_j via d_i, which means e_j is mentioned by e_i via d_i. Accordingly, the knowledge source is formulated as K = {D, E, M}, consisting of an entity set E = {e_i}, an introductory document set D = {d_i}, and a mention set M = {m_{i,j}}. D_q is retrieved iteratively. At each retrieval step, a document is fetched by examining not only the unstructured facts it contains but also the mention of it in the latest selected document. To achieve that, we encode the unstructured textual facts and the mention respectively and then represent them together within a hop. HopRetriever uses hops as the matching objects when retrieving over Wikipedia. An overview of the retrieval process is shown in Figure 2. The details about hop encoding and the iterative retrieval procedure are discussed in the following two sections.

Hop Encoding

HopRetriever treats retrieving a new document d_j conditioned on the retrieval history as finding the proper hop from the current foothold entity e_i to the entity e_j. The representation of each hop consists of the mention embedding m_{i,j}, which implies the entity relation from e_i to e_j, and the document embedding u_j of the introductory document of entity e_j.

Mention embedding. We consider the problem of how to encode a hop hop_{i,j} into the hop encoding hop_{i,j}. The structured entity relation revealed by m_{i,j} is encoded as the mention embedding m_{i,j}, based on the context around it. Inspired by Soares et al.
(2019), two entity markers clipping the anchor text of each mentioned entity are introduced to obtain the mention embedding. An example is shown in Figure 3 (from the second example in Figure 1): the document that contains the mention of On My Mind is fed into BERT with two additional [MARKER] tokens, and the output representation of the first [MARKER] token is used as the mention embedding vector. If e_j is not mentioned directly in the introductory document of e_i, we represent the relation between them with a trainable uniform vector m_P, as shown below:

$$m_{i,j} = \begin{cases} \mathrm{BERT}_{[\mathrm{M}\text{-}j]}(q; d_i), & \text{if } m_{i,j} \in M \\ m_P, & \text{otherwise} \end{cases} \quad (3)$$

where BERT_{[M-j]} is the representation of the entity marker corresponding to entity e_j.

Document embedding. The unstructured knowledge about the entity e_j is encoded as the document embedding u_j by feeding the textual facts in d_j (concatenated with q) into BERT; the output representation of the [CLS] token is taken as the document embedding vector:

$$u_j = \mathrm{BERT}_{[\mathrm{CLS}]}(q; d_j). \quad (4)$$

Knowledge fusion. The mention embedding m_{i,j} and the document embedding u_j are fused into the hop encoding hop_{i,j} by the attention mechanism proposed in Sukhbaatar et al. (2015). The following fusion procedure allows HopRetriever to adaptively and selectively manage the two kinds of knowledge according to which truly matters:

$$a_m = h W_k m_{i,j}, \qquad a_u = h W_k u_j,$$
$$\{w_m, w_u\} = \mathrm{softmax}(\{a_m, a_u\}),$$
$$\mathbf{hop}_{i,j} = w_m \cdot W_v m_{i,j} + w_u \cdot W_v u_j, \quad (5)$$

where h is the vector that encodes the corresponding retrieval history and W_k projects the two embedding vectors (i.e., m_{i,j} and u_j) into key vectors. The vector h acts as the query that interacts with the key vectors to calculate the importance weight w_m for the mention embedding m_{i,j} and w_u for the document embedding u_j; then m_{i,j} and u_j are projected into value vectors by W_v and fused into the hop encoding with the importance weights.

Figure 4: The retrieval process of HopRetriever for three hops.
hop_{s,i} indicates a beginning jump from the start to e_i, selected based on the initial hidden state h_s. The selection of hop_{i,j} retrieves the supporting document d_j at the second step. hop_e finally ends the retrieval process.

Iterative Retrieval of Hops

Figure 4 illustrates a three-step recurrent hop retrieval process. Generally, let e_i denote the foothold entity selected at step t−1; the probability of retrieving the document d_j at step t is calculated by the dot product of h_t and the hop encoding hop_{i,j} (i.e., the Hop Selector in Figure 4):

$$p(d_j) = \mathrm{sigmoid}(h_t \cdot \mathbf{hop}_{i,j}), \quad (6)$$

where h_t is the hidden state vector that encodes all the previously selected hops by a Recurrent Neural Network (RNN):

$$h_t = \begin{cases} h_s, & t = 1 \\ \mathrm{RNN}(h_{t-1}, \mathbf{hop}_{k,i}), & t \geq 2 \end{cases} \quad (7)$$

where h_s is the initial hidden state vector and hop_{k,i} is the encoding of the hop selected at step t−1. Specially, for t = 1, the hop hop_{s,j}, indicating a jump from the retrieval start to e_j, is introduced. Similarly, a special end hop hop_e is used to mark the end of the retrieval process; it is encoded from m_P and a virtual end document encoding u_e. Let f denote the fusion function formulated as Equation (5); the encodings of the different hops are summarized in Table 1.

Table 1: Types of hop encoding.

Notation    Encoding          Explanation
hop_{i,j}   f(m_P, u_j)       e_j is not mentioned in d_i
hop_{i,j}   f(m_{i,j}, u_j)   e_j is mentioned in d_i
hop_{s,j}   f(m_P, u_j)       Select d_j at the beginning
hop_e       f(m_P, u_e)       Retrieval finish

Fine-Grained Sentence-Level Retrieval

A single supporting document can be split into multiple sentences, and not all of these sentences may be essential for answering the question. Pointing out the indispensable supporting sentences can illuminate the reasons why a document is required. In HopRetriever, supporting sentence prediction is added as an auxiliary task alongside the primary hop retrieval task.
At step t, the probability p(s_{i,l}) that the l-th sentence in the latest retrieved document d_i is a supporting sentence is calculated by the following equations:

$$s_{i,l} = \mathrm{BERT}_{[\mathrm{SM}\text{-}l]}(q; d_i), \quad (8)$$
$$p(s_{i,l}) = \mathrm{sigmoid}(h_t W_s s_{i,l}), \quad (9)$$

where s_{i,l} is the sentence embedding vector obtained by inserting a sentence marker [SM-l] at the end of the l-th sentence in d_i, similar to how the mention embedding is obtained. If p(s_{i,l}) > 0.5, then the l-th sentence in document d_i is identified as a supporting sentence.

Objective Functions of HopRetriever

HopRetriever is a sequence prediction model with binary cross-entropy objective functions at each step. At retrieval step t, the objective function of the primary hop retrieval task is

$$\log p(\bar{d}_j) + \sum_{d_j \in D,\, d_j \neq \bar{d}_j} \log(1 - p(d_j)), \quad (10)$$

where $\bar{d}_j$ is the ground-truth document. For the auxiliary supporting sentence prediction task, the objective function at step t is

$$\sum_{l \in L_i} \log p(s_{i,l}) + \sum_{l \notin L_i} \log(1 - p(s_{i,l})), \quad (11)$$

where s_{i,l} is the l-th sentence in d_i and L_i is the set of indices of the ground-truth supporting sentences in d_i. The above two objective functions are maximized together in training.

Experiments

In the official evaluation, the participant model is required to predict both the exact supporting sentences and the answer text.

Pipeline. The whole procedure follows a coarse-to-fine pipeline that contains three stages:

1. Preliminary retrieval: only the top-500 documents, ranked by their TF-IDF scores w.r.t. the input question, are used to construct the initial candidate hops of HopRetriever.

2. Supporting document retrieval and supporting sentence prediction: HopRetriever retrieves the supporting documents iteratively, starting from the initial candidate hops, and also predicts supporting sentences from the retrieved documents.

3.
Answer extraction: the answer within the retrieved supporting documents is extracted using BERT (large, whole word mask), following the conventional answer boundary prediction approach (Devlin et al. 2019; Seo et al. 2017), which is the same as PathRetriever (Asai et al. 2020).

Implementation details. The negative hop sequences used to train the proposed model are constructed by traversing the entities in Wikipedia. The top-40 TF-IDF-scored documents w.r.t. the question and the top-10 scored documents w.r.t. the ground-truth documents are used as the start points of the traversal. The length of the negative hop sequences is fixed to 3. We restrict the maximum input sequence length of BERT to 384. In training, the batch size is set to 16, the learning rate is 3 × 10^-5, and the number of training epochs is 3. We use beam search with a beam size of 8 at inference time. To achieve better performance, we introduce a neural ranker based on BERT-base (Nogueira and Cho 2019) to produce more precise top-500 documents in the preliminary retrieval, and we use ELECTRA (Clark et al. 2019) in place of BERT, i.e., ELECTRA-base in HopRetriever for document sequence retrieval and ELECTRA-large for answer extraction. The results of this enhanced pipeline are denoted as HopRetriever-plus.

Results

Evidence collection. HopRetriever is first evaluated by measuring the coverage of ground-truth answers, supporting sentences, and supporting documents in the retrieved supporting documents, as shown in Table 2. The metric Ans exists measures the percentage of questions whose answers are extractable from the retrieved document sequence. Sent exists is the percentage of the supporting sentences that can be found. The percentage of questions that have all ground-truth documents retrieved is shown as All docs exist.
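For intuition, simplified versions of these coverage checks might look like the following; this is a hypothetical helper written by us, not the official evaluation script, and it reduces "answer is extractable" to a plain substring test:

```python
def coverage_metrics(retrieved, gold_docs, answer, doc_text):
    """Coverage checks for one question.
    retrieved: list of retrieved document titles (e.g. a top-k sequence),
    gold_docs: set of ground-truth supporting document titles,
    answer:    ground-truth answer string,
    doc_text:  dict mapping title -> document text.
    Returns (ans_exists, all_docs_exist)."""
    text = " ".join(doc_text.get(t, "") for t in retrieved)
    ans_exists = answer in text                      # answer extractable from retrieved docs
    all_docs_exist = gold_docs.issubset(retrieved)   # every ground-truth doc was retrieved
    return ans_exists, all_docs_exist

# Toy usage with made-up documents:
docs = {"A": "Paris is the capital of France.", "B": "France is in Europe."}
print(coverage_metrics(["A", "B"], {"A"}, "Paris", docs))  # (True, True)
```

Averaging these booleans over all questions yields percentage metrics of the same shape as those reported in Table 2.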
Three models that mainly focus on evidence collection over Wikipedia are evaluated as baselines on the development set:

• Cognitive Graph QA (Ding et al. 2019) explicitly utilizes the structured knowledge in Wikipedia with a graph whose nodes are entities or answer spans. The representations of the nodes are maintained by a Graph Neural Network (GNN) (Battaglia et al. 2018; Kipf et al. 2017).

• Semantic Retrieval (Nie et al. 2019) is a multi-grained retrieval baseline that retrieves supporting documents and sentences together, focusing on the unstructured knowledge in Wikipedia.

• PathRetriever (Asai et al. 2020) introduces a similar iterative retrieval framework, but only focuses on the unstructured knowledge provided in the introductory document at each retrieval step.

To be fairly compared with PathRetriever, which is the state-of-the-art published model, HopRetriever uses the same initial search space (i.e., the top-500 documents based on TF-IDF scores) and the same pre-trained model (i.e., BERT-base) as PathRetriever. Notably, HopRetriever outperforms PathRetriever by 5.93%, 6.36%, and 8.63% on the top-1 evidence collection metrics, respectively, and also achieves significant improvement over Semantic Retrieval and Cognitive Graph QA, which further demonstrates the effectiveness of HopRetriever. A more detailed comparison with PathRetriever is shown in Table 3. We can observe that HopRetriever works more effectively on the bridging questions. In the HotpotQA dataset, the ground-truth supporting documents of comparison questions may not be directly relevant to each other, in which case no structured knowledge is available, so HopRetriever performs almost the same as PathRetriever. In contrast, the ground-truth supporting documents of the bridging questions are strung together by mentions that can provide informative structured knowledge, so HopRetriever performs better by additionally leveraging mentions. Answer extraction and supporting sentence prediction.
Table 4 shows the performance of different methods for answer and supporting sentence prediction. Naturally, the answer extraction and supporting sentence prediction results benefit from the improvements in document retrieval. By providing more accurate supporting documents, HopRetriever outperforms all the aforementioned baseline models on the development set and also the other published models³ on the test set.

Analysis

Detailed analysis of HopRetriever is carried out in this section, especially about how the structured and unstructured knowledge in Wikipedia contribute to evidence retrieval.

Embedding weights on different question types. At step t in the retrieval procedure of HopRetriever, the decision whether to select a document d_j depends on the hop encoding hop_{i,j}, which contains a mention embedding and a document embedding assigned learnable weights, as formulated in Equation (5). We analyze the weights and find that they provide an intuitive explanation of which embedding is more important for different question types. Table 5 shows the average weights of the mention embedding and the document embedding on different question types. It can be seen that the mention embedding accounts for a large portion (89.53%) on the bridge questions. The bridge questions always require selecting a proper document along a hyperlink, and the mentions do provide helpful information for bridging the evidence pieces. Conversely, when processing the comparison questions, the weight of the mention embedding is relatively small (4.61%) because there are no available mentions between the supporting documents.

Embedding weights in different cases. Three examples are presented in Figure 5 to further inspect the learned weights in the hop encoding. In case 1, a strong clue that matches the information "director of" in the question occurs as the mention "directed by", so the weight of the mention embedding is relatively high.
In case 2, the entities "World War I" and "World War II" are mentioned with the same context, which means they cannot be distinguished based on the mention embedding alone, so more attention is paid to the document embedding, which encodes the important fact "60 million". In case 3, no mentions exist in the latest selected document, so the hop encoding almost completely depends on the document embedding. We can see that the embedding weights bring intuitive interpretation about which embedding, or which type of knowledge, is more important for different questions when selecting a hop.

Probing task for the mention embedding. The structured entity relation is represented by the markers around the mentions, as described in Section 3.2. To explore what the mention embedding learns, we design a special probing task: distracted hop selection. That is, the ground-truth hop for bridging questions is shuffled with other hops that have the same mentioned entity but different mention contexts, and HopRetriever is required to select the right one from these distracting hops for each question. To make the right selection, one should understand more about how each entity is mentioned, not the entity itself. The summary of this task is shown in Table 7. The experiment result shows that although the distracting hops are not used as negative samples for training, HopRetriever can retrieve the ground-truth hops just based on the learned mention embedding at high accuracy (96.42%), indicating that the mention embedding does learn the implicit relation between entities, not the entities themselves.

Ablation study. As shown in Table 6, ablation experiments are conducted to corroborate the effectiveness of HopRetriever. In experiment 1, the structured knowledge in hops is removed (i.e.
set the weight of the mention embedding w_m to 0 in Equation (5)); the performance drops significantly, which stresses the importance of structured knowledge in Wikipedia for multi-hop evidence retrieval. The performance also degrades in experiment 2, in which the weighting of the structured and unstructured knowledge in hops is disabled (i.e., setting w_m = w_u = 1 in Equation (5)), demonstrating that the fusion function improves the performance while providing interpretations. The auxiliary supporting sentence prediction task is removed in experiment 3. The result shows that the auxiliary task has no side effect on the primary hop retrieval task. Additionally, the sentence representations are obtained from the sentence markers contained in the latest retrieved document, which has already been encoded at the previous step, so the auxiliary task does not require much additional computation.

Conclusion

In this paper, we propose HopRetriever to collect reasoning evidence over Wikipedia for multi-hop question answering. Both the structured knowledge indicated by hyperlinks and the unstructured knowledge presented as introductory documents in Wikipedia are involved and leveraged together in HopRetriever to help the evidence collection. The experiments on the HotpotQA dataset show that the performance of HopRetriever improves observably as a result of combining the structured knowledge with the unstructured knowledge, and it outperforms all the published models on the leaderboard. Moreover, by inspecting the proportion of the two kinds of knowledge in hops, which kind of knowledge leads the retrieval of each evidence piece can be observed directly, which also provides extra intuitive interpretations for the selection of each piece of evidence.

Figure 1: Two examples showing that both structured relation and unstructured fact are needed for complex question answering.
Figure 2: Retrieving hops over the Wikipedia text graph. Documents are retrieved by selecting hops over them iteratively. Each directed arrow implies a mention m_{i,j}, which reveals how e_i mentions e_j in the document d_i. Hops between entities are indicated by curved arrows. If the mention m_{i,j} exists between e_i and e_j, the hop hop_{i,j} is represented based on both m_{i,j} and the introductory document d_j; otherwise it is based on d_j alone.

Figure 3: Encoding the mention using entity markers.

Figure 5: The weights of mention embedding and document embedding in different cases. (a) Case 1: question on the director of Big Stone Gap; hop Big Stone Gap (film) → Adriana Trigiani; mention embedding 97.85%, document embedding 2.15%. (b) Case 2: question on the war with over 60 million casualties; hop Livesey Hall War Memorial → World War II casualties; mention embedding 17.06%, document embedding 82.94%. (c) Case 3: comparison question on the Laleli Mosque and the Esma Sultan Mansion; no mentions between the documents; document embedding 99.94%.

Table 2: Evidence collection result on the HotpotQA fullwiki development set. We compare the top-1, top-5, and top-8 output.

Table 3: Evidence collection results on different types of questions.

Table 4: Answer extraction and supporting sentence prediction result in the fullwiki setting of HotpotQA.

      Model                                        Ans EM  Ans F1  Sup EM  Sup F1  Joint EM  Joint F1
dev   Cognitive Graph QA (Ding et al. 2019)        37.55   49.40   23.11   58.52   12.18     35.28
      Semantic Retrieval (Nie et al. 2019)         46.41   58.70   39.86   71.53   26.53     49.00
      PathRetriever (Asai et al. 2020)             60.49   73.30   49.16   76.05   35.82     61.43
      HopRetriever                                 62.07   75.18   52.53   78.92   37.81     64.50
      HopRetriever-plus                            66.56   79.21   56.02   81.81   42.01     68.97
test  DecompRC (Min et al. 2019)                   30.00   40.65   -       -       -         -
      Cognitive Graph QA (Ding et al. 2019)        37.12   48.87   22.82   57.69   12.42     34.92
      DrKIT (Dhingra et al. 2020)                  42.13   51.72   37.05   59.84   24.69     42.8
      Semantic Retrieval (Nie et al. 2019)         45.32   57.34   38.67   70.83   25.14     47.60
      Transformer-XH (Zhao et al. 2019)            51.60   64.07   40.91   71.42   26.14     51.29
      PathRetriever (Asai et al. 2020)             60.04   72.96   49.08   76.41   35.35     61.18
      Semantic Retrieval + HGN (Fang et al. 2019)  59.74   71.41   51.03   77.37   37.92     62.26
      HopRetriever                                 60.83   73.93   53.07   79.26   38.00     63.91
      HopRetriever-plus                            64.83   77.81   56.08   81.79   40.95     67.75

Table 5: Weights of mention embedding and document embedding on bridging questions and comparison questions.

Table 6: Ablation experiments of HopRetriever (Recall @ top-1/top-5/top-8).

Model                        Ans exists           Sent exists          All docs exist
                             top-1 top-5 top-8    top-1 top-5 top-8    top-1 top-5 top-8
full                         86.89 91.11 91.80    88.41 92.78 93.20    82.54 88.60 89.09
1. w/o structured knowledge  76.35 86.02 88.12    80.91 88.49 89.92    66.20 78.89 81.23
2. w/o weighting             86.21 91.07 91.52    87.73 92.55 93.09    81.38 88.09 88.70
3. w/o sentence prediction   86.58 90.88 91.51    87.98 92.54 92.98    82.03 88.29 88.89

Table 7: Summary of the mention embedding probing task.

Total questions in development set               7405
Number of the bridging questions                 5918
Average number of distracting hops per question  52.20
Accuracy based on mention embedding              96.42%

Footnotes:
1. In this paper, we view the entity relation as structured knowledge because it directly connects two entities and can be applied to build a structured entity graph.
2. https://en.wikipedia.org
3. By the submission time of this paper, the most recently published method on the HotpotQA fullwiki leaderboard is PathRetriever.

References

Asai, A.; Hashimoto, K.; Hajishirzi, H.; Socher, R.; and Xiong, C. 2020. Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering. In 8th International Conference on Learning Representations (ICLR 2020), Addis Ababa, Ethiopia. URL https://openreview.net/forum?id=SJgVHkrYDH.

Battaglia, P. W.; Hamrick, J. B.; Bapst, V.; Sanchez-Gonzalez, A.; Zambaldi, V.; Malinowski, M.; Tacchetti, A.; Raposo, D.; Santoro, A.; Faulkner, R.; et al. 2018. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261.
Chen, D.; Fisch, A.; Weston, J.; and Bordes, A. 2017. Reading Wikipedia to Answer Open-Domain Questions. In Barzilay, R.; and Kan, M., eds., Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), Vancouver, Canada, Volume 1: Long Papers, 1870-1879. Association for Computational Linguistics. doi:10.18653/v1/P17-1171.

Clark, K.; Luong, M.-T.; Le, Q. V.; and Manning, C. D. 2019. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. In International Conference on Learning Representations.

Das, R.; Dhuliawala, S.; Zaheer, M.; and McCallum, A. 2019a. Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering. In 7th International Conference on Learning Representations (ICLR 2019), New Orleans, LA, USA. URL https://openreview.net/forum?id=HkfPSh05K7.

Das, R.; Godbole, A.; Kavarthapu, D.; Gong, Z.; Singhal, A.; Yu, M.; Guo, X.; Gao, T.; Zamani, H.; Zaheer, M.; and McCallum, A. 2019b. Multi-step Entity-centric Information Retrieval for Multi-Hop Question Answering. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering (MRQA@EMNLP 2019), Hong Kong, China, 113-118. Association for Computational Linguistics. doi:10.18653/v1/D19-5816.

Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), Minneapolis, MN, USA, Volume 1 (Long and Short Papers), 4171-4186. Association for Computational Linguistics. doi:10.18653/v1/n19-1423.

Dhingra, B.; Zaheer, M.; Balachandran, V.; Neubig, G.; Salakhutdinov, R.; and Cohen, W. W. 2020. Differentiable Reasoning over a Virtual Knowledge Base. In 8th International Conference on Learning Representations (ICLR 2020), Addis Ababa, Ethiopia. URL https://openreview.net/forum?id=SJxstlHFPH.

Ding, M.; Zhou, C.; Chen, Q.; Yang, H.; and Tang, J. 2019. Cognitive Graph for Multi-Hop Reading Comprehension at Scale. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Florence, Italy, Volume 1: Long Papers, 2694-2703. Association for Computational Linguistics. doi:10.18653/v1/p19-1259.

Fang, Y.; Sun, S.; Gan, Z.; Pillai, R.; Wang, S.; and Liu, J. 2019. Hierarchical Graph Network for Multi-hop Question Answering. arXiv preprint (arXiv-1911).

Feldman, Y.; and El-Yaniv, R. 2019. Multi-Hop Paragraph Retrieval for Open-Domain Question Answering. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Florence, Italy, Volume 1: Long Papers. Association for Computational Linguistics. doi:10.18653/v1/p19-1222.
In Ko- rhonen, A.; Traum, D. R.; and Màrquez, L., eds., Proceed- ings of the 57th Conference of the Association for Compu- tational Linguistics, ACL 2019, Florence, Italy, July 28-Au- gust 2, 2019, Volume 1: Long Papers, 2296-2309. Associ- ation for Computational Linguistics. doi:10.18653/v1/p19- 1222. URL https://doi.org/10.18653/v1/p19-1222. Dense Passage Retrieval for Open-Domain Question Answering. V Karpukhin, B Oguz, S Min, L Wu, S Edunov, D Chen, W Yih, CoRR abs/2004.04906Karpukhin, V.; Oguz, B.; Min, S.; Wu, L.; Edunov, S.; Chen, D.; and Yih, W. 2020. Dense Passage Retrieval for Open- Domain Question Answering. CoRR abs/2004.04906. URL https://arxiv.org/abs/2004.04906. Semi-supervised classification with graph convolutional networks. T N Kipf, International Conference on Learning Representations. Kipf, T. N.; et al. 2017. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations. Latent Retrieval for Weakly Supervised Open Domain Question Answering. K Lee, 10.18653/v1/p19-1612Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. Korhonen, A.Traum, D. R.and Màrquez, L.the 57th Conference of the Association for Computational Linguistics, ACL 2019Florence, ItalyLong Papers1Association for Computational LinguisticsLee, K.; et al. 2019. Latent Retrieval for Weakly Super- vised Open Domain Question Answering. In Korhonen, A.; Traum, D. R.; and Màrquez, L., eds., Proceedings of the 57th Conference of the Association for Computational Lin- guistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, 6086-6096. Association for Com- putational Linguistics. doi:10.18653/v1/p19-1612. URL https://doi.org/10.18653/v1/p19-1612. Multi-hop Reading Comprehension through Question Decomposition and Rescoring. S Min, V Zhong, L Zettlemoyer, H Hajishirzi, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 
the 57th Annual Meeting of the Association for Computational LinguisticsMin, S.; Zhong, V.; Zettlemoyer, L.; and Hajishirzi, H. 2019. Multi-hop Reading Comprehension through Question De- composition and Rescoring. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguis- tics, 6097-6109. Revealing the Importance of Semantic Retrieval for Machine Reading at Scale. Y Nie, 10.18653/v1/D19-1258Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingHong Kong, ChinaAssociation for Computational LinguisticsNie, Y.; et al. 2019. Revealing the Importance of Semantic Retrieval for Machine Reading at Scale. In Inui, K.; Jiang, J.; Ng, V.; and Wan, X., eds., Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Pro- cessing and the 9th International Joint Conference on Nat- ural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, 2553-2566. Association for Computational Linguistics. doi:10.18653/v1/D19-1258. URL https://doi.org/10.18653/v1/D19-1258. R Nogueira, K Cho, arXiv:1901.04085Passage Re-ranking with BERT. arXiv preprintNogueira, R.; and Cho, K. 2019. Passage Re-ranking with BERT. arXiv preprint arXiv:1901.04085 . Unsupervised question decomposition for question answering. E Perez, P Lewis, W Yih, K Cho, D Kiela, arXiv:2002.09758arXiv preprintPerez, E.; Lewis, P.; Yih, W.-t.; Cho, K.; and Kiela, D. 2020. Unsupervised question decomposition for question answer- ing. arXiv preprint arXiv:2002.09758 . BI-DIRECTIONAL ATTENTION FLOW FOR MA-CHINE COMPREHENSION. M Seo, A Kembhavi, A Farhadi, H Hajishirzi, International Conference on Learning Representations. Seo, M.; Kembhavi, A.; Farhadi, A.; and Hajishirzi, H. 2017. 
BI-DIRECTIONAL ATTENTION FLOW FOR MA- CHINE COMPREHENSION. In International Conference on Learning Representations. Matching the Blanks: Distributional Similarity for Relation Learning. L B Soares, N Fitzgerald, J Ling, T Kwiatkowski, 10.18653/v1/p19-1279Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. Korhonen, A.Traum, D. R.and Màrquez, L.the 57th Conference of the Association for Computational Linguistics, ACL 2019Florence, ItalyAssociation for Computational Linguistics1Soares, L. B.; FitzGerald, N.; Ling, J.; and Kwiatkowski, T. 2019. Matching the Blanks: Distributional Similarity for Relation Learning. In Korhonen, A.; Traum, D. R.; and Màrquez, L., eds., Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Flo- rence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, 2895-2905. Association for Computational Linguistics. doi: 10.18653/v1/p19-1279. URL https://doi.org/10.18653/v1/ p19-1279. End-toend memory networks. S Sukhbaatar, J Weston, R Fergus, Advances in neural information processing systems. Sukhbaatar, S.; Weston, J.; Fergus, R.; et al. 2015. End-to- end memory networks. In Advances in neural information processing systems, 2440-2448. Break it down: A question understanding benchmark. T Wolfson, M Geva, A Gupta, M Gardner, Y Goldberg, D Deutch, J Berant, Transactions of the Association for Computational Linguistics. 8Wolfson, T.; Geva, M.; Gupta, A.; Gardner, M.; Goldberg, Y.; Deutch, D.; and Berant, J. 2020. Break it down: A ques- tion understanding benchmark. Transactions of the Associ- ation for Computational Linguistics 8: 183-198. Quary Expansion Using Local and Global Document Analysis. J Xu, W B Croft, 10.1145/3130348.3130364SIGIR Forum. 512Xu, J.; and Croft, W. B. 2017. Quary Expansion Using Local and Global Document Analysis. SIGIR Forum 51(2): 168- 175. doi:10.1145/3130348.3130364. URL https://doi.org/ 10.1145/3130348.3130364. 
HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. Z Yang, P Qi, S Zhang, Y Bengio, W W Cohen, R Salakhutdinov, C D Manning, 10.18653/v1/d18-1259Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Riloff, E.Chiang, D.Hockenmaier, J.and Tsujii, J.the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsYang, Z.; Qi, P.; Zhang, S.; Bengio, Y.; Cohen, W. W.; Salakhutdinov, R.; and Manning, C. D. 2018. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question An- swering. In Riloff, E.; Chiang, D.; Hockenmaier, J.; and Tsu- jii, J., eds., Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, Brussels, Bel- gium, October 31 -November 4, 2018, 2369-2380. Associ- ation for Computational Linguistics. doi:10.18653/v1/d18- 1259. URL https://doi.org/10.18653/v1/d18-1259. Transformer-XH: Multi-Evidence Reasoning with eXtra Hop Attention. C Zhao, C Xiong, C Rosset, X Song, P Bennett, S Tiwary, International Conference on Learning Representations. Zhao, C.; Xiong, C.; Rosset, C.; Song, X.; Bennett, P.; and Tiwary, S. 2019. Transformer-XH: Multi-Evidence Reason- ing with eXtra Hop Attention. In International Conference on Learning Representations.
[]
[ "Channel Estimation for RIS Assisted Wireless Communications: Part II -An Improved Solution Based on Double-Structured Sparsity (Invited Paper)", "Channel Estimation for RIS Assisted Wireless Communications: Part II -An Improved Solution Based on Double-Structured Sparsity (Invited Paper)" ]
[ "Xiuhong Wei ", "Decai Shen ", "Linglong Dai " ]
[]
[]
Reconfigurable intelligent surface (RIS) can manipulate the wireless communication environment by controlling the coefficients of the RIS elements. However, due to the large number of passive RIS elements without signal processing capability, channel estimation in RIS assisted wireless communication systems requires high pilot overhead. In the second part of this invited paper, we propose to exploit the double-structured sparsity of the angular cascaded channels among users to reduce the pilot overhead. Specifically, we first reveal the double-structured sparsity, i.e., different angular cascaded channels for different users enjoy completely common non-zero rows and partially common non-zero columns. By exploiting this double-structured sparsity, we further propose the double-structured orthogonal matching pursuit (DS-OMP) algorithm, where the completely common non-zero rows and the partially common non-zero columns are jointly estimated for all users. Simulation results show that the pilot overhead required by the proposed scheme is lower than that of existing schemes. Index Terms-Reconfigurable intelligent surface (RIS), channel estimation, compressive sensing.
10.1109/lcomm.2021.3052787
[ "https://arxiv.org/pdf/2101.09405v1.pdf" ]
231,699,091
2101.09405
a563d4aa484127f50aa8ad569ccb4158e081744c
Channel Estimation for RIS Assisted Wireless Communications: Part II - An Improved Solution Based on Double-Structured Sparsity (Invited Paper)

23 Jan 2021

Xiuhong Wei, Decai Shen, Linglong Dai

Index Terms-Reconfigurable intelligent surface (RIS), channel estimation, compressive sensing.

I. INTRODUCTION

In the first part of this two-part invited paper, we have introduced the fundamentals, solutions, and future opportunities of channel estimation in the reconfigurable intelligent surface (RIS) assisted wireless communication system.
One of the most important challenges of channel estimation is that the pilot overhead is high, since the RIS consists of a large number of passive elements without signal processing capability [1], [2]. By exploiting the sparsity of the angular cascaded channel, i.e., the cascade of the channel from the user to the RIS and the channel from the RIS to the base station (BS), the channel estimation problem can be formulated as a sparse signal recovery problem, which can be solved by compressive sensing (CS) algorithms with reduced pilot overhead [3], [4]. However, the pilot overhead of most existing solutions is still high. In the second part of this paper, in order to further reduce the pilot overhead, we propose a double-structured orthogonal matching pursuit (DS-OMP) based cascaded channel estimation scheme by leveraging the double-structured sparsity of the angular cascaded channels. Specifically, we reveal that the angular cascaded channels associated with different users enjoy completely common non-zero rows and partially common non-zero columns, which we call "double-structured sparsity" in this paper. Then, by exploiting this double-structured sparsity, we propose the DS-OMP algorithm, built on the classical OMP algorithm, to realize channel estimation. In the proposed DS-OMP algorithm, the completely common row support and the partially common column supports for different users are jointly estimated, and the user-specific column supports for different users are individually estimated. After detecting all supports mentioned above, the least squares (LS) algorithm can be utilized to obtain the estimated angular cascaded channels. Since the double-structured sparsity is exploited, the proposed DS-OMP based channel estimation scheme is able to further reduce the pilot overhead. The rest of the paper is organized as follows. In Section II, we introduce the channel model and formulate the cascaded channel estimation problem.
In Section III, we first reveal the double-structured sparsity of the angular cascaded channels, and then propose the DS-OMP based cascaded channel estimation scheme. Simulation results and conclusions are provided in Section IV and Section V, respectively.

Notation: Lower-case and upper-case boldface letters $a$ and $A$ denote a vector and a matrix, respectively; $\bar{a}$ denotes the conjugate of vector $a$; $A^T$ and $A^H$ denote the transpose and conjugate transpose of matrix $A$, respectively; $\|A\|_F$ denotes the Frobenius norm of matrix $A$; $\mathrm{diag}(x)$ denotes the diagonal matrix with the vector $x$ on its diagonal; $a \otimes b$ denotes the Kronecker product of $a$ and $b$. Finally, $\mathcal{CN}(\mu, \sigma^2)$ denotes the probability density function of the circularly symmetric complex Gaussian distribution with mean $\mu$ and variance $\sigma^2$.

II. SYSTEM MODEL

In this section, we will first introduce the cascaded channel in the RIS assisted communication system. Then, the cascaded channel estimation problem will be formulated.

A. Cascaded Channel

We consider that the BS and the RIS respectively employ an $M$-antenna and an $N$-element uniform planar array (UPA) to simultaneously serve $K$ single-antenna users. Let $G$ of size $M \times N$ denote the channel from the RIS to the BS, and $h_{r,k}$ of size $N \times 1$ denote the channel from the $k$th user to the RIS ($k = 1, 2, \cdots, K$). The widely used Saleh-Valenzuela channel model is adopted to represent $G$ as [5]

$$G = \sqrt{\frac{MN}{L_G}} \sum_{l_1=1}^{L_G} \alpha_{l_1}^{G}\, b\big(\vartheta_{l_1}^{Gr}, \psi_{l_1}^{Gr}\big)\, a\big(\vartheta_{l_1}^{Gt}, \psi_{l_1}^{Gt}\big)^{T}, \quad (1)$$

where $L_G$ represents the number of paths between the RIS and the BS, and $\alpha_{l_1}^{G}$, $\vartheta_{l_1}^{Gr}$ ($\psi_{l_1}^{Gr}$), and $\vartheta_{l_1}^{Gt}$ ($\psi_{l_1}^{Gt}$) represent the complex gain consisting of path loss, the azimuth (elevation) angle at the BS, and the azimuth (elevation) angle at the RIS for the $l_1$th path.
Similarly, the channel $h_{r,k}$ can be represented by

$$h_{r,k} = \sqrt{\frac{N}{L_{r,k}}} \sum_{l_2=1}^{L_{r,k}} \alpha_{l_2}^{r,k}\, a\big(\vartheta_{l_2}^{r,k}, \psi_{l_2}^{r,k}\big), \quad (2)$$

where $L_{r,k}$ represents the number of paths between the $k$th user and the RIS, and $\alpha_{l_2}^{r,k}$ and $\vartheta_{l_2}^{r,k}$ ($\psi_{l_2}^{r,k}$) represent the complex gain consisting of path loss and the azimuth (elevation) angle at the RIS for the $l_2$th path. $b(\vartheta, \psi) \in \mathbb{C}^{M\times 1}$ and $a(\vartheta, \psi) \in \mathbb{C}^{N\times 1}$ represent the normalized array steering vectors associated with the BS and the RIS, respectively. For a typical $N_1 \times N_2$ ($N = N_1 \times N_2$) UPA, $a(\vartheta, \psi)$ can be represented by [5]

$$a(\vartheta, \psi) = \frac{1}{\sqrt{N}}\, e^{-j2\pi d \sin(\vartheta)\cos(\psi)\, n_1/\lambda} \otimes e^{-j2\pi d \sin(\psi)\, n_2/\lambda}, \quad (3)$$

where $n_1 = [0, 1, \cdots, N_1-1]$ and $n_2 = [0, 1, \cdots, N_2-1]$, $\lambda$ is the carrier wavelength, and $d$ is the antenna spacing, usually satisfying $d = \lambda/2$. Further, we denote $H_k \triangleq G\,\mathrm{diag}(h_{r,k})$ as the $M \times N$ cascaded channel for the $k$th user. Using the virtual angular-domain representation, $H_k \in \mathbb{C}^{M\times N}$ can be decomposed as

$$H_k = U_M \tilde{H}_k U_N^{T}, \quad (4)$$

where $\tilde{H}_k$ denotes the $M \times N$ angular cascaded channel, and $U_M$ and $U_N$ are respectively the $M \times M$ and $N \times N$ dictionary unitary matrices at the BS and the RIS [5]. Since there are limited scatterers around the BS and the RIS, the angular cascaded channel $\tilde{H}_k$ has only a few non-zero elements, i.e., it exhibits sparsity.

B. Problem Formulation

In this paper, we assume that the direct channel between the BS and the user is known at the BS, since it can be easily estimated as in conventional wireless communication systems [5]. Therefore, we only focus on the cascaded channel estimation problem. By adopting the widely used orthogonal pilot transmission strategy, all users transmit known pilot symbols to the BS via the RIS over $Q$ time slots for the uplink channel estimation.
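As an illustration of (3) and (4), the following NumPy sketch builds the UPA steering vector and checks that a single on-grid reflecting path yields a one-sparse angular channel. The DFT dictionaries, grid indices, and all variable names are our own illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

def upa_steering(theta, psi, n1, n2, d_over_lambda=0.5):
    """Normalized UPA steering vector of Eq. (3): a Kronecker product
    of the responses along the two array axes, scaled by 1/sqrt(N)."""
    a1 = np.exp(-2j * np.pi * d_over_lambda * np.sin(theta) * np.cos(psi) * np.arange(n1))
    a2 = np.exp(-2j * np.pi * d_over_lambda * np.sin(psi) * np.arange(n2))
    return np.kron(a1, a2) / np.sqrt(n1 * n2)

M, N = 8, 16
# Normalized DFT matrices as a stand-in for the unitary dictionaries U_M, U_N.
U_M = np.fft.fft(np.eye(M)) / np.sqrt(M)
U_N = np.fft.fft(np.eye(N)) / np.sqrt(N)

# One reflecting path whose angles fall exactly on the dictionary grid:
# the spatial cascaded channel is an outer product of two dictionary columns.
H_spatial = np.outer(U_M[:, 3], U_N[:, 5])
# Invert Eq. (4): since U_N is unitary, (U_N^T)^{-1} = conj(U_N).
H_angular = U_M.conj().T @ H_spatial @ U_N.conj()
```

With exact on-grid angles, `H_angular` has a single non-zero entry (at row 3, column 5 here), which is the sparsity the later algorithms rely on; off-grid angles would spread energy over neighbouring entries.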
Specifically, in the $q$th ($q = 1, 2, \cdots, Q$) time slot, the effective received signal $y_{k,q} \in \mathbb{C}^{M\times 1}$ at the BS for the $k$th user, after removing the impact of the direct channel, can be represented as

$$y_{k,q} = G\,\mathrm{diag}(\theta_q)\, h_{r,k}\, s_{k,q} + w_{k,q} = G\,\mathrm{diag}(h_{r,k})\, \theta_q\, s_{k,q} + w_{k,q}, \quad (5)$$

where $s_{k,q}$ is the pilot symbol sent by the $k$th user, $\theta_q = [\theta_{q,1}, \cdots, \theta_{q,N}]^T$ is the $N \times 1$ reflecting vector at the RIS, with $\theta_{q,n}$ representing the reflecting coefficient of the $n$th RIS element ($n = 1, \cdots, N$) in the $q$th time slot, and $w_{k,q} \sim \mathcal{CN}(0, \sigma^2 I_M)$ is the $M \times 1$ received noise with $\sigma^2$ representing the noise power. According to the cascaded channel $H_k = G\,\mathrm{diag}(h_{r,k})$, we can rewrite (5) as

$$y_{k,q} = H_k \theta_q s_{k,q} + w_{k,q}. \quad (6)$$

After $Q$ time slots of pilot transmission, assuming $s_{k,q} = 1$, we can obtain the $M \times Q$ overall measurement matrix $Y_k = [y_{k,1}, \cdots, y_{k,Q}]$ as

$$Y_k = H_k \Theta + W_k, \quad (7)$$

where $\Theta = [\theta_1, \cdots, \theta_Q]$ and $W_k = [w_{k,1}, \cdots, w_{k,Q}]$. By substituting (4) into (7), we can obtain

$$Y_k = U_M \tilde{H}_k U_N^{T} \Theta + W_k. \quad (8)$$

Let $\tilde{Y}_k = (U_M^H Y_k)^H$ denote the $Q \times M$ effective measurement matrix, and $\tilde{W}_k = (U_M^H W_k)^H$ the $Q \times M$ effective noise matrix. Then (8) can be rewritten as a CS model:

$$\tilde{Y}_k = \tilde{\Theta} \tilde{H}_k^{H} + \tilde{W}_k, \quad (9)$$

where $\tilde{\Theta} = (U_N^{T} \Theta)^H$ is the $Q \times N$ sensing matrix. Based on (9), we can estimate the angular cascaded channel of each user $k$ separately by conventional CS algorithms, such as the OMP algorithm. However, under the premise of ensuring the estimation accuracy, the pilot overhead required by conventional CS algorithms is still high.

III. JOINT CHANNEL ESTIMATION FOR RIS ASSISTED WIRELESS COMMUNICATION SYSTEMS

In this section, we will first reveal the double-structured sparsity of the angular cascaded channels. Then, by exploiting this important channel characteristic, we will propose a DS-OMP based cascaded channel estimation scheme to reduce the pilot overhead. Finally, the computational complexity of the proposed scheme will be analyzed.

A.
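The algebra behind the CS model (8)-(9) can be checked numerically. The sketch below (DFT dictionaries, a binary reflecting matrix, and all names are illustrative assumptions) confirms that, in the noise-free case, the effective measurements equal the sensing matrix times the conjugate-transposed angular channel:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, Q = 8, 16, 20
# Unitary angular dictionaries (normalized DFT matrices as a stand-in).
U_M = np.fft.fft(np.eye(M)) / np.sqrt(M)
U_N = np.fft.fft(np.eye(N)) / np.sqrt(N)
H_ang = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))  # angular channel
Theta = rng.choice([-1.0, 1.0], size=(N, Q)) / np.sqrt(N)               # RIS reflecting matrix

Y = (U_M @ H_ang @ U_N.T) @ Theta          # Eq. (8), noise-free
Y_eff = (U_M.conj().T @ Y).conj().T        # Q x M effective measurement matrix
Theta_eff = (U_N.T @ Theta).conj().T       # Q x N sensing matrix
```

`Y_eff` matches `Theta_eff @ H_ang.conj().T` up to floating-point error, which is exactly Eq. (9) with the noise term removed.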
Double-Structured Sparsity of Angular Cascaded Channels

In order to further explore the sparsity of the angular cascaded channel in both its rows and its columns, the angular cascaded channel $\tilde{H}_k$ in (4) can be expressed as

$$\tilde{H}_k = \sqrt{\frac{MN}{L_G L_{r,k}}} \sum_{l_1=1}^{L_G} \sum_{l_2=1}^{L_{r,k}} \alpha_{l_1}^{G} \alpha_{l_2}^{r,k}\, \tilde{b}\big(\vartheta_{l_1}^{Gr}, \psi_{l_1}^{Gr}\big)\, \tilde{a}^{T}\big(\vartheta_{l_1}^{Gt} + \vartheta_{l_2}^{r,k}, \psi_{l_1}^{Gt} + \psi_{l_2}^{r,k}\big), \quad (10)$$

where both $\tilde{b}(\vartheta, \psi) = U_M^H b(\vartheta, \psi)$ and $\tilde{a}(\vartheta, \psi) = U_N^H a(\vartheta, \psi)$ have only one non-zero element, which lies at the position of the array steering vector for the direction $(\vartheta, \psi)$ in $U_M$ and $U_N$, respectively. Based on (10), we can find that each complete reflecting path $(l_1, l_2)$ provides one non-zero element of $\tilde{H}_k$, whose row index depends on $(\vartheta_{l_1}^{Gr}, \psi_{l_1}^{Gr})$ and whose column index depends on $(\vartheta_{l_1}^{Gt} + \vartheta_{l_2}^{r,k}, \psi_{l_1}^{Gt} + \psi_{l_2}^{r,k})$. Therefore, $\tilde{H}_k$ has $L_G$ non-zero rows, where each non-zero row has $L_{r,k}$ non-zero columns. The total number of non-zero elements is $L_G L_{r,k}$, which is usually much smaller than $MN$.

More importantly, we can find that the different sparse channels $\{\tilde{H}_k\}_{k=1}^{K}$ exhibit a double-structured sparsity, as shown in Fig. 1. Firstly, since different users communicate with the BS via the common RIS, the channel $G$ from the RIS to the BS is common to all users. From (10), we can also find that $\{(\vartheta_{l_1}^{Gr}, \psi_{l_1}^{Gr})\}_{l_1=1}^{L_G}$ is independent of the user index $k$. Therefore, the non-zero elements of $\{\tilde{H}_k\}_{k=1}^{K}$ lie on the completely common $L_G$ rows. Secondly, since different users share part of the scatterers between the RIS and the users, the channels $\{h_{r,k}\}_{k=1}^{K}$ may enjoy partially common paths with the same angles at the RIS. Let $L_c$ ($L_c \le L_{r,k}, \forall k$) denote the number of common paths of $\{h_{r,k}\}_{k=1}^{K}$. Then, for every $l_1$, there always exist angles $\{(\vartheta_{l_1}^{Gt} + \vartheta_{l_2}^{r,k}, \psi_{l_1}^{Gt} + \psi_{l_2}^{r,k})\}_{l_2=1}^{L_c}$ shared by $\{\tilde{H}_k\}_{k=1}^{K}$. That is to say, for each common non-zero row $l_1$ ($l_1 = 1, 2, \cdots, L_G$), the channels $\{\tilde{H}_k\}_{k=1}^{K}$ enjoy $L_c$ common non-zero columns.
This double-structured sparsity of the angular cascaded channels can be summarized as follows, from the perspective of rows and columns, respectively.

• Row-structured sparsity: Let $\Omega_r^k$ denote the row set of the non-zero elements of $\tilde{H}_k$. Then we have

$$\Omega_r^1 = \Omega_r^2 = \cdots = \Omega_r^K = \Omega_r, \quad (11)$$

where $\Omega_r$ represents the completely common row support of $\{\tilde{H}_k\}_{k=1}^{K}$.

• Partially column-structured sparsity: Let $\Omega_c^{l_1,k}$ denote the column set of the non-zero elements in the $l_1$th non-zero row of $\tilde{H}_k$. Then we have

$$\Omega_c^{l_1,1} \cap \Omega_c^{l_1,2} \cap \cdots \cap \Omega_c^{l_1,K} = \Omega_c^{l_1,Com}, \quad l_1 = 1, 2, \cdots, L_G, \quad (12)$$

where $\Omega_c^{l_1,Com}$ represents the partially common column support for the $l_1$th non-zero row of $\{\tilde{H}_k\}_{k=1}^{K}$.

Based on the above double-structured sparsity, the cascaded channels of different users can be jointly estimated to improve the channel estimation accuracy.

B. Proposed DS-OMP Based Cascaded Channel Estimation

In this subsection, we propose the DS-OMP based cascaded channel estimation scheme by integrating the double-structured sparsity into the classical OMP algorithm. The scheme is summarized in Algorithm 1, which includes three key stages to detect the supports of the angular cascaded channels:

5. for $k$ = 1, 2, · · · , $K$ do
6.   $\hat{\tilde{H}}_k^H(\hat{\Omega}_c^{l_1,k}, \hat{\Omega}_r(l_1)) = \tilde{\Theta}^{\dagger}(:, \hat{\Omega}_c^{l_1,k})\, \tilde{Y}_k(:, \hat{\Omega}_r(l_1))$
7. end for
8. end for
9. $\hat{H}_k = U_M \hat{\tilde{H}}_k U_N^T$, ∀$k$
Output: Estimated cascaded channel matrices $\hat{H}_k$, ∀$k$.

The main procedure of Algorithm 1 can be explained as follows. Firstly, the completely common row support $\Omega_r$ is jointly estimated thanks to the row-structured sparsity in Step 1, where $\Omega_r$ consists of the $L_G$ row indexes associated with the $L_G$ non-zero rows. Secondly, for the $l_1$th non-zero row, the partially common column support $\Omega_c^{l_1,Com}$ can be further jointly estimated thanks to the partially column-structured sparsity in Step 2. Thirdly, the user-specific column supports for each user $k$ can be individually estimated in Step 3.
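Once all supports are detected, the least-squares refit in Algorithm 1 reduces to one small per-row LS problem per non-zero row. A noise-free toy check follows; the sizes, supports, and use of a pseudo-inverse are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, Q = 8, 16, 12
Theta_eff = (rng.standard_normal((Q, N)) + 1j * rng.standard_normal((Q, N))) / np.sqrt(N)
H_ang = np.zeros((M, N), dtype=complex)
row_support = [1, 6]
col_supports = {1: [2, 5], 6: [3, 9]}          # per-row column supports
for r, cols in col_supports.items():
    H_ang[r, cols] = rng.standard_normal(len(cols)) + 1j * rng.standard_normal(len(cols))
Y_eff = Theta_eff @ H_ang.conj().T             # noise-free Eq. (9)

# Per non-zero row: LS-fit only the supported columns of the sensing matrix.
H_hat = np.zeros((M, N), dtype=complex)
for r in row_support:
    cols = col_supports[r]
    coef = np.linalg.pinv(Theta_eff[:, cols]) @ Y_eff[:, r]   # LS on the support
    H_hat[r, cols] = coef.conj()               # the LS fit estimates conj of the row
```

With exact supports and no noise, the LS refit recovers the angular channel exactly, which is why the oracle LS benchmark later lower-bounds the NMSE of all schemes.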
After detecting the supports of all sparse matrices, we adopt the LS algorithm to obtain the corresponding estimated matrices $\{\hat{\tilde{H}}_k\}_{k=1}^{K}$ in Steps 4-8. It should be noted that the sparse signal in (9) is $\tilde{H}_k^H$; thus the sparse matrix estimated by the LS algorithm in Step 6 is $\hat{\tilde{H}}_k^H$. Finally, we can obtain the estimated cascaded channels $\{\hat{H}_k\}_{k=1}^{K}$ by transforming the angular channels into spatial channels in Step 9. In the following, we introduce in detail how the three stages estimate the completely common row support, the partially common column supports, and the individual column supports.

1) Stage 1: Estimating the completely common row support. Thanks to the row-structured sparsity of the angular cascaded channels, we can jointly estimate the completely common row support $\Omega_r$ of $\{\tilde{H}_k\}_{k=1}^{K}$ by Algorithm 2. From the virtual angular-domain channel representation (4), we can find that the non-zero rows of $\{\tilde{H}_k\}_{k=1}^{K}$ correspond to the columns with high power in the received pilots $\{\tilde{Y}_k\}_{k=1}^{K}$. Since $\{\tilde{H}_k\}_{k=1}^{K}$ have completely common non-zero rows, $\{\tilde{Y}_k\}_{k=1}^{K}$ can be jointly utilized to estimate the completely common row support $\Omega_r$, which helps resist the effect of noise.

Algorithm 2: Joint completely common row support estimation
Input: $\tilde{Y}_k$, ∀$k$; $L_G$.
Initialization: $g = 0_{M\times 1}$.
1. for $k$ = 1, 2, · · · , $K$ do
2.   $g(m) = g(m) + \|\tilde{Y}_k(:, m)\|_F^2$, ∀$m$ = 1, 2, · · · , $M$
3. end for
4. $\hat{\Omega}_r = \Gamma(T(g, L_G))$
Output: Estimated completely common row support $\hat{\Omega}_r$.

Specifically, the vector $g$ of size $M \times 1$ accumulates the column powers of $\{\tilde{Y}_k\}_{k=1}^{K}$, as in Step 2 of Algorithm 2. Finally, the $L_G$ indexes of the elements with the largest amplitudes in $g$ are selected as the estimated completely common row support $\hat{\Omega}_r$ in Step 4, where $T(x, L)$ denotes a pruning operator on $x$ that sets all but the $L$ elements with the largest amplitudes to zero, and $\Gamma(x)$ denotes the support of $x$, i.e., $\Gamma(x) = \{i : x(i) \ne 0\}$.
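The power-accumulation rule of Algorithm 2 can be sketched in a few lines of NumPy; the toy sizes, seed, and support below are our own assumptions, not the paper's simulation parameters:

```python
import numpy as np

def joint_row_support(Y_list, L_G):
    """Algorithm 2 sketch: accumulate per-column powers of the effective
    measurements over all users, then keep the L_G strongest columns."""
    g = sum(np.sum(np.abs(Y) ** 2, axis=0) for Y in Y_list)  # length-M power profile
    return set(np.argsort(g)[-L_G:])

# Toy check: all users share the same two non-zero rows of the angular channel.
rng = np.random.default_rng(2)
M, N, Q, K, L_G = 8, 16, 12, 4, 2
Theta_eff = rng.standard_normal((Q, N)) + 1j * rng.standard_normal((Q, N))
rows = [1, 6]  # completely common row support
Y_list = []
for _ in range(K):
    H_ang = np.zeros((M, N), dtype=complex)
    H_ang[rows, :] = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
    Y_list.append(Theta_eff @ H_ang.conj().T)  # noise-free Eq. (9)
```

Because the row support is completely common, summing the column powers over users averages out user-specific noise and makes the $L_G$ strongest columns stand out more reliably than any single user's measurements would.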
After obtaining the $L_G$ non-zero rows by Algorithm 2, we focus on estimating the column support $\Omega_c^{l_1,k}$ for each non-zero row $l_1$ and each user $k$ in the following Stages 2 and 3.

2) Stage 2: Estimating the partially common column supports. Thanks to the partially column-structured sparsity of the angular cascaded channels, we can jointly estimate the partially common column supports $\{\Omega_c^{l_1,Com}\}_{l_1=1}^{L_G}$ of $\{\tilde{H}_k\}_{k=1}^{K}$ by Algorithm 3.

Algorithm 3: Joint partially common column supports estimation
Input: $\tilde{Y}_k$, ∀$k$; $L_G$; $\tilde{\Theta}$; $L_{r,k}$, ∀$k$; $L_c$; $\hat{\Omega}_r$.
Initialization: $\hat{\Omega}_c^{l_1,k} = \emptyset$, ∀$l_1$, $k$; $c_{l_1} = 0_{N\times 1}$, ∀$l_1$.
1. for $l_1$ = 1, 2, · · · , $L_G$ do
2.   for $k$ = 1, 2, · · · , $K$ do
3.     $\tilde{y}_k = \tilde{Y}_k(:, \hat{\Omega}_r(l_1))$, $\tilde{r}_k = \tilde{y}_k$
4.     for $l_2$ = 1, 2, · · · , $L_{r,k}$ do
5.       $n^* = \arg\max_{n=1,2,\cdots,N} \|\tilde{\Theta}^H(:, n)\, \tilde{r}_k\|_F^2$
6.       $\hat{\Omega}_c^{l_1,k} = \hat{\Omega}_c^{l_1,k} \cup \{n^*\}$
7.       $\hat{h}_k = 0_{N\times 1}$
8.       $\hat{h}_k(\hat{\Omega}_c^{l_1,k}) = \tilde{\Theta}^{\dagger}(:, \hat{\Omega}_c^{l_1,k})\, \tilde{y}_k$
9.       $\tilde{r}_k = \tilde{y}_k - \tilde{\Theta}\hat{h}_k$
10.      $c_{l_1}(n^*) = c_{l_1}(n^*) + 1$
11.    end for
12.  end for
13.  $\hat{\Omega}_c^{l_1,Com} = \Gamma(T(c_{l_1}, L_c))$
14. end for
Output: Estimated partially common column supports $\{\hat{\Omega}_c^{l_1,Com}\}_{l_1=1}^{L_G}$.

For the $l_1$th non-zero row, we only need to utilize the effective measurement vector $\tilde{y}_k = \tilde{Y}_k(:, \hat{\Omega}_r(l_1))$ to estimate the partially common column support $\Omega_c^{l_1,Com}$. The basic idea is that we firstly estimate the column support $\Omega_c^{l_1,k}$ with $L_{r,k}$ indexes for each user $k$, and then select the $L_c$ indexes chosen the largest number of times across all $\{\hat{\Omega}_c^{l_1,k}\}_{k=1}^{K}$ as the estimated partially common column support $\hat{\Omega}_c^{l_1,Com}$. In order to estimate the column supports for each user $k$, the correlation between the sensing matrix $\tilde{\Theta}$ and the residual vector $\tilde{r}_k$ needs to be calculated. As shown in Step 5 of Algorithm 3, the column index of $\tilde{\Theta}$ most correlated with $\tilde{r}_k$ is regarded as the newly found column support index $n^*$. Based on the updated column support $\hat{\Omega}_c^{l_1,k}$ in Step 6, the estimated sparse vector $\hat{h}_k$ is obtained by using the LS algorithm in Step 8. Then, the residual vector $\tilde{r}_k$ is updated in Step 9 by removing the effect of the non-zero elements that have already been estimated.
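Steps 5-9 of Algorithm 3 form the classical OMP update: correlate, augment the support, re-fit all selected coefficients by LS, and update the residual. A self-contained sketch follows, using a unitary demo matrix of our own choosing rather than the paper's RIS sensing matrix:

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Plain OMP: pick the column of A most correlated with the residual,
    then re-fit every selected coefficient by least squares."""
    support = []
    x = np.zeros(A.shape[1], dtype=complex)
    residual = y.astype(complex)
    for _ in range(n_nonzero):
        n_star = int(np.argmax(np.abs(A.conj().T @ residual)))  # correlation step
        support.append(n_star)
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]  # LS re-fit
        x = np.zeros(A.shape[1], dtype=complex)
        x[support] = coef
        residual = y - A @ x                                     # residual update
    return x, set(support)

N = 16
A = np.fft.fft(np.eye(N)) / np.sqrt(N)      # unitary demo sensing matrix
x_true = np.zeros(N, dtype=complex)
x_true[[2, 9]] = [1.5, -0.7j]
y = A @ x_true
x_hat, support = omp(A, y, 2)
```

With a unitary demo matrix the correlations equal the remaining coefficients exactly, so the greedy loop recovers the true 2-sparse vector; with the actual random sensing matrix, recovery holds with high probability once $Q$ is large enough.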
Particularly, the $N \times 1$ vector $c_{l_1}$ is used in Step 10 to count how many times each column index is selected. Finally, the $L_c$ indexes of the elements with the largest values in $c_{l_1}$ are selected as the estimated partially common column support $\hat{\Omega}_c^{l_1,Com}$ in Step 13.

3) Stage 3: Estimating the individual column supports. Based on the estimated completely common row support $\hat{\Omega}_r$ and the estimated partially common column supports $\{\hat{\Omega}_c^{l_1,Com}\}_{l_1=1}^{L_G}$, the column support $\Omega_c^{l_1,k}$ for each non-zero row $l_1$ and each user $k$ can be estimated by Algorithm 4.

Algorithm 4: Individual column supports estimation
Input: $\tilde{Y}_k$, ∀$k$; $\tilde{\Theta}$; $L_G$; $L_{r,k}$, ∀$k$; $L_c$; $\hat{\Omega}_r$; $\{\hat{\Omega}_c^{l_1,Com}\}_{l_1=1}^{L_G}$.
Initialization: $\hat{\Omega}_c^{l_1,k} = \hat{\Omega}_c^{l_1,Com}$, ∀$l_1$, $k$.
1. for $l_1$ = 1, 2, · · · , $L_G$ do
2.   for $k$ = 1, 2, · · · , $K$ do
3.     $\tilde{y}_k = \tilde{Y}_k(:, \hat{\Omega}_r(l_1))$
4.     $\hat{h}_k = 0_{N\times 1}$
5.     $\hat{h}_k(\hat{\Omega}_c^{l_1,k}) = \tilde{\Theta}^{\dagger}(:, \hat{\Omega}_c^{l_1,Com})\, \tilde{y}_k$
6.     $\tilde{r}_k = \tilde{y}_k - \tilde{\Theta}\hat{h}_k$
7.     for $l_2$ = 1, 2, · · · , $L_{r,k} - L_c$ do
8.       $n^* = \arg\max_{n=1,2,\cdots,N} \|\tilde{\Theta}^H(:, n)\, \tilde{r}_k\|_F^2$
9.       $\hat{\Omega}_c^{l_1,k} = \hat{\Omega}_c^{l_1,k} \cup \{n^*\}$
10.      $\hat{h}_k = 0_{N\times 1}$
11.      $\hat{h}_k(\hat{\Omega}_c^{l_1,k}) = \tilde{\Theta}^{\dagger}(:, \hat{\Omega}_c^{l_1,k})\, \tilde{y}_k$
12.      $\tilde{r}_k = \tilde{y}_k - \tilde{\Theta}\hat{h}_k$
13.    end for
14.  end for
15. end for
Output: Estimated individual column supports $\{\{\hat{\Omega}_c^{l_1,k}\}_{l_1=1}^{L_G}\}_{k=1}^{K}$.

For the $l_1$th non-zero row, we have estimated $L_c$ column support indexes by Algorithm 3. Thus, there are $L_{r,k} - L_c$ user-specific column support indexes to be estimated for each user $k$. The column support $\hat{\Omega}_c^{l_1,k}$ is initialized as $\hat{\Omega}_c^{l_1,Com}$. Based on $\hat{\Omega}_c^{l_1,Com}$, the estimated sparse vector $\hat{h}_k$ and the residual vector $\tilde{r}_k$ are initialized in Step 5 and Step 6. Then, the column support $\hat{\Omega}_c^{l_1,k}$ for ∀$l_1$ and ∀$k$ can be estimated in Steps 7-13 by following the same idea as Algorithm 3. Through the above three stages, the supports of all angular cascaded channels are estimated by exploiting the double-structured sparsity. It should be pointed out that, if there are no common scatterers between the RIS and the users, the double-structured sparse channel degenerates to the row-structured sparse channel.
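The vote-counting rule of Steps 10 and 13 in Algorithm 3 amounts to a histogram over the per-user supports; a toy sketch with made-up supports (all values are illustrative, not from the paper):

```python
import numpy as np

# Each user's per-row OMP support casts one vote per selected column;
# the L_c most-voted columns become the partially common column support.
N, L_c = 16, 2
per_user_supports = [{2, 5, 9}, {2, 9, 11}, {2, 7, 9}]  # toy supports, one set per user
c = np.zeros(N)
for support in per_user_supports:
    for n in support:
        c[n] += 1
common_support = set(np.argsort(c)[-L_c:])
```

Columns picked by many users are very likely to come from the shared RIS-side paths, while columns picked by a single user are treated as user-specific and re-detected in Stage 3.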
In this case, the cascaded channel estimation can still be solved by the proposed DS-OMP algorithm, with Stage 2 removed.

C. Computational Complexity Analysis

In this subsection, the computational complexity of the proposed DS-OMP algorithm is analyzed in terms of the three support detection stages. In Stage 1, the computational complexity mainly comes from Step 2 of Algorithm 2, which calculates the power of the $M$ columns of $\tilde{Y}_k$ of size $Q \times M$ for $k = 1, 2, \cdots, K$. The corresponding computational complexity is $\mathcal{O}(KMQ)$. In Stage 2, for each non-zero row $l_1$ and each user $k$ in Algorithm 3, the computational complexity $\mathcal{O}(NQL_{r,k}^3)$ is the same as that of the OMP algorithm [6]. Considering the $L_G K$ iterations, the overall computational complexity of Algorithm 3 is $\mathcal{O}(L_G K N Q L_{r,k}^3)$. Similarly, the overall computational complexity of Algorithm 4 is $\mathcal{O}(L_G K N Q (L_{r,k} - L_c)^3)$. Therefore, the overall computational complexity of the proposed DS-OMP algorithm is $\mathcal{O}(KMQ) + \mathcal{O}(L_G K N Q L_{r,k}^3)$.

IV. SIMULATION RESULTS

In our simulation, we consider that the numbers of BS antennas, RIS elements, and users are respectively $M = 64$ ($M_1 = 8$, $M_2 = 8$), $N = 256$ ($N_1 = 16$, $N_2 = 16$), and $K = 16$. The number of paths between the RIS and the BS is $L_G = 5$, and the number of paths from the $k$th user to the RIS is set as $L_{r,k} = 8$ for ∀$k$. All spatial angles are assumed to lie on the quantized grids. Each element of the RIS reflecting matrix $\Theta$ is selected from $\{-\frac{1}{\sqrt{N}}, +\frac{1}{\sqrt{N}}\}$, accounting for the discrete phase shifts of the RIS [7]. $|\alpha_l^G| = 10^{-3} d_{BR}^{-2.2}$, where $d_{BR}$ denotes the distance between the BS and the RIS and is assumed to be $d_{BR} = 10$ m. $|\alpha_l^{r,k}| = 10^{-3} d_{RU}^{-2.8}$, where $d_{RU}$ denotes the distance between the RIS and the user and is assumed to be $d_{RU} = 100$ m for ∀$k$ [7]. The SNR is defined as $\mathbb{E}\{\|\tilde{\Theta}\tilde{H}_k^H\|_F^2 / \|\tilde{W}_k\|_F^2\}$ and is set as 0 dB. We compare the proposed DS-OMP based scheme with the conventional CS based scheme [3] and the row-structured sparsity based scheme [4]. In the conventional CS based scheme, the OMP algorithm is used to estimate the sparse cascaded channel $\tilde{H}_k$ for ∀$k$.
In the row-structured sparsity based scheme, the common row support Ω r with L G indexes are firstly estimated, and then for each user k and each nonzero row l 1 , column supports are respectively estimated by following the idea of the classical OMP algorithm. In addition, we consider the oracle LS scheme as our benchmark, where the supports of all sparse channels are assumed to be perfectly known. Fig. 2 shows the normalized mean square error (NMSE) performance comparison against the pilot overhead, i.e., the number of time slots Q for pilot transmission. As shown in Fig. 2, in order to achieve the same estimation accuracy, the pilot overhead required by the proposed DS-OMP based scheme is lower than the other two existing schemes [3], [4]. However, when there is no common path between the RIS and all users, i.e., L c = 0, the double-structured sparsity will be simplified as the row-structured sparsity [4]. Thus the NMSE performance of the proposed DS-OMP based and the row-structured sparsity based scheme is the same. With the increased number of common paths L c between the RIS and users, the NMSE performance of the proposed scheme can be improved to approach the benchmark of perfect channel supports. as E{||ΘH H k || 2 F /||W k || 2 F } in V. CONCLUSIONS In this paper, we developed a low-overhead cascaded channel estimation scheme in RIS assisted wireless communication systems. Specifically, we first analyzed the double-structured sparsity of the angular cascaded channels among users. Based on this double-structured sparsity, we then proposed a DS-OMP algorithm to reduce the pilot overhead. Simulation results show that the pilot overhead required by the proposed DS-OMP algorithm is lower compared with existing algorithms. For the future work, we will apply the doublestructured sparsity to the super-resolution channel estimation problem by considering the channel angles are continuous in practice. Fig. 1 . 
1Double-structured sparsity of the angular cascaded channels. Algorithm 1 : 1DS-OMP based cascaded channel estimation Input:Ỹ k : ∀k,Θ, L G , L r,k : ∀k, L c . Initialization:Ĥ k = 0 M×N , ∀k.1. Stage 1: Return estimated completely common row supportΩ r by Algorithm 2. 2. Stage 2: Return estimated partially common column supports {Ω l1,Com c } LG l1=1 based onΩ r by Algorithm 3. 3. Stage 3: Return estimated column supports {{Ω l1,k c } LG l1=1 } K k=1 based onΩ r and {Ω l1,Com c }LG l1=1 by Algorithm 4. 4. for l 1 = 1, 2, · · · , L G do 5. c = Γ T (c l 1 ,Pc) 14. end for Output: Estimated completely common row support {Ω l1,Com c } LG l1=1 . Algorithm 3 is O(L G KN QL 3 r,k ). Similarly, the overall computational complexity of Algorithm 4 is O(L G KN Q(L r,k − L c ) 3 ). Therefore, the overall computational complexity of proposed DS-OMP algorithm is O(KM Q) + O(L G KN QL 3 r,k ). Fig. 2 . 2NMSE performance comparison against the pilot overhead Q. authors are with the Beijing National Research Center for Information Science and Technology (BNRist) as well as the Department of Electronic Engineering, Tsinghua University, Beijing 100084, China (e-mails: [email protected], [email protected], [email protected]). This work was supported in part by the National Key Research and Development Program of China (Grant No. 2020YFB1807201) and in part by the National Natural Science Foundation of China (Grant No. 62031019). Simulation codes are provided to reproduce the results presented in this paper: http://oa.ee.tsinghua.edu.cn/dailinglong/publications/publications.html. NMSE (dB) Conventional CS based scheme [3] Row-structured sparsity based scheme [4] Proposed DS-OMP based scheme (Lc=0) Proposed DS-OMP based scheme (Lc=4) Proposed DS-OMP based scheme (Lc=6) Proposed DS-OMP based scheme (Lc=8) Oracle LS based scheme. 
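For concreteness, the greedy residual-based support refinement used in Steps 7-13 of Algorithms 3 and 4 (correlate the residual with the columns of the sensing matrix, augment the support, refit by least squares, update the residual) and the NMSE metric reported in Fig. 2 can be sketched as below. This is a minimal single-vector illustration under our own naming, not the paper's simulation code.

```python
import numpy as np

def omp_support(y, Theta, L):
    """Greedy OMP-style support selection: at each of L steps, pick the
    column of Theta most correlated with the residual, refit the sparse
    vector by least squares on the current support, update the residual."""
    support = []
    r = y.copy()
    h_hat = np.zeros(Theta.shape[1])
    for _ in range(L):
        n_star = int(np.argmax(np.abs(Theta.conj().T @ r)))  # best column
        support.append(n_star)
        h_hat = np.zeros(Theta.shape[1])
        h_hat[support] = np.linalg.pinv(Theta[:, support]) @ y  # LS refit
        r = y - Theta @ h_hat                                   # new residual
    return sorted(support), h_hat

def nmse_db(H, H_hat):
    """Normalized mean square error ||H - H_hat||_F^2 / ||H||_F^2, in dB."""
    err = np.linalg.norm(H - H_hat) ** 2 / np.linalg.norm(H) ** 2
    return 10.0 * np.log10(err)
```

Here a square orthonormal sensing matrix makes exact recovery easy to check; in the paper's setting Θ̃ is the Q × N effective measurement matrix, and Algorithm 4 would additionally warm-start the refit from the partially common support.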
[1] L. Dai et al., "Reconfigurable intelligent surface-based wireless communications: Antenna design, prototyping, and experimental results," IEEE Access, vol. 8, pp. 45913-45923, Mar. 2020.
[2] M. Di Renzo et al., "Reconfigurable intelligent surfaces vs. relaying: Differences, similarities, and performance comparison," IEEE Open J. Commun. Soc., vol. 1, pp. 798-807, Jun. 2020.
[3] P. Wang, J. Fang, H. Duan, and H. Li, "Compressed channel estimation for intelligent reflecting surface-assisted millimeter wave systems," IEEE Signal Process. Lett., vol. 27, pp. 905-909, May 2020.
[4] J. Chen, Y.-C. Liang, H. V. Cheng, and W. Yu, "Channel estimation for reconfigurable intelligent surface aided multi-user MIMO systems," arXiv preprint arXiv:1912.03619, Dec. 2019.
[5] C. Hu, L. Dai, T. Mir, Z. Gao, and J. Fang, "Super-resolution channel estimation for mmWave massive MIMO with hybrid precoding," IEEE Trans. Veh. Technol., vol. 67, no. 9, pp. 8954-8958, Sep. 2018.
[6] X. Gao, L. Dai, S. Zhou, A. M. Sayeed, and L. Hanzo, "Wideband beamspace channel estimation for millimeter-wave MIMO systems relying on lens antenna arrays," IEEE Trans. Signal Process., vol. 67, no. 18, pp. 4809-4824, Sep. 2019.
[7] Q. Wu and R. Zhang, "Beamforming optimization for wireless network aided by intelligent reflecting surface with discrete phase shifts," IEEE Trans. Commun., vol. 68, no. 3, pp. 1838-1851, Mar. 2020.
FRINGE NEWS NETWORKS: DYNAMICS OF US NEWS VIEWERSHIP FOLLOWING THE 2020 PRESIDENTIAL ELECTION
A PREPRINT

Ashiqur R. KhudaBukhsh, Rupak Sarkar ([email protected]), Mark S. Kamlet ([email protected]), Tom M. Mitchell ([email protected])
Carnegie Mellon University; Maulana Abul Kalam Azad University of Technology
The growing political polarization of the American electorate over the last several decades has been widely studied and documented. During the administration of President Donald Trump, charges of "fake news" made social and news media not only the means but, to an unprecedented extent, the topic of political communication. Using data from before the November 3rd, 2020 US Presidential election, recent work has demonstrated the viability of using YouTube's social media ecosystem to obtain insights into the extent of US political polarization as well as the relationship between this polarization and the nature of the content and commentary provided by different US news networks. With that work as background, this paper looks at the sharp transformation of the relationship between news consumers and here-to-fore "fringe" news media channels in the 64 days between the US presidential election and the violence that took place at US Capitol on January 6 th . This paper makes two distinct types of contributions. The first is to introduce a novel methodology to analyze large social media data to study the dynamics of social political news networks and their viewers. The second is to provide insights into what actually happened regarding US political social media channels and their viewerships during this volatile 64 day period.
DOI: 10.1145/3501247.3531577
arXiv: 2101.10112
January 26, 2021

Keywords: 2020 US election · Voter Fraud · Cable News Networks · Echo Chamber

Introduction

The growing political polarization of the American electorate over the last several decades has been widely studied and documented [1,2,3,4,5,6,7,8]. During the administration of President Donald Trump, charges of "fake news" made social and news media not only the means but, to an unprecedented extent, the topic of political communication [9]. At the same time, the partisan and ideological divergences across viewers of major US television networks increasingly became mirrored by the content and commentary which these networks provided [10,11,12,13]. A recent study has demonstrated the viability of using YouTube's social media ecosystem to obtain insights on both of these considerations [8]. That study focused on viewer responses to videos from the "big three" cable news networks, namely CNN, Fox News, and MSNBC. While that study did give cursory attention to One America News Network (OANN), the focus on CNN, Fox News, and MSNBC was logical given the size of these news networks' YouTube viewership at the time. The relative size of these networks' YouTube viewing audiences remained approximately stable through November 3rd, 2020, when the US Presidential election was held. But, as is described shortly, what happened next was anything but stable. This paper makes two distinct types of contributions. The first is to introduce a novel methodology to analyze large social media data to study the dynamics of news networks and their viewers.
This methodology applies a variety of state-of-the-art methods including machine translation; specialized deep network language models trained on different portions of social media text; and cloze tests on those distinct language models to study the difference in opinions across these different subcommunities. Taken together, these methods provide a complementary and corroborative portrayal of the dynamics of viewers' expressed social media opinions; the nature of the content and commentary set forth by the different news networks; and the changes in the alignment between the two. That the results reported in this paper could be obtained so quickly after the event highlights this methodology's potential power and usefulness, and suggest opportunities that may well also be applicable for analysis of other such time-series data. The second type of contribution is to provide insights into what actually happened regarding US political social media channels and their viewerships during the 64 days between the US Presidential election on November 3 rd , 2020 and the entry into the US Capitol of violent demonstrators on January 6 th , 2021. To briefly provide necessary background, a precipitating event to what subsequently took place occurred during the evening of November 4 th , as election results were beginning to come in. Following an unexpectedly strong showing for the President in Florida compared with virtually all prior polling, the mood in the White House was upbeat 2 . But, as reported by the New York Times ". . .[the] mirage of victory was pierced when Fox News called Arizona for former Vice President Joseph R. Biden Jr. at 11:20 p.m., with just 73 percent of the state's vote counted." Fox News made this call before other national networks had done so. In fact, it wasn't until November 12 th , nine days after Election Day, that the other networks with decision desks -NBC, ABC, CBS and CNN -called the state for Biden too. As reported by The Times, "Mr. 
Trump and his advisers erupted at the news. If it was true that Arizona was lost, it would call into doubt any claim of victory the president might be able to make." 3 What ensued for Mr. Trump, again according to the Times, "was a night of angry calls to Republican governors. . . leading to a middle-of-the-night presidential briefing in which he made a reckless and unsubstantiated string of remarks about the democratic process. Standing in the East Room at 2:30 a.m., he dismissed the election as a 'fraud'." With this as background, this paper's analyses find the following key conclusions.

• Following November 3rd, there was a notable loss of Fox's YouTube channel's market share, and the departure of previously loyal Fox viewers to what here-to-fore were considered fringe networks (OANN, Newsmax, and Blaze). As an example, the viewership of Newsmax increased by over a factor of seven from the pre-election period to January 6th, 2021.

• Compared to networks such as CNN, MSNBC, and Fox News, we find that the networks OANN, Newsmax, and Blaze had more features of being "echo chambers," in the sense that their viewerships more nearly uniformly agreed with what they were watching, with lower proportions of their viewerships critiquing what was being presented to them.

• We find that viewer opinion about the legitimacy of the election is polarized into two groups, with viewers of MSNBC, CNN, and Fox News far more in agreement that Biden should be considered "president-elect" than OANN, Newsmax, and Blaze. In a similar vein, OANN and Newsmax are strong outliers in terms of usage of the trigram "stop the steal."

• Based on cloze tests [14] using the probe The biggest problem of America is [MASK], training a language model [15] based on the comments provided by MSNBC viewers, the top three answers are "Trump", "COVID," and "unemployment", while a language model trained on comments provided by OANN yields "communism," "corruption," and "socialism."
The other networks fall into positions along this continuum that are consistent with expectations. A similar behavior is observed when cloze tests are employed to analyze who won the election. These findings are corroborated further with a Natural Language Inference algorithm. • Using a machine translation based method presented in [8] that quantifies the differences between large-scale social media discussion corpora, each channel's viewership is assigned its own language and the similarities between the languages of any two channels can be quantified. The language of the viewership of Trump's own individual YouTube channel is most similar to the language of the viewership of Newsmax, followed by OANN, followed by Blaze, followed by Fox, followed by CNN, followed by MSNBC. Data Set Our data set considers official YouTube channels of six US cable news networks listed in Table 1, and consists of: subscription counts of these YouTube channels; comments posted by viewers of individual news videos posted by each channel; "likes" and "dislikes" associated with each of with these videos; and news video transcripts 4 . In addition to these six YouTube channels, we consider the official YouTube channel of the 45 th US President, Donald J. Trump. We used the publicly available YouTube API to download comments, and video "likes" and "dislikes" information. Apart from CNN, for each news video, we also extracted video transcripts using a Python package 5 . The package did not give reliable results for CNN, hence we omit CNN in our analyses on the news transcripts (presented in Section 4.2). Our analyses primarily focus on two non-overlapping time intervals. We denote the time interval of 31 st August, 2020 to 2 nd November, 2020, i.e., the 64 days leading up to the 2020 US election, as T before . T after refers to the time interval starting from November 3rd , 2020 to January 5th, 2021. 
Related Work

Previous research on US cable news reported divergent views both in audience and in content [12,13]. However, these works primarily relied on surveys and were restricted to the television medium without considering these channels' YouTube presence, and therefore were unable to tap into user comments and interactions. In terms of the nature of our data set, our work is closest to [8] in its use of comments on YouTube news videos of major US cable news networks. We also leverage the linguistic framework and a measure to estimate viewership agreement from this work. Our work contrasts with [8] in the following key ways: (1) our focus on an important (and timely) pair of non-overlapping 64-day periods before and after the 2020 US election; (2) our emphasis on three fringe news networks, two of which (Newsmax and Blaze TV) were previously ignored in [8] and one of which was only briefly analyzed; and (3) our use of a wider variety of NLP tools in analyzing a broader range of research questions, rather than presenting a quantifiable framework to gauge linguistic polarization. Previous work on deplatforming has analyzed effects of large-scale bans of communities on other social media platforms such as Reddit [16]. Our work on analyzing the migration of Fox News viewers to Newsmax adds a subtle nuance: in this case users are not being deplatformed by the platform owners; the shift is rather (potentially) triggered by the calling of the election as per the Associated Press. Echo chambers in social media are a widely studied topic [17,18,19]. Our work is similar to past work on analyzing the presence of echo chambers [20] in conservative forums, with the key distinction that our choice of platform is heavily mainstream. Our work draws inspiration from several recent NLP contributions analyzing political corpora [21,8] or misinformation [22].
For instance, [21] presented an application of language models [15] to mine insights and aggregate opinions using language models fine-tuned on an Indian political social media data set. Similarly, [22] presents a link between stance detection and the entailment literature in the context of detecting COVID-19 misinformation. Instead of methodologically advancing these techniques, in this work we demonstrate the synergy between these methods on a critical domain of political crisis. Results We present a road map of our results section with our research questions and relevant sections. We start with a simple analysis involving two short phrases to characterize (1) the portrayal of the election outcome across different news networks, and (2) how the viewership of the said networks responded during this period. Our selected phrases are "President-elect Biden" and "stop the steal" (and a few high-frequency variants of these -e.g., "President-elect [wildcard ] Biden" to make room for Joseph or Joe or Joseph R.). We examine the first phrase using the video transcripts. We argue that after November 7 th 2020, when the Associated Press called the election for Biden, any reference to President-elect Biden in any news video indicates support for the legitimacy of the Biden victory 6 . We examine the usage of our second phrase on our data set consisting of user comments on news videos. The choice of our next phrase is guided by "stop the steal" protests aimed at discrediting the 2020 election outcome 7 . In this case, our intuition is if a user comment mentions this phrase (or some variant of it), it is highly likely that the user is expressing a belief that the election is fraudulent 8 . Through the usage pattern of our first phrase ("President-elect Biden"), we now estimate the overall stance of a news network across the individual videos hosted in its official YouTube channel. 
Let the indicator function I(v, "President-elect Biden") return 1 if the said phrase (or some variant of it) is mentioned at least once in the video transcript of v, and 0 otherwise. Similarly, let the indicator function I(v, "Biden") return 1 if "Biden" is mentioned at least once in the video transcript of v, and 0 otherwise. For a given channel and the videos posted between November 7th, 2020 and January 5th, 2021, we compute the following factor:

Σ_i I(v_i, "President-elect Biden") / Σ_i I(v_i, "Biden").

Table 2 lists the value of our measure across each news network. We note that, while the two mainstream media outlets exhibit comparable mentions of the phrase "President-elect", the three conservative fringe networks show remarkably fewer mentions of this term, indicating a possible stance of not accepting the official outcome of the election. Of the three big networks, Fox News is a well-known conservative network. This measure further indicates the possibility that a fringe network may afford to present a narrower view of an event than mainstream media outlets catering to a wider audience, and yet enjoy substantial audience approval and engagement (audience approval and engagement results are presented in Section 4.3). We now answer our second research question using the "stop the steal" trigram. Table 3 presents the frequency-based rank of the trigram over the discussion data set of each of the news networks. In order to ensure that these rankings are comparable across news networks, each of the corpora has an identical number of tokens. A relatively higher rank of this phrase in network_i than in network_j indicates that the phrase is relatively more popular in network_i.

Table 2: Analysis of the overall stance toward accepting the election outcome of Biden being the President-elect across different news networks. Percentages shown are the percentage of times that a news video mentioning Biden refers to him as "President-elect."
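The indicator-ratio measure above can be sketched as follows. This is a toy version under our own naming; the actual analysis also matches variants of the phrase (e.g., "President-elect Joseph R. Biden"), which would need extra patterns.

```python
def stance_ratio(transcripts, phrase="president-elect biden", anchor="biden"):
    """Share of videos mentioning `anchor` whose transcript also contains
    `phrase`, i.e. sum_i I(v_i, phrase) / sum_i I(v_i, anchor)."""
    texts = [t.lower() for t in transcripts]
    hits = sum(phrase in t for t in texts)   # numerator indicator sum
    base = sum(anchor in t for t in texts)   # denominator indicator sum
    return hits / base if base else float("nan")
```

Because any transcript containing the full phrase also contains the anchor word, the ratio is bounded in [0, 1].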
These results indicate that both mainstream media outlets Fox News and MSNBC referred to Biden as President-elect relatively more than the fringe media outlets.

Table 3: Analysis of the "stop the steal" phrase in comments on news videos across news networks.

Post Election Engagement Shift

We investigate this research question through three signals: (1) video likes and dislikes; (2) average comment count; and (3) news network subscriber count.

Video likes and dislikes

Following [8], we use the same viewership disagreement measure to estimate disagreement in a network. Let v_like and v_dislike denote the total number of likes and dislikes received by a given video v. For a given channel C, let I(v_i, C, T) return 1 if video v_i is uploaded to C within duration T, and 0 otherwise. The disagreement factor of a channel C for a given time duration T is thus calculated as

[Σ_i I(v_i, C, T) · v_i^dislike / (v_i^dislike + v_i^like)] / Σ_i I(v_i, C, T).

The interpretation of a low value of this measure is that, overall, videos in the channel are liked by substantially more viewers than they are disliked by. A higher value indicates a mixed user response, with an increasing fraction of disapproving viewership. As a nice property of this measure, [8] further points out that the ratio v_dislike / (v_dislike + v_like) for an individual video and the overall measure are both bounded within [0, 1], and one arbitrarily heavily liked or disliked video can at most influence the overall average by 1/n, where n is the total number of videos uploaded in that particular duration (as shown in Table 4, the minimum value for n in our case is 294). Table 4 presents the disagreement factor for each channel for the time durations T_before and T_after, and the difference in disagreement (denoted by ∆_disagreement) obtained by subtracting the disagreement in T_after from that in T_before.
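A minimal version of this per-channel disagreement factor can be written as follows (toy input format; the function and variable names are ours, not the paper's code):

```python
def disagreement(videos):
    """Average of dislikes / (likes + dislikes) over a channel's videos
    uploaded in a window; `videos` is a list of (likes, dislikes) pairs."""
    ratios = [d / (l + d) for l, d in videos if l + d > 0]
    return sum(ratios) / len(ratios) if ratios else float("nan")
```

With this form, each per-video ratio lies in [0, 1], so a single heavily disliked video can shift the mean by at most 1/n, matching the robustness property noted above.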
A positive ∆_disagreement indicates that the channel has gained popularity, while a negative value indicates a decline in popularity. We note that, apart from Fox News, ∆_disagreement is within ±0.03 for all other news networks.

Table 4: Disagreement factor [Σ_i I(v_i, C, T) · v_i^dislike / (v_i^dislike + v_i^like)] / Σ_i I(v_i, C, T), for T ∈ {T_before, T_after}; I(v_i, C, T) returns 1 if video v_i is uploaded within duration T, otherwise it returns 0.

Comments on videos

The two time slices we are focusing on are both expected to generate high news viewership in our current political climate. T_before, i.e., the time slice leading up to the election, would naturally attract viewers because of the coverage of political debates, rally speeches, and election predictions. As a result of the casting of widespread doubts over the legitimacy of the election, we anticipated that the engagement during T_after would be high as well. Also, note that, since any video uploaded during T_before would have more time to accrue comments than any video uploaded during T_after, it is not surprising if the average number of comments for videos uploaded during T_after is slightly less than the average number of comments for videos uploaded during T_before for a given channel. However, Table 5 shows three distinct patterns. We notice that (1)

Number of subscribers

Comments on a news video, or likes or dislikes, are responses to an individual unit of content supplied by a given channel: a single video. YouTube viewers can subscribe to specific channels, indicating that they are interested in receiving updates on the channel's activities (e.g., receiving a notification when a new video is uploaded). In that sense, subscription to a channel is perhaps a longer-term engagement signal than liking (or disliking) or commenting. For each channel C, let C^{t,sub} denote the total number of subscribers of C at time t.
We define the market-share of subscribers of a given channel C_i at time t as:

marketShare(C_i, t) = C_i^{t,sub} / Σ_j C_j^{t,sub},

where C_i, C_j ∈ {Newsmax, Blaze, CNN, OANN, Fox, MSNBC}. We admit that our definition oversimplifies certain things, since a specific user can subscribe to multiple news networks at the same time. Also, there can be several other possible news sources even on YouTube. That said, our measure allows us to track the growth of these six networks, revealing insights into the nature of the growth of these fringe networks in the last 128 days. Table 6 summarizes the market-share of each of the news networks on three particular days: (1) 31st August, 2020, the first day of T_before; (2) 3rd November, 2020, the first day of T_after and the day of the 2020 US election; and (3) January 5th, 2021, the last day of T_after 9. We note that (1) all fringe news networks gained market-share as time progressed, with Newsmax's gain being equal to a factor of 6 (Figure 1 presents its growth in subscriber count); (2) the big three (CNN, Fox News, and MSNBC) lost market-share when we compare their individual market-shares on 5th January, 2021 with what they were on 31st August, 2020; and (3) Fox News exhibits a curious pattern where its market-share slightly rises on 3rd November, 2020 and then dips.

Table 6: Analysis of market-share in terms of subscriber count. We define the market-share of a channel at a particular time t as the ratio of its subscriber count to the sum of the subscriber counts at time t of all the news channels considered.

User Migration

RQ 4: Were there any systematic migrations from mainstream media outlets to fringe media outlets?

Table 7: Analysis of comment-share between pairs of networks.
YouTube channel pair | T_before | T_after | T_earliest | T_latest
Fox News / Newsmax | 91% / 9% | 57% / 43% | 89% / 11% | 59% / 41%
CNN / MSNBC | 45% / 55% | 46% / 54% | 45% / 55% | 47% / 53%
For a network pair C 1 , C 2 the share is summarized as a / b where a denotes comment share of C 1 and b denotes comment share of C 2 . Table 5, Table 4, and Table 6 all point to a decline in Fox News's popularity during T after as compared to T before . We are curious to examine where did these viewers go? Let N i fox and N i newsmax denote the total number of comments made by user u i on Fox News videos and Newsmax videos uploaded during T 128 , respectively. We focus on highly active users who commented both on Fox News and Newsmax videos to obtain a user set U such that u i ∈ U iff N i fox > 0, N i newsmax > 0 and N i fox + N i newsmax ≥ 10. In plain words, our user set contains users who have made at least one comment on Fox News and Newsmax and the total number of comments made on Fox News and Newsmax by the user exceeds or equals 10. We obtain 69,766 users satisfying these conditions. We then analyze their activities by slicing T 128 in two different ways. One natural choice is the temporal slices T after and T before . Our second choice of time slice divided T 128 along the activity timeline of a given user. We consider the earliest 20% and the latest 20% comments made by each user during T 128 and analyze the relative share of comments in Fox News and Newsmax. Table 7 summarizes our findings. In order to contrast our results, as a control group, we consider the channel pair of CNN and MSNBC and analyzed the comment shared of user group of 99,101 users following the conditions described above. We notice that during the distribution of comments in CNN-MSNBC pair was stable across T before and T after . However, we notice a stark contrast in Fox-Newsmax pair. During T before , Newsmax has a minuscule presence while T after exhibits a near equal comments share with Fox. The qualitative trend of this analysis remains unchanged even when we consider our user activity-based timeline. 
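The user filtering and comment-share computation used for Table 7 can be sketched as follows (toy input format; the function and variable names are ours, not the paper's code):

```python
def comment_share(user_counts, min_total=10):
    """Keep users with at least one comment on each channel of a pair and
    >= min_total comments overall, then return each channel's share of the
    retained users' comments. `user_counts` holds (n_chan1, n_chan2) pairs."""
    c1 = c2 = 0
    for n1, n2 in user_counts:
        if n1 > 0 and n2 > 0 and n1 + n2 >= min_total:
            c1, c2 = c1 + n1, c2 + n2
    total = c1 + c2
    return (c1 / total, c2 / total) if total else (float("nan"), float("nan"))
```

The same function applies to either time slicing: pass in per-user counts restricted to T_before/T_after, or to each user's earliest/latest 20% of comments.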
Hence, our analyses indicate that, indeed, several users from Fox News moved to Newsmax.

Cloze Tests

RQ 2: Did the audience of different networks exhibit different attitudes toward accepting and presenting the election outcomes?

We now investigate this research question through the lens of cloze tests using language models. The masked-word prediction of high-performance language models, such as BERT [15], has a parallel in the form of cloze tests [14], aka fill-in-the-blank questions, used in the human psycholinguistics literature [24]. A cloze task presents a sentence (or a sentence stem) with a missing word; it is essentially a fill-in-the-blank task. For instance, in the following cloze task: In the [MASK], it snows a lot, winter is a likely completion for the missing word. In fact, when given this cloze task, BERT outputs the following five seasons ranked by decreasing probability: winter, summer, fall, spring and autumn. In a different political context, that of the 2019 Indian general election, [21] has demonstrated that BERT can be fine-tuned on large-scale social media political discussions to efficiently aggregate political opinions and track evolving national priorities through simple cloze tests like The biggest problem of India is [MASK]. In our work, we are interested in gauging the aggregate attitude of a network's viewership toward the outcome of the 2020 election. For each channel, we fine-tune BERT with the comments on videos uploaded during T_after. BERT's vulnerability in handling negations is documented in [25]; following [21], we remove all comments that contain any valence shifter. Before presenting our results on the aggregate opinion of each network's viewership on the election, we make a small digression to discuss a result that sheds light on the stark contrast of opinions across these news networks.
On the cloze test The biggest problem of America is [MASK], we notice that the top three results succinctly capture the divergent views of the news audiences across news networks: while socialism consistently appeared for all conservative networks, trump, covid, and racism appeared for their liberal counterparts. To rank the aggregate opinion on the 2020 US election, we consider the following two cloze tests: (1) Trump has [MASK] the 2020 election (denoted by cloze_trump), and (2) Biden has [MASK] the 2020 election (denoted by cloze_biden). Let clozeTest(c, w) denote the probability of the word w output by BERT for cloze test c. In order to appropriately calibrate the model, we compute the score for Trump as

clozeTest(cloze_trump, won) / (clozeTest(cloze_trump, won) + clozeTest(cloze_biden, won))

and the score for Biden as

clozeTest(cloze_biden, won) / (clozeTest(cloze_trump, won) + clozeTest(cloze_biden, won)).

Note that, for any channel, the scores for Trump and Biden sum to 1. The scores for Trump for the different news networks give us the following order: MSNBC < CNN < Fox < OANN < Blaze < Newsmax. This result indicates that, compared to mainstream media outlets, discussions on fringe news channels exhibit more doubts about the legitimacy of the election. For each news network, a separate version of BERT was fine-tuned using viewer comments from that network; Table 8 presents the top three results (ranked by probability) output by each fine-tuned BERT.

We further corroborated our results with a well-known natural language inference model [26]. Given a premise text and a hypothesis text, the natural language inference (NLI) task is to predict either entailment, contradiction, or independence. For example, the hypothesis some men are playing a sport is entailed by the premise a soccer game with multiple males playing 10.
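The score calibration above can be sketched as a small normalization step; the two input probabilities below are toy stand-ins for clozeTest(cloze_trump, won) and clozeTest(cloze_biden, won) from a channel-specific fine-tuned BERT, not model outputs.

```python
# Sketch of the calibration: normalize the two raw "won" probabilities
# so the per-channel scores for Trump and Biden sum to 1.

def calibrated_scores(p_trump_won, p_biden_won):
    denom = p_trump_won + p_biden_won
    return p_trump_won / denom, p_biden_won / denom

# Toy probabilities for one hypothetical channel.
trump_score, biden_score = calibrated_scores(0.125, 0.375)
```

The normalization makes scores comparable across channels even when the absolute masked-word probabilities differ in scale.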
Our work draws inspiration from a recent work [22] that cast the task of COVID-19 misinformation detection as an NLI task, stating that the class labels informative, misinformative and irrelevant have a natural one-to-one correspondence to entailment, contradiction and semantic irrelevance, respectively. For a given news network, using individual comments from our data set as premises, we considered two hypotheses, H_1 and H_2, stating a preference for Trump and Biden as president, respectively. For a given channel C and a hypothesis H, we randomly sampled 5,000 comments from user discussions on videos uploaded by C during T_after and computed the fraction of comments that entail H using an off-the-shelf, well-known NLI inference system [26]. The obtained order for H_1, from least to greatest, is: MSNBC < CNN < Fox < OANN < Newsmax < Blaze. The obtained order for H_2, from least to greatest, is: Blaze < OANN < Fox < Newsmax < MSNBC < CNN.

Machine Translation Based Analysis

RQ 5: Based on the comments on the viewed videos, which news networks were "linguistically most similar" to those of President Trump's YouTube channel?

Quantifying the differences between large-scale social media discussion data sets is a challenging task, and we resort to the most recent method in the literature [8]. In [8], the authors presented a machine translation based framework. This framework assumes that two sub-communities (e.g., Fox viewers and CNN viewers) are speaking in two different languages (say, L_cnn and L_fox) and obtains single-word translations using a well-known machine translation algorithm [28]. In a world not fraught with polarization, any word w in L_cnn should translate to itself in L_fox. However, if a word w_1 in one language translates to a different word w_2 in another, it indicates that w_1 and w_2 are used in similar contexts across these two languages, signalling (possible) disagreement.
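Returning briefly to the NLI measurement above: the per-channel entailment fraction can be sketched as follows. The `toy_predict` rule is a hypothetical keyword heuristic used only to make the sketch runnable; in our pipeline the labels come from the off-the-shelf NLI model [26].

```python
# Sketch of the entailment-fraction measurement: run an NLI predictor
# over (comment, hypothesis) pairs and count 'entailment' labels.

def entailment_fraction(comments, hypothesis, predict):
    """Fraction of sampled comments whose NLI label is 'entailment'."""
    labels = [predict(premise, hypothesis) for premise in comments]
    return labels.count("entailment") / len(labels)

def toy_predict(premise, hypothesis):
    # Hypothetical stand-in rule: 'entailment' iff the candidate named
    # in the hypothesis also appears in the premise.
    name = "Trump" if "Trump" in hypothesis else "Biden"
    return "entailment" if name in premise else "contradiction"

frac = entailment_fraction(
    ["Four more years of Trump!", "Congrats, President-elect Biden.", "Trump 2024"],
    "I prefer Trump as my president.",
    toy_predict,
)
```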
These disagreed pairs 11 provide a quantifiable measure of the difference between large-scale corpora: the greater the number of disagreed pairs, the farther apart the two sub-communities are. Formally, let our goal be to compute a similarity measure between two languages, L_source and L_target, with vocabularies V_source and V_target, respectively. Let translate(w)_{L_source → L_target} denote a single-word translation of w ∈ V_source from L_source to L_target. The similarity measure between two languages along a given translation direction computes the fraction of words in V_source that translate to themselves.

Beyond prominent US cable news networks, [8] has computed similarities between news networks and discussions on YouTube videos hosted by major prime-time US political comedians. In this work, we turn our focus to President Trump, whose official YouTube handle has 2.68 million subscribers as of 5th January 2021 (see Table 1). We follow the same steps and hyper-parameter settings described in [8], and in Table 9 we quantify the similarities between the language present on the official YouTube channel of the 45th US president (denoted by L_trump) and the six US cable news networks. We use the same monikers for the languages of the four news networks considered in [8] (L_cnn, L_fox, L_msnbc, and L_oann) and denote the languages of the discussions on Blaze TV and Newsmax news videos as L_blaze and L_newsmax, respectively. It is well known that corpus size is one of the most important contributing factors to the quality of a word embedding [29]. Further, [8] indicates that, typical of most deep learning systems, one limitation of the machine translation based framework is that it is data-hungry. We thus focus on the entire year of 2020 (data set details are provided in the Appendix).
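The similarity measure described above can be sketched as the fraction of source-vocabulary words mapped to themselves. The lookup table below is an illustrative stand-in for the trained single-word translation model of [8], and the word pairs are invented examples.

```python
# Sketch of Similarity(L_source, L_target): count how many words in the
# source vocabulary translate to themselves under the induced mapping.

def similarity(vocab_source, translate):
    self_translated = sum(1 for w in vocab_source if translate.get(w) == w)
    return self_translated / len(vocab_source)

# Toy table: "mainstream" -> "lamestream" would be a disagreed pair.
toy_translate = {
    "election": "election",
    "vote": "vote",
    "ballot": "ballot",
    "mainstream": "lamestream",
}
sim = similarity(list(toy_translate), toy_translate)
```

Note that the measure is directional: translating L_source into L_target need not give the same value as the reverse direction, which is why the tables report both directions.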
Table 9 underscores the following two points: (1) the language present in the YouTube videos hosted by the official channel of President Trump is more similar to fringe media outlets than to any mainstream media outlet, with the ordering (most similar to least similar): L_newsmax > L_oann > L_blaze > L_fox > L_cnn > L_msnbc; and (2) compared to the liberal news outlets, the conservative news networks are more similar to each other. Also note that the 45th US president's YouTube channel is not a news network; hence, it does not cover issues as varied as a typical news network would. Therefore, it is not surprising that the similarity between L_trump and the other news-languages is lower than the similarity between news networks.

Table 9: Pairwise similarity between news-languages and L_trump computed for the year 2020 using the framework presented in [8]. The cells show the similarity between the language pair along the translation direction with the language in the row as source and the language in the column as target. Hyper-parameters are identical to [8]. L_trump (relevant row-cells are shaded with gray) is found to be most similar to L_newsmax. The Appendix contains additional experiments focusing on T_after and considering L_fox, L_newsmax and L_trump.

Discussions and Conclusions

Discussions

A mysterious 11-character word: On a Skip-gram word embedding [30] trained on discussions from T_after for a specific channel, we noticed a curious 11-character word among the nearest neighbors of the phrase voter fraud. Upon examination, we realized that it is a YouTube video ID. Soon we realized that, when restricted to a specific character length of 11, the nearest neighbors of voter fraud in the word embedding space reveal several video IDs, most of which cast doubts on the fairness of the election. Not only that, we found that the nearest neighbors of the video ID of a video propagating voter-fraud misinformation are also video IDs of videos with similar content.
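The length-11 restriction mentioned above can be sketched as a simple filter over nearest-neighbor tokens. YouTube video IDs are 11 characters drawn from letters, digits, `_` and `-`; the neighbor list below is illustrative, not taken from our trained embedding.

```python
import re

# Sketch: flag 11-character, video-ID-like tokens among the nearest
# neighbors of a phrase in the embedding space.

VIDEO_ID_PATTERN = re.compile(r"[A-Za-z0-9_-]{11}")

def id_like_tokens(neighbors):
    return [tok for tok in neighbors if VIDEO_ID_PATTERN.fullmatch(tok)]

neighbors = ["fraud", "dQw4w9WgXcQ", "rigged", "abc_def-123", "ballots"]
flagged = id_like_tokens(neighbors)
```

Flagged tokens can then be manually resolved to videos, as in our annotation of the 30 nearest neighbors.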
In this intriguing phenomenon, where the distributional hypothesis [31] meets misinformation, we were surprised to notice the wide range of viewer reach these videos possessed. Of the 30 nearest neighbors we manually annotated, 28 cast doubts about the electoral process, and their viewer counts ranged from a paltry 105 to more than a million views. Our findings indicate that during this political crisis, it is possible that, beyond these high-traffic news networks and influencers, several other videos promoting unsubstantiated claims surfaced in the comments section of a mainstream social media platform, and it is a challenging task to catch them all.

Consumption pattern: While in this work we focus on fringe media outlets rather than conservative forums such as Parler or Gab, our choice of platform could not be more mainstream: YouTube. Beyond YouTube's tremendous popularity in the US (126 million unique US users in 2020, according to Statista), YouTube is a compelling platform for another reason. Because YouTube offers access to these different networks through a single, uniform interface, it is easy for consumers to effectively "flip channels", and easy to track individual behavior from YouTube data, making it an ideal platform for our study. Detecting anomalous consumption patterns, such as the abrupt rise of Newsmax in popularity, is a much easier task than automatically identifying the presence of unsubstantiated claims in videos. Our work thus raises an important point: during a political crisis, consumption patterns may reveal useful signals.

Internet abhors a vacuum: Our work is an important study in the context of this unique crisis of Western democracy, showing that with the current, almost ubiquitous penetration of the internet, vacuums may fill up rapidly.
If mainstream media are unwilling to present an alternate version of the election outcome, certain fringe networks can fill the void and enjoy a sudden meteoric rise in popularity, possibly by presenting an alternate version of reality. Compared to OANN and Newsmax, the rise in popularity of Blaze TV was relatively muted. While the content and audience of this network are not much different from those of the other two fringe networks, the 45th President of the US tweeted favorably about OANN and Newsmax on multiple occasions. Our analysis cannot present causal evidence. Nor can it rule out the possibility that a different fringe network will enjoy a run similar to Newsmax's in a subsequent political crisis in the near future.

Conclusions

This paper leads to two different types of conclusions: conclusions about what actually transpired during the 128 days covered by our data set, and conclusions about methodologies for analyzing such large-scale social media data to study political and other social sciences.

Understanding What Happened

Through a series of corroborating experiments described above, we draw the following conclusions.

Methodology

We demonstrate that recent advancements in NLP methods enable us to analyze a vast amount of data in almost real time with minimal manual supervision. However, each of these methods has certain blind spots (e.g., BERT's vulnerability to negation, or the translation-based method's requirement of a large amount of data). Our work demonstrates the synergy of these methods in obtaining corroborating evidence from multiple sources and thus gaining valuable insights. While these techniques have been used in isolation on different political corpora [21, 8, 22], in this work we present a combined approach to analyze a data set on a political crisis the country has not seen for years.

Appendix

Experimental Setup

Experiments are conducted on a suite of machines with the following specifications:
• OS: Windows 10.
• Processor: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz, 2592 MHz, 6 cores, 12 logical processors.
• RAM: 64 GB.

Preprocessing and Hyperparameters

To train word embeddings on our data set, we use the following preprocessing steps. First, we remove all emojis and non-ASCII characters. Then, we remove all non-alphanumeric characters and lowercase the remaining text. We preserve the newline character after each individual document in the data set. We use the default parameters for training our FastText [30] Skip-gram embedding, with the dimension set to 100.

Machine Translation Based Analysis

Table 10: Pairwise similarity between languages computed for videos uploaded during T_after. Hyper-parameters are identical to [8]. The cells show the similarity between the language pair along the translation direction with the language in the row as source and the language in the column as target.

Table 10 indicates that, if we zoom in on videos uploaded during T_after, the qualitative claim that L_trump is linguistically more similar to L_fox than to L_newsmax still holds.

Data Set Details for 2020

Table 11 summarizes the details of our extended data set.

RQ 1: Did different networks pursue different approaches toward accepting and presenting the election outcomes, e.g., if there was widespread voter fraud? (discussed in Section 4.2)
RQ 2: Did the audience of different networks exhibit different attitudes toward accepting and presenting the election outcomes? (discussed in Sections 4.2 and 4.5)
RQ 3: Were there any shifts in viewership engagement of news networks post election? (discussed in Section 4.3)
RQ 4: Was there any systematic migration from mainstream media outlets to fringe media outlets? (discussed in Section 4.4)
RQ 5: Based on the comments on the viewed videos, which news networks were "linguistically most similar" to those of President Trump's YouTube channel?
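The Appendix preprocessing steps described above can be sketched as follows. Replacing punctuation with a space (rather than deleting it outright) is our assumption, made so that punctuation does not glue adjacent words together; whitespace is kept so that the document-separating newlines survive.

```python
import re

# Sketch of the preprocessing pipeline: drop non-ASCII characters
# (which also removes emojis), replace remaining non-alphanumeric
# characters with spaces (keeping whitespace), then lowercase.

def preprocess(text):
    text = text.encode("ascii", errors="ignore").decode("ascii")
    text = re.sub(r"[^A-Za-z0-9\s]", " ", text)
    return text.lower()

clean = preprocess("Stop the Steal!! \U0001F1FA\U0001F1F8 #MAGA2020")
```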
(discussed in Section 4.6)

4.2 The story of two trigrams

RQ 1: Did different networks pursue different approaches toward accepting and presenting the election outcomes, e.g., if there was widespread voter fraud?
RQ 2: Did the audience of different networks exhibit different attitudes toward accepting and presenting the election outcomes?
RQ 3: Were there any shifts in viewership engagement of news networks post election?

Figure 1: The growth of Newsmax in terms of #subscribers. The vertical lines indicate important dates. The two presidential debates took place on September 29th and October 22nd. The election took place on 3rd November and AP called the election for Biden on 7th November. The electoral college vote took place on December 14th.

Similarity(L_source, L_target) = Σ_{w ∈ V_source} I(translate(w)_{L_source → L_target} = w) / |V_source|.

The indicator function returns 1 if the word translates to itself and 0 otherwise. The larger the value of Similarity(L_source, L_target), the greater the similarity between a language pair.

C1: Fringe networks did not cover the election the same way as the mainstream networks. (RQ 1, Section 4.2)
C2: Audience of the fringe networks exhibit more doubt about the election outcome as compared to the audience of mainstream outlets. (RQ 2, Section 4.2; RQ 2, Section 4.5)
C3: A subset of fringe news networks gained audience post election. (RQ 3, Section 4.3; RQ 4, Section 4.4)
C4: Fox News was the only mainstream media outlet that lost considerable popularity. (RQ 3, Section 4.3; RQ 4, Section 4.4)
C5: Viewer comments on President Trump's official YouTube handle are linguistically more similar to viewer comments on fringe networks than those of the more mainstream media outlets. (RQ 5, Section 4.6)

Table 1: List of news networks considered.
Video counts during T_128 reflect the number of videos uploaded on or before 5th January 2021, starting from 31st August 2020.

Starting from 31st August 2020 to 5th January 2021, we denote the combined time interval of these 128 days as T_128. Our data set consists of 14,557,966 comments on 11,964 videos posted by 2,278,034 users.

YouTube Channel    #Subscribers    #Videos during T_128    Total #Comments
CNN                11.7M           824                     3,368,178
Fox News           6.71M           2,066                   4,059,446
MSNBC              3.97M           3,890                   2,776,968
OANN               1.36M           1,728                   427,908
Newsmax            1.77M           746                     971,617
Blaze TV           1.34M           518                     634,650
Donald J. Trump    2.68M           2,212                   2,382,821

Table 3 shows that while this trigram is considerably popular across all six channels, OANN and Newsmax particularly stand out.

YouTube channel    Rank
CNN                111
Fox News           134
MSNBC              123
OANN               75
Newsmax            63
Blaze TV           111

YouTube channel    T_before    T_after    ∆ disagreement
CNN                0.20        0.17       +0.03
Fox News           0.18        0.28       -0.10
MSNBC              0.10        0.09       +0.01
OANN               0.02        0.02       0
Newsmax            0.01        0.02       -0.01
Blaze TV           0.02        0.05       -0.03

Table 4: Analysis of viewership agreement. For a given news network and time duration, each cell summarizes the Σ_i I(

(1) CNN, Blaze TV and MSNBC do not show any noticeable change in average number of comments; (2) Fox News shows a noticeable decline in comment engagement; and (3) OANN and Newsmax show an increase by more than factors of 2 and 3, respectively.

YouTube channel    T_before         T_after
CNN                398 / 4,188      426 / 3,993
Fox News           1,040 / 2,587    1,026 / 1,809
MSNBC              1,891 / 709      1,999 / 719
OANN               1,090 / 163      638 / 399
Newsmax            294 / 453        452 / 1,852
Blaze TV           280 / 1,241      238 / 1,206

Table 5: Analysis of engagement. For a given news network and time duration, each cell summarizes the channel activity as a / b, where a denotes the number of videos uploaded and b denotes the average number of comments on videos where commenting is allowed. We note that Newsmax and OANN enjoyed a remarkable increase in average number of comments per video.
Note that OANN was banned for a week by YouTube for spreading COVID-19 misinformation.

Table 8: Cloze test results for the probe The biggest problem of America is [MASK].

Hypothesis 1: I prefer Trump as my president. (denoted by H_1)
Hypothesis 2: I prefer Biden as my president. (denoted by H_2)

Table 11: Data set details for 2020.
YouTube Channel    #Videos    #Overall comments
CNN                2,973      9.27M
Fox News           6,066      11.6M
MSNBC              10,644     6.08M
OANN               5,092      1.01M
Newsmax            1,673      1.05M
Blaze TV           1,353      1.36M
Donald J. Trump    3,991      2.7M

Footnotes:
- https://www.nytimes.com/2020/11/04/us/politics/trump-fox-news-arizona.html
3 In response to White House criticism, Fox interviewed at approximately 1 a.m. on November 4th Arnold Mishkin, director of the Fox News Decision Desk. Mr. Mishkin stated: "We're four standard deviations from being wrong. And, I'm sorry, we're not wrong in this particular case."
- Of these six news networks, we refer to Blaze TV, OANN, and Newsmax as fringe news networks due to their relatively homogeneous audience and limited reach compared to the big three.
5 https://pypi.org/project/youtube-transcript-api/
- Of course, there could be counter-examples, for instance, an anchor saying "I am never going to refer to him as President-elect Biden until the Supreme Court hears the case". We manually inspected 100 randomly sampled unique references across 100 videos and confirm that is not the case.
7 In fact, this particular phrase has a history that goes beyond the 2020 election; Trump advisor Roger Stone ran an organization [23] with a name identical to this phrase to detect voter fraud in the 2016 US election.
8 https://www.nbcnews.com/tech/tech-news/facebook-bans-all-stop-steal-content-n1253809
9 We obtained the #subscribers from https://web.archive.org/. Piecewise linearity is assumed for missing entries.
10 This example is taken from [27].
11 The original paper [8] refers to these pairs as misaligned pairs.
T Keith, Howard Poole, Rosenthal, The journal of politics. 464Keith T Poole and Howard Rosenthal. The polarization of american politics. The journal of politics, 46(4):1061- 1079, 1984. Party polarization, party commitment, and conflict extension among american party activists. C Geoffrey, Layman, M Thomas, Carsey, C John, Richard Green, Rosalyn Herrera, Cooperman, American Political Science Review. 1042Geoffrey C Layman, Thomas M Carsey, John C Green, Richard Herrera, and Rosalyn Cooperman. Party polarization, party commitment, and conflict extension among american party activists. American Political Science Review, 104(2):324-346, 2010. Polarized America: The dance of ideology and unequal riches. Nolan Mccarty, T Keith, Howard Poole, Rosenthal, mit PressNolan McCarty, Keith T Poole, and Howard Rosenthal. Polarized America: The dance of ideology and unequal riches. mit Press, 2016. Past-focused environmental comparisons promote proenvironmental outcomes for conservatives. Matthew Baldwin, Joris Lammers, Proceedings of the National Academy of Sciences. the National Academy of Sciences113Matthew Baldwin and Joris Lammers. Past-focused environmental comparisons promote proenvironmental outcomes for conservatives. Proceedings of the National Academy of Sciences, 113(52):14953-14957, 2016. Research: Political polarization is changing how americans work and shop. C Mcconnell, Margalit, M Malhotra, Levendusky, Harvard Business Review. C McConnell, Y Margalit, N Malhotra, and M Levendusky. Research: Political polarization is changing how americans work and shop. Harvard Business Review, 2017. Analyzing polarization in social media: Method and application to tweets on 21 mass shootings. Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Jesse Shapiro, Matthew Gentzkow, Dan Jurafsky, Proceedings of NAACL-HLT 2019. NAACL-HLT 2019ACLDorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Jesse Shapiro, Matthew Gentzkow, and Dan Jurafsky. 
Analyzing polarization in social media: Method and application to tweets on 21 mass shootings. In Proceedings of NAACL-HLT 2019, pages 2970-3005. ACL, June 2019. Quantifying polarization on twitter: The kavanaugh nomination. Kareem Darwish, International Conference on Social Informatics. SpringerKareem Darwish. Quantifying polarization on twitter: The kavanaugh nomination. In International Conference on Social Informatics, pages 188-201. Springer, 2019. We don't speak the same language: Interpreting polarization through machine translation. R Ashiqur, Rupak Khudabukhsh, Mark S Sarkar, Tom M Kamlet, Mitchell, The Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, page To Appear. AAAI Press2021Ashiqur R. KhudaBukhsh, Rupak Sarkar, Mark S. Kamlet, and Tom M. Mitchell. We don't speak the same language: Interpreting polarization through machine translation. In The Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, page To Appear. AAAI Press, 2021. Discursive deflection: Accusation of "fake news" and the spread of mis-and disinformation in the tweets of president trump. S Andrew, Damian J Ross, Rivers, Social Media+ Society. 422056305118776010Andrew S Ross and Damian J Rivers. Discursive deflection: Accusation of "fake news" and the spread of mis-and disinformation in the tweets of president trump. Social Media+ Society, 4(2):2056305118776010, 2018. How MSNBC Became Fox's Liberal Evil Twin. Alessandra Stanley, 1Alessandra Stanley. How MSNBC Became Fox's Liberal Evil Twin, 2012. Online; accessed 01-September-2020. Weapons of mass distortion: The coming meltdown of the liberal media. Brent Bozell, National Review. L Brent Bozell. Weapons of mass distortion: The coming meltdown of the liberal media. National Review, 2004. Selective exposure to cable news and immigration in the us: The relationship between fox news, cnn, and attitudes toward mexican immigrants. 
Homero Gil De Zúñiga, Teresa Correa, Sebastian Valenzuela, Journal of Broadcasting & Electronic Media. 564Homero Gil de Zúñiga, Teresa Correa, and Sebastian Valenzuela. Selective exposure to cable news and immigration in the us: The relationship between fox news, cnn, and attitudes toward mexican immigrants. Journal of Broadcasting & Electronic Media, 56(4):597-615, 2012. Agenda setting in the partisan tv news context: Attribute agenda setting and polarized evaluation of presidential candidates among viewers of nbc, cnn, and fox news. Ki Deuk Hyun, Soo Jung Moon, Journalism & Mass Communication Quarterly. 933Ki Deuk Hyun and Soo Jung Moon. Agenda setting in the partisan tv news context: Attribute agenda setting and polarized evaluation of presidential candidates among viewers of nbc, cnn, and fox news. Journalism & Mass Communication Quarterly, 93(3):509-529, 2016. cloze procedure": A new tool for measuring readability. L Wilson, Taylor, Journalism quarterly. 304Wilson L Taylor. "cloze procedure": A new tool for measuring readability. Journalism quarterly, 30(4):415-433, 1953. BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, NAACL-HLT. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171-4186, June 2019. You can't stay here: The efficacy of reddit's 2015 ban examined through hate speech. Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, Eric Gilbert, Proceedings of the ACM on Human-Computer Interaction. 1CSCWEshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, and Eric Gilbert. You can't stay here: The efficacy of reddit's 2015 ban examined through hate speech. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW):1-22, 2017. 
Open media or echo chamber: The use of links in audience discussions on the facebook pages of partisan news organizations. Susan Jacobson, Eunyoung Myung, Steven L Johnson , Information, Communication & Society. 197Susan Jacobson, Eunyoung Myung, and Steven L Johnson. Open media or echo chamber: The use of links in audience discussions on the facebook pages of partisan news organizations. Information, Communication & Society, 19(7):875-891, 2016. The social structure of political echo chambers: Variation in ideological homophily in online networks. Andrei Boutyline, Robb Willer, Political Psychology. 383Andrei Boutyline and Robb Willer. The social structure of political echo chambers: Variation in ideological homophily in online networks. Political Psychology, 38(3):551-569, 2017. Me, my echo chamber, and i: introspection on social media polarization. Nabeel Gillani, Ann Yuan, Martin Saveski, Soroush Vosoughi, Deb Roy, Proceedings of the 2018 World Wide Web Conference. the 2018 World Wide Web ConferenceNabeel Gillani, Ann Yuan, Martin Saveski, Soroush Vosoughi, and Deb Roy. Me, my echo chamber, and i: introspection on social media polarization. In Proceedings of the 2018 World Wide Web Conference, pages 823-831, 2018. Inside the right-leaning echo chambers: Characterizing gab, an unmoderated social system. Lucas Lima, C S Julio, Philipe Reis, Fabricio Melo, Leandro Murai, Araujo, IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEEPantelis Vikatos, and Fabricio BenevenutoLucas Lima, Julio CS Reis, Philipe Melo, Fabricio Murai, Leandro Araujo, Pantelis Vikatos, and Fabricio Benevenuto. Inside the right-leaning echo chambers: Characterizing gab, an unmoderated social system. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 515-522. IEEE, 2018. Mining insights from large-scale corpora using fine-tuned language models. 
Shriphani Palakodety, Ashiqur R Khudabukhsh, Jaime G Carbonell ; Giuseppe De Giacomo, Alejandro Catalá, Bistra Dilkina, Michela Milano, Senén Barro, Alberto Bugarín, Jérôme Lang, -Including 10th Conference on Prestigious Applications of Artificial Intelligence. Santiago de Compostela, SpainIOS Press29ECAI 2020 -24th European Conference on Artificial IntelligenceShriphani Palakodety, Ashiqur R. KhudaBukhsh, and Jaime G. Carbonell. Mining insights from large-scale corpora using fine-tuned language models. In Giuseppe De Giacomo, Alejandro Catalá, Bistra Dilkina, Michela Milano, Senén Barro, Alberto Bugarín, and Jérôme Lang, editors, ECAI 2020 -24th European Conference on Artificial Intelligence, 29 August-8 September 2020, Santiago de Compostela, Spain, August 29 -September 8, 2020 -Including 10th Conference on Prestigious Applications of Artificial Intelligence (PAIS 2020), volume 325 of Frontiers in Artificial Intelligence and Applications, pages 1890-1897. IOS Press, 2020. COVIDLies: Detecting COVID-19 misinformation on social media. Tamanna Hossain, Robert L Logan, I V , Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, Sameer Singh, Proceedings of the 1st Workshop on NLP for COVID-19. the 1st Workshop on NLP for COVID-19OnlineAssociation for Computational LinguisticsEMNLP 2020Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, and Sameer Singh. COVIDLies: Detecting COVID-19 misinformation on social media. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020, Online, December 2020. Association for Computational Linguistics. Voter intimidation and discrimination in the 2016 election: Rhetoric and reality. Adam Gitlin, Adam Gitlin. Voter intimidation and discrimination in the 2016 election: Rhetoric and reality. Cloze but no cigar: The complex relationship between cloze, corpus, and subjective probabilities in language processing. 
Nathaniel Smith, Roger Levy, Proceedings of the Annual Meeting of the Cognitive Science Society. the Annual Meeting of the Cognitive Science Society33Nathaniel Smith and Roger Levy. Cloze but no cigar: The complex relationship between cloze, corpus, and subjective probabilities in language processing. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 33, 2011. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. Nora Kassner, Hinrich Schütze, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational LinguisticsNora Kassner and Hinrich Schütze. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811-7818, Online, July 2020. Association for Computational Linguistics. Allennlp: A deep semantic natural language processing platform. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, Luke Zettlemoyer, arXiv:1803.07640arXiv preprintMatt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. Allennlp: A deep semantic natural language processing platform. arXiv preprint arXiv:1803.07640, 2018. A large annotated corpus for learning natural language inference. R Samuel, Gabor Bowman, Christopher Angeli, Christopher D Potts, Manning, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalAssociation for Computational LinguisticsSamuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal, September 2015. Association for Computational Linguistics. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. L Samuel, Smith, H P David, Steven Turban, Nils Y Hamblin, Hammerla, 5th International Conference on Learning Representations. Toulon, FranceConference Track ProceedingsSamuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017. Distributed representations of words and phrases and their compositionality. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, Jeff Dean, Advances in Neural Information Processing Systems. C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. WeinbergerCurran Associates, Inc26Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc., 2013. Enriching word vectors with subword information. Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov, Transactions of the Association for Computational Linguistics. 5Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146, 2017. . S Zellig, Harris, Distributional structure. Word. 102-3Zellig S Harris. Distributional structure. Word, 10(2-3):146-162, 1954.
[]
[ "SUBMITTED TO IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 1 Arbitrary-Oriented Ship Detection through Center-Head Point Extraction", "SUBMITTED TO IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 1 Arbitrary-Oriented Ship Detection through Center-Head Point Extraction" ]
[ "Feng Zhang ", "Xueying Wang ", "Shilin Zhou ", "Yingqian Wang ", "Yi Hou " ]
[]
[]
Ship detection in remote sensing images plays a crucial role in various applications and has drawn increasing attention in recent years. However, existing arbitrary-oriented ship detection methods are generally developed on a set of predefined rotated anchor boxes. These predefined boxes not only lead to inaccurate angle predictions but also introduce extra hyper-parameters and high computational cost. Moreover, the prior knowledge of ship size has not been fully exploited by existing methods, which hinders the improvement of their detection accuracy. Aiming at solving the above issues, in this paper, we propose a center-head point extraction based detector (named CHPDet) to achieve arbitrary-oriented ship detection in remote sensing images. Our CHPDet formulates arbitraryoriented ships as rotated boxes with head points which are used to determine the direction. And rotated Gaussian kernel is used to map the annotations into target heatmaps. Keypoint estimation is performed to find the center of ships. Then, the size and head point of the ships are regressed. The orientation-invariant model (OIM) is also used to produce orientation-invariant feature maps. Finally, we use the target size as prior to finetune the results. Moreover, we introduce a new dataset for multi-class arbitrary-oriented ship detection in remote sensing images at a fixed ground sample distance (GSD) which is named FGSD2021. Experimental results on FGSD2021 and two other widely used data sets, i.e., HRSC2016, and UCAS-AOD demonstrate that our CHPDet achieves state-of-the-art performance and can well distinguish between bow and stern. Code and FGSD2021 dataset are available at https://github.com/zf020114/CHPDet.
10.1109/tgrs.2021.3120411
[ "https://arxiv.org/pdf/2101.11189v3.pdf" ]
231,718,640
2101.11189
793e8f6cf7f3cd9c213fa52a87c526508fc3a6f5
SUBMITTED TO IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Arbitrary-Oriented Ship Detection through Center-Head Point Extraction

Feng Zhang, Xueying Wang, Shilin Zhou, Yingqian Wang, Yi Hou

Index Terms-Arbitrary-oriented ship detection, Remote sensing images, Keypoint estimation, Deep convolutional neural networks
I. INTRODUCTION

Ship detection from high-resolution optical remote sensing images is widely applied in tasks such as illegal-smuggling surveillance, port management, and target reconnaissance. Ship detection has received increasing attention and has been widely investigated over the past decades [1]-[4]. However, ship detection in remote sensing images remains highly challenging due to arbitrary orientations, densely parked scenarios, and complex backgrounds [5]-[7]. To handle the multi-orientation issue, existing methods generally rely on a series of predefined anchors [8], which has the following shortcomings.

Inaccurate angle regression. The intersection-over-union (IoU) score is very sensitive to the angle of bounding boxes. As shown in Fig. 1(e), the ground-truth box is the bounding box of a ship with an aspect ratio of 10:1. The red rotated box is generated by rotating the ground-truth box by a small angle of 5°. Such a small angle variation reduces the IoU between the two boxes to 0.63. Therefore, anchor-based detectors, which define positive and negative anchors by IoU score, usually suffer from an imbalance issue that degrades detection performance [9]. Moreover, the angle of a ship is a periodic quantity that is discontinuous at the boundary (0° or 180°), as shown in Fig. 1(f). This discontinuity also causes performance degradation.

Excessive hyper-parameters and high computational cost. Existing methods generally use oriented bounding boxes as anchors to handle rotated objects and thus introduce excessive hyper-parameters such as box sizes, aspect ratios, and orientation angles.
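The IoU sensitivity described above is easy to reproduce numerically. The following self-contained sketch (our own helper code using Sutherland-Hodgman polygon clipping, not the paper's implementation) computes the IoU of a 100×10 box with its 5°-rotated copy:

```python
import math

def rect_corners(cx, cy, w, h, theta):
    """Corners of a w-by-h rectangle centered at (cx, cy), rotated by theta (counterclockwise order)."""
    c, s = math.cos(theta), math.sin(theta)
    pts = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in pts]

def clip(subject, a, b):
    """Keep the part of convex polygon `subject` on the left of the directed edge a->b."""
    def side(p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    out = []
    for i, cur in enumerate(subject):
        prev = subject[i - 1]
        cs, ps = side(cur), side(prev)
        if ps >= 0:
            out.append(prev)
        if (ps >= 0) != (cs >= 0):          # edge crosses the clip line
            t = ps / (ps - cs)
            out.append((prev[0] + t * (cur[0] - prev[0]),
                        prev[1] + t * (cur[1] - prev[1])))
    return out

def area(poly):
    """Shoelace formula."""
    return 0.5 * abs(sum(poly[i - 1][0] * poly[i][1] - poly[i][0] * poly[i - 1][1]
                         for i in range(len(poly))))

def rotated_iou(box1, box2):
    """IoU of two rotated boxes given as (cx, cy, w, h, theta)."""
    p1, p2 = rect_corners(*box1), rect_corners(*box2)
    inter = p1
    for i in range(4):                      # clip p1 against each edge of p2
        if not inter:
            break
        inter = clip(inter, p2[i - 1], p2[i])
    ai = area(inter) if len(inter) >= 3 else 0.0
    return ai / (area(p1) + area(p2) - ai)

iou = rotated_iou((0, 0, 100, 10, 0.0), (0, 0, 100, 10, math.radians(5)))
```

For the 10:1 box, `iou` comes out near the 0.63 quoted above, while the same 5° rotation applied to a square box barely changes its IoU.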
Note that these hyper-parameters have to be manually tuned for novel scenarios, which limits the generalization capability of these methods. Predefined anchor-based methods also require a large number of anchor boxes. For example, R2PN [10] uses 6 different orientations in its rotated anchor boxes, giving a total of 24 anchors at each pixel of its feature maps. A large number of anchor boxes introduces excessive computational cost when calculating IoU scores and executing the non-maximum suppression (NMS) algorithm.

Fig. 2: The overall framework of our arbitrary-oriented ship detection method. The dotted lines in the graph represent the same position on the feature maps. Feature maps are first generated by a fully convolutional backbone network and the orientation-invariant model (OIM). Afterward, the peaks of the center-point feature map are selected as center points. Then, the center-point offsets, object sizes, and head regression locations are regressed on the corresponding feature maps at the position of each center point. The potential head points are collected by extracting peaks with confidence scores larger than 0.1 on the head feature map. The final head location is obtained by assigning each regressed location to its nearest potential head point and then adding the head offset.

Under-exploitation of prior information of ships. Most previous ship detectors adopt the rotation detection algorithms commonly used for remote sensing and scene text, while overlooking the unique characteristics of ships in remote sensing images. That is, the position of the bow is relatively distinctive, and, once the ground sample distance (GSD) of the images is normalized, a given category of ship has a relatively fixed size range. The size of the ship and the position of the ship's head are thus important clues for detection. However, this prior information has been under-exploited by previous ship detection algorithms.
These methods only model ships as rotated rectangles for parameter regression and do not use the distinctive bow point to determine the ship's direction. Due to the limited effective receptive field of the network, target classification mainly relies on appearance information near the center point. Size regression and target classification are produced independently by two parallel branches, so the size of the target cannot effectively assist target classification.

Motivated by the anchor-free detector CenterNet [11] for natural scenes, in this paper we propose a one-stage, anchor-free, and NMS-free method for arbitrary-oriented ship detection in remote sensing images. We formulate ships as rotated boxes with a head point representing the direction. Specifically, orientation-invariant feature maps are first produced by an orientation-invariant model. Afterward, the peaks of the center feature map are selected as center points. Then, the offsets, object sizes, and head positions are regressed on the corresponding feature maps at each center point. Finally, the target size is used to adjust the classification score. The architecture of our CHPDet is shown in Fig. 2. The major contributions of this paper are summarized as follows.

• We develop a one-stage, anchor-free ship detector, CHPDet. Specifically, we represent ships using rotated boxes with a head point. This representation addresses the problem of angle periodicity by transforming the angle regression task into a keypoint estimation task. Moreover, our proposed method expands the angle range to [0°, 360°) and distinguishes between bow and stern.

• We design a rotated Gaussian kernel to map the annotations into target heatmaps, which better adapts to the characteristics of rotated targets.

• We propose a module to refine the detection results based on prior information.
Moreover, we propose a new dataset, FGSD2021, for multi-class arbitrary-oriented ship detection in remote sensing images at a fixed GSD. This dataset facilitates the use of prior knowledge of ship size and promotes practical applications of remote sensing ship detection.

• We introduce an orientation-invariant model (OIM) to generate orientation-invariant feature maps.

Extensive experimental results on three datasets show that our CHPDet achieves state-of-the-art performance in both speed and accuracy, as shown in Fig. 3.

The rest of this paper is organized as follows. In Section II, we briefly review the related work. In Section III, we introduce the proposed method in detail. Experimental results and analyses are presented in Section IV. Finally, we conclude this paper in Section V.

II. RELATED WORK

In this section, we briefly review the major works in horizontal object detection, rotated object detection, and remote sensing ship detection.

A. Horizontal Object Detection

In recent years, deep convolutional neural networks (DCNNs) have been developed as a powerful tool for feature representation learning [12], [13] and have achieved significant improvements in horizontal object detection [14]. Existing object detection methods generally represent objects as horizontal boxes, as shown in Fig. 1(a). According to their detection paradigms, deep learning-based object detection methods can be roughly divided into two-stage, one-stage, and multi-stage detectors. Two-stage detectors (e.g., RCNN [15], Fast-RCNN [16], Faster-RCNN [17], Mask-RCNN [18], and R-FCN [19]) use a pre-processing step to generate object proposals and extract features from the generated proposals to predict the category. In contrast, one-stage detectors (e.g., YOLO [20], [21], SSD [22], and RetinaNet [23]) skip the proposal step and directly perform categorical prediction on the feature maps.
Multi-stage detectors (e.g., Cascade RCNN [24] and HTC [25]) perform multiple rounds of classification and regression, resulting in notable accuracy improvements. In summary, two-stage and multi-stage detectors generally achieve better performance, but one-stage detectors are usually more time-efficient.

Compared to the above-mentioned anchor-based methods, anchor-free methods [26], [11] avoid the need for anchors and have become a new research focus in recent years. For example, CornerNet [26] detects objects at each position of the feature map using the top-left and bottom-right corner points. CenterNet [11] models an object as a center point, performs keypoint estimation to find center points, and regresses the object size. FCOS [27] predicts four distances, a center-ness score, and a classification score at each position of the feature map to detect objects. The above-mentioned approaches achieve significant improvements in general object detection tasks. However, these detectors can only generate horizontal bounding boxes, which limits their applicability.

B. Arbitrary-Oriented Object Detection

Arbitrary-oriented detectors are widely used for remote sensing and scene text images. Most of these detectors use rotated bounding boxes or quadrangles to represent multi-oriented objects, as shown in Fig. 1(b)(c). In RRPN [28], a rotated region proposal network was proposed to improve the quality of region proposals. In R2CNN [29], a horizontal region of interest (RoI) was generated to simultaneously predict horizontal and rotated boxes. RoI-Trans [30] transforms a horizontal RoI into a rotated RoI (RRoI). In SCRDet [31] and RSDet [9], novel losses were employed to address the boundary problem of oriented bounding boxes. In R3Det [32], a refined single-stage rotated detector was proposed to deal with feature misalignment. In CSL [33] and DCL [34], angle regression was converted into a classification task to handle the boundary problem.
In S2A-Net [35], a fully convolutional layer was proposed to align features for better performance. The aforementioned methods need a set of anchor boxes for classification and regression. These anchors introduce excessive hyper-parameters, which limit the generalization capability and introduce excessive computational cost. Recently, several anchor-free arbitrary-oriented detectors (e.g., O2D-Net [36] and X-LineNet [37]) have been proposed to detect oriented objects by predicting a pair of intersecting lines. However, the features used in these methods are not rotation-invariant, and their performance still lags behind that of anchor-based detectors.

C. Ship Detection in Remote Sensing Images

Different from other objects in remote sensing images, ships are elongated with a large aspect ratio. Generally, the outline of a ship is an approximate pentagon with two parallel long sides, and the position of the bow is relatively distinctive. Consequently, once the GSD of the images is normalized, a given category of ship has a relatively fixed size range. Traditional ship detectors generally use a coarse-to-fine framework with two stages: ship candidate generation and false alarm elimination. For example, Shi et al. [38] first generated ship candidates by treating ships as anomalies and then discriminated these candidates using the AdaBoost approach [39]. Yang et al. [40] proposed a saliency-based method to generate candidate regions and used a support vector machine (SVM) to further classify these candidates. Liu et al. [41], [42] introduced an RRoI pooling layer to extract features of rotated regions. In R2PN [10], a rotated region proposal network was proposed to generate arbitrarily oriented proposals with ship orientation information. The above detectors are also based on a set of anchors and cannot fully exploit the prior information of ships.

III. PROPOSED METHOD

In this section, the architecture of CHPDet is introduced in detail. Our method consists of five modules: an arbitrary-oriented ship representation module, a rotated Gaussian kernel module, a head point estimation module, an orientation-invariant module, and a probability refinement module. All ships are represented by rotated boxes with a head point. We first detect the centers of ships by extracting the peaks in heatmaps generated by rotated Gaussian kernels. Then, we locate the head points in two steps (direct regression from image features at the center location, and estimation from head heatmaps). We also extract orientation-invariant feature maps using the orientation-invariant model (OIM) to increase the consistency between targets and their corresponding features. Finally, we refine the detection results based on prior information. The overall framework of CHPDet is shown in Fig. 2.

A. Arbitrary-Oriented Ship Representation

As shown in Fig. 1, the widely used horizontal bounding boxes cannot be directly applied to the arbitrary-oriented ship detection task, since excessive redundant background area is included. Moreover, since arbitrary-oriented ships generally have a large aspect ratio and park densely, the NMS algorithm on horizontal bounding boxes tends to produce missed detections. To this end, many methods represent ships as rotated bounding boxes parameterized by the 5-tuple (c_x, c_y, w, h, θ), where (c_x, c_y) is the center of the rotated bounding box, w and h are the width and length of the ship, respectively, and the angle θ ∈ [0°, 180°) is the orientation of the long side with respect to the y-axis. This representation can suffer from regression inconsistency near the boundary case. Recently, some detectors represent objects by four clockwise vertices, parameterized by the 8-tuple (x_a, y_a, x_b, y_b, x_c, y_c, x_d, y_d).
This representation can also introduce regression inconsistency due to the ordering of the four corner points. To avoid the aforementioned inconsistency problems, we represent ships as two points and their corresponding size, parameterized by the 6-tuple (x_c, y_c, w, h, x_h, y_h), where (x_c, y_c) is the center of the rotated bounding box, w and h are the width and length of the ship, and (x_h, y_h) is the head point of the ship. The direction of the ship is determined by connecting the center to the bow. This representation converts discontinuous angle regression into continuous keypoint estimation. It also extends the angle range to [0°, 360°) and enables the network to distinguish between bow and stern.

B. Rotated Gaussian Kernel

Our detector uses center heatmaps to classify and locate ships simultaneously. To adapt to the characteristics of rotated targets, we use a rotated Gaussian kernel (see Fig. 4) to map the annotations to target heatmaps in the training stage. Specifically, the m-th annotated box (x, y, w, h, θ) of category c_m is linearly mapped to the feature map scale. Then, a 2D Gaussian distribution N(m, Σ) is adopted to produce the target heatmap C ∈ R^{(W/s) × (H/s) × C}. Here, m = (x, y) is the mean of the rotated Gaussian distribution, whose probability density is determined by the covariance matrix of Eq. (1):

\Sigma^{1/2} = R S R^{\top}
= \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
  \begin{pmatrix} \sigma_x & 0 \\ 0 & \sigma_y \end{pmatrix}
  \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}
= \begin{pmatrix} \sigma_x\cos^2\theta + \sigma_y\sin^2\theta & (\sigma_x-\sigma_y)\cos\theta\sin\theta \\ (\sigma_x-\sigma_y)\cos\theta\sin\theta & \sigma_x\sin^2\theta + \sigma_y\cos^2\theta \end{pmatrix},   (1)

where s is the downsampling stride, \sigma_x = \alpha\,\sigma_p w / \sqrt{wh}, \sigma_y = \alpha\,\sigma_p h / \sqrt{wh}, and \sigma_p is a size-adaptive standard deviation [11]. α is set to 1.2 in our implementation and is not carefully tuned. Fig. 4 is a schematic diagram of mapping a rotated bounding box to a rotated Gaussian distribution.
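As a concrete illustration, the covariance construction of Eq. (1) and the resulting heatmap can be sketched in pure Python. The grid size, α, and σ_p values below are illustrative only, not the paper's implementation:

```python
import math

def rotated_gaussian_heatmap(cx, cy, w, h, theta, shape, alpha=1.2, sigma_p=4.0):
    """Render exp(-0.5 * d^T Sigma^{-1} d) on a grid, with Sigma^{1/2} = R S R^T as in Eq. (1)."""
    sx = alpha * sigma_p * w / math.sqrt(w * h)
    sy = alpha * sigma_p * h / math.sqrt(w * h)
    c, s = math.cos(theta), math.sin(theta)
    # Entries of Sigma^{1/2}, expanded exactly as in Eq. (1)
    a = sx * c * c + sy * s * s
    b = (sx - sy) * c * s
    d = sx * s * s + sy * c * c
    # Sigma = Sigma^{1/2} @ Sigma^{1/2} (the square root is symmetric)
    A, B, D = a * a + b * b, b * (a + d), b * b + d * d
    det = A * D - B * B
    iA, iB, iD = D / det, -B / det, A / det       # 2x2 inverse of Sigma
    H, W = shape
    heat = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            dx, dy = x - cx, y - cy
            q = iA * dx * dx + 2 * iB * dx * dy + iD * dy * dy
            heat[y][x] = math.exp(-0.5 * q)
    return heat

# A 20x6 box rotated by 30 degrees, rendered on a 32x32 grid
heat = rotated_gaussian_heatmap(16, 16, 20, 6, math.radians(30), (32, 32))
```

The peak value is 1.0 at the box center, and the level sets are ellipses aligned with the box orientation, unlike the axis-aligned kernel of CenterNet.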
If two Gaussian kernels of the same category overlap, we take the maximum value at each pixel of the feature map. Ĉ ∈ R^{(W/s) × (H/s) × C} is the prediction produced by the backbone. Fig. 5(a) shows a visualization of the center heatmaps. We extract locations whose values are larger than or equal to those of their 8-connected neighbors as detected center points. The value of each peak is used as a confidence measure, and its coordinates in the feature map serve as an index to retrieve the other attributes. The accurate location of the center point on the feature map is therefore the key part of the whole detection.

The peaks of the Gaussian kernels, i.e., the centers of the rotated boxes, are treated as positive samples while all other pixels are treated as negative samples, which causes a huge imbalance between positive and negative samples. To handle this imbalance, we use the variant focal loss of [11], [23]:

L_c = \frac{-1}{N}\sum_{xyc}\begin{cases}(1-\hat{C}_{xyc})^{\gamma}\log(\hat{C}_{xyc}), & \text{if } C_{xyc}=1\\ (1-C_{xyc})^{\beta}(\hat{C}_{xyc})^{\gamma}\log(1-\hat{C}_{xyc}), & \text{otherwise}\end{cases}   (2)

where γ and β are the hyper-parameters of the focal loss and N is the number of objects in image I, used to normalize all positive focal loss instances to 1. We set γ = 2 and β = 4 in our experiments, following [26].

To reduce the quantization error caused by the output stride, we produce local offset feature maps O ∈ R^{(W/S) × (H/S) × 2}. Suppose the set of detected center points is ĉ = {(x̂_k, ŷ_k)}_{k=1}^{n}; the refined center locations are then center = {(x̂_k + δx̂_k, ŷ_k + δŷ_k)}_{k=1}^{n}. Note that all classes share the same offset predictions to reduce computational complexity. The offset is optimized with an L1 loss, and this supervision is applied at all center points:

L_{co} = \frac{1}{N}\sum_{k=1}^{N}\left|O_{\hat{c}_k} - \left(\frac{center_k}{S} - \hat{c}_k\right)\right|.   (3)

The regression of object sizes is handled similarly to the local offsets.

C. Head Point Estimation

We perform two steps for better head point estimation.
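A minimal pure-Python sketch of the variant focal loss in Eq. (2) above, for a single heatmap channel stored as nested lists (the tiny example values are illustrative only):

```python
import math

def variant_focal_loss(pred, target, gamma=2.0, beta=4.0):
    """Eq. (2): penalty-reduced pixel-wise focal loss over one heatmap channel."""
    loss, num_pos = 0.0, 0
    for p_row, t_row in zip(pred, target):
        for p, t in zip(p_row, t_row):
            if t == 1.0:                        # peak pixel: positive sample
                loss += (1 - p) ** gamma * math.log(p)
                num_pos += 1
            else:                               # negative, penalty reduced by (1 - t)^beta
                loss += (1 - t) ** beta * p ** gamma * math.log(1 - p)
    return -loss / max(num_pos, 1)

target = [[0.1, 0.6], [0.6, 1.0]]               # Gaussian bump peaking at (1, 1)
good = variant_focal_loss([[0.1, 0.5], [0.5, 0.9]], target)
bad = variant_focal_loss([[0.5, 0.5], [0.5, 0.5]], target)
```

A prediction closer to the target heatmap yields a smaller loss (`good < bad`), and pixels near a peak are penalized less than far-away negatives thanks to the (1 − C)^β factor.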
1) Regression-based head point estimation: Let head_k = (h_x, h_y) be the k-th head point. We directly regress the offsets (Δx̂_k, Δŷ_k) on a feature map R ∈ R^{(W/S) × (H/S) × 2} at each predicted center point c_k ∈ center. The regression-based head points are {(x̂_k + Δx̂_k, ŷ_k + Δŷ_k)}_{k=1}^{n}, and an L1 loss is used to optimize the head regression feature maps:

L_{hr} = \frac{1}{N}\sum_{k=1}^{N}\left|R_{c_k} - h_k\right|.   (4)

2) Bottom-up head point estimation: We use standard bottom-up multi-person pose estimation [43] to refine the head points. A target head heatmap H is generated in the same way as the center heatmaps, and E denotes the predicted head heatmap:

L_{he} = \frac{-1}{N}\sum_{xy}\begin{cases}(1-E_{xy})^{\gamma}\log(E_{xy}), & \text{if } H_{xy}=1\\ (1-H_{xy})^{\beta}(E_{xy})^{\gamma}\log(1-E_{xy}), & \text{otherwise}\end{cases}   (5)

L_{ho} = \frac{1}{N}\sum_{k=1}^{N}\left|HO_{c_k} - \left(\frac{head_k}{S} - \widehat{head}_k\right)\right|.   (6)

The bottom-up head point estimation works in the same way as the center point detection. Note that in center point detection each category has its own center heatmap, while in head point estimation all categories share one head heatmap. We extract all peak locations ĥead = {l_i} with confidence E_{x,y} > 0.1 as the set of potential head points, and refine the potential head point locations by adding the offset (ξ_x, ξ_y). Fig. 5(b) visualizes the head point heatmap.

We introduce a set of weighting factors to balance the contributions of these parts, setting λ_o = 1, λ_s = 0.1, λ_hr = 1, λ_he = 1, and λ_ho = 1 in all our experiments. We set λ_s = 0.1 since the scale of the size loss ranges from 0 to the output size H/S. The overall training loss is

L = L_c + λ_o L_o + λ_s L_s + λ_{hr} L_{hr} + λ_{he} L_{he} + λ_{ho} L_{ho}.   (7)

In the testing phase, we first extract the center points on the output center heatmaps C for each category. We use a 3 × 3 max-pooling layer to find the peak points and select the top 100 peaks as potential center points. Each center point location is represented as an integer coordinate ĉ = (x̂, ŷ).
We take out the offsets (δx̂, δŷ), size (w, h), and head point regression (Δx̂, Δŷ) on the corresponding feature maps at the location of each center point. We also pick all head peak points on the output head heatmap E with E_{x,y} > 0.1, and then assign each regressed location head_r = (x̂ + Δx̂, ŷ + Δŷ) to its closest potential head point, arg min_{l ∈ ĥead} ||l − head_r||_2, as the head point (ĥ_x, ĥ_y); the head point offset (ξ_x, ξ_y) is then added to refine the estimate. Finally, we obtain the rotated boxes (x̂ + δx̂, ŷ + δŷ, w, h, ĥ_x + ξ_x, ĥ_y + ξ_y).

D. Orientation-Invariant Model

Let I ∈ R^{W × H × 3} be an input image with width W and height H. The feature map generated by the backbone is F ∈ R^{(W/S) × (H/S) × K}, where S is the output stride and K is the number of feature channels. In this paper, we set the default stride to S = 4 and the feature channels to K = 64. The features generated by these backbones are not rotation-invariant [44], while ships in remote sensing images appear at arbitrary orientations. To alleviate this inconsistency, we introduce an orientation-invariant model (OIM) consisting of two modules: active rotating filters (ARF) and oriented response pooling (ORPooling) [44]. We first use ARF to explicitly encode the orientation information. An ARF is a k × k × N filter that actively rotates N − 1 times during convolution to produce a feature map with N orientation channels. For a feature map M and an ARF F, the i-th response map I^{(i)} is obtained with F_{θ_i}, the clockwise rotation of F by θ_i (N is set to 8 by default), and can be computed as

I^{(i)} = \sum_{n=0}^{N-1} F^{(n)}_{\theta_i} \cdot M^{(n)}, \quad \theta_i = i\,\frac{2\pi}{N}, \quad i = 0, \ldots, N-1,   (8)

where F_{θ_i} is the clockwise θ_i-rotated version of F, and F^{(n)}_{θ_i} and M^{(n)} are the n-th orientation channels of F_{θ_i} and M, respectively. The ARF captures the image response in N directions and explicitly encodes location and orientation into a single feature map with N orientation channels.
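To make the channel arithmetic of Eq. (8) concrete, here is a toy pure-Python sketch for 1×1 (pointwise) filters at a single spatial location, where rotating the filter reduces to a cyclic shift of its N orientation channels. This is a simplifying assumption for illustration; the real ARF also rotates the k × k spatial kernel:

```python
def arf_response(filter_ch, feature_ch):
    """Eq. (8) for a 1x1 ARF at one location: I^(i) = sum_n F_{theta_i}^(n) * M^(n),
    with filter rotation modeled as a cyclic shift of the N orientation channels."""
    N = len(filter_ch)
    out = []
    for i in range(N):
        rotated = filter_ch[-i:] + filter_ch[:-i]   # shift by i channels
        out.append(sum(f * m for f, m in zip(rotated, feature_ch)))
    return out

# A feature responding in orientation channel 1, matched by a filter tuned to channel 0
feat = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
filt = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
resp = arf_response(filt, feat)
invariant = max(resp)   # the subsequent orientation pooling keeps the max channel
```

Cyclically shifting `feat` (i.e., rotating the input) permutes `resp` but leaves `max(resp)` unchanged, which is the rotation invariance the OIM exploits.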
To reduce computational complexity, we use a combination of small 3 × 3 filters and 8 orientation channels in our experiments. Feature maps captured by ARF are not rotation-invariant, since the orientation information is encoded rather than discarded. ORPooling is then used to extract orientation-invariant features. It is simply achieved by choosing the orientation channel with the strongest response as the output feature I ∈ R^{(W/s) × (H/s) × K}. That is,

I = max{I^{(n)}}, \quad 0 ≤ n ≤ N − 1.   (9)

Since ORPooling extracts the maximum response over all ARF orientation channels, the features of a target at different orientations are identical at a given location. From this rotation-invariant feature, the six kinds of output feature maps are each produced by a convolution layer. Moreover, OIM introduces only one convolution layer with a small number of parameters, which has little effect on training and inference speed. The rotation-invariant feature is important for detecting arbitrarily oriented objects because it enhances feature consistency: our detector extracts local maxima as detected center points, so at the object center the rotation-invariant features of arbitrarily oriented objects are identical, which increases the generalization ability of the network. Otherwise, more parameters would be needed to encode the orientation information.

E. Refining Probability According to Size

By normalizing the GSD of remote sensing images, objects of the same physical size on the ground have the same size in all images. The size of a target is an important identification clue, because a given type of target in remote sensing images usually has a relatively fixed size range. We therefore propose an approach to adjust the confidence score of targets according to the prior knowledge of ship size. As shown in Fig. 5(d), suppose the category of a detected box is a, its original confidence score is s_a, the length of the detected ship is l, and the length of category a obeys a normal distribution with mean L_a and standard deviation δ_a. Then the probability that the target belongs to category a is

p_a = \frac{2}{\delta_a\sqrt{2\pi}} \int_{-\infty}^{L_a - |l - L_a|} \exp\left(-\frac{(x - L_a)^2}{2\delta_a^2}\right) dx.   (10)

To reduce hyper-parameters, we assume that the standard deviation is proportional to the mean, δ_a = L_a × λ, for all categories of ships. We multiply the two probabilities to obtain the final detection confidence, p̃_a = p_a × s_a.

IV. EXPERIMENTS

We evaluate our method on our FGSD2021 dataset and on the public HRSC2016 [45] and UCAS-AOD [46] datasets. In this section, we first introduce the datasets and implementation details, then perform ablation studies and compare our network with several state-of-the-art methods.

A. Datasets

1) HRSC2016: The HRSC2016 dataset [45] is a challenging dataset for ship detection in remote sensing images, collected from six famous harbors on Google Earth. The training, validation, and test sets include 436 images with 1207 samples, 181 images with 541 samples, and 444 images with 1228 samples, respectively. The image sizes range from 300 × 300 to 1500 × 900. The dataset defines three levels of tasks (L1, L2, and L3), containing 1, 4, and 19 classes, respectively. In addition, the head points of ships are annotated. Following [28], [35], [32], we evaluate our method on task L1. We use the training and validation sets in the training phase and evaluate detection performance on the test set.

2) FGSD2021: The existing ship dataset HRSC2016 has the following shortcomings. First, the GSD is unknown, so the size of objects in an image cannot be obtained from their actual size on the ground. Second, the images are very small, which is inconsistent with practical remote sensing detection tasks.
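As an aside, the size-based rescoring of Eq. (10) reduces to a closed form via the error function; a small sketch follows, where the λ value and ship lengths are illustrative, not the paper's settings:

```python
import math

def size_refined_score(score, length, mean_len, lam=0.2):
    """Eq. (10): two-sided tail probability of N(mean_len, (lam*mean_len)^2)
    at the measured length, multiplied into the detection confidence."""
    sigma = lam * mean_len                            # delta_a = L_a * lambda
    z = -abs(length - mean_len) / sigma
    p = 1.0 + math.erf(z / math.sqrt(2))              # equals 2 * Phi(z)
    return p * score

# A 310 m detection scored 0.9 for a class whose mean length is 300 m...
refined_plausible = size_refined_score(0.9, 310, 300)
# ...versus the same score for a class whose mean length is only 120 m
refined_implausible = size_refined_score(0.9, 310, 120)
```

A detection whose length is close to the class mean keeps most of its confidence, while an implausibly sized one is suppressed almost to zero.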
To solve these problems, we propose a new ship detection dataset, FGSD2021, with a fixed GSD. Our dataset was developed by collecting high-resolution satellite images from publicly available Google Earth, covering famous ports such as San Diego, Kitsap-Bremerton, Norfolk, Pearl Harbor, and Yokosuka. We usually obtained multiple images of the same port taken on different days, and some images come from the HRSC2016 dataset. We collected 636 images with a normalized GSD of 1 meter per pixel. The images in our dataset are very large; typically, one image covers a whole port. Image widths range from 157 to 7789 pixels (average 1202), and heights range from 224 to 6506 pixels (average 1205). FGSD2021 is divided into 424 training images and 212 test images. The training set is used in the training phase, and the detection performance of the proposed method is evaluated on the test set. FGSD2021 includes 5274 labeled targets over 20 annotated categories. We used the labelImg2 tool to label the ships; the angle range is [0°, 360°), and the main direction is the direction of the bow. Some examples of annotated patches are shown in Fig. 7.

3) UCAS-AOD: The UCAS-AOD dataset [46] contains 1510 aerial images of about 659 × 1280 pixels and 14596 instances of two categories: planes and cars. The angle range of targets in this dataset is [0°, 180°), so we manually annotated the head direction. We randomly sampled 1132 images for training and 378 images for testing. All images were cropped into patches of size 672 × 672.

B. Implementation Details

Our network was implemented in PyTorch on a PC with an Intel Core i7-8700K CPU and an NVIDIA RTX 2080Ti GPU. We used the Adam optimizer [47] with an initial learning rate of 2.5 × 10^{-4}. We trained our network for 140 epochs, dropping the learning rate at epoch 90.
During the training phase, we used random rotation, random flipping, and color jittering for data augmentation. To preserve the GSD of the images, we cropped all images into 1024 × 1024 slices with a stride of 820 and resized them to 512 × 512. We merged the detection results of all slices to restore the detections on the original image. Finally, we applied rotated non-maximum suppression (RNMS) with an IoU threshold of 0.15 to discard repetitive detections. The speed of the proposed network was measured on a single NVIDIA RTX 2080Ti GPU.

Several different backbones (e.g., deep layer aggregation (DLA) [48] and the hourglass network (Hourglass) [49]) can be used to extract features from images. We followed CenterNet [11] in enhancing DLA by replacing ordinary convolutions with deformable convolutions and adding a 256-channel 3 × 3 convolutional layer before the output head. The hourglass network consists of two sequential hourglass modules, each comprising 5 pairs of down- and up-convolutional networks with skip connections. This network generally yields better keypoint estimation performance [26].

C. Evaluation Metrics

The IoU between oriented boxes is used to score detection results. The mean average precision (mAP) and head direction accuracy are used to evaluate the performance of arbitrary-oriented detectors.

1) IoU: The IoU is the overlapping area divided by the union area of two boxes. We adopted the evaluation approach of DOTA [50] to compute the IoU. If the IoU between a detection box and a ground-truth box is higher than a threshold, the detection is marked as true positive (TP), otherwise as false positive (FP). If a ground-truth box has no matching detection, it is marked as false negative (FN).

2) mAP: Precision and recall are calculated as precision = TP / (TP + FP) and recall = TP / (TP + FN). We first set a series of recall thresholds and then take the corresponding maximum precision for each recall threshold.
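Because the IoU between oriented boxes underlies the TP/FP matching just described, a minimal pure-Python sketch of rotated-box IoU via Sutherland-Hodgman polygon clipping is shown below. This is an illustration only, not the DOTA evaluation code adopted by the paper; all function names are ours.

```python
import math

def box_corners(cx, cy, w, h, angle_deg):
    """Corners of a rotated box (x_c, y_c, w, h, theta), counter-clockwise."""
    a = math.radians(angle_deg)
    ca, sa = math.cos(a), math.sin(a)
    pts = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [(cx + x * ca - y * sa, cy + x * sa + y * ca) for x, y in pts]

def clip(subject, a, b):
    """Keep the part of `subject` on the left of the directed edge a->b."""
    def inside(p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0
    def intersect(p, q):
        # intersection of the infinite line a-b with segment p-q
        den = (a[0] - b[0]) * (p[1] - q[1]) - (a[1] - b[1]) * (p[0] - q[0])
        t = ((a[0] - p[0]) * (p[1] - q[1]) - (a[1] - p[1]) * (p[0] - q[0])) / den
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
    out = []
    for i, p in enumerate(subject):
        q = subject[(i + 1) % len(subject)]
        if inside(q):
            if not inside(p):
                out.append(intersect(p, q))
            out.append(q)
        elif inside(p):
            out.append(intersect(p, q))
    return out

def area(poly):
    """Shoelace formula for a simple polygon."""
    return 0.5 * abs(sum(p[0] * q[1] - q[0] * p[1]
                         for p, q in zip(poly, poly[1:] + poly[:1])))

def rotated_iou(box1, box2):
    """IoU of two rotated boxes given as (cx, cy, w, h, angle_deg)."""
    p1, p2 = box_corners(*box1), box_corners(*box2)
    inter = p1
    for i in range(len(p2)):
        if not inter:
            break
        inter = clip(inter, p2[i], p2[(i + 1) % len(p2)])
    ai = area(inter) if len(inter) >= 3 else 0.0
    return ai / (area(p1) + area(p2) - ai)
```

For instance, two identical boxes give an IoU of 1.0, and a 2 × 2 square against the same square rotated by 45° gives the classic octagon overlap of about 0.707.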
AP is the average of these precisions, and the mean average precision (mAP) is the mean of the APs over all classes. mAP_0.5 through mAP_0.8 are computed under IoU thresholds of 0.5 through 0.8, respectively. The PASCAL VOC2007 metric is used to compute the mAP in all of our experiments.

3) Head direction accuracy: The prediction angle range of previous algorithms is 0°-180°, which cannot distinguish between the bow and stern of a ship. The mAP based on the IoU between two rotated boxes, taken as the only evaluation criterion, cannot reflect the accuracy of the bow direction. To address this, we define bow direction accuracy as an additional metric: the proportion of TPs whose angle differs from the ground truth by less than 10 degrees.

D. Ablation Study

In this subsection, we present ablation experiments to investigate our models.

1) CenterNet as baseline: As an anchor-free detector, CenterNet performs keypoint estimation to find the center point and regresses the object size at each center point position. To carry out arbitrary-oriented ship detection, we add an extra branch to predict the angle as a baseline, named CenterNet-Rbb. CenterNet-Rbb uses DLA34 as the backbone, represents ships as rotated boxes with an angle, and uses the L1 loss to optimize the angle regression feature maps. We set the weighting factor λ_angle = 0.1 to balance the contribution of this term, since the scale of the loss ranges from 0 to 180. As shown in Table I, CenterNet-Rbb achieves an mAP of 70.52%, which demonstrates that our baseline achieves competitive performance.

2) Effectiveness of the head point estimation: When we replace the angle prediction branch with the head point estimation module, the overall mAP improves from 70.52% to 82.96%. This is a significant improvement, which fully demonstrates the effectiveness of the head point estimation approach. The improvement mainly comes from two aspects.
First, the algorithm makes full use of the prior knowledge of the bow point, which improves the accuracy of angle regression. Second, since multi-task learning is performed, bow detection adds supervision information and improves the accuracy of the other tasks. To further verify the promoting effect of head point estimation on center point detection and size regression, we set all angles of the ground truth and the detected boxes to 0°. Compared with CenterNet-Rbb, the mAP of CHPDet rises from 84.4% to 88.0%. This shows that head point estimation is equivalent to multi-task joint training: it gives the network more supervision and improves its performance. Moreover, head point estimation introduces only 3 additional channels of feature maps and 0.7 ms of speed latency.

3) Effectiveness of the rotated Gaussian kernel: Our detector uses the rotated Gaussian kernel to map the annotations to target heatmaps and achieves an improvement of 0.6% over normal Gaussian kernels. This implies that the rotated Gaussian kernel is a better representation for OBBs in aerial images. The rotated Gaussian kernel can adjust its shape and direction according to the shape of the target and reduce the influence of positioning error on the detection results. As shown in Fig. 4, the rotated Gaussian kernel allows the maximum error in the long-axis direction, so during detection the center point may have a large error along the long axis. Because a center point error along the long axis has the least influence on the IoU, the rotated Gaussian kernel reduces the influence of positioning error on the detection results, and vice versa. Note that the rotated Gaussian kernel does not introduce any additional parameters and does not increase training or inference time. Consequently, it is a completely cost-free module.
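A minimal sketch of such a rotated Gaussian target heatmap is given below, assuming the per-axis standard deviations are simply proportional to the box sides; the proportionality constant `sigma_ratio` and the function name are our own illustrative choices, not taken from the paper.

```python
import math

def rotated_gaussian_heatmap(H, W, cx, cy, w, h, angle_deg, sigma_ratio=6.0):
    """Heatmap with a Gaussian whose axes follow the box orientation.

    The sigma along each axis is taken proportional to the box side
    (sigma_ratio is an assumed hyper-parameter, not from the paper).
    """
    a = math.radians(angle_deg)
    ca, sa = math.cos(a), math.sin(a)
    sx, sy = w / sigma_ratio, h / sigma_ratio
    hm = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # rotate the offset from the center into the box frame
            dx, dy = x - cx, y - cy
            u = dx * ca + dy * sa    # coordinate along the long side
            v = -dx * sa + dy * ca   # coordinate along the short side
            hm[y][x] = math.exp(-(u * u / (2 * sx * sx) + v * v / (2 * sy * sy)))
    return hm
```

The kernel peaks at 1.0 on the center point and, for an elongated box, decays more slowly along the long axis than along the short one, which is exactly the tolerance-to-long-axis error described above.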
4) Effectiveness of the orientation-invariant model: We add an orientation-invariant model (OIM) at the end of the backbone and keep the other settings unchanged to validate its effectiveness. As shown in Table I, compared with the standard backbone, the backbone with the OIM improves mAP by about 3 percentage points to 86.61%, while introducing only 2.6 ms of speed latency. To further verify the effectiveness of the OIM structure, we replaced the OIM with two convolution layers; compared with the standard backbone, the two extra convolution layers drop the performance to 82.66%. This proves that the improvement does not come from the increased number of parameters. We argue that standard backbones are not rotation-invariant, so the corresponding features are rotation-sensitive. The OIM increases the consistency between targets and their features; it improves not only the accuracy of angle prediction but also the accuracy of center point detection and size regression.

5) Effectiveness of the refined probability model: In the FGSD2021 dataset, the actual length of each category is known; for example, the length of the Ticonderoga-class cruiser is 172.8 meters. In our network, this prior knowledge of ship length is used to refine the confidence of a detected ship belonging to a certain category. Table I shows the mAP values of the different ablation versions on the test set. The baseline model achieves the lowest mAP; when the prior size information is incorporated, the performance improves. The accuracy improvement on low-resolution images is more obvious, e.g., from 86.61% to 87.91%, an increase of 1.3% in mAP. This demonstrates that the prior size information improves classification accuracy. We set a variance coefficient to adjust the influence of size on the probability.
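The length-based refinement of Eq. (10) can be sketched in a few lines, using the identity that the two-sided tail probability of N(l_a, δ_a²) equals erfc(|l − l_a| / (δ_a √2)). The class mean lengths and function names below are illustrative only (the Ticonderoga value appears in the text; the other entry is a placeholder).

```python
import math

# Assumed class mean lengths in meters (illustrative; only the
# Ticonderoga value, 172.8 m, is stated in the paper).
MEAN_LENGTH = {"Ticonderoga-class": 172.8, "Arleigh-Burke-class": 155.0}

def refine_score(category, detected_length, score, r=0.2):
    """Eq. (10): scale the original confidence by the length prior.

    delta_a = l_a * r, and
    p_a = 2/(delta_a*sqrt(2*pi)) * int_{-inf}^{-|l-l_a|} exp(-x^2/(2*delta_a^2)) dx
        = erfc(|l - l_a| / (delta_a * sqrt(2))).
    """
    l_a = MEAN_LENGTH[category]
    delta_a = l_a * r
    p_a = math.erfc(abs(detected_length - l_a) / (delta_a * math.sqrt(2.0)))
    return p_a * score
```

When the detected length matches the class mean exactly, p_a = 1 and the score is unchanged; the further the length deviates from the prior, the more the confidence is suppressed.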
Consequently, we use the length of each ship type l_a multiplied by a coefficient r as the standard deviation of that type, δ_a = l_a × r. The variance coefficient affects classification accuracy: when the coefficient is large, the probability difference between categories becomes smaller and size has less influence on the category confidence, and vice versa. As can be observed in Table II, when the coefficient is small, size effectively becomes the main information used to classify objects. Accuracy increases gradually as the coefficient increases, and once the coefficient is larger than 0.2 it has little impact on accuracy. When we treat all categories as one category and remove the category influence on the detection results, the mAP is 89.33% and 89.74%, respectively. At the same time, by incorporating the prior information to adjust the classification confidence, the detection accuracy over 20 categories with an input image of size 1024 × 1024 reaches an mAP of 89.28%, which shows that after incorporating the prior information, almost all categories are classified correctly.

Fig. 8: Comparison of the detection results on FGSD2021 with different methods. The first column is the ground truth, and the second to last columns are the results of Retinanet-Rbb [23], ROI-Trans [30], SCRDet [31], S²A-Net [35], and CHPDet (ours), respectively. Different colors of rotated boxes represent different types of ships; the pink point represents the head point.

6) Bow direction accuracy: It can be seen from Table III that the bow direction accuracy of our CHPDet is up to 97.84, 98.14, and 98.39, respectively. This shows that almost all bow directions are predicted correctly. As shown in Fig. 9, the pink dots represent correct head points and the green dots represent wrong head points.
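The bow direction accuracy metric defined in Section IV-C (the proportion of TPs whose predicted angle lies within 10° of the ground truth) can be sketched as follows; the function and argument names are ours, not from any released code.

```python
def bow_direction_accuracy(pred_angles, gt_angles, tp_mask, tol=10.0):
    """Proportion of true positives whose predicted bow direction is within
    `tol` degrees of the ground truth (angles in [0, 360))."""
    hits, total = 0, 0
    for pred, gt, is_tp in zip(pred_angles, gt_angles, tp_mask):
        if not is_tp:
            continue  # only TPs enter the metric
        total += 1
        diff = abs(pred - gt) % 360.0
        diff = min(diff, 360.0 - diff)  # wrap-around, e.g. 359 deg vs 1 deg -> 2 deg
        if diff < tol:
            hits += 1
    return hits / total if total else 0.0
```

Note the wrap-around handling: a prediction of 359° against a ground truth of 1° counts as a 2° error, not 358°, which matters precisely because CHPDet predicts over the full [0°, 360°) range.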
Our detection algorithm detects the bow direction well for all types of ships, including aircraft carriers and amphibious ships. Only for a small number of ships or submarines whose bow and stern look similar from a bird's-eye view may the predicted bow direction be opposite.

E. Comparison with other methods

In this section, we compare our method with other representative ship detectors, including RetinaNet-Rbb [23], ROI-Trans [30]², R²CNN [29], CSL [33], DCL [34], RSDet [9], SCRDet [31]³, and S²A-Net [35]⁴, on three benchmark datasets: FGSD2021, HRSC2016 [45], and UCAS-AOD [46]. To achieve a fair comparison, we used the default settings of the original codes, including the same data augmentation strategy and number of training epochs. When higher-resolution images are used, the accuracy can be improved to 89.29%. This confirms that our method achieves a large superiority in terms of accuracy and speed. To further verify the quality of the predictions, we gradually increase the IoU threshold. As can be seen from Table IV, as the IoU threshold increases, the performance of the other detectors drops significantly, while the decline of our detector is relatively small. When the IoU threshold is increased to 0.8, the mAP of our CHPDet remains at 71.24. This shows that our detector produces higher-quality rotated boxes than the other algorithms.

Fig. 8 shows a visual comparison of the detection results of Retinanet-Rbb [23], ROI-Trans [30], SCRDet [31], S²A-Net [35], and our method. As shown in the first row, all the other methods have misclassifications or false alarms, and S²A-Net [35] has an inaccurate angle prediction, while our method detects the ships precisely. For the densely parked scene in the second row, all the compared detectors miss at least two submarines, while our method is not affected by the dense parking. The last row of Fig. 8 is a harbor with a complex background; note that two ships are not in the water but in a dry dock. ROI-Trans [30] and S²A-Net [35] miss these targets, and SCRDet [31] produces an inaccurate bounding box.
Compared to these four methods, our method better detects ships in complex backgrounds and is more robust in challenging situations. This improvement mainly comes from three aspects. First, the algorithm makes full use of the prior knowledge of the bow point and improves the accuracy of direction regression. Second, since multi-task learning is performed, bow detection adds supervision information and improves the accuracy of the other tasks. Last, the prior knowledge of ship length is used to refine the confidence of a detected ship belonging to a certain category. The use of this prior knowledge introduces significant performance improvements.

2) Results on HRSC2016: The HRSC2016 dataset contains plenty of ships with arbitrary orientations. We evaluate our method on task L1, which contains one class, and report results with the VOC2007 metric. To demonstrate the performance of our detector, we compare it with other state-of-the-art methods, i.e., ReDet [51], Oriented R-CNN [52], and Oriented RepPoints [59]. The overall comparison is reported in Table V. Our method achieves the best performance among all compared methods, with an accuracy of 90.55%. To further show the performance of CHPDet, detection results are visualized in Fig. 10. As shown in the first two columns, densely parked ships are detected well. In the last two columns, there is a lot of background around the ships, which is a huge challenge for detectors; the results indicate that our proposed method avoids false alarms in complex backgrounds.

3) Results on UCAS-AOD: The UCAS-AOD dataset contains a large number of cars and planes, which are often overwhelmed by complex backgrounds in aerial images. For a fair comparison, we only report results under the VOC2007 metric. Table VI compares our method with recent methods on the UCAS-AOD dataset. Our proposed method achieves the best performance, with an mAP of 90.00%.
CHPDet uses a larger output resolution (output stride of 4) than traditional object detectors (output stride of 8) and represents ships by their center and head points, so it can capture abundant information about small objects. Fig. 11 gives some example detection results on the UCAS-AOD dataset. We find that CHPDet performs well in a variety of challenging scenes, which demonstrates the generalization capability of the detector.

V. CONCLUSION

Our proposed approach converts discontinuous angle regression into continuous keypoint estimation by formulating ships as rotated boxes with a head point representing the direction. This design can incorporate the prior knowledge of the bow point, which not only improves detection performance but also expands the range of predicted angles to [0°, 360°), allowing our method to distinguish between bow and stern. CHPDet has a simple structure: it has only one positive sample per annotation, simply extracts local peaks in the keypoint heatmap, and does not need non-maximum suppression (NMS). This design ensures high time efficiency. The prior knowledge of ship length is also incorporated to refine the confidence of a detected ship belonging to a certain category. Although our method achieves encouraging results on ship detection in remote sensing images, it cannot be directly applied to general aerial object detection datasets such as DOTA [50], because CHPDet needs more precise annotations that mark the direction of the target head over the full 360° range. CHPDet is several times faster than most detectors at inference, but it suffers from a long training time; in future work, we will address this issue by encoding more training samples from annotated boxes.

In this paper, we proposed a one-stage anchor-free detection framework to detect arbitrary-oriented ships in remote sensing images by making full use of ship priors.
Our method detects ships by extracting their centers and heads and regressing the size of each ship at its center point with rotation-invariant features, and we refine the detection results based on prior information. CHPDet avoids the complex anchor design and computation of anchor-based methods and can accurately predict angles over the large range [0°, 360°). Experimental results demonstrate that our method achieves better accuracy and efficiency compared with other ship detectors.

This work was partially supported in part by the National Natural Science Foundation of China (Nos. 61903373, 61401474, 61921001). Feng Zhang, Xueying Wang, Shilin Zhou, Yingqian Wang, and Yi Hou are with the College of Electronic Science and Technology, National University of Defense Technology (NUDT), P. R. China. Emails: {zhangfeng01, wangxueying, slzhou, wangyingqian16, yihou}@nudt.edu.cn. (Corresponding author: Xueying Wang)

Fig. 1: Four different representations of an arbitrary-oriented ship and the disadvantage of the angle regression scheme. (a) Horizontal box parameterized by the 4-tuple (x_min, y_min, x_max, y_max). (b) Rotated box with angle, parameterized by the 5-tuple (x_c, y_c, w, h, θ). (c) Rotated box with vertices (a, b, c, d), parameterized by the 8-tuple (x_a, y_a, x_b, y_b, x_c, y_c, x_d, y_d). (d) Rotated box with head point, parameterized by the 6-tuple (x_c, y_c, w, h, x_h, y_h). (e) A small angle disturbance causes a large IoU decrease. (f) The angle is discontinuous when it reaches its range boundary.

Fig. 3: Speed vs. accuracy on our proposed FGSD2021 dataset.

Fig. 5: A visualization of (a) the center heatmap and (b) the head heatmap. In the center and head heatmaps, different colors represent different categories.
For the set of detected center points, each center point location is given by integer coordinates ĉ_k = (x̂_i, ŷ_i) on the feature map C. For each predicted center point ĉ_k, the value f_k = (δx̂_k, δŷ_k) on the offset feature maps gives the offset of ĉ_k, which determines the final center point location. The head maps of size S × 2 produced by the backbone are trained with a variant focal loss and an L1 loss. The line connecting the center point and the head point determines the orientation of targets.

Fig. 6: A visualization of the ship probability density map. l_a denotes the mean length of category a and l the length of the detected ship; the red area is the probability that the target belongs to category a.

Fig. 7: Example images from the proposed FGSD2021 dataset. 20 categories are chosen and annotated, including Aircraft carriers, Wasp-class, Tarawa-class, Austin-class, Whidbey-Island-class, San-Antonio-class, Newport-class, Ticonderoga-class, Arleigh-Burke-class, Perry-class, Lewis and Clark-class, Supply-class, Henry J. Kaiser-class, Bob Hope-class, Mercy-class, Freedom-class, Independence-class, Avenger-class, submarine, and others.

TABLE III: Detection accuracy on different types of ships and overall performance compared with state-of-the-art methods on FGSD. The short names for categories are defined as (abbreviation - full name): Air - Aircraft carriers, Was - Wasp class, Tar - Tarawa class, Aus - Austin class, Whi - Whidbey Island class, San - San Antonio class, New - Newport class, Tic - Ticonderoga class, Bur - Arleigh Burke class, Per - Perry class, Lew - Lewis and Clark class, Sup - Supply class, Kai - Henry J. Kaiser class, Hop - Bob Hope class, Mer - Mercy class, Fre - Freedom class, Ind - Independence class, Ave - Avenger class, Sub - Submarine, Oth - Other. CHPDet† means CHPDet trained and tested with 1024 × 1024 image size.

Fig. 9: Some bow direction detection results of CHPDet.
The pink dots represent correct head points and the green dots represent wrong head points.

Fig. 10: Sample object detection results of our proposed CHPDet on the HRSC2016 dataset.

Fig. 11: Sample object detection results of our proposed CHPDet on the UCAS-AOD dataset.

arXiv:2101.11189v3 [cs.CV] 13 Oct 2021

(Architecture diagram: Input Image → Backbone → OIM (ARF, ORPooling over rotated receptive fields) → output heads for center position (H/s × W/s × C), center offset, object size, head point regression, head point estimation, and head offset.)

TABLE I: Results achieved on FGSD2021 with different ablation versions. 'Baseline' represents adding a branch to predict the angle based on CenterNet. 'Head Point' represents replacing the angle prediction branch with the head point estimation module. 'Rotate kernel' represents generating the center heatmap with a rotated kernel during training. 'OIM' represents adding the orientation-invariant model behind the backbone. 'Extra convolution' represents replacing the OIM with two extra convolution layers. 'Refine probability' represents using the prior size information to adjust the confidence score of the detected boxes.

Setting                            | mAP
Baseline (CenterNet-Rbb)           | 70.52
+ Head Point                       | 82.96
+ Rotate kernel                    | 83.56
+ OIM                              | 86.61
Extra convolution (instead of OIM) | 82.66
+ Refine probability               | 87.91

TABLE IV: Detection performance on FGSD2021 at different IoU thresholds and the accuracy of the bow direction.
BDA denotes bow direction accuracy.

Method             | Backbone   | Image Size  | mAP_0.5 | mAP_0.6 | mAP_0.7 | mAP_0.8 | BDA   | FPS
R²CNN [29]         | Resnet50   | 512 × 512   | 78.09   | 75.03   | 64.83   | 36.41   | -     | 10.3
Retinanet-Rbb [23] | Resnet50   | 512 × 512   | 73.49   | 69.17   | 62.82   | 45.00   | -     | 35.6
RoI-Trans [30]     | Resnet50   | 512 × 512   | 83.48   | 82.63   | 80.35   | 65.18   | -     | 19.2
SCRDet [31]        | Resnet50   | 512 × 512   | 75.90   | 70.98   | 61.82   | 35.12   | -     | 9.2
CSL [33]           | Resnet50   | 512 × 512   | 73.73   | 69.71   | 60.25   | 34.93   | -     | 10.4
DCL [34]           | Resnet50   | 512 × 512   | 73.34   | 69.19   | 57.80   | 28.54   | -     | 10.0
R³Det [32]         | Resnet50   | 512 × 512   | 70.47   | 68.32   | 57.17   | 27.44   | -     | 14.0
RSDet [9]          | Resnet50   | 512 × 512   | 73.74   | 69.55   | 61.52   | 35.83   | -     | 15.4
S²A-Net [35]       | Resnet50   | 512 × 512   | 80.19   | 79.58   | 75.65   | 58.82   | -     | 33.1
ReDet [51]         | ReResnet50 | 512 × 512   | 85.44   | 84.65   | 80.24   | 67.94   | -     | 13.8
Oriented R-CNN [52]| Resnet50   | 512 × 512   | 82.54   | 81.32   | 78.53   | 64.87   | -     | 27.4
BBAVectors [53]    | Resnet50   | 512 × 512   | 83.59   | 82.74   | 78.55   | 62.48   | -     | 18.5
DARDet [54]        | Resnet50   | 512 × 512   | 80.31   | 79.62   | 74.77   | 59.21   | -     | 31.9
CenterNet-Rbb [11] | DLA34      | 512 × 512   | 70.52   | 69.34   | 65.52   | 45.33   | -     | 48.5
CHPDet (ours)      | DLA34      | 512 × 512   | 87.91   | 87.15   | 83.69   | 71.24   | 97.84 | 41.7
CHPDet (ours)      | DLA34      | 1024 × 1024 | 89.29   | 88.98   | 86.57   | 73.56   | 98.39 | 15.4

TABLE V: Detection accuracy on the HRSC2016 dataset; 07 means using the VOC2007 evaluation metric.

Method                  | Backbone    | mAP(07)
R²CNN [29]              | Resnet101   | 73.07
RRPN [28]               | Resnet101   | 79.08
R²PN [10]               | VGG16       | 79.6
ROI-Trans [30]          | Resnet101   | 86.20
Gliding Vertex [55]     | Resnet101   | 88.20
BBAVectors [53]         | Resnet101   | 88.6
R³Det [32]              | Resnet101   | 89.26
FPN-CSL [33]            | Resnet101   | 89.62
R³Det-DCL [34]          | Resnet101   | 89.46
DAL [56]                | Resnet101   | 89.77
R³Det-GWD [57]          | Resnet101   | 89.85
RSDet [9]               | ResNet152   | 86.5
FR-Est [58]             | Resnet101   | 89.7
S²A-Net [35]            | Resnet101   | 90.2
Oriented RepPoints [59] | Resnet50    | 90.38
ReDet [51]              | ReResnet50  | 90.46
Oriented R-CNN [52]     | Resnet101   | 90.50
DARDet [54]             | Resnet50    | 90.37
CHPDet (ours)           | DLA34       | 88.81
CHPDet (ours)           | Hourglass104| 90.55

TABLE VI: Detection accuracy on the UCAS-AOD dataset.

Method          | Backbone     | car   | airplane | mAP(07)
YOLOv3 [60]     | Darknet53    | 74.63 | 89.52    | 82.08
RetinaNet [23]  | Resnet101    | 84.64 | 90.51    | 87.57
FR-O [50]       | Resnet101    | 86.87 | 89.86    | 88.36
ROI-Trans [30]  | Resnet101    | 87.99 | 89.90    | 88.95
FPN-CSL [33]    | Resnet101    | 88.09 | 90.38    | 89.23
R³Det-DCL [34]  | Resnet101    | 88.15 | 90.57    | 89.36
DAL [56]        | Resnet101    | 89.25 | 90.49    | 89.87
CHPDet (ours)   | DLA34        | 88.58 | 90.64    | 89.61
CHPDet (ours)   | Hourglass104 | 89.18 | 90.81    | 90.00

1) Results on FGSD2021: We evaluate CHPDet on the FGSD2021 dataset and compare our method with other rotation detection methods. It can be seen from Table III that CHPDet achieves 87.91% mAP at a speed of 41.7 FPS, surpassing the other compared methods. Compared with the general rotation detection methods RoI-Trans [30] and S²A-Net [35], our proposed method achieves remarkable improvements of 4.5% and 7.7% in mAP and of 19.3 and 8.6 in FPS, respectively.

¹ https://github.com/chinakook/labelImg2
² https://github.com/dingjiansw101/AerialDetection/
³ https://github.com/yangxue0827/RotationDetection
⁴ https://github.com/csuhan/s2anet

REFERENCES

[1] S. He, H. Zou, Y. Wang, R. Li, F. Cheng, X. Cao, and M. Li, "Enhancing midlow-resolution ship detection with high-resolution feature distillation," IEEE Geoscience and Remote Sensing Letters, 2021.
[2] B. Li, Y. Guo, J. Yang, L. Wang, Y. Wang, and W. An, "Gated recurrent multiattention network for VHR remote sensing image classification," IEEE Transactions on Geoscience and Remote Sensing, 2021.
[3] Z. Deng, H. Sun, S. Zhou, and J. Zhao, "Learning deep ship detector in SAR images from scratch," IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 6, pp. 4021-4039, 2019.
[4] Z. Deng, H. Sun, S. Zhou, J. Zhao, L. Lei, and H. Zou, "Multi-scale object detection in remote sensing imagery with convolutional neural networks," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 145, pp. 3-22, 2018.
[5] G. Cheng and J. Han, "A survey on object detection in optical remote sensing images," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 117, pp. 11-28, 2016.
[6] X. Sun, P. Wang, C. Wang, Y. Liu, and K. Fu, "PBNet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 173, pp. 50-65, 2021.
[7] Q. He, X. Sun, Z. Yan, and K. Fu, "DABNet: Deformable contextual and boundary-weighted network for cloud detection in remote sensing images," IEEE Transactions on Geoscience and Remote Sensing, 2021.
[8] M. Li, W. Guo, Z. Zhang, W. Yu, and T. Zhang, "Rotated region based fully convolutional network for ship detection," in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2018, pp. 673-676.
[9] W. Qian, X. Yang, S. Peng, Y. Guo, and C. Yan, "Learning modulated loss for rotated object detection," arXiv preprint arXiv:1911.08299, 2019.
[10] Z. Zhang, W. Guo, S. Zhu, and W. Yu, "Toward arbitrary-oriented ship detection with rotated region proposal and discrimination networks," IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 11, pp. 1745-1749, 2018.
[11] X. Zhou, D. Wang, and P. Krähenbühl, "Objects as points," arXiv preprint arXiv:1904.07850, 2019.
[12] S. Liu, Q. Du, X. Tong, A. Samat, and L. Bruzzone, "Unsupervised change detection in multispectral remote sensing images via spectral-spatial band expansion," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 9, pp. 3578-3587, 2019.
[13] D. S. Maia, M.-T. Pham, E. Aptoula, F. Guiotte, and S. Lefèvre, "Classification of remote sensing data with morphological attributes profiles: a decade of advances," IEEE Geoscience and Remote Sensing Magazine, 2021.
[14] L. Liu, W. Ouyang, X. Wang, P. Fieguth, J. Chen, X. Liu, and M. Pietikäinen, "Deep learning for generic object detection: A survey," International Journal of Computer Vision, vol. 128, no. 2, pp. 261-318, 2020.
[15] R. B. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, June 23-28, 2014. IEEE Computer Society, 2014, pp. 580-587.
[16] R. B. Girshick, "Fast R-CNN," in 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, December 7-13, 2015. IEEE Computer Society, 2015, pp. 1440-1448.
[17] S. Ren, K. He, R. B. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems 28, Montreal, Quebec, Canada, December 7-12, 2015, pp. 91-99.
[18] K. He, G. Gkioxari, P. Dollár, and R. B. Girshick, "Mask R-CNN," in IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October 22-29, 2017. IEEE Computer Society, 2017, pp. 2980-2988.
[19] J. Dai, Y. Li, K. He, and J. Sun, "R-FCN: Object detection via region-based fully convolutional networks," in Advances in Neural Information Processing Systems 29, Barcelona, Spain, December 5-10, 2016, pp. 379-387.
[20] J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, June 27-30, 2016. IEEE Computer Society, 2016, pp. 779-788.
[21] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, July 21-26, 2017. IEEE Computer Society, 2017, pp. 6517-6525.
[22] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in European Conference on Computer Vision. Springer, 2016, pp. 21-37.
[23] T. Lin, P. Goyal, R. B. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," in IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October 22-29, 2017. IEEE Computer Society, 2017, pp. 2999-3007.
[24] Z. Cai and N. Vasconcelos, "Cascade R-CNN: Delving into high quality object detection," in 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 2018, pp. 6154-6162.
[25] K. Chen, J. Pang, J. Wang, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Shi, W. Ouyang, C. C. Loy, and D. Lin, "Hybrid task cascade for instance segmentation," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, June 16-20, 2019. Computer Vision Foundation / IEEE, 2019, pp. 4974-4983.
[26] H. Law and J. Deng, "CornerNet: Detecting objects as paired keypoints," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 734-750.
[27] Z. Tian, C. Shen, H. Chen, and T. He, "FCOS: Fully convolutional one-stage object detection," in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), October 27 - November 2, 2019. IEEE, 2019, pp. 9626-9635.
[28] J. Ma, W. Shao, Y. Hao, W. Li, W. Hong, Y. Zheng, and X. Xue, "Arbitrary-oriented scene text detection via rotation proposals," IEEE Transactions on Multimedia, 2017.
[29] Y. Jiang, X. Zhu, X. Wang, S. Yang, W. Li, H. Wang, P. Fu, and Z. Luo, "R2CNN: Rotational region CNN for arbitrarily-oriented scene text detection," in 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018, pp. 3610-3615.
[30] J. Ding, N. Xue, Y. Long, G. Xia, and Q. Lu, "Learning RoI transformer for detecting oriented objects in aerial images," arXiv: Computer Vision and Pattern Recognition, 2018.
[31] X. Yang, J. Yang, Y. Zhang, T. Zhang, Z. Guo, X. Sun, and K. Fu, "SCRDet: Towards more robust detection for small, cluttered and rotated objects," in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South). IEEE, 2019.
Fu, "Scrdet: Towards more robust detection for small, cluttered and rotated objects," in 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 -November 2, 2019. IEEE, 2019, pp. 8231-8240. R3det: Refined single-stage detector with feature refinement for rotating object. X Yang, Q Liu, J Yan, A Li, Z Zhang, G Yu, arXiv:1908.05612arXiv preprintX. Yang, Q. Liu, J. Yan, A. Li, Z. Zhang, and G. Yu, "R3det: Refined single-stage detector with feature refinement for rotating object," arXiv preprint arXiv:1908.05612, 2019. Arbitrary-oriented object detection with circular smooth label. X Yang, J Yan, X. Yang and J. Yan, "Arbitrary-oriented object detection with circular smooth label," pp. 677-694, 2020. Dense label encoding for boundary discontinuity free rotation detection. X Yang, L Hou, Y Zhou, W Wang, J Yan, X. Yang, L. Hou, Y. Zhou, W. Wang, and J. Yan, "Dense label encoding for boundary discontinuity free rotation detection," pp. 15 819-15 829, 2021. Align deep features for oriented object detection. J Han, J Ding, J Li, G.-S Xia, IEEE Transactions on Geoscience and Remote Sensing. J. Han, J. Ding, J. Li, and G.-S. Xia, "Align deep features for oriented object detection," IEEE Transactions on Geoscience and Remote Sens- ing, 2021. Oriented objects as pairs of middle lines. H Wei, Y Zhang, Z Chang, H Li, H Wang, X Sun, ISPRS Journal of Photogrammetry and Remote Sensing. 169H. Wei, Y. Zhang, Z. Chang, H. Li, H. Wang, and X. Sun, "Oriented objects as pairs of middle lines," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 169, pp. 268-279, 2020. X-linenet: Detecting aircraft in remote sensing images by a pair of intersecting line segments. H Wei, Y Zhang, B Wang, Y Yang, H Li, H Wang, IEEE Transactions on Geoscience and Remote Sensing. H. Wei, Y. Zhang, B. Wang, Y. Yang, H. Li, and H. 
Wang, "X-linenet: Detecting aircraft in remote sensing images by a pair of intersecting line segments," IEEE Transactions on Geoscience and Remote Sensing, 2020. Ship detection in high-resolution optical imagery based on anomaly detector and local shape feature. Z Shi, X Yu, Z Jiang, B Li, IEEE Transactions on Geoscience and Remote Sensing. 528Z. Shi, X. Yu, Z. Jiang, and B. Li, "Ship detection in high-resolution optical imagery based on anomaly detector and local shape feature," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 8, pp. 4511-4523, 2013. A decision-theoretic generalization of on-line learning and an application to boosting. Y Freund, R E Schapire, Journal of Computer and System Sciences. 551Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119-139, 1997. Ship detection from optical satellite images based on saliency segmentation and structure-lbp feature. F Yang, Q Xu, B Li, IEEE Geoscience and Remote Sensing Letters. 145F. Yang, Q. Xu, and B. Li, "Ship detection from optical satellite images based on saliency segmentation and structure-lbp feature," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 5, pp. 602-606, 2017. Ship rotated bounding box space for ship extraction from high-resolution optical satellite images with complex backgrounds. Z Liu, H Wang, L Weng, Y Yang, IEEE Geoscience and Remote Sensing Letters. 138Z. Liu, H. Wang, L. Weng, and Y. Yang, "Ship rotated bounding box space for ship extraction from high-resolution optical satellite images with complex backgrounds," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 8, pp. 1074-1078, 2017. Rotated region based cnn for ship detection. Z Liu, J Hu, L Weng, Y Yang, IEEE International Conference on Image Processing. Z. Liu, J. Hu, L. Weng, and Y. 
Yang, "Rotated region based cnn for ship detection," in IEEE International Conference on Image Processing, 2018. Realtime multi-person 2d pose estimation using part affinity fields. Z Cao, T Simon, S Wei, Y Sheikh, 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USAIEEE Computer SocietyZ. Cao, T. Simon, S. Wei, and Y. Sheikh, "Realtime multi-person 2d pose estimation using part affinity fields," in 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017. IEEE Computer Society, 2017, pp. 1302-1310. Oriented response networks. Y Zhou, Q Ye, Q Qiu, J Jiao, 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USAIEEE Computer SocietyY. Zhou, Q. Ye, Q. Qiu, and J. Jiao, "Oriented response networks," in 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017. IEEE Computer Society, 2017, pp. 4961-4970. A high resolution optical satellite image dataset for ship recognition and some new baselines. Z Liu, L Yuan, L Weng, Y Yang, International Conference on Pattern Recognition Applications and Methods. SCITEPRESS2Z. Liu, L. Yuan, L. Weng, and Y. Yang, "A high resolution optical satellite image dataset for ship recognition and some new baselines," in International Conference on Pattern Recognition Applications and Methods, vol. 2. SCITEPRESS, 2017, pp. 324-331. Featureattentioned object detection in remote sensing imagery. C Li, C Xu, Z Cui, D Wang, T Zhang, J Yang, 2019 IEEE International Conference on Image Processing (ICIP). IEEEC. Li, C. Xu, Z. Cui, D. Wang, T. Zhang, and J. Yang, "Feature- attentioned object detection in remote sensing imagery," in 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019, pp. 3886-3890. Adam: A method for stochastic optimization. D P Kingma, J Ba, 3rd International Conference on Learning Representations. Bengio and Y. 
LeCunSan Diego, CA, USAConference Track ProceedingsD. P. Kingma and J. Ba, "Adam: A method for stochastic optimiza- tion," in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2015. Deep layer aggregation. F Yu, D Wang, E Shelhamer, T Darrell, 2018 IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USAF. Yu, D. Wang, E. Shelhamer, and T. Darrell, "Deep layer aggregation," in 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 2018, pp. 2403-2412. Stacked hourglass networks for human pose estimation. A Newell, K Yang, J Deng, A. Newell, K. Yang, and J. Deng, "Stacked hourglass networks for human pose estimation," pp. 483-499, 2016. DOTA: A large-scale dataset for object detection in aerial images. G Xia, X Bai, J Ding, Z Zhu, S J Belongie, J Luo, M Datcu, M Pelillo, L Zhang, 2018 IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USAG. Xia, X. Bai, J. Ding, Z. Zhu, S. J. Belongie, J. Luo, M. Datcu, M. Pelillo, and L. Zhang, "DOTA: A large-scale dataset for object detection in aerial images," in 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 2018, pp. 3974-3983. Redet: A rotation-equivariant detector for aerial object detection. J Han, J Ding, N Xue, G.-S Xia, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionJ. Han, J. Ding, N. Xue, and G.-S. Xia, "Redet: A rotation-equivariant detector for aerial object detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2786-2795. Oriented r-cnn for object detection. 
X Xie, G Cheng, J Wang, X Yao, J Han, arXiv:2108.05699arXiv preprintX. Xie, G. Cheng, J. Wang, X. Yao, and J. Han, "Oriented r-cnn for object detection," arXiv preprint arXiv:2108.05699, 2021. Oriented object detection in aerial images with box boundary-aware vectors. J Yi, P Wu, B Liu, Q Huang, H Qu, D Metaxas, Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. the IEEE/CVF Winter Conference on Applications of Computer VisionJ. Yi, P. Wu, B. Liu, Q. Huang, H. Qu, and D. Metaxas, "Oriented object detection in aerial images with box boundary-aware vectors," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, pp. 2150-2159. Dardet: A dense anchor-free rotated object detector in aerial images. F Zhang, X Wang, S Zhou, Y Wang, arXiv:2110.01025arXiv preprintF. Zhang, X. Wang, S. Zhou, and Y. Wang, "Dardet: A dense anchor-free rotated object detector in aerial images," arXiv preprint arXiv:2110.01025, 2021. Gliding vertex on the horizontal bounding box for multi-oriented object detection. Y Xu, M Fu, Q Wang, Y Wang, K Chen, G.-S Xia, X Bai, IEEE Transactions on Pattern Analysis and Machine Intelligence. 434Y. Xu, M. Fu, Q. Wang, Y. Wang, K. Chen, G.-S. Xia, and X. Bai, "Gliding vertex on the horizontal bounding box for multi-oriented object detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 4, pp. 1452-1459, 2020. Dynamic anchor learning for arbitrary-orientedd object detection. Q Ming, Z Zhou, L Miao, H Zhang, L Li, arXiv:2012.0415016arXiv preprintQ. Ming, Z. Zhou, L. Miao, H. Zhang, and L. Li, "Dynamic an- chor learning for arbitrary-orientedd object detection," arXiv preprint arXiv:2012.04150, vol. 1, no. 2, p. 6, 2020. Rethinking rotated object detection with gaussian wasserstein distance loss. X Yang, J Yan, Q Ming, W Wang, X Zhang, Q Tian, arXiv:2101.11952arXiv preprintX. Yang, J. Yan, Q. Ming, W. Wang, X. Zhang, and Q. 
Tian, "Rethinking rotated object detection with gaussian wasserstein distance loss," arXiv preprint arXiv:2101.11952, 2021. Point-based estimator for arbitrary-oriented object detection in aerial images. K Fu, Z Chang, Y Zhang, X Sun, IEEE Transactions on Geoscience and Remote Sensing. K. Fu, Z. Chang, Y. Zhang, and X. Sun, "Point-based estimator for arbitrary-oriented object detection in aerial images," IEEE Transactions on Geoscience and Remote Sensing, 2020. Oriented reppoints for aerial object detection. W Li, J Zhu, arXiv:2105.11111arXiv preprintW. Li and J. Zhu, "Oriented reppoints for aerial object detection," arXiv preprint arXiv:2105.11111, 2021. Yolov3: An incremental improvement. J Redmon, A Farhadi, arXiv:1804.02767arXiv preprintJ. Redmon and A. Farhadi, "Yolov3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018. He is currently pursuing a Ph.D. degree from the College of Electronic Science and Technology, NUDT. His research interests focus on include remote sensing image processing. 2009, and the M.E. degree in information and communication engineering from National University of Defense Technology (NUDT). Harbin, China; Changsha, ChinaFeng Zhang received the B.E. degree in electronic information engineering from Harbin Institute of Technology(HIT). pattern recognition, and computer visionFeng Zhang received the B.E. degree in electronic information engineering from Harbin Institute of Technology(HIT), Harbin, China, in 2009, and the M.E. degree in information and communication en- gineering from National University of Defense Tech- nology (NUDT), Changsha, China, in 2011. He is currently pursuing a Ph.D. degree from the College of Electronic Science and Technology, NUDT. His research interests focus on include remote sensing image processing, pattern recognition, and computer vision. He is currently an Assistant Professor with the College of Electrical Science, National University of Defense Technology. 
His research interests include remote sensing image processing. 2009, the M.S. and Ph.D. degrees in electronic science and technology from the National University of Defense Technology. Beijing, China; Changsha, ChinaXueying Wang received the B.S. degree in electronic information engineering from Beihang University. pattern recognitionXueying Wang received the B.S. degree in elec- tronic information engineering from Beihang Uni- versity, Beijing, China, in 2009, the M.S. and Ph.D. degrees in electronic science and technology from the National University of Defense Technology, Changsha, China, in 2011 and 2016. He is currently an Assistant Professor with the College of Electrical Science, National University of Defense Technology. His research interests include remote sensing image processing, pattern recognition. He is currently a Full Professor with the College of Electrical Science, National University of Defense Technology, Changsha. He has authored or co-authored over 100 referred papers. degrees in electrical engineering from Hunan University. Shilin Zhou received the B.S., M.S., and Ph.D.Changsha, ChinaHis research interests include image processing and pattern recognitionShilin Zhou received the B.S., M.S., and Ph.D. degrees in electrical engineering from Hunan Uni- versity, Changsha, China, in 1994, 1996, and 2000, respectively. He is currently a Full Professor with the College of Electrical Science, National University of Defense Technology, Changsha. He has authored or co-authored over 100 referred papers. His research interests include image processing and pattern recog- nition. He is currently pursuing a Ph.D. degree from the College of Electronic Science and Technology, NUDT. He has authored several papers in journals and conferences such as TPAMI, TIP, CVPR, and ECCV. His research interests focus on low-level vision. 2016, and the M.E. degree in information and communication engineering from National University of Defense Technology (NUDT). 
Jinan, China; Changsha, ChinaYingqian Wang received the B.E. degree in electrical engineering from Shandong University (SDU)ing and image super-resolutionYingqian Wang received the B.E. degree in electri- cal engineering from Shandong University (SDU), Jinan, China, in 2016, and the M.E. degree in in- formation and communication engineering from Na- tional University of Defense Technology (NUDT), Changsha, China, in 2018. He is currently pursuing a Ph.D. degree from the College of Electronic Science and Technology, NUDT. He has authored several papers in journals and conferences such as TPAMI, TIP, CVPR, and ECCV. His research interests focus on low-level vision, particularly on light field imag- ing and image super-resolution.
[ "https://github.com/zf020114/CHPDet.", "https://github.com/chinakook/labelImg2", "https://github.com/dingjiansw101/AerialDetection/", "https://github.com/yangxue0827/RotationDetection", "https://github.com/csuhan/s2anet" ]
Eikonal Approximation in Celestial CFT

Leonardo Pipolo De Gioia (University of Campinas - UNICAMP, Institute of Physics "Gleb-Wataghin", Campinas - SP 13083-859, Brazil)
Ana-Maria Raclariu (Perimeter Institute for Theoretical Physics, 31 Caroline St. N, N2L 2Y5, Waterloo, Canada)

Abstract: We identify an eikonal regime in celestial CFT$_2$ in which massless 2-2 scattering is dominated by t-channel exchange. We derive a formula for the celestial amplitude that resums exchanges of arbitrary integer spin to all orders in the coupling. The resulting eikonal phase takes the same form as in flat space with the powers of center-of-mass energy replaced by weight-shifting operators on the celestial sphere. We independently compute the celestial two-point function for a scalar propagating in a shockwave background and show that to leading order in the gravitational coupling and for a suitable choice of the source, the result agrees with the prediction from the celestial eikonal formula for graviton exchange. We demonstrate that this two-point function can be directly obtained from the corresponding formula in AdS$_4$ in a flat space limit. We finally establish a general relation between scalar celestial amplitudes in celestial CFT$_{d-1}$ and the flat space limit of scalar AdS$_{d+1}$ Witten diagrams.

DOI: 10.1007/jhep03(2023)030
arXiv: 2206.10547
PDF: https://export.arxiv.org/pdf/2206.10547v1.pdf
Eikonal Approximation in Celestial CFT (21 Jun 2022)

1 Introduction

Advances in understanding the asymptotic structure of asymptotically flat spacetimes (AFS) [1-8] have recently crystallized into the proposal that gravity in four-dimensional (4D) AFS may be dual to a conformal field theory (CFT) living on the celestial sphere at null infinity [9-13]. A central aspect of the holographic dictionary is the identification of asymptotic massless fields at $\mathcal{I}^\pm$ with operator insertions on the celestial sphere upon exchanging their dependence on retarded/advanced times for conformal scaling dimensions via a Mellin transform.
The resulting observables on the sphere, also known as celestial amplitudes, compute overlaps between past and future asymptotic boost eigenstates, instead of the standard energy-momentum eigenstates. As such, celestial amplitudes carry the same information as the S-matrix while making the Lorentz SL(2,C) symmetries manifest [12,13]. As anticipated in [14], the proposed holographic correspondence in AFS distinguishes itself from its counterparts in asymptotically negatively and positively curved spacetimes in that the boundary conformal theory lives in two lower dimensions compared to the gravitational theory. Consequently, familiar aspects of standard CFTs with bulk gravity duals, such as the state-operator correspondence, unitarity, or the relationship between entanglement and bulk geometry, are obscured. As a first step in gaining intuition about celestial CFT (CCFT), much of the research to date has focused on studying the imprints of asymptotic symmetries and universal aspects of bulk scattering on celestial amplitudes [15-34]. Remarkably, at tree level, the symmetry structure of CCFT appears to be much richer than anticipated, including global shift symmetries associated with bulk translations and their local enhancements [7,8] and a Virasoro enhancement of the Lorentz SL(2,C) [10,11], all of which are further promoted to a $w_{1+\infty}$ symmetry associated with the tower of subleading soft graviton theorems [35-37]. Taking a leap of faith, one hope is that celestial CFT will ultimately provide a non-perturbative completion of gravity in AFS (see [38] for recent evidence in this direction in a 2D model of gravity), while a complete understanding of celestial symmetries would serve as a guiding principle for extracting non-perturbative details of scattering processes.
Evidence for the latter is already manifest, on the one hand in the realization that large gauge symmetries suggest a prescription to eliminate infrared divergences at the S-matrix level to all orders in perturbation theory in abelian [39-42] and possibly non-abelian gauge theory [43-45], and gravity [41,46-48], on the other hand in that CFT machinery such as operator product expansion (OPE) blocks [35,49,50] allows for the resummation of the leading holomorphic or antiholomorphic collinear divergences, a key element in the identification of the $w_{1+\infty}$ higher spin symmetry of classical gravitational scattering [51]. One of the goals of this paper is to provide a new entry in the AFS/CCFT dictionary related to a universal, non-perturbative property of 2-2 scattering amplitudes in four-dimensional AFS, namely the leading eikonal exponentiation of t-channel exchanges at high energy [52-54]. Naively, one challenge is that celestial amplitudes scatter boost eigenstates involving integrals over all energies and hence it is a priori not clear how to take a high-energy limit. However, as shown in [41], low- and high-energy features of massless 4-point scattering are reflected in the analytic structure of the corresponding celestial amplitudes in the net boost weight $\beta$. While low energy features are captured by the poles at negative even $\beta$ (see also [20-22] for similar behavior in conformally soft limits), the high-energy regime can be accessed in the limit of large $\beta$ [16,41]. It is natural to suspect then that at large $\beta$ and small cross-ratio $z \equiv -t/s \ll 1$, celestial amplitudes are dominated by t-channel exchanges. In section 3 we present arguments in favor of this proposal by revisiting the position-space calculation of the flat-space eikonal amplitude [54] in a conformal primary basis. As a result we obtain a celestial version of the eikonal exponentiation of t-channel exchanges of arbitrary spin.
Interestingly, the celestial eikonal phase is in general operator valued, and each term in its small-coupling expansion acts as a weight-shifting operator [17] on the external scaling dimensions. This is expected, as spinning operators couple to scalars via higher-derivative interactions, which in a conformal primary basis result in shifted weights; this resonates with results found in the exponentiation of IR divergences in gauge theory and gravity [41,55]. Note however that our analysis is complementary, since the eikonal phase discussed here is related to the imaginary part of the exponent of the soft S-matrix that results from virtual particle exchanges, rather than the typically discussed real part, which arises when the exchanges become on-shell [56]. The eikonal exponentiation of graviton exchanges is particularly interesting as in flat space it is well known to be reproduced by the propagation of a probe particle in a shockwave background [57-59]. More recently, the scattering problem in non-perturbative backgrounds has been approached with modern amplitude methods [60-62], including double copy constructions [63-67]. This motivates us to compute the celestial two-point function in a shockwave background. The result is strikingly similar to the analogous formula in AdS$_4$ [59], and we establish a relation between the two by demonstrating that the celestial result can be directly recovered as a flat space limit of the AdS result. This observation is a special case of a more general relation between celestial amplitudes and flat space limits of Witten diagrams which we discuss in section 5. In particular, we present a general argument that scalar $(d+1)$-dimensional AdS Witten diagrams reduce to $(d-1)$-dimensional CCFT amplitudes to leading order in the limit of large AdS radius, provided the boundary operators are placed on certain past and future time-slices.
While it is well known that flat space S-matrices in 4D can be extracted from CFT$_3$ correlators either via the HKLL prescription [68-70] or via the flat space limit of Mellin space correlators [71-74] (see also [75] for a recent review of the connection between the two), what we find here instead is that celestial amplitudes arise directly as flat space limits of CFT$_3$ correlators with particular kinematics and with analytically continued dimensions. We regard this as additional evidence that celestial amplitudes are natural candidate holographic observables for quantum gravity in 4D AFS.

This paper is organized as follows. In section 3 we identify an eikonal regime in celestial CFT and derive the celestial eikonal amplitude for the scattering of 4 massless scalars mediated by massive scalar exchanges. In section 3.1.1 we show that the same result is reproduced by the direct Mellin transform of the flat-space eikonal amplitude, while in section 3.2 we explicitly check that the first term in a small coupling expansion precisely reproduces the t-channel celestial amplitude in the celestial eikonal limit. We generalize our result to exchanges of arbitrary spin in section 3.3. Section 4 is devoted to the study of the celestial propagator in a shockwave background. After a review of the momentum space phase shift acquired by a particle crossing a shockwave in section 4.1, we express this in a conformal primary basis in section 4.2. We identify the CCFT source that relates this to the celestial eikonal formula for graviton exchange in section 4.3. In section 4.4 we show that the same formula can be obtained as the flat space limit of the CFT$_3$ correlator associated with propagation through a shock in AdS$_4$. We establish a general relation between AdS$_{d+1}$ Witten diagrams in the flat space limit and CCFT$_{d-1}$ amplitudes in section 5. Various technical details are collected in the appendices.
2 Preliminaries

The momentum space scattering amplitude of 4 massless scalars in 4D Minkowski spacetime takes the general form
$$A_4(p_1,\cdots,p_4) = A_4(s,t)\,(2\pi)^4\,\delta^{(4)}\Big(\sum_{i=1}^4 p_i\Big). \qquad (2.1)$$
Here the Mandelstam invariants $s,t$ are defined as
$$s = -(p_1+p_2)^2, \qquad t = -(p_1+p_3)^2, \qquad (2.2)$$
and we parameterize massless on-shell momenta as
$$p_i = \eta_i\,\omega_i\,\hat q(z_i,\bar z_i), \qquad (2.3)$$
where $\omega_i$ are external energies and $\hat q$ are null vectors towards a point $(z_i,\bar z_i)$ on the celestial sphere (technically, in this parameterization the celestial sphere is flattened to a plane),
$$\hat q(z,\bar z) = \big(1+z\bar z,\; z+\bar z,\; -i(z-\bar z),\; 1-z\bar z\big), \qquad (2.4)$$
and $\eta_i = +1$ ($\eta_i = -1$) for outgoing (incoming) particles. The amplitudes (2.1) are mapped to celestial amplitudes, or 2D CCFT observables $\tilde A$ (unless otherwise stated, celestial amplitudes will refer to observables on the 2D celestial sphere), by a Mellin transform [12,13],
$$\tilde A(\Delta_j, z_j, \bar z_j) = \prod_{i=1}^4 \int_0^\infty d\omega_i\, \omega_i^{\Delta_i - 1}\, A_4(p_j). \qquad (2.5)$$
This map effectively trades asymptotic energy-momentum eigenstates for states that diagonalize boosts towards the point $(z_i,\bar z_i)$ on the celestial sphere. As such, the resulting celestial amplitudes transform covariantly under the Lorentz SL(2,C). In the following, it will be convenient to recall that the momentum space amplitude (2.1) and the celestial amplitude (2.5) can be obtained directly by integrating the connected component of the time-ordered bulk correlation function $C(x_1,\cdots,x_4)$ with amputated external legs against different external wavefunctions $\psi(x_i;p_i)$. The amplitude (2.1) is defined by integrating $C$ against plane wave eigenstates $\psi(x_i;p_i) = e^{-ip_i\cdot x_i}$ [54] (we work in the mostly-plus signature, in which the mode expansion of a massless scalar field takes the form $\phi(x) = \frac{1}{(2\pi)^3}\int \frac{d^3k}{2k^0}\big(a^\dagger_k e^{-ik\cdot x} + a_k e^{ik\cdot x}\big)$), i.e.
$$A_4(p_j) = \prod_{i=1}^4 \int d^4x_i\, e^{-ip_i\cdot x_i}\, C(x_j). \qquad (2.6)$$
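As a sanity check on the parameterization (2.3)-(2.4), one can verify symbolically that $\hat q(z,\bar z)$ is null in the mostly-plus signature used here, and that $\hat q_i\cdot\hat q_j = -2|z_{ij}|^2$, so that Mandelstam invariants reduce to products of energies and squared celestial-sphere distances, e.g. $s = 4\eta_1\eta_2\,\omega_1\omega_2\,|z_{12}|^2$. A minimal sympy sketch (the helper names `qhat` and `dot` are ours, not the paper's; `zb`, `wb` play the role of $\bar z$, $\bar w$):

```python
import sympy as sp

z, zb, w, wb = sp.symbols('z zb w wb')
metric = sp.diag(-1, 1, 1, 1)  # mostly-plus signature

def qhat(z, zb):
    # null vector of eq. (2.4)
    return sp.Matrix([1 + z*zb, z + zb, -sp.I*(z - zb), 1 - z*zb])

def dot(p, q):
    # Minkowski inner product p.q
    return (p.T * metric * q)[0, 0]

q1, q2 = qhat(z, zb), qhat(w, wb)

# each qhat is null: qhat . qhat = 0
assert sp.simplify(dot(q1, q1)) == 0

# qhat_i . qhat_j = -2 (z_i - z_j)(zb_i - zb_j), i.e. -2|z_ij|^2 when zb = conj(z)
assert sp.simplify(dot(q1, q2) + 2*(z - w)*(zb - wb)) == 0
```

With this, $s = -(p_1+p_2)^2 = -2\,p_1\cdot p_2 = 4\eta_1\eta_2\,\omega_1\omega_2\,|z_{12}|^2$ follows directly from (2.2)-(2.3).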
Celestial amplitudes arise from choosing the external wavefunctions ψ(x_i; p_i) to be instead conformal primary solutions to the scalar wave equation φ_{∆_i}(x_i; η_i q̂_i) [12,13],

φ_{∆_i}(x_i; η_i q̂_i) = (iη_i)^{∆_i} Γ(∆_i) / (-q̂_i·x_i + iη_i ε)^{∆_i}, (2.7)

namely,

Ã(∆_j, z_j, z̄_j) = ∏_{i=1}^4 ∫ d^4x_i φ_{∆_i}(x_i; η_i q̂_i) C(x_j). (2.8)

Indeed, (2.5) follows immediately upon noticing that plane waves and massless conformal primaries are related by a Mellin transform [12,13],

φ_∆(x; ηq̂) ≡ ∫_0^∞ dω ω^{∆-1} e^{-iωηq̂·x} = (iη)^∆ Γ(∆) / (-q̂·x + iηε)^∆. (2.9)

One of the aims of this paper is to explore the relationship between celestial amplitudes and correlation functions of CFT_3 with bulk AdS_4 gravity duals. Such a relation was first proposed in [77], where it was argued that amplitudes in d-dimensional celestial CFT should be related to CFT_{d+1} correlators in the bulk point limit. There, this correspondence was studied explicitly for the case of 4-point scalar scattering in AdS_3 mediated by massive and massless scalar exchanges, in which case the corresponding Witten diagrams in the bulk-point configuration were found to reduce to amplitudes in 1-dimensional CCFT. In this paper we extend the relationship between celestial amplitudes and AdS Witten diagrams by showing that generic scalar AdS_{d+1} Witten diagrams with particular kinematics reduce to CCFT_{d-1} amplitudes in the flat space limit.^5 We will check this explicitly in the example of propagation of a particle in a shockwave background, related to the eikonal exponentiation of t-channel graviton exchanges [52,53,58,59]. As we will see, it is the representation (2.8) that makes the connection between celestial amplitudes and CFT correlators in the flat space limit most manifest.
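Two elementary properties used repeatedly below can be verified numerically: the vectors (2.4) are null and satisfy q̂(z_1, z̄_1)·q̂(z_2, z̄_2) = -2|z_1 - z_2|^2 in the mostly-plus signature, and the Mellin integral (2.9) converges to the conformal primary wavefunction once the iε regulator is kept finite. The following sketch (the values of ∆ and q̂·x, the finite ε, and the quadrature scheme are our own illustrative choices, not the paper's) checks both:

```python
# (i) Null-vector algebra of (2.4) in the mostly-plus signature (-,+,+,+):
#     qhat(z)^2 = 0 and qhat(z1)·qhat(z2) = -2|z1 - z2|^2.
# (ii) Mellin representation (2.9): with a finite regulator eps > 0,
#     ∫_0^∞ dω ω^(Δ-1) e^{-iω qhat·x - eps ω} = Γ(Δ)/(eps + i qhat·x)^Δ,
#     the right side being (iη)^Δ Γ(Δ)(-qhat·x + iη eps)^{-Δ} for η = +1.
import cmath, math

def qhat(z):
    zb = z.conjugate()
    return [1 + z*zb, z + zb, -1j*(z - zb), 1 - z*zb]

def mink(p, q):  # Minkowski product, eta = diag(-1,+1,+1,+1)
    return -p[0]*q[0] + p[1]*q[1] + p[2]*q[2] + p[3]*q[3]

z1, z2 = 0.3 + 0.7j, -1.2 + 0.4j
q1, q2 = qhat(z1), qhat(z2)
print(abs(mink(q1, q1)), abs(mink(q1, q2) + 2*abs(z1 - z2)**2))  # both ~ 0

# Mellin check for Δ = 1.3, qhat·x = -2 (illustrative), eps = 0.2:
delta, qdotx, eps = 1.3, -2.0, 0.2
s = eps + 1j*qdotx
N, L = 120000, 120.0     # composite Simpson rule on [0, L]
h = L/N
tot = 0j
for k in range(N + 1):
    w = k*h
    wgt = 1 if k in (0, N) else (4 if k % 2 else 2)
    tot += wgt * w**(delta - 1) * cmath.exp(-s*w)
numeric = tot*h/3
exact = math.gamma(delta) / s**delta
print(abs(numeric - exact))   # small
```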
In the next section we start by deriving a formula for the eikonal exponentiation of arbitrary spinning exchanges in celestial 4-point massless scalar scattering.

Eikonal regime in celestial CFT

In this section we propose that celestial 4-point amplitudes of massless particles have universal behavior in the limit of large net conformal dimension β ≫ 1 and small cross ratio z ≪ 1. We argue that in this kinematic regime, the CCFT encodes the eikonal physics [52,54] of bulk 4-point scattering amplitudes. We present a formula for the eikonal exponentiation of arbitrary spinning t-channel exchanges in a conformal primary basis. We find that the eikonal exponent is in general operator valued, with weight-shifting operators replacing powers of the center-of-mass energy in a momentum space basis. Our formula shares similarities with the eikonal amplitude in AdS_4, suggesting a relation between celestial amplitudes and CFT_3 correlators with particular kinematics.

Figure 1: Contributions from ladder diagrams involving n t-channel exchanges to the scattering of 4 scalars.

Consider the 4-point scalar scattering amplitude associated with the sum over crossed ladder diagrams with n massive exchanges of arbitrary spin j in Figure 1,

A_n = (ig)^{2n} ∫ d^4x_1 ⋯ d^4x_n d^4x̃_1 ⋯ d^4x̃_n ψ(x_n; p_3) G(x_n - x_{n-1}) ⋯ G(x_2 - x_1) ψ(x_1; p_1)
× ψ(x̃_n; p_4) G(x̃_n - x̃_{n-1}) ⋯ G(x̃_2 - x̃_1) ψ(x̃_1; p_2) ∑_{σ∈S_n} G_e(x_1 - x̃_{σ(1)}) ⋯ G_e(x_n - x̃_{σ(n)}). (3.1)

As indicated in Figure 1, G and G_e are internal position-space propagators corresponding to the external legs and exchanges respectively, while ψ(x; p) are external wavefunctions. Each vertex comes with a factor of ig, where g is the coupling constant. As reviewed in section 2, the momentum space amplitude associated with n crossed ladder exchanges is obtained by taking ψ(x; p) to be plane waves.
Resumming the amplitudes in (3.1) for all n > 0, n ∈ Z (which excludes the disconnected contribution from n = 0), in the approximation where the propagators G are on-shell, valid at high energies s ≫ -t, one obtains the standard eikonal amplitude [54,79]

A_eik(s, t = -p_⊥^2) ≃ 2s ∫_{R^2} d^2x_⊥ e^{ip_⊥·x_⊥} ( e^{i (g^2/2) s^{j-1} G_⊥(x_⊥)} - 1 ). (3.2)

Here G_⊥(x_⊥) is the transverse propagator

G_⊥(x_⊥) ≡ ∫ d^2k_⊥/(2π)^2 e^{ik_⊥·x_⊥} / (k_⊥^2 + m^2), (3.3)

p_⊥ ≡ p_{3,⊥} + p_{1,⊥} is the net momentum transfer and j is the spin of the exchanged particles. (3.2) is expected to approximate 4-point massless scalar scattering amplitudes in the high energy s ≫ -t limit [80].

It is natural to expect a similar regime to exist in celestial CFT, in which celestial amplitudes are dominated by a phase. The s ≫ -t regime immediately maps to a small cross-ratio z ≡ -t/s ≪ 1 limit in the CCFT. Moreover, we will see in the next section that in a conformal primary basis external lines become approximately on-shell in the limit of large external dimensions ∆_1, ∆_2, or equivalently β ≡ ∑_{i=1}^4 ∆_i - 4 ≫ 1. This resonates with the results of [17,41] where it was shown that Mellin integrals are dominated by high energies in the limit of large net boost weight. We will therefore identify a universal eikonal regime in CCFT characterized by

β ≫ 1, z ≪ 1. (3.4)

Celestial eikonal exponentiation of scalar exchanges

The celestial counterpart of (3.2) can be obtained by evaluating (3.1) with the external wavefunctions replaced by conformal primary wavefunctions ψ(x_i; q_i) → φ_{∆_i}(x_i; η_i q̂_i), where φ_{∆_i}(x_i; η_i q̂_i) were defined in (2.7). By construction, the resulting celestial amplitudes transform covariantly under Lorentz transformations x → Λ·x, z → z' = (az + b)/(cz + d) like 2D correlation functions of scalar primary operators, since the measure, G and G_e in (3.1) are Lorentz invariant while [12]

φ_{∆_i}(Λ·x_i; η_i q̂_i(z'_i, z̄'_i)) = ( (∂z'_i/∂z_i)(∂z̄'_i/∂z̄_i) )^{-∆_i/2} φ_{∆_i}(x_i; η_i q̂_i(z_i, z̄_i)).
(3.5)

This implies that the celestial amplitude for n crossed ladder t-channel exchanges will be of the form

Ã_n(∆_i, z_i, z̄_i) = I_{13-24}(z_i, z̄_i) f_n(z, z̄), (3.6)

where I_{13-24} is a 4-point conformally covariant factor^6 and f_n is a function of the conformally invariant cross-ratio z. Motivated by the center of mass kinematics (see appendix C.1), it is convenient to parameterize the null vectors q̂_i as^7

q̂_i = (1 + q_i, q_{i,⊥}, 1 - q_i), i = 1, 3,
q̂_i = (1 + q_i, q_{i,⊥}, -1 + q_i), i = 2, 4, (3.8)

where q_{i,⊥} are 2-component vectors and q̂_i^2 = 0 ⟹ 4q_i = |q_{i,⊥}|^2. At high energies, ω_1 ≈ ω_3, ω_2 ≈ ω_4 and p_i^+ = 2η_i ω_i, p_i^- ≈ 0 for i = 1, 3, and vice-versa for 2, 4, meaning that q_i ∝ |q_{i,⊥}|^2 ≪ 1. In this case the cross-ratio reduces to

z = -t/s = (ω_3/ω_2) (q̂_1·q̂_3)/(q̂_1·q̂_2) = (1/4) (q^1_{24,⊥} + iq^2_{24,⊥})(q^1_{13,⊥} - iq^2_{13,⊥}), (3.9)

where we used momentum conservation

ω_3/ω_2 = (q^1_{24,⊥} + iq^2_{24,⊥})/(q^1_{13,⊥} + iq^2_{13,⊥}) = (q^1_{24,⊥} - iq^2_{24,⊥})/(q^1_{13,⊥} - iq^2_{13,⊥}). (3.10)

We hence see that eikonal kinematics imply small z. Note that in the z → 0 limit, (3.8) are a special case (up to a Jacobian factor) of (2.3) where the momenta of 1, 3 and 2, 4 are respectively expanded around antipodal points on the celestial sphere. This kinematic configuration is illustrated in Figure 2.

^6 For n ≥ 1 it takes the form I_{13-24}(z_i, z̄_i) = (z_{34}/z_{14})^{h_{13}} (z_{14}/z_{12})^{h_{24}} z^{h_1+h_3} with h_i = h̄_i = ∆_i/2, but it may also involve singular conformally covariant structures as will be the case for the disconnected n = 0 contribution.
^7 The complex coordinates (z_i, z̄_i), (w_i, w̄_i) in the parameterizations q_{i,⊥} = (z_i + z̄_i, -i(z_i - z̄_i)) for i = 1, 3, and q_{i,⊥} = (w_i + w̄_i, -i(w_i - w̄_i)) for i = 2, 4 are in different patches. Writing both in the same patch introduces Jacobian factors in the celestial amplitudes.
To evaluate the integrals in (3.1) we employ light-cone coordinates

x^- = x^0 - x^3, x^+ = x^0 + x^3, x^i_⊥ = x^i, i = 1, 2, (3.11)

in which the Minkowski metric takes the form

ds^2 = -dx^- dx^+ + ds_⊥^2. (3.12)

In the limit q_i ≪ 1, the q̂_i·x are approximated by [54]

q̂_i·x = -x^- + q_{i,⊥}·x_⊥ - q_i x^+ ≈ -x^- + q_{i,⊥}·x_⊥, i = 1, 3, (3.13)
q̂_i·x = -x^+ + q_{i,⊥}·x_⊥ - q_i x^- ≈ -x^+ + q_{i,⊥}·x_⊥, i = 2, 4, (3.14)

and the conformal primary wavefunctions are therefore given by

φ_{∆_1}(x; -q̂_1) = (-i)^{∆_1} Γ(∆_1)/(x^- - q_{1,⊥}·x_⊥ - iε)^{∆_1}, φ_{∆_3}(x; q̂_3) = i^{∆_3} Γ(∆_3)/(x^- - q_{3,⊥}·x_⊥ + iε)^{∆_3}, (3.15)
φ_{∆_2}(x; -q̂_2) = (-i)^{∆_2} Γ(∆_2)/(x^+ - q_{2,⊥}·x_⊥ - iε)^{∆_2}, φ_{∆_4}(x; q̂_4) = i^{∆_4} Γ(∆_4)/(x^+ - q_{4,⊥}·x_⊥ + iε)^{∆_4}. (3.16)

In a momentum space basis it can be argued that in the high energy limit, the internal 1-3 and 2-4 propagators are well approximated by on-shell ones (corresponding to classical particle trajectories). In a conformal primary basis, energies are traded for conformal dimensions and it is not obvious whether an analogous argument can be made. Nevertheless, we show in appendix A that a similar approximation holds instead at large ∆_1, ∆_2 ≫ 1, in which case these propagators become

G_{13}(x_i, x_j) = -[i(x^-_i - q_{1,⊥}·x_{i,⊥} + iε)/(2∆_1)] δ(x^-_i - x^-_j) Θ(x^+_i - x^+_j) δ^{(2)}(x_{i,⊥} - x_{j,⊥}), (3.17)
G_{24}(x̃_i, x̃_j) = -[i(x̃^+_i - q_{2,⊥}·x̃_{i,⊥} + iε)/(2∆_2)] Θ(x̃^-_i - x̃^-_j) δ(x̃^+_i - x̃^+_j) δ^{(2)}(x̃_{i,⊥} - x̃_{j,⊥}). (3.18)

As for the propagators for scalar exchanges of mass m, we use the standard formula [76]

G_e(x - x̃) = -i ∫ d^4k/(2π)^4 e^{ik·(x - x̃)}/(k^2 + m^2 - iε). (3.19)

We now have all ingredients needed to evaluate (3.1). We refer the reader to appendix B for the lengthy yet straightforward calculation and simply state the result. For n crossed scalar exchanges of mass m we find

Ã_n = 4(2π)^2 ∫ d^2x_⊥ d^2x̃_⊥ ((iχ̂)^n/n!) ×
[i^{∆_1+∆_3} Γ(∆_1+∆_3)/(-q_{13,⊥}·x_⊥)^{∆_1+∆_3}] [i^{∆_2+∆_4} Γ(∆_2+∆_4)/(-q_{24,⊥}·x̃_⊥)^{∆_2+∆_4}], (3.20)

where we defined

χ̂ ≡ (g^2/8) e^{-∂_{∆_1}} e^{-∂_{∆_2}} G_⊥(x_⊥, x̃_⊥), (3.21)

and G_⊥ is the position space transverse propagator in (3.3). Summing all connected diagrams with n > 0 yields the eikonal celestial amplitude

Ã_eik ≃ 4(2π)^2 ∫ d^2x_⊥ d^2x̃_⊥ (e^{iχ̂} - 1) [i^{∆_1+∆_3} Γ(∆_1+∆_3)/(-q_{13,⊥}·x_⊥)^{∆_1+∆_3}] [i^{∆_2+∆_4} Γ(∆_2+∆_4)/(-q_{24,⊥}·x̃_⊥)^{∆_2+∆_4}], (3.22)

where ≃ stands for the leading terms in the celestial eikonal regime of large ∆_1, ∆_2 and small z. This formula (together with its generalization to arbitrary spinning exchanges where (3.21) is simply replaced by (3.50)) is one of the main results of this paper. It has two interesting features. First, the eikonal phase χ̂ is operator valued for all spins j ≠ 1. This feature of CCFT is familiar from both celestial double copy constructions [29,81] and the conformally soft exponentiation of infrared divergences in gravity [41,47,82,83]. Second, it looks remarkably similar to the eikonal amplitude in AdS [54]. Indeed, we will later establish a relation between its cousin, the celestial two-point function in a shockwave background, and the flat-space limit of its AdS counterpart. A general argument for the relation between AdS_{d+1} Witten diagrams in the flat-space limit and CCFT_{d-1} amplitudes will be given in section 5.

The eikonal formula can also be directly derived as a Mellin transform of the momentum space amplitude (3.2). While this is to be expected from the standard relation between conformal primary wavefunctions and plane waves (2.9), we find it nevertheless instructive to provide this alternate derivation in the remainder of this section.
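The transverse propagator (3.3) that controls the eikonal phase has the closed form G_⊥(x_⊥) = K_0(m|x_⊥|)/(2π), the standard 2D Yukawa Green's function; this identification is not spelled out in the text, so the following sketch verifies it by checking that the K_0 profile solves the radial equation G'' + G'/r - m^2 G = 0 away from the origin, with K_0 evaluated through its integral representation K_0(x) = ∫_0^∞ e^{-x cosh t} dt (a standard identity, assumed here):

```python
# Verify that G_⊥(r) = K0(m r)/(2π) solves the radial form of
# (-∂_⊥^2 + m^2) G_⊥ = δ^(2) away from the origin:
#   G'' + G'/r - m^2 G = 0  for r > 0.
# K0 is computed from its integral representation K0(x) = ∫_0^∞ e^{-x cosh t} dt.
import math

def K0(x, tmax=20.0, n=20000):
    h = tmax/n   # trapezoidal rule; the integrand decays double-exponentially
    s = 0.5*(math.exp(-x) + math.exp(-x*math.cosh(tmax)))
    for j in range(1, n):
        s += math.exp(-x*math.cosh(j*h))
    return s*h

m, r, dr = 0.7, 1.3, 1e-3
G = lambda rr: K0(m*rr)/(2*math.pi)
G0, Gp, Gm = G(r), G(r + dr), G(r - dr)
resid = (Gp - 2*G0 + Gm)/dr**2 + (Gp - Gm)/(2*dr*r) - m**2*G0
print(resid)   # finite-difference residual, ≈ 0
```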
Mellin transform of the eikonal amplitude We now show that the celestial eikonal amplitude (3.22) is simply a Mellin transform of the scalar momentum space eikonal amplitude (3.2) including the momentum conserving delta function, A eik = 2s R 2 d 2 x ⊥ e ip ⊥ ·x ⊥ ï exp Å ig 2 2s G ⊥ (x ⊥ ) ã − 1 ò (2π) 4 δ (4) 4 i=1 p i . (3.23) Our strategy is to start with the celestial eikonal formula (3.22) and show that it can be recast as a Mellin transform of (3.23) with respect to the external energies. To this end, consider the Taylor expansion of (3.22) in powers of g 2 , ‹ A eik = 4(2π) 2 ∞ n=1 1 n! d 2 x ⊥ d 2x ⊥ Å ig 2 8 G ⊥ (x ⊥ ,x ⊥ ) ã n × i ∆ 1 +∆ 3 −n Γ(∆ 1 + ∆ 3 − n) (−q 13,⊥ · x ⊥ ) ∆ 1 +∆ 3 −n i ∆ 2 +∆ 4 −n Γ(∆ 2 + ∆ 4 − n) (−q 24,⊥ ·x ⊥ ) ∆ 2 +∆ 4 −n . (3.24) Introducing parameters ω 1 , ω 2 and using the Mellin representation (2.9) for each term in the sum, ‹ A eik = 4(2π) 2 ∞ n=1 1 n! d 2 x ⊥ d 2x ⊥ Å ig 2 8 G ⊥ (x ⊥ ,x ⊥ ) ã n ∞ 0 dω 1 ω 1 ∞ 0 dω 2 ω 2 ω ∆ 1 +∆ 3 −n 1 ω ∆ 2 +∆ 4 −n 2 ×e −iω 1 q 13,⊥ ·x ⊥ e −iω 2 q 24,⊥ ·x ⊥ = ∞ 0 dω 1 ω 1 ∞ 0 dω 2 ω 2 ω ∆ 1 +∆ 3 1 ω ∆ 2 +∆ 4 2 4(2π) 2 d 2x ⊥ e −i(ω 1 q 13,⊥ +ω 2 q 24,⊥ )·x ⊥ × ∞ n=1 1 n! d 2 x ⊥ Å ig 2 2 · 4ω 1 ω 2 G ⊥ (x ⊥ ) ã n e −iω 1 q 13,⊥ ·x ⊥ , (3.25) where the last line follows from shifting x ⊥ → x ⊥ +x ⊥ under which G(x ⊥ ,x ⊥ ) → G(x ⊥ ). The integrals over x ⊥ andx ⊥ are now decoupled and the latter evaluates to a delta function ‹ A eik = ∞ 0 dω 1 ω 1 ∞ 0 dω 2 ω 2 ω ∆ 1 +∆ 3 1 ω ∆ 2 +∆ 4 2 4(2π) 4 δ (2) (ω 1 q 1,⊥ + ω 2 q 2,⊥ − ω 1 q 3,⊥ − ω 2 q 4,⊥ ) × ∞ n=1 1 n! d 2 x ⊥ Å ig 2 2 · 4ω 1 ω 2 G ⊥ (x ⊥ ) ã n e −iω 1 q 13,⊥ ·x ⊥ . (3.26) Inserting the identity ∞ 0 dω 3 dω 4 δ(ω 3 − ω 1 )δ(ω 4 − ω 2 ) = 1, (3.27) (3.26) reduces to ‹ A eik = ∞ 0 4 i=1 dω i ω i ω ∆ i i (2π) 4 δ(ω 1 − ω 3 )δ(ω 2 − ω 4 )δ (2) (ω 1 q 1,⊥ + ω 2 q 2,⊥ − ω 3 q 3,⊥ − ω 4 q 4,⊥ ) × 4ω 1 ω 2 ∞ n=1 1 n! d 2 x ⊥ Å ig 2 2 · 4ω 1 ω 2 G ⊥ (x ⊥ ) ã n e −iω 1 q 13,⊥ ·x ⊥ . 
(3.28) Using the parameterizations of momenta (2.3) in the eikonal configuration (3.8) with q i 1, 8 p + 1 = −2ω 1 , p − 2 = −2ω 2 , p + 3 = 2ω 3 , p − 4 = 2ω 4 ,(3.29) while the components with + ↔ − vanish to leading order. Then ‹ A eik = ∞ 0 4 i=1 dω i ω i ω ∆ i i (2π) 4 4δ(p + 1 + p + 3 )δ(p − 2 + p − 4 )δ (2) (p 1,⊥ + p 2,⊥ + p 3,⊥ + p 4,⊥ ) × 4ω 1 ω 2 ∞ n=1 1 n! d 2 x ⊥ Å ig 2 2 · 4ω 1 ω 2 G ⊥ (x ⊥ ) ã n e i(p 1,⊥ +p 3,⊥ )·x ⊥ ,(3.30) and since δ (4) (p) = 2δ(p + )δ(p − )δ (2) (p ⊥ ), s 4ω 1 ω 2 (3.31) we find ‹ A eik = 4 i=1 Å ∞ 0 dω i ω i ω ∆ i i ã A eik . (3.32) This shows that the celestial eikonal amplitude (3.22) is precisely the Mellin transform of the momentum space eikonal formula (3.2). On the one hand, this result seems to follow from the defining relations (2.6), (2.8), (2.9). On the other hand, our first derivation in appendix B invokes the approximations (3.17), (3.18) for the external line propagators in a conformal primary basis which are valid at large ∆ 1 , ∆ 2 . Here, we see instead that ∆ 1 , ∆ 2 need to be large in order for the integrand of (3.32) to be dominated by eikonal kinematics. We regard this perfect match as evidence that (3.22) describes the behavior of scalar celestial 4-point scattering to leading order in the celestial eikonal limit (3.22) and to all orders in the coupling g. In the next section we show that the leading term in an expansion of (3.22) in powers of g reproduces the tree-level celestial scalar 4-point amplitude with a massive t-channel exchange in the z → 0 limit. Perturbative expansion As a warm up, let us start by evaluating the disconnected contribution ‹ A 0 = 4(2π) 2 d 2 x ⊥ i ∆ 1 +∆ 3 Γ(∆ 1 + ∆ 3 ) (−q 13,⊥ · x ⊥ ) ∆ 1 +∆ 3 d 2x ⊥ i ∆ 2 +∆ 4 Γ(∆ 2 + ∆ 4 ) (−q 24,⊥ ·x ⊥ ) ∆ 2 +∆ 4 (3.33) given by setting n = 0 in (3.20). While this term has been removed in our formulas, we expect it to reduce to the product of two scalar celestial two point functions with the correct normalization given in [13]. 
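The kinematic approximations used above in (3.29)-(3.31) can be checked directly: with the parameterization (3.8) of the null directions, s = -(p_1 + p_2)^2 equals 4ω_1ω_2 up to corrections quadratic in the transverse components, and the cross-ratio z = -t/s is small. A sketch with illustrative numerical values (the particular energies and transverse vectors are our own choices):

```python
# Check the eikonal kinematics of (3.29)-(3.31): with the null directions
# of (3.8), s = -(p1+p2)^2 ≈ 4 ω1 ω2 and z = -t/s ≪ 1 when q_{i,⊥} are small.
def qhat_13(qperp):               # (3.8), i = 1, 3
    q = (qperp[0]**2 + qperp[1]**2)/4.0
    return [1 + q, qperp[0], qperp[1], 1 - q]

def qhat_24(qperp):               # (3.8), i = 2, 4
    q = (qperp[0]**2 + qperp[1]**2)/4.0
    return [1 + q, qperp[0], qperp[1], -1 + q]

def mink(p, q):                   # mostly-plus signature
    return -p[0]*q[0] + p[1]*q[1] + p[2]*q[2] + p[3]*q[3]

w1, w2, w3 = 5.0, 7.0, 5.0        # energies (ω1 ≈ ω3 in the eikonal regime)
p1 = [-w1*c for c in qhat_13((0.0, 0.0))]       # incoming, η = -1
p2 = [-w2*c for c in qhat_24((0.0, 0.0))]       # incoming, η = -1
p3 = [ w3*c for c in qhat_13((0.02, 0.01))]     # outgoing, η = +1

p12 = [a + b for a, b in zip(p1, p2)]
p13 = [a + b for a, b in zip(p1, p3)]
s = -mink(p12, p12)
t = -mink(p13, p13)
print(s, 4*w1*w2)    # s ≈ 4 ω1 ω2
print(-t/s)          # cross-ratio z ≪ 1
```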
These integrals can be evaluated by writing the integrands in their Mellin representations and evaluating the integrals over the transverse coordinates, which give rise to delta functions,

Ã_0 = 4(2π)^2 [(2π)^2 δ^{(2)}(q_{13,⊥}) ∫_0^∞ dω_1 ω_1^{(∆_1+∆_3-2)-1}] [(2π)^2 δ^{(2)}(q_{24,⊥}) ∫_0^∞ dω_2 ω_2^{(∆_2+∆_4-2)-1}]. (3.34)

The remaining Mellin transforms follow from [13]^9

δ(i∆) = (1/2π) ∫_0^∞ dω ω^{∆-1}, (3.35)

therefore Ã_0 factorizes as

Ã_0 = [(2π)^4 δ^{(2)}(z_{13}) δ(∆_1+∆_3-2)] [(2π)^4 δ^{(2)}(w_{24}) δ(∆_2+∆_4-2)]. (3.36)

(3.36) agrees with the product of two celestial two-point functions, or equivalently the disconnected contribution to massless scalar 4-point t-channel scattering.

We now turn to the leading contribution to (3.22) in a small g expansion. This should reproduce the celestial amplitude for massive t-channel exchange [19,85]. We start with

Ã_1 = 2π^2 ig^2 ∫ d^2x_⊥ d^2x̃_⊥ G_⊥(x_⊥, x̃_⊥) [i^{∆_1+∆_3-1} Γ(∆_1+∆_3-1)/(-q_{13,⊥}·x_⊥)^{∆_1+∆_3-1}] [i^{∆_2+∆_4-1} Γ(∆_2+∆_4-1)/(-q_{24,⊥}·x̃_⊥)^{∆_2+∆_4-1}]. (3.37)

Replacing G_⊥(x_⊥, x̃_⊥) by its Fourier representation (3.3), and using the Mellin representation of the conformal primary wavefunctions, the integrals over x_⊥ and x̃_⊥ decouple and again become delta functions

Ã_1 = (2π)^4 (ig^2/2) ∫_0^∞ (dω_1/ω_1) ω_1^{∆_1+∆_3-1} ∫_0^∞ (dω_2/ω_2) ω_2^{∆_2+∆_4-1} × ∫ d^2k_⊥ [1/(k_⊥^2 + m^2)] δ^{(2)}(k_⊥ - ω_1 q_{13,⊥}) δ^{(2)}(k_⊥ + ω_2 q_{24,⊥}). (3.38)

The remaining integrals are evaluated in appendix C and result in

Ã_1 = (2π)^4 ig^2 [π m^{β-2}/(4 sin(πβ/2))] (-q^1_{24,⊥}/q^1_{13,⊥})^{∆_2+∆_4-2} |q_{13,⊥}|^{-β} δ(q^1_{24,⊥} q^2_{13,⊥} - q^2_{24,⊥} q^1_{13,⊥}). (3.39)

From (3.9) we immediately see that the delta function imposes reality of the cross-ratio, z - z̄ = 0. Moreover, in the center of mass frame with z allowed to be complex,

q_{1,⊥} = (0, 0), q_{2,⊥} = (0, 0), q_{3,⊥} = (√z + √z̄, -i(√z - √z̄)), q_{4,⊥} = (-√z - √z̄, -i(√z - √z̄)) (3.40)

we find that

Ã_1 = (2π)^4 i (√z)^{-β} δ(z - z̄) [g^2 π/(8m^2)] (m/2)^β / sin(πβ/2) + ⋯
(3.41) Here · · · denote subleading terms in the small z limit which don't contribute at leading order in the eikonal approximation (3.4). To compare to the expected result (see [85] for the formula in the same conventions used here up to a factor of (2π) 4 i) ‹ A t−channel (∆ i , z i ,z i ) = I 13−24 (z i ,z i )N gm (β)δ(z −z)|z| 2 |z − 1| h 13 −h 24 , (3.42) N gm (β) = g 2 π 8m 2 (m/2) β sin πβ/2 (3.43) we now evaluate (3.42) in the corresponding kinematic configuration z 1 = 0, z 2 = ∞, z 3 = √ z, z 4 = − 1 √ z . (3.44) We find 10 lim z 2 →∞,z→0 |z 2 | 2∆ 2 √ z −2∆ 4 ‹ A t−channel Å 0, ∞, √ z, − 1 √ z ã = g 2 π 8m 2 (m/2) β sin πβ/2 δ(z −z)( √ z) −β . (3.45) We hence see that the tree-level contribution to the eikonal expansion (3.22) agrees with the t-channel massive scalar exchange celestial amplitude as it should. Generalization to spinning exchanges In this section we generalize the celestial eikonal formula (3.22) to the case where the exchanges have arbitrary spin j. Spinning propagators G µ 1 ...µ j ν 1 ...ν j e (x,x) couple to the external lines via derivative interactions. As argued in section 3.1, in the eikonal limit external propagators are approximated by (3.17) and (3.18). This implies that, in analogy to the derivation in [54], the dominant contribution from (celestial) spinning propagators in the eikonal limit is G e (x i ,x σ(i) ) = (−2) j P 1 µ 1 · · · P 1 µ j G µ 1 ...µ j ν 1 ...ν j e (x i ,x σ(i) )P 2 ν 1 · · · P 2 ν j ,(3.46)with 11 G µ 1 ...µ j ν 1 ...ν j e (x,x) η (µ 1 ν 1 · · · η µ j ν j ) G e (x,x). (3.47) Here the indices µ, ν are separately symmetrized, G e (x,x) is the scalar propagator given in (3.19) and we defined the celestial massless momentum operators P 1 µ and P 2 µ acting on external particles 1 and 2 [17] P i µ = −(q i ) µ e ∂ ∆ i , i = 1, 2. (3.48) One can therefore follow through the same derivation in appendix B with the simple replacement G e (x i ,x σ(i) ) → G e (x i ,x σ(i) ) (−2P 1 · P 2 ) j G e (x i ,x σ(i) ). 
(3.49)

Recalling that the eikonal kinematics are such that q̂_1·q̂_2 ≈ -2, the final result is of the same form as (3.22) with χ̂ → χ̂_j, where

χ̂_j = (g^2/2) (4 e^{∂_{∆_1}} e^{∂_{∆_2}})^{j-1} G_⊥(x_⊥, x̃_⊥). (3.50)

For j = 0, we recover precisely (3.22). In the remainder of this paper, we will focus on the formula for graviton exchanges, namely j = 2, in which case g^2 = 8πG. We will see that the celestial eikonal exponentiation of graviton exchanges is related to the celestial two-point function of a particle in a shockwave background. In particular, we will identify the source in the CCFT that relates the two to leading order in perturbation theory. Interestingly, this relation is analogous to the one in AdS/CFT and will be shown in section 4.4 to be directly recovered in a flat space limit of the AdS result.

Celestial scattering in shockwave background

In this section we study the celestial amplitude describing the propagation of a scalar field in the presence of a shock h_{--}(x^-, x_⊥) = δ(x^-) h(x_⊥). We compare the leading term in an expansion of this two-point function in powers of h with the leading connected contribution to the eikonal celestial amplitude involving a spin 2 exchange computed in section 3 and find perfect agreement. Moreover, we show that this formula arises as the flat-space limit of the scalar two-point function in the presence of a shock in AdS_4. This establishes a relation between celestial propagation in a shockwave background and the flat space limit of four-point functions in CFT_3 with operators inserted in small time windows around future and past boundary spheres.

Review: scalar field in shockwave background

We consider the shockwave geometry

ds^2 = -dx^- dx^+ + ds_⊥^2 + h(x_⊥) δ(x^-) (dx^-)^2 (4.1)

sourced by a stress tensor whose only non-vanishing component is

T_{--} = δ(x^-) T(x_⊥), (4.2)

so that Einstein's equations reduce to

∂_⊥^2 h(x_⊥) = -(κ^2/2) T(x_⊥), (4.3)

where κ^2 = 32πG.^{12}
On the other hand, the propagation of a scalar field in the background (4.1) is governed by the wave equation

□_shock φ(x) = 0 (4.4)

which reduces to

-4∂_-∂_+ φ - 4δ(x^-) h(x_⊥) ∂_+^2 φ + ∂_⊥^2 φ = 0. (4.5)

In a neighborhood of x^- = 0, the transverse part can be neglected and (4.5) simplifies to

∂_+ ∂_- φ = -h(x_⊥) δ(x^-) ∂_+^2 φ. (4.6)

Taking a Fourier transform of both sides with respect to x^+ and integrating by parts, we find

∂_- φ̃(x^-, k, x_⊥) = -ik h(x_⊥) δ(x^-) φ̃(x^-, k, x_⊥), (4.7)

where we defined the Fourier transform of φ with respect to x^+

φ̃(x^-, k, x_⊥) ≡ ∫_{-∞}^∞ dx^+ φ(x^-, x^+, x_⊥) e^{-ikx^+}. (4.8)

The solution is obtained by integrating (4.7) over x^- with x^- ∈ [-ε, ε] for infinitesimal ε > 0. One finds that the scalar modes before and after the shock are simply related by a phase shift

φ̃(ε, k, x_⊥) = φ̃(-ε, k, x_⊥) e^{-ikh(x_⊥)}. (4.9)

Equivalently, upon inverting the Fourier transform we find the matching condition

φ(ε, x^+, x_⊥) = ∫_{-∞}^∞ (dk/2π) φ̃(-ε, k, x_⊥) e^{-ikh(x_⊥) + ikx^+} = φ(-ε, x^+ - h(x_⊥), x_⊥). (4.10)

We hence recover the well known result [58] that upon crossing a shockwave, probe particles acquire a time shift ∆x^+ = h(x_⊥).

Celestial shock two-point function

Equipped with this result, it can be shown (see appendix D) that the scalar propagator in the background of the shock (4.1) takes the form

A_shock(p_2, p_4) = 4π p_4^- δ(p_4^- + p_2^-) ∫ d^2x_⊥ e^{i(p_{4,⊥} + p_{2,⊥})·x_⊥} e^{i h(x_⊥) p_2^-/2}. (4.11)

To express this in a conformal primary basis, we parameterize p_i as in (2.3), (3.8) in which case

p_i^- = 2η_i ω_i, p_{i,⊥} = η_i ω_i (z_i + z̄_i, -i(z_i - z̄_i)) ≡ η_i ω_i q_{i,⊥} (4.12)

^{12} Our conventions follow from the Einstein-Hilbert action coupled to matter S_{g+m} = ∫ d^4x √(-g) [(2/κ^2) R + L_M].

and the momentum space amplitude (4.11) becomes

A_shock(p_2, p_4) = 4π ω_4 δ(ω_4 - ω_2) ∫ d^2x_⊥ e^{i(ω_4 q_{4,⊥} - ω_2 q_{2,⊥})·x_⊥} e^{-iω_2 h(x_⊥)}.
(4.13) The celestial propagator is then found by evaluating Mellin transforms with respect to ω 2 and ω 4 , A shock (∆ 2 , z 2 ,z 2 ; ∆ 4 , z 4 ,z 4 ) = ∞ 0 dω 2 ω ∆ 2 −1 2 ∞ 0 dω 4 ω ∆ 4 −1 4 A shock (p 2 , p 4 ). (4.14) One of the Mellin transforms is easily computed due to the delta function in energy and the remaining Mellin integral reduces to the standard Mellin transform of an exponential, namely A shock (∆ 2 , z 2 ,z 2 ; ∆ 4 , z 4 ,z 4 ) = 4π ∞ 0 dω 2 ω ∆ 2 +∆ 4 −1 2 d 2 x ⊥ e −iω 2[ q 24,⊥ ·x ⊥ +h(x ⊥ )] = 4π d 2 x ⊥ i ∆ 2 +∆ 4 Γ(∆ 2 + ∆ 4 ) [−q 24,⊥ · x ⊥ − h(x ⊥ ) + i ] ∆ 2 +∆ 4 . (4.15) This formula is remarkably similar to its counterpart in AdS 4 [59] O ∆ (p 2 )O ∆ (p 4 ) shock = C ∆ H 2 d 2 x ⊥ Γ(2∆) (2q · x ⊥ − h(x ⊥ ) + i ) 2∆ ,(4.16) where p 2 = −(0, 1, 0), p 4 = (q 2 , 1, q) 13 are embedding space (here R 1,1 × R 1,2 ) coordinates, h(x ⊥ ) is a solution to the AdS counterpart of (4.3) and C ∆ is a normalization constant given by C ∆ ≡ 1 π 2 R 2(∆−1) Γ(∆ − 1 2 ) 2 . (4.17) In section 4.4 we explain how it can be obtained from a flat space limit. Before that, we clarify the relation between (4.15) and the celestial amplitude that resums the eikonal spin 2 exchanges. Relation to eikonal amplitude The momentum space scalar propagator (4.11) reproduces the plane wave basis four-point eikonal amplitude of massless scalars interacting by graviton exchange, given an appropriate choice for the shockwave source [58]. In this section we identify the shockwave source in the CCFT following a similar procedure to that of [59] in the AdS context. 
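The phase shift (4.9) behind these formulas can be checked by integrating (4.7) across a smeared shock: with δ(x^-) replaced by a narrow normalized bump g(x^-), the before/after ratio of φ̃ is exp(-ikh ∫g dx^-) = e^{-ikh}, independently of the smearing. A sketch, where the Gaussian profile, its width and the parameter values are our own illustrative choices:

```python
# Integrate (4.7), ∂_- φ~ = -i k h(x_⊥) δ(x^-) φ~, across a smeared shock
# and compare φ~(after)/φ~(before) with the phase shift e^{-i k h} of (4.9).
# δ(x^-) is modeled by a narrow normalized Gaussian g(x^-).
import cmath, math

k, h = 2.3, 0.8            # illustrative momentum and shock profile value h(x_⊥)
w = 0.01                   # shock width
g = lambda xm: math.exp(-xm**2/(2*w**2))/(w*math.sqrt(2*math.pi))

# d(log φ~)/dx^- = -i k h g(x^-): accumulate ∫ g dx^- by the midpoint rule
N, a, b = 40000, -0.5, 0.5
dx = (b - a)/N
mass = sum(g(a + (j + 0.5)*dx) for j in range(N))*dx   # ≈ 1 (normalization)
ratio = cmath.exp(-1j*k*h*mass)
print(abs(ratio - cmath.exp(-1j*k*h)))   # ~ 0, independent of the smearing
```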
To this end, we consider the leading term in the expansion of the celestial eikonal amplitude for graviton exchange, namely

Ã^{j=2}_1 = 8π^2 iκ^2 ∫ d^2x_⊥ d^2x̃_⊥ G^{m=0}_⊥(x_⊥, x̃_⊥) [i^{∆_1+∆_3+1} Γ(∆_1+∆_3+1)/(-q_{13,⊥}·x_⊥)^{∆_1+∆_3+1}] [i^{∆_2+∆_4+1} Γ(∆_2+∆_4+1)/(-q_{24,⊥}·x̃_⊥)^{∆_2+∆_4+1}]. (4.18)

On the other hand, expanding (4.15) to linear order in h(x_⊥), we find

A^1_shock = -4πi ∫ d^2x_⊥ [i^{∆_2+∆_4+1} Γ(∆_2+∆_4+1)/(-q_{24,⊥}·x_⊥ + iε)^{∆_2+∆_4+1}] h(x_⊥). (4.19)

Upon choosing

h(x_⊥) = -2πκ^2 ∫ d^2x̃_⊥ G^{m=0}_⊥(x_⊥, x̃_⊥) [i^{∆_1+∆_3+1} Γ(∆_1+∆_3+1)/(-q_{13,⊥}·x̃_⊥)^{∆_1+∆_3+1}], (4.20)

with

T = T(x̃_⊥) = -4π i^{∆_1+∆_3+1} Γ(∆_1+∆_3+1)/(-q_{13,⊥}·x̃_⊥)^{∆_1+∆_3+1}, (4.21)

we see that (4.19) reproduces (4.18). Note that while in a momentum space basis the energy-momentum tensor carries a scale associated with the energy of the source,^{14} (4.21) provides a definition of the source intrinsic to the CCFT. Up to normalization, (4.21) is analogous to the CFT_3 source found in [59]. In the next section we clarify this connection by showing that the celestial formulas can be obtained directly as flat space limit of CFT_3 correlators with particular kinematics.

Flat space limit of shockwave two-point function in AdS_4

The symmetries of celestial amplitudes inherited from 4D Lorentz invariance are the same as the symmetries that preserve codimension-1 slices of CFT_3. Since in the flat space limit, CFT_3 operators are known to localize on such global time slices [68,70,71,75], it is natural to expect a direct relation between CFT_3 correlation functions in the flat space limit and celestial amplitudes. In this section we illustrate how this works in the case of the shockwave two-point function (4.15).
Specifically, after reviewing the calculation of the shockwave two-point function in AdS_4, we show that for particular kinematics, in the limit of large AdS radius R, this two-point function reduces to the celestial propagator in a shockwave background (4.15).

Consider the embedding of a 4-dimensional hyperboloid

-(X^0)^2 - (X^1)^2 + ∑_{i=2}^4 (X^i)^2 = -R^2 (4.22)

in R^{1,1} × R^{1,2} with metric

ds^2 = -dX^+ dX^- - (dX^1)^2 + ∑_{i=2}^3 (dX^i)^2 (4.23)

and where

X^± = X^0 ± X^4 (4.24)

are lightcone coordinates in R^{1,1}. Parameterizing

X^+ = -R (cos τ - sin ρ Ω_4)/cos ρ, X^- = -R (cos τ + sin ρ Ω_4)/cos ρ, X^1 = -R sin τ/cos ρ, X^i = R tan ρ Ω_i, i = 2, 3, (4.25)

the induced metric is that of global AdS_4,

ds^2 = (R^2/cos^2 ρ) (-dτ^2 + dρ^2 + sin^2 ρ dΩ^2_{S^2}). (4.26)

The (τ, ρ) coordinates cover the ranges ρ ∈ [0, π/2], τ ∈ [-π, π] and the boundary is approached as ρ → π/2. Up to conformal rescaling, points on the boundary are parameterized by

p = lim_{ρ→π/2} (1/2) R^{-1} cos ρ X (4.27)

with p^2 = 0. We denote AdS_4 bulk points by X = (X^+, X^-, X^i) and boundary points by p.

Following [59] we consider the AdS_4 shock geometry

ds^2_shock = ds^2_{AdS_4} + dX^- dX^- δ(X^-) h(X^i), (4.28)

where for X^- = 0,

-(X^1)^2 + ∑_{i=2}^3 (X^i)^2 = -R^2 (4.29)

and the profile solves

(□_{H_2} - 2/R^2) h(x_⊥) = -(κ^2/2) T̃(x_⊥). (4.30)

Note also that the shock front is chosen to lie along the Poincaré horizon as illustrated in Figure 3. The two-point function in this shockwave background takes the form [59]

⟨O_∆(p_2) O_∆(p_4)⟩_shock = C_∆ ∫_{H_2} d^2x_⊥ Γ(2∆) / (2 ∑_{i=1}^3 q_i X^i(x_⊥) - h(x_⊥))^{2∆}, (4.31)

with C_∆ given in (4.17) and, without loss of generality, the boundary operators are inserted at

p_2 = -(0, 1, 0), p_4 = (q^2, 1, q). (4.32)

The relative sign is chosen such that the operators are inserted on opposite sides of the shock, otherwise the two point function can be shown to take the same form as in empty AdS.

Figure 3: Left: Poincaré patch of AdS_4 with a shockwave along the horizon at X^- = 0.
The boundary is approached as ρ → π/2 and Ω parameterize S^2 constant τ boundary slices. Right: Zooming into a bulk flat space region of AdS around the shock at ρ = 0. As R → ∞, the AdS_4 shockwave two-point function with p_2, p_4 inserted around τ_2 = -π/2 and τ_4 = π/2 respectively becomes the celestial shockwave two-point function.

We would like to zoom in around the flat space region around τ = π/2, ρ = 0. To this end we consider the shifted coordinate

τ' = τ - π/2 (4.33)

and take the limit R → ∞ with

τ' = t/R, ρ = r/R (4.34)

and (t, r) fixed, as illustrated in Figure 3. It is straightforward to show that in this limit

X^+ → t + rΩ_4 + O(R^{-1}) = x^+, X^- → t - rΩ_4 + O(R^{-1}) = x^-, X^1 → -R + O(1), X^i → rΩ_i = x^i_⊥, i = 2, 3, (4.35)

and hence the shockwave metric becomes that of a planar shock in Minkowski space

ds^2 = -dx^+ dx^- + ds_⊥^2 + (dx^-)^2 δ(x^-) h(x_⊥) (4.36)

with

∂_⊥^2 h(x_⊥) = -(κ^2/2) T(x_⊥). (4.37)

Finally, parameterizing

q = (-cos τ_q, Ω̃_2, Ω̃_3), (4.38)

where τ_q ∈ [0, π], we find

lim_{R→∞} ⟨O_∆(p_2) O_∆(p_4)⟩_shock = C_∆ ∫ d^2x_⊥ Γ(2∆) / (-R cos τ_q + x_⊥·Ω̃ - h(x_⊥))^{2∆}. (4.39)

Unless τ_q = π/2 + O(R^{-1}), we see that (4.39) is suppressed^{15} by a factor R^{-2∆} and the amplitude will vanish. This is to be expected as otherwise the point in the bulk at which O interacts with the shockwave will be outside the flat space region we are zooming into (see Figure 3). It is also consistent with the HKLL prescription that relates bulk scattering states in the flat space limit to boundary operators localized in windows of width ∆τ ∼ R^{-1} around τ = ±π/2 [69,70]. It follows that for this configuration, the shockwave two-point function reduces to

lim_{R→∞} ⟨O_∆(p_2) O_∆(p_4)⟩_shock = C_∆ ∫ d^2x_⊥ Γ(2∆) / (-x_⊥·q_{24,⊥} - h(x_⊥))^{2∆}, (4.40)

which precisely agrees with the celestial result (4.15).
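Both the embedding (4.25) and its flat space limit (4.35) can be verified numerically; the point on S^2 and the parameter values below are arbitrary illustrative choices:

```python
# (i) Check that (4.25) satisfies the hyperboloid constraint (4.22),
#     -X^+ X^- - (X^1)^2 + (X^2)^2 + (X^3)^2 = -R^2.
# (ii) Check the flat space limit (4.35): with τ = π/2 + t/R, ρ = r/R
#     and R large, X^± → t ± r Ω4, X^1 → -R, X^i → r Ω_i (i = 2, 3).
import math

def embed(R, tau, rho, O2, O3, O4):
    Xp = -R*(math.cos(tau) - math.sin(rho)*O4)/math.cos(rho)
    Xm = -R*(math.cos(tau) + math.sin(rho)*O4)/math.cos(rho)
    X1 = -R*math.sin(tau)/math.cos(rho)
    return Xp, Xm, X1, R*math.tan(rho)*O2, R*math.tan(rho)*O3

# arbitrary unit vector on S^2
th, ph = 1.1, 2.4
O2, O3, O4 = math.sin(th)*math.cos(ph), math.sin(th)*math.sin(ph), math.cos(th)

# (i) hyperboloid constraint at generic (τ, ρ)
R0 = 2.5
Xp, Xm, X1, X2, X3 = embed(R0, 0.7, 0.9, O2, O3, O4)
resid = -Xp*Xm - X1**2 + X2**2 + X3**2 + R0**2
print(resid)         # ~ 0

# (ii) flat space limit around τ = π/2, ρ = 0
Rbig, t, r = 1.0e6, 1.3, 0.8
Xp, Xm, X1, X2, X3 = embed(Rbig, math.pi/2 + t/Rbig, r/Rbig, O2, O3, O4)
errs = (Xp - (t + r*O4), Xm - (t - r*O4), X1 + Rbig)
print(errs)          # all O(1/R) small
```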
Placing O_∆(p_4) anywhere else in the ∆τ = O(R^{-1}) window results in a constant shift that can be absorbed in the definition of h.^{16} We conclude that

lim_{R→∞} ⟨O^-_∆(p_2) O^+_∆(p_4)⟩_shock = [R^{2(∆-1)}/(4π^3)] i^{2∆} Γ(∆ - 1/2)^{-2} A_shock(∆, q̂_2; ∆, q̂_4), (4.41)

where the + (-) labels on the LHS indicate that the CFT_3 boundary operators are to be inserted at global times τ = π/2 + τ_0 (τ = -π/2 + τ_0) provided that the bulk flat space region of interest lies at τ_0. It would be interesting to generalize this analysis for the scattering of arbitrary spin particles in spherical shock backgrounds (see [78] in the massless background limit for a recent example). It would also be interesting to study the flat space limit of scattering in AdS black hole backgrounds and in particular its implications for signatures of chaos in CCFT [86,87].

Celestial amplitudes from flat space limits of Witten diagrams

The discussion in the previous section is a particular instance of a general result, namely that celestial amplitudes arise naturally as the leading term in a large radius expansion of AdS_4/CFT_3 Witten diagrams. More generally, in this section we show that scalar Witten diagrams in AdS_{d+1}/CFT_d reduce to CCFT_{d-1} amplitudes in the flat space limit. We restrict to non-derivative interactions for simplicity. In establishing this correspondence we assume the following:

• The boundary CFT_d operators O_{∆_i}(p_i) are inserted on global time slices τ = ±π/2.
• The two spheres at τ = ±π/2 on the boundary of AdS are antipodally matched.^{17}

We start by studying the individual building blocks of AdS_{d+1} Witten diagrams - external lines, vertices and internal lines - and their expansion in a large R limit. We will see that they map precisely to (d + 1)-dimensional flat space Feynman diagrams computed in a basis of external conformal primary wavefunctions, or equivalently, CCFT_{d-1} celestial amplitudes.
External lines

Let K_Δ(p, x) be the bulk-to-boundary propagator in the embedding space representation [88],¹⁸
$$K_{\Delta}(p, x) = \frac{C^{d}_{\Delta}}{(-2p\cdot x + i\epsilon)^{\Delta}} \tag{5.1}$$
and
$$C^{d}_{\Delta} \equiv \frac{\Gamma(\Delta)}{2\pi^{d/2}\,\Gamma(\Delta - \tfrac{d}{2} + 1)}\, R^{(d-1)/2 - \Delta}. \tag{5.2}$$
Parameterizing respectively bulk and boundary points x and p with (τ, ρ, Ω) and (τ_p, Ω_p) as in (4.25), (4.27), where Ω_p, Ω ∈ S^{d−1}, setting τ = t/R and ρ = r/R and expanding at large R, we find
$$K_{\Delta}(p, x) = C^{d}_{\Delta}\left[\frac{1}{\left(R\cos\tau_p + t\sin\tau_p - r\,\Omega_p\cdot\Omega + O(R^{-1}) + i\epsilon\right)^{\Delta}}\right]. \tag{5.3}$$
As in the shockwave analysis, we see that, assuming Δ ≥ 0, unless τ_p = ±π/2 the leading contribution to the bracket in (5.3) vanishes as R → ∞. On the other hand, choosing τ_p = π/2 we have
$$K_{\Delta}(p, x) = C^{d}_{\Delta}\left[\frac{1}{(-\hat q\cdot x + i\epsilon)^{\Delta}} + O(R^{-1})\right], \tag{5.4}$$
where x = (t, rΩ) ∈ R^{1,d} is the point in flat space and q̂ = (1, Ω_p) ∈ R^{1,d} is a null vector in the direction Ω_p. As a result, up to normalization, K_Δ(p, x) maps (up to a phase) under R → ∞ to an outgoing conformal primary wavefunction when τ_p = π/2. Likewise, if we choose τ_p = −π/2,
$$K_{\Delta}(p, x) = C^{d}_{\Delta}\left[\frac{1}{(\hat q\cdot x + i\epsilon)^{\Delta}} + O(R^{-1})\right], \tag{5.5}$$
¹⁷ It would be interesting to understand the physical meaning of such a matching condition in AdS, perhaps by studying asymptotic field configurations as the boundary is approached along different null directions. We thank Laurent Freidel for a discussion on this point.
¹⁸ This representation of K_Δ(p, x) is valid only in particular Poincaré patches [54]. It is sufficient in our case since we restrict to configurations with boundary insertions at τ = ±π/2 and bulk points close to the center of AdS.
Moreover, the antipodal identification is needed to ensure Lorentz covariance of the resulting conformal primary wavefunctions. Note that placing the operators at other global times τ p = ± π 2 + ∆τ p with ∆τ p ∝ R −1 leads, in the flat space limit, to conformal primary wavefunctions that diagonalize boosts with respect to different origins in spacetime. Vertices For the particular case of non-derivative coupling we are considering, AdS d+1 vertices take the form ig AdS d+1 d d+1 x. (5.6) Writing the measure explicitly in global coordinates (τ, ρ, Ω), and transforming to τ = t/R and ρ = r/R, we have its large R expansion d d+1 x = d d+1 x + O(R −2 ). (5.7) Moreover since t = Rτ and r = Rρ, it follows that t ∈ (−∞, ∞) and r ∈ [0, ∞) in the flat space limit. Hence 8) and the rule for the vertex in AdS d+1 maps to the rule for the vertex in R 1,d . ig AdS d+1 d d+1 x = ig R 1,d d d+1 x + O(R −2 ),(5. Internal lines To discuss the internal lines we recall that the AdS d+1 bulk-to-bulk propagator of dimension ∆ obeys the equation [59] Å 2 AdS d+1 − ∆(∆ − d) R 2 ã Π ∆ (x,x) = iδ AdS d+1 (x,x). (5.9) On the one hand the Laplacian is 2 AdS d+1 = − cos 2 ρ R 2 ∂ 2 τ + cos d+1 ρ sin d−1 ρ ∂ ρ Ç sin d−1 ρ cos d+1 ρ √ γ cos 2 ρ R 2 ∂ ρ å + cos 2 ρ R 2 sin 2 ρ 1 √ γ ∂ A √ γγ AB ∂ B = 2 R 1,d + O(R −2 ), (5.10) where γ is the round S d−1 metric and 2 R 1,d is the flat space Laplacian. On the other hand the delta function is δ AdS d+1 (x,x) = δ(τ −τ )δ(ρ −ρ)δ d−1 (Ω −Ω) −g AdS d+1 = δ R 1,d (x,x) + O(R −2 ),(5.11) where δ R 1,d (x,x) is the Minkowski space delta distribution. Altogether the large R expansion of the defining equation for the bulk-to-bulk propagator is ï 2 R 1,d + O(R −2 ) − ∆(∆ − d) R 2 ò Π ∆ (x,x) = iδ R 1,d (x,x) + O(R −2 ). (5.12) It follows that the AdS d+1 propagator has a large-R expansion Π ∆ (x,x) = G(x,x) + O(R −2 ),(5.13) where G(x,x) ought to obey (2 R 1,d − m 2 )G(x,x) = iδ R 1,d (x,x), m ≡ lim R→∞ ∆ R . 
(5.14) Therefore, we either recover massive exchanges when ∆ = O(R) or massless exchanges when ∆ = O(1). A final remark is that while equation (5.14) does not have a unique solution, the fact that Π ∆ (x,x) computes time-ordered two-point functions in AdS d+1 implies that its leading behavior G(x,x) also computes time-ordered two-point functions in R 1,d . This imposes one additional condition on (5.14) which singles out the Feynman propagator. Forming the diagrams Combining all of the ingredients, we find that none of the large-R corrections contribute at leading order. As a result, the leading term in a large R expansion of a Witten diagram reduces to the position space Feynman diagram for the same interaction in flat space with external wavefunctions taken to be conformal primaries. By the definition (2.8), this coincides with the corresponding celestial amplitude! We exemplify by considering a t-channel exchange Witten diagram O ∆ 1 (p 1 )O ∆ 2 (p 2 )O ∆ 3 (p 3 )O ∆ 4 (p 4 ) = (ig) 2 AdS d+1 d d+1 xd d+1 yΠ ∆ (x, y) K ∆ 1 (p 1 , x)K ∆ 3 (p 3 , x)K ∆ 2 (p 2 , y)K ∆ 4 (p 4 , y). (5.15) Taking p 1 and p 2 inserted at τ = − π 2 and p 3 and p 4 inserted at τ = π 2 , we find 17) where N ∆ i are given by K ∆ i (p i , x) = N d ∆ i ψ − ∆ i ,q i (x) + O(R −1 ) , i = 1, 2, (5.16) K ∆ i (p i , x) = N d ∆ i ψ + ∆ i ,q i (x) + O(R −1 ) , i = 3, 4,(5.N d ∆ i = C d ∆ i i ∆ i Γ(∆ i ) = R −(d−1)/2+∆ i 2π d/2 i ∆ i Γ(∆ i − d−1 2 ) . (5.18) Assuming further that the exchanged operator has ∆ = mR + O(1), then O − ∆ 1 (p 1 )O − ∆ 2 (p 2 )O + ∆ 3 (p 3 )O + ∆ 4 (p 4 ) = 4 i=1 N d ∆ i Å (ig) 2 R 1,d d d+1 xd d+1 yG e (x, y) × ψ − ∆ 1 (x)ψ − ∆ 2 (y)ψ + ∆ 3 (x)ψ + ∆ 4 (y) + O(R −1 ) ã , (5.19) and up to normalization the leading term in the large R expansion is the corresponding flat space Feynman diagram computed with position space Feynman rules and conformal primary external wavefunctions. 
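The three large-R limits used above, for external lines (5.3)-(5.4), for the vertex measure (5.7), and for the mass term in (5.14), can each be verified symbolically. In the sketch below, c stands in for r Ω_p·Ω, the dimension Δ = 2, the O(1) part of the exchanged dimension, and d = 3 are all illustrative choices, and the measure density R^{d+1} sin^{d−1}ρ / cos^{d+1}ρ is the standard √(−g) of global AdS_{d+1} (an input here, not derived):

```python
import sympy as sp

R, t, r, m = sp.symbols('R t r m', positive=True)
taup = sp.symbols('tau_p')
d = 3                    # AdS_4 / CFT_3 for concreteness (illustrative)
Delta = 2                # illustrative external dimension
c = sp.Rational(3, 10)   # stands in for r * Omega_p . Omega (illustrative)

# External lines, eq. (5.3): the bracket vanishes for generic tau_p,
# and reduces to the flat conformal-primary denominator at tau_p = pi/2
bracket = 1/(R*sp.cos(taup) + t*sp.sin(taup) - c)**Delta
generic = sp.limit(bracket.subs(taup, sp.pi/3), R, sp.oo)           # -> 0
flat = sp.simplify(bracket.subs(taup, sp.pi/2) - 1/(t - c)**Delta)  # -> 0

# Vertices, eq. (5.7): AdS radial measure density, including the
# Jacobian 1/R^2 from (tau, rho) = (t/R, r/R), tends to the flat r^{d-1}
density = R**(d + 1)*sp.sin(r/R)**(d - 1)/sp.cos(r/R)**(d + 1)/R**2
flat_density = sp.limit(density, R, sp.oo)                          # -> r**2

# Internal lines, eq. (5.14): with Delta = m R + O(1), the AdS mass
# term Delta(Delta - d)/R^2 tends to m^2
D_exch = m*R + 5
mass2 = sp.limit(D_exch*(D_exch - d)/R**2, R, sp.oo)                # -> m**2

print(generic, flat, flat_density, mass2)
```

Each check confirms that the O(R⁻¹) and O(R⁻²) corrections quoted in the text drop out of the leading term.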
More generally, in the flat space limit, CFT d correlators with operators inserted at τ i = ± π 2 + O(R −1 ) are related to CCFT d−1 amplitudes of in/out operators with the same dimensions, namely ‹ A(∆ i , z i ,z i ) = lim R→∞ 4 i=1 N d ∆ i −1 O − ∆ 1 (p 1 )O − ∆ 2 (p 2 )O + ∆ 3 (p 3 )O + ∆ 4 (p 4 ) . (5.20) Celestial amplitudes of operators with arbitrary dimensions (such as conformally soft ones) may then be obtained by analytic continuation. At the operator level, what we have shown is that a generic CFT d quasi-primary operator O ∆ (p) inserted on past/future global time slices S d−1 maps in the flat space limit to an incoming/outgoing celestial operator O ± ∆ ( z) in CCFT d−1 via O ± ∆ ( z) ≡ lim R→∞ (N d ∆ ) −1 O ± ∆ τ = ± π 2 , z ,(5.21) where the limit holds in the weak sense, O ± ∆ ( z) · · · = lim R→∞ (N d ∆ ) −1 O ± ∆ τ = ± π 2 , z · · · . (5.22) This prescription beautifully matches with the relation between two-point functions in a shock background found by explicit calculation in (4.41). Discussion In this paper we have studied the imprints of high-energy, eikonal physics in 4D asymptotically flat spacetimes on 2D CCFT. We first identified a new universal regime in CCFT of large net scaling dimension β and small cross ratio z in which massless 4-point celestial amplitudes are governed by a simple formula (3.22). This formula resums an infinity of massive scalar exchanges resulting in an operator-valued eikonal phase. On the one hand, the celestial eikonal phase is directly related to the flat space one upon trading the center of mass energy in the latter for an appropriate weight-shifting operator. On the other hand, the fact that it manifestly computes the scattering of particles in a conformal primary basis leads to similarities with the analog eikonal formula in AdS 4 . We generalized this formula to exchanges of arbitrary spin j. 
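The weight-shifting operators e^{−n∂_{Δᵢ}} entering the operator-valued eikonal phase act as finite shifts Δᵢ → Δᵢ − n. For polynomial dependence on the dimension this is just the (terminating) Taylor series of the shift, which a short sympy computation confirms; the cubic test function below is an arbitrary illustrative choice:

```python
import sympy as sp

D = sp.symbols('Delta', positive=True)
f = D**3 + 2*D   # arbitrary polynomial test function of the dimension

# e^{-d/dDelta} f(Delta) = sum_k (-1)^k f^{(k)}(Delta)/k! = f(Delta - 1);
# the series terminates for a polynomial, so range(5) is exact here
shifted = sum((-1)**k/sp.factorial(k) * sp.diff(f, D, k) for k in range(5))
print(sp.simplify(shifted - f.subs(D, D - 1)))  # -> 0
```

This is the sense in which the substitution s → ŝ with ŝ built from e^{∂_{Δ₁}+∂_{Δ₂}} trades the center of mass energy for a shift of the external dimensions.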
The expected relation between the j = 2 result and two-point functions in shockwave backgrounds motivated us to compute the associated celestial two-point function of scalars. Again, our result (4.15) is strikingly similar to the two-point function in the background of a shock in AdS₄. In analogy to the AdS case [59], we identified the stress tensor source in the CCFT that relates this formula to the one for a single graviton exchange computed through the celestial eikonal amplitude. Finally, we showed that celestial two-point functions in a shock background can be simply recovered from a flat space limit of the propagation of a particle in the background of a shock in AdS₄. This calculation suggests that celestial amplitudes can be directly recovered in the flat space limit from CFT₃ correlators with particular kinematics. Indeed, such a relation is suggested by the flat space limit of the HKLL prescription [69, 70] that relates bulk scattering states in flat space to boundary operators on particular time slices of the boundary CFT. However, working with celestial amplitudes instead of momentum space amplitudes allowed us to bypass the construction of bulk energy eigenstates via HKLL and directly relate CFT observables to flat space celestial observables. Our work suggests that AdS/CFT holography may provide more insights into flat space holography than one would naively have thought. There are many aspects of these intriguing connections which we believe deserve further study. At the level of the global symmetries it is natural to expect CCFT-like observables to arise from a flat space limit of CFT₃ observables: indeed, the non-trivial boundary observables in a large AdS₄ radius limit reside on codimension-1 slices of the boundary, whose global conformal group SO(3,1) ⊂ SO(3,2) coincides with the 4D Lorentz group.
The restriction of the SO(d + 1, 1) conformal group to SO(d, 1) results in a decomposition of (d+1)-dimensional blocks into an infinite sum over d-dimensional ones [89]. These decompositions bear some similarities to the conformal block decompositions of massless scalar celestial amplitudes [85], perhaps suggesting that such celestial amplitudes also arise from a flat space limit of CFT₃ 4-point correlators. On the other hand, at face value celestial CFTs are governed by a much larger symmetry group arising from towers of soft theorems in the 4D bulk [35-37, 90-92]. It would be extremely interesting to understand the nature of a CFT₃ that could accommodate such a large amount of symmetry in the flat space limit. It seems likely that a boundary perspective will shed some light on some of the challenges encountered in attempts at recovering BMS₄ symmetries from a flat space limit of AdS₄ [93, 94]. A related issue concerns the matching conditions implied by the soft theorems in 4D AFS, which our analysis suggests should link past and future global time slices in CFT₃. It therefore seems important to understand whether such an infinity of matching conditions can indeed exist in AdS/CFT. The link between the eikonal phase, time delays of probes in shockwave backgrounds and causality has been extensively studied in both flat space [95, 96] and AdS/CFT [54, 59, 97-100]. In particular, it is known that bulk causality places strong constraints on the allowed low energy effective field theories of gravity. For example, higher derivative couplings can modify graviton three-point couplings and lead to time advances in the absence of an additional infinite tower of massive higher spin states [95]. It would be interesting to understand how causality in 4D AFS emerges from 2D CCFT. A first step in this direction would be to generalize the analysis herein to the case of 4-graviton scattering in theories of gravity with higher derivatives.
One could then study whether the tower of celestial soft symmetries [35,36] (or their higher derivative corrected versions [33,92]) place any constraints on the form and in particular the sign of the eikonal phase. The eikonal phase seems to be closely related to the imaginary part of Weinberg's exponentiated infrared divergences arising from exchanges of low energy photons/gravitons [56]. While the real part of this phase has been extensively studied and linked to the existence of asymptotic symmetries in 4D AFS [39-43, 46, 47, 82, 101-106], the imaginary part appears to have received much less attention. 19 In addition to its relation to causality, this phase appears to be important when scattering conformal primary states or superpositions of all energy eigenstates as opposed to momentum eigenstates as it can lead to interference effects due to its dependence on external energies. We hope to address some of these issues in the near future. Upon integration by parts, d 4 x Å −4∂ − ∂ + f (x 2 ) + ∂ 2 ⊥ f (x − − q i,⊥ · x ⊥ ) ∆ i + ∆ i 4∂ + f (x 2 ) + 2q i,⊥ · ∂ ⊥ f (x − − q i,⊥ · x ⊥ ) ∆ i +1 ã G ∆ i (x, x 0 ;q i ) − 2i d 4 x f (x 2 ) (x − − q i,⊥ · x ⊥ ) ∆ i δ(x − − x − 0 )δ(x + − x + 0 )δ (2) (x ⊥ − x ⊥,0 ) = 0. (A.3) For ∆ i 1, and |q i,⊥ | = 2 √ q i 1, the only term that survives in the first line is d 4 x∆ i 4∂ + f (x 2 ) (x − − q i,⊥ · x ⊥ ) ∆ i +1 G ∆ i (x, x 0 ;q i ) − 2i d 4 x f (x 2 ) (x − − q i,⊥ · x ⊥ ) ∆ i δ(x − − x − 0 )δ(x + − x + 0 )δ (2) (x ⊥ − x ⊥,0 ) = 0 (A.4) and so − 4∆ i (x − − q i,⊥ · x ⊥ ) −1 ∂ + G ∆ i (x, x 0 ;q i ) = 2iδ(x − − x − 0 )δ(x + − x + 0 )δ (2) (x ⊥ − x ⊥,0 ), i = 1, 3. 
(A.5) Repeating the same calculation with wavefunctions as in (3.16) we find that the propagators for the external lines can therefore be approximated in the celestial eikonal limit by G ∆ i (x, x 0 ;q i ) = − i(x − − q i,⊥ · x ⊥ ) 2∆ i δ(x − − x − 0 )Θ(x + − x + 0 )δ (2) (x ⊥ − x ⊥,0 ), i = 1, 3, G ∆ i (x, x 0 ;q i ) = − i(x + − q i,⊥ · x ⊥ ) 2∆ i Θ(x − − x − 0 )δ(x + − x + 0 )δ (2) (x ⊥ − x ⊥,0 ), i = 2, 4, (A.6) as promised. B Eikonal amplitude in CCFT Applying position space Feynman rules to the ladder diagrams with n exchanges we have ‹ A n = (ig) 2n d 4 x 1 · · · d 4 x n d 4x 1 · · · d 4x n ϕ ∆ 3 (x n ;q 3 )G(x n − x n−1 ) · · · G(x 2 − x 1 )ϕ ∆ 1 (x 1 ; −q 1 ) × ϕ ∆ 4 (x n ;q 4 )G(x n −x n−1 ) · · · G(x 2 −x 1 )ϕ ∆ 2 (x 1 ; −q 2 ) × σ∈Sn G e (x 1 −x σ(1) ) · · · G e (x n −x σ(n) ). (B.1) The propagators G(x k − x k−1 ) connecting particles 1 and 3 and G(x k −x k−1 ) connecting particles 2 and 4 can respectively be approximated by (A.6). In this approximation, writing which at large ∆ 1 , ∆ 2 can be approximated by ‹ A n = 4 dx − dx + d 2 x ⊥ d 2x ⊥ (−i) ∆ 1 +1−n Γ(∆ 1 + 1 − n) (x − − q 1,⊥ · x ⊥ − i ) ∆ 1 +1−n (−i) ∆ 2 +1−n Γ(∆ 2 + 1 − n) (x + − q 2,⊥ ·x ⊥ − i ) ∆ 2 +1−n × i ∆ 3 Γ(∆ 3 ) (x − − q 3,⊥ · x ⊥ + i ) ∆ 3 i ∆ 4 Γ(∆ 4 ) (x + − q 4,⊥ ·x ⊥ + i ) ∆ 4 1 n! Å ig 2 8 G ⊥ (x ⊥ ,x ⊥ ) ã n , since (∆ i ) n = ∆ i (∆ i − 1) · · · (∆ i − n + 1) ∆ n i , i = 1, 2. (B.7) The shifts in n can then be written in terms of weight-shifting operators e −n∂ ∆ 1 , e −n∂ ∆ 2 and therefore the connected eikonal celestial amplitude is ‹ A eik. = ∞ n=1 ‹ A n = 4 dx − dx + d 2 x ⊥ d 2x ⊥ e iχ − 1 (−i) ∆ 1 +1 Γ(∆ 1 + 1) (x − − q 1,⊥ · x ⊥ − i ) ∆ 1 +1 × (−i) ∆ 2 +1 Γ(∆ 2 + 1) (x + − q 2,⊥ ·x ⊥ − i ) ∆ 2 +1 i ∆ 3 Γ(∆ 3 ) (x − − q 3,⊥ · x ⊥ + i ) ∆ 3 i ∆ 4 Γ(∆ 4 ) (x + − q 4,⊥ ·x ⊥ + i ) ∆ 4 , (B.8) where the eikonal phase is now an operator χ ≡ ig 2 8 e −∂ ∆ 1 −∂ ∆ 2 G ⊥ (x ⊥ ,x ⊥ ). 
(B.9) Note that (B.9) is the same as the momentum space formula with the center of mass energy promoted to an operator s →ŝ 4e ∂ ∆ 1 +∂ ∆ 2 . Sinceχ is independent of x − ,x + we can further evaluate these integrals upon shifting x − → x − + q 1,⊥ · x ⊥ andx + →x + + q 2,⊥ ·x ⊥ and then rescaling x − → (q 13,⊥ · x ⊥ )x − and x + → (q 24,⊥ ·x ⊥ )x + . The resulting integrals can be evaluated in terms of the standard identity [107] yielding ‹ A eik. = 4 × (2π) 2 d 2 x ⊥ d 2x ⊥ e iχ − 1 i ∆ 1 +∆ 2 i ∆ 3 +∆ 4 Γ(∆ 1 + ∆ 3 )Γ(∆ 2 + ∆ 4 ) (−q 13,⊥ · x ⊥ ) ∆ 1 +∆ 3 (−q 24,⊥ ·x ⊥ ) ∆ 2 +∆ 4 . (B.11) C t-channel exchange In this section we evaluate the tree-level contribution to the eikonal celestial amplitude. We start with (C.4) and compute the integral over k ⊥ , ‹ A 1 = (2π) 4 ig 2 2 ∞ 0 dω 1 ω 1 ω ∆ 1 +∆ 3 −1 1 ∞ 0 dω 2 ω 2 ω ∆ 2 +∆ 4 −1 2 1 (ω 1 q 13,⊥ ) 2 + m 2 δ (2) (ω 1 q 13,⊥ + ω 2 q 24,⊥ ) (C.1) The integral over ω 2 can be done by first noting that given two two-dimensional vectors v = (v 1 , v 2 ) and w = (w 1 , w 2 ), δ (2) (ξv + ξ w) = δ(ξv 1 + ξ w 1 )δ(ξv 2 + ξ w 2 ) = 1 ξ δ Å ξ + ξ v 1 w 1 ã δ(w 1 v 2 − v 1 w 2 ). (C.2) As a result, δ (2) (ω 1 q 13,⊥ + ω 2 q 24,⊥ ) = 1 ω 1 δ C.1 Eikonal kinematics By studying the small scattering angle kinematics in a center of mass frame one finds that the momenta of the particles can be written as This motivates us to defineq i = (1 + q i , q i,⊥ , 1 − q i ), i = 1, 3, (C.10) q i = (1 + q i , q i,⊥ , −1 + q i ), i = 2, 4. (C. 11) where H + 0 is the zero mass shell and dΩ(q) = d 3 q (2π) 3 2q 0 is the Lorentz invariant measure. Recalling that in/out fields are defined by To evaluate α(p, q) consider v in q (x) = e −iq·x when x − < 0. Using the boundary condition relating the solution at x − < 0 and x − > 0 one finds that v in q ( , x + , x ⊥ ) = v in q (− , x + − h(x ⊥ ), x ⊥ ) = H + 0 dΩ(p) Å 4πp − δ(p − − q − ) d 2 x ⊥ e −i h(x ⊥ ) 2 q − e ix ⊥ ·(p ⊥ −q ⊥ ) ã e i x + 2 p − e −ip ⊥ ·x ⊥ . 
(D.7)

Comparison with the definition of the Bogoliubov coefficients shows that β(p, q) = 0 and allows one to read off the propagator:
$$A_{\rm shock}(p_1, p_2) = 4\pi p_2^{-}\,\delta(p_2^{-} - p_1^{-}) \int d^2x_{\perp}\, e^{i(p_{2,\perp} - p_{1,\perp})\cdot x_{\perp}}\, e^{-i\frac{h(x_{\perp})}{2}\, p_1^{-}}. \tag{D.8}$$

Figure 2: Eikonal kinematics in which the operators associated with particles 1, 3 and 2, 4 are respectively inserted around antipodal points on the celestial sphere.

(4.23) becomes the AdS₄ metric in global coordinates

$$\cdots \frac{1}{(\omega_1 q_{13,\perp})^2 + m^2}. \tag{C.4}$$

Relabeling β = Σᵢ Δᵢ − 4 and changing variables by rescaling ω₁ → ω₁/|q₁₃,⊥|, we find

(… ẑ, 0, −1). (C.9)

In the particular case β(p, q) = 0, in which case the in/out vacua coincide, this immediately allows one to show that A(p₁, p₂) = α(p₂, p₁). (D.6)

For exchanges of spin j = 1.
See also [78] for a different kind of relation between celestial amplitudes and AdS₃ Witten diagrams in the particular case of Yang-Mills CCFT with a marginal deformation involving a chirally coupled massive scalar.
Here p⁺ = p⁰ + p³, p⁻ = p⁰ − p³.
Such integrals are formally valid for Δᵢ ∈ 1 + iλ with λ ∈ ℝ, violating our eikonal conditions Δ₁, Δ₂ ≫ 1. We regard the dimensions in (3.36) as analytically continued away from the principal series; see [84] for a prescription to do so. Note that the eikonal conditions on Δ₁, Δ₂ only translate into a condition on β for connected celestial amplitudes.
Note that 1, 3 and 2, 4 are evaluated in patches around the north and south poles respectively, hence the Jacobian factor is needed in (3.45) for comparison with (3.41). ¹¹ We stick to the convention in [54] that the external particles are oppositely charged with respect to odd-j fields.
Trace terms vanish since P₁, P₂ are on-shell, while terms where the derivatives are distributed over all 1, 3 and the propagator are subleading in the eikonal limit.
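Two of the manipulations in appendices B and C admit quick numerical sanity checks: the large-Δ approximation (Δ)_n ≈ Δⁿ underlying (B.7), and the convergence window β ∈ (0, 2) of the radial integral left over after the angular delta functions in (C.1)-(C.3) are used. The model integral below, ∫₀^∞ dω ω^{β−1}/(ω² + m²) = π m^{β−2}/(2 sin(πβ/2)), is a standard result; its exponent bookkeeping relative to the paper's β, and the chosen numerical values, are illustrative assumptions:

```python
import mpmath as mp

# (B.7): the falling factorial (Delta)_n = Delta (Delta-1) ... (Delta-n+1)
# is well approximated by Delta^n when Delta >> n (values illustrative)
Delta, n = 1.0e6, 3
poch = 1.0
for k in range(n):
    poch *= Delta - k
ratio = poch / Delta**n   # -> 1 for Delta >> n

# Model radial integral for the t-channel exchange; the split point at 1
# helps the quadrature with the integrable w^{beta-1} endpoint singularity
beta, m = 0.7, 1.3        # illustrative values with beta in (0, 2)
I = mp.quad(lambda w: w**(beta - 1)/(w**2 + m**2), [0, 1, mp.inf])
exact = mp.pi*m**(beta - 2)/(2*mp.sin(mp.pi*beta/2))
print(ratio, I, exact)
```

For β outside (0, 2) the integral diverges at one endpoint, which is why the paper analytically continues the closed-form result instead.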
Our h is defined with respect to a future-pointing x⁺, hence the apparent sign difference with respect to [59].
We thank Tim Adamo for an interesting discussion on this point.
It is assumed that the "parent" boundary CFT₃ is unitary and hence the operators have positive dimensions.
¹⁶ Recall that (4.37) determines h up to solutions of □⊥ h = 0.
The usual argument is that phases are unobservable and hence unimportant.
While the integral converges for β ∈ (0, 2), the result can be analytically continued.

Acknowledgements

We are grateful to Tim Adamo, Sebastião Alves Dias, Brando Bellazzini, Vincent Chen, Laurent

A Celestial propagators in eikonal regime

In this appendix we show that, in a conformal primary basis, in a limit of large external dimensions, the external leg propagators become nearly on-shell. For massless scalars the Klein-Gordon equation in (x⁻, x⁺, x⊥) coordinates (3.11) reads

Integrating this equation against a generalized conformal primary wavefunction [32] with eikonal kinematics as in (3.15), we find

the integrals in the (3.11) coordinates, we find

Integrating over the delta functions sets

Now thanks to the theta functions the integrals on the third line decouple [54] and

Using the Fourier representation (3.19) of G_e(x, x̄) one can show that

Further combining everything to the power n we have

In this case small z kinematics are equivalent to qᵢ ≪ 1. Note that setting

implies that (zᵢ, z̄ᵢ) and (wᵢ, w̄ᵢ) are coordinates in different charts of S², namely the stereographic projections based respectively on the north and the south poles of the sphere.
To express the momenta in the same chart, we perform an inversion, (C.14). In particular, one sees that the center of mass momenta (C.8) are obtained by choosing

Notice that it immediately follows from (C.8) and (C.16) that in the eikonal approximation z is indeed −t/s and also the two-dimensional cross-ratio:

Our derivation of the celestial eikonal amplitude will therefore assume external conformal primary wavefunctions φ_{Δᵢ}(x; ηᵢq̂ᵢ) with null vectors of the form (C.10) satisfying qᵢ ≪ 1. This kinematic configuration is illustrated in Figure 2.

D Propagator in shockwave background

In this section we review the evaluation of the momentum space scalar propagator

References

H. Bondi, M. G. J. van der Burg, and A. W. K. Metzner, "Gravitational waves in general relativity. 7. Waves from axisymmetric isolated systems," Proc. Roy. Soc. Lond. A269 (1962) 21-52, doi:10.1098/rspa.1962.0161.

R. Sachs, "Gravitational waves in general relativity. 8. Waves in asymptotically flat space-times," Proc. Roy. Soc. Lond. A270 (1962) 103-126, doi:10.1098/rspa.1962.0206.

A. Ashtekar and R. O. Hansen, "A unified treatment of null and spatial infinity in general relativity. I. Universal structure, asymptotic symmetries, and conserved quantities at spatial infinity," J. Math. Phys. 19 (1978) 1542-1566, doi:10.1063/1.523863.

G. Barnich and C.
Troessaert, "Symmetries of asymptotically flat four-dimensional spacetimes at null infinity revisited," Phys. Rev. Lett. 105 (Sep, 2010) 111103. https://link.aps.org/doi/10.1103/PhysRevLett.105.111103. Supertranslations call for superrotations. G Barnich, C Troessaert, 10.22323/1.127.0010arXiv:1102.4632PoS. 201010gr-qcG. Barnich and C. Troessaert, "Supertranslations call for superrotations," PoS CNCFG2010 (2010) 010, arXiv:1102.4632 [gr-qc]. BMS charge algebra. G Barnich, C Troessaert, 10.1007/JHEP12(2011)105arXiv:1106.0213JHEP. 12105hep-thG. Barnich and C. Troessaert, "BMS charge algebra," JHEP 12 (2011) 105, arXiv:1106.0213 [hep-th]. On BMS Invariance of Gravitational Scattering. A Strominger, 10.1007/JHEP07(2014)152arXiv:1312.2229JHEP. 07152hep-thA. Strominger, "On BMS Invariance of Gravitational Scattering," JHEP 07 (2014) 152, arXiv:1312.2229 [hep-th]. BMS supertranslations and Weinberg's soft graviton theorem. T He, V Lysov, P Mitra, A Strominger, 10.1007/JHEP05(2015)151arXiv:1401.7026JHEP. 05151hep-thT. He, V. Lysov, P. Mitra, and A. Strominger, "BMS supertranslations and Weinberg's soft graviton theorem," JHEP 05 (2015) 151, arXiv:1401.7026 [hep-th]. Evidence for a New Soft Graviton Theorem. F Cachazo, A Strominger, arXiv:1404.4091hep-thF. Cachazo and A. Strominger, "Evidence for a New Soft Graviton Theorem," arXiv:1404.4091 [hep-th]. Semiclassical Virasoro symmetry of the quantum gravity S-matrix. D Kapec, V Lysov, S Pasterski, A Strominger, 10.1007/JHEP08(2014)058arXiv:1406.3312JHEP. 0858hep-thD. Kapec, V. Lysov, S. Pasterski, and A. Strominger, "Semiclassical Virasoro symmetry of the quantum gravity S-matrix," JHEP 08 (2014) 058, arXiv:1406.3312 [hep-th]. 2D Stress Tensor for 4D Gravity. D Kapec, P Mitra, A.-M Raclariu, A Strominger, 10.1103/PhysRevLett.119.121601arXiv:1609.00282Phys. Rev. Lett. 11912121601hep-thD. Kapec, P. Mitra, A.-M. Raclariu, and A. Strominger, "2D Stress Tensor for 4D Gravity," Phys. Rev. Lett. 119 no. 
12, (2017) 121601, arXiv:1609.00282 [hep-th]. Flat Space Amplitudes and Conformal Symmetry of the Celestial Sphere. S Pasterski, S.-H Shao, A Strominger, 10.1103/PhysRevD.96.065026arXiv:1701.00049Phys. Rev. D96. 665026hep-thS. Pasterski, S.-H. Shao, and A. Strominger, "Flat Space Amplitudes and Conformal Symmetry of the Celestial Sphere," Phys. Rev. D96 no. 6, (2017) 065026, arXiv:1701.00049 [hep-th]. Conformal basis for flat space amplitudes. S Pasterski, S.-H Shao, 10.1103/PhysRevD.96.065022arXiv:1705.01027Phys. Rev. D96. 665022hep-thS. Pasterski and S.-H. Shao, "Conformal basis for flat space amplitudes," Phys. Rev. D96 no. 6, (2017) 065022, arXiv:1705.01027 [hep-th]. A Holographic reduction of Minkowski space-time. J Boer, S N Solodukhin, 10.1016/S0550-3213(03)00494-2arXiv:hep-th/0303006Nucl. Phys. 665hep-thJ. de Boer and S. N. Solodukhin, "A Holographic reduction of Minkowski space-time," Nucl. Phys. B665 (2003) 545-593, arXiv:hep-th/0303006 [hep-th]. 4D scattering amplitudes and asymptotic symmetries from 2D CFT. C Cheung, A De La Fuente, R Sundrum, 10.1007/JHEP01(2017)112arXiv:1609.00732JHEP. 01112hep-thC. Cheung, A. de la Fuente, and R. Sundrum, "4D scattering amplitudes and asymptotic symmetries from 2D CFT," JHEP 01 (2017) 112, arXiv:1609.00732 [hep-th]. Strings on Celestial Sphere. S Stieberger, T R Taylor, 10.1016/j.nuclphysb.2018.08.019arXiv:1806.05688Nucl. Phys. 935hep-thS. Stieberger and T. R. Taylor, "Strings on Celestial Sphere," Nucl. Phys. B935 (2018) 388-411, arXiv:1806.05688 [hep-th]. Symmetries of Celestial Amplitudes. S Stieberger, T R Taylor, 10.1016/j.physletb.2019.03.063arXiv:1812.01080Phys. Lett. 793hep-thS. Stieberger and T. R. Taylor, "Symmetries of Celestial Amplitudes," Phys. Lett. B793 (2019) 141-143, arXiv:1812.01080 [hep-th]. Soft Limits of Yang-Mills Amplitudes and Conformal Correlators. W Fan, A Fotopoulos, T R Taylor, 10.1007/JHEP05(2019)121arXiv:1903.01676JHEP. 05121hep-thW. Fan, A. Fotopoulos, and T. R. 
Taylor, "Soft Limits of Yang-Mills Amplitudes and Conformal Correlators," JHEP 05 (2019) 121, arXiv:1903.01676 [hep-th]. Celestial Amplitudes: Conformal Partial Waves and Soft Limits. D Nandan, A Schreiber, A Volovich, M Zlotnikov, 10.1007/JHEP10(2019)018arXiv:1904.10940JHEP. 1018hep-thD. Nandan, A. Schreiber, A. Volovich, and M. Zlotnikov, "Celestial Amplitudes: Conformal Partial Waves and Soft Limits," JHEP 10 (2019) 018, arXiv:1904.10940 [hep-th]. Conformally Soft Theorem in Gauge Theory. M Pate, A.-M Raclariu, A Strominger, 10.1103/PhysRevD.100.085017arXiv:1904.10831Phys. Rev. 100885017hep-thM. Pate, A.-M. Raclariu, and A. Strominger, "Conformally Soft Theorem in Gauge Theory," Phys. Rev. D100 no. 8, (2019) 085017, arXiv:1904.10831 [hep-th]. Celestial amplitudes and conformal soft theorems. T Adamo, L Mason, A Sharma, 10.1088/1361-6382/ab42cearXiv:1905.09224Class. Quant. Grav. 3620205018hep-thT. Adamo, L. Mason, and A. Sharma, "Celestial amplitudes and conformal soft theorems," Class. Quant. Grav. 36 no. 20, (2019) 205018, arXiv:1905.09224 [hep-th]. Conformally Soft Theorem in Gravity. A Puhm, 10.1007/JHEP09(2020)130arXiv:1905.09799JHEP. 09130hep-thA. Puhm, "Conformally Soft Theorem in Gravity," JHEP 09 (2020) 130, arXiv:1905.09799 [hep-th]. Notes on Conformal Soft Theorems and Recursion Relations in Gravity. A Guevara, arXiv:1906.07810hep-thA. Guevara, "Notes on Conformal Soft Theorems and Recursion Relations in Gravity," arXiv:1906.07810 [hep-th]. Poincare constraints on celestial amplitudes. Y T A Law, M Zlotnikov, 10.1007/JHEP03(2020)085arXiv:1910.04356JHEP. 0385hep-thY. T. A. Law and M. Zlotnikov, "Poincare constraints on celestial amplitudes," JHEP 03 (2020) 085, arXiv:1910.04356 [hep-th]. Extended BMS Algebra of Celestial CFT. A Fotopoulos, S Stieberger, T R Taylor, B Zhu, 10.1007/JHEP03(2020)130arXiv:1912.10973JHEP. 03130hep-thA. Fotopoulos, S. Stieberger, T. R. Taylor, and B. 
arXiv:2206.09609, DOI: 10.1007/JHEP02(2023)219
JT Gravity from Partial Reduction and Defect Extremal Surface

Feiyu Deng, Yu-Sen An, Yang Zhou

Department of Physics and Center for Field Theory and Particle Physics, Fudan University, Shanghai 200433, China
Peng Huanwu Center for Fundamental Theory, Hefei, Anhui 230026, China

(Dated: June 22, 2022)

We propose the three-dimensional bulk dual for Jackiw-Teitelboim gravity coupled with a CFT_2 bath based on partial reduction. The bulk dual is classical AdS gravity with a defect brane which has a small fluctuation in the transverse direction. We derive the full Jackiw-Teitelboim gravity action by considering the transverse fluctuation as a dilaton field. We demonstrate that the fine grained entropy computed from the island formula precisely agrees with that computed from the defect extremal surface. Our construction provides a Lorentzian higher dimensional dual for Jackiw-Teitelboim gravity and therefore offers a framework to study problems such as the black hole information paradox as well as gravity/ensemble duality.

A. Introduction

Significant progress has been made in the recent understanding of the black hole information paradox [1]. In particular the island formula [2][3][4] for the von Neumann entropy of Hawking radiation [5] gives the Page curve [6] and therefore maintains unitarity. The development relies on the quantum extremal surface formula [7] for the fine grained entropy, which was based on the quantum corrected Ryu-Takayanagi formula for computing holographic entanglement entropy [8][9][10]. To justify the island formula for the von Neumann entropy, one can employ the replica trick and perform explicit gravitational path integral computations in lower dimensional systems.
In particular, Jackiw-Teitelboim (JT) gravity coupled to a quantum bath provides a solvable model to explore the information transfer for black holes in two dimensions [11, 12]. It has been found that the island contribution corresponds to Euclidean replica wormholes [12], which may cause the factorization puzzle [13, 14]. It is still quite interesting to explore the Lorentzian counterparts of the replica wormholes. There are many applications and generalizations of the island formula, including the application in cosmology [15, 16] and the generalization to asymptotically flat space [17, 18]. For other related works, see .

On the other hand, AdS gravity with a defect brane provides a natural holographic framework to study lower dimensional gravity coupled to a bath. There the holographic dual of the von Neumann entropy is the Ryu-Takayanagi surface or the Hubeny-Rangamani-Takayanagi surface, and is therefore Lorentzian. The defect brane model can also be generalized to higher dimensions [44]. In [45], it has been shown that the holographic counterpart of the island formula is the Defect Extremal Surface (DES) formula, which is the defect corrected Ryu-Takayanagi formula. The defect brane model is based on AdS_{d+1}/BCFT_d [46], where the defect brane with a constant tension in the AdS bulk has AdS_d geometry and there are also quantum degrees of freedom localized on it. By doing partial dimension reduction between the zero tension brane and the constant tension brane, one can obtain the brane world gravity, which is coupled to a CFT bath through boundary conditions. In a three dimensional bulk, the simple dimension reduction leads to a two dimensional topological gravity on the brane [45]. It has been further demonstrated that the entanglement entropy and reflected entropy computed from the DES formula agree with the island formula precisely [47]. See [48][49][50][51] for further works along this line.
To fully reconcile the path integral approach and the holographic approach, one may ask whether we can obtain JT gravity, which has a dilaton field, directly from dimension reduction in the defect brane model. If yes, what would be the holographic counterpart of the island formula for JT gravity coupled with a CFT bath? In this letter we answer these questions. We show that for the 3d AdS bulk with a defect brane, by considering the small transverse fluctuation, one can derive the full JT gravity action from partial reduction between the zero tension brane and the finite tension brane. In particular, the transverse fluctuation becomes the dilaton field on the brane world. For the remaining part of the bulk one can use standard AdS/CFT and obtain a CFT_2 bath on the asymptotic boundary. Eventually we obtain a 2d JT gravity coupled with a CFT_2 bath. The fact that the transverse fluctuation is small allows us to ignore higher order contributions in the 2d action and obtain precisely the full 2d JT action, including the boundary term. We therefore obtain a higher dimensional dual for JT gravity coupled to a bath.

To support the 3d/2d duality, we compute the fine-grained entropy both from the defect extremal surface formula in the bulk and from the boundary island formula. We find that the extremal conditions are consistent and the fine-grained entropies agree with each other precisely for small transverse fluctuation. We consider this agreement as strong evidence that JT gravity coupled to a CFT bath can be dual to semi-classical 3d gravity with a fluctuating brane.

B. Review of the model

We consider AdS_3/BCFT_2 with the action given by

$$I = \frac{1}{16\pi G_N}\int_N \sqrt{-g}\,(R-2\Lambda) + \frac{1}{8\pi G_N}\int_M \sqrt{-\gamma}\,K^{(\gamma)} + \frac{1}{8\pi G_N}\int_Q \sqrt{-h}\,K^{(h)} + I_Q + I_P, \tag{1}$$

where N denotes the bulk AdS spacetime, M denotes the asymptotic boundary where the Dirichlet boundary condition is imposed, and Q the brane where the Neumann boundary condition is imposed.
I_Q is the action for the matter fields constrained on Q and I_P is the counter term on the tip P. By varying this action, the Neumann boundary condition on Q becomes

$$K^{(h)}_{ab} - h_{ab} K^{(h)} = 8\pi G_N T_{ab}, \tag{2}$$

where $T_{ab} = -\frac{2}{\sqrt{-h}}\frac{\delta I_Q}{\delta h_{ab}}$ is the stress energy tensor coming from the variation of the matter action. Here we consider the bulk to be three dimensional, so the brane Q is two dimensional. There are two sets of useful coordinates, (t, x, z) and (t, ρ, y). Their relation is

$$z = -\frac{y}{\cosh\frac{\rho}{l}}, \qquad x = y\tanh\frac{\rho}{l}, \tag{3}$$

and the bulk metric can be written in terms of either one,

$$ds^2_N = \frac{l^2}{z^2}\left(-dt^2 + dz^2 + dx^2\right) = d\rho^2 + l^2\cosh^2\frac{\rho}{l}\cdot\frac{-dt^2 + dy^2}{y^2}, \tag{4}$$

where l is the AdS radius. It is also useful to introduce the polar coordinate θ with $\frac{1}{\cos\theta} = \cosh\frac{\rho}{l}$.

Consider the matter action on the brane Q of the form

$$I_Q = -\frac{1}{8\pi G_N}\int_Q \sqrt{-h}\,T, \tag{5}$$

where T is a constant tension. By solving (2), the tension is found to be

$$T = \frac{1}{l}\tanh\frac{\rho_0}{l}. \tag{6}$$

For an interval I := [0, x_0] in the BCFT, the entanglement entropy can be computed holographically using the RT formula. As shown in FIG. 1, the minimal surface, denoted by γ_I, terminates on a point on the brane which can be determined by extremization. The entanglement entropy is

$$S_I = \frac{\mathrm{Area}(\gamma_I)}{4G_N} = \frac{c}{6}\log\frac{2x_0}{\epsilon} + \frac{c}{6}\frac{\rho_0}{l} = \frac{c}{6}\log\frac{2x_0}{\epsilon} + \frac{c}{6}\,\mathrm{arctanh}(\sin\theta_0), \tag{7}$$

where c is the CFT central charge and ε is the UV cutoff.

In [45], the authors improved AdS_3/BCFT_2 by adding CFT matter localized on the brane. Noticing that AdS_2 is a maximally symmetric space, the vacuum one point function of the CFT stress tensor takes the form

$$\langle T_{ab}\rangle_{AdS_2} = \chi h_{ab}, \tag{8}$$

which contributes to the Neumann boundary condition (2). Because of the entangled quantum matter on the defect brane, one should add the defect contribution to the ordinary holographic entanglement entropy. The improved holographic formula is therefore called the Defect Extremal Surface (DES) formula [45].
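The two forms of eq. (7) agree because the angle θ_0 and the brane location ρ_0 are tied by 1/cos θ_0 = cosh(ρ_0/l), which implies sin θ_0 = tanh(ρ_0/l) and hence arctanh(sin θ_0) = ρ_0/l. A quick numerical check (ours, not from the paper):

```python
import numpy as np

# theta0 is fixed by 1/cos(theta0) = cosh(rho0/l); then sin(theta0) = tanh(rho0/l),
# so the two forms of the entropy in eq. (7) differ by arctanh(sin theta0) - rho0/l = 0.
l = 1.0
for rho0 in [0.1, 0.5, 1.0, 2.0, 5.0]:
    theta0 = np.arccos(1.0 / np.cosh(rho0 / l))
    assert abs(np.arctanh(np.sin(theta0)) - rho0 / l) < 1e-9
print("arctanh(sin theta0) = rho0/l holds on the tested range")
```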
C. JT gravity from dimensional reduction

Now we derive JT gravity from dimension reduction of the 3d AdS gravity action. Under the metric ansatz

$$ds^2 = g_{\mu\nu}dx^\mu dx^\nu = d\rho^2 + l^2\cosh^2\frac{\rho}{l}\,\bar h_{ab}\,dx^a dx^b, \tag{9}$$

one can perform a partial dimensional reduction for the wedge W_1 + \widetilde W by integrating out the ρ direction (see FIG. 2),

$$\frac{1}{16\pi G_N}\int \sqrt{-g}\,(R - 2\Lambda) = \frac{\rho_0 + \tilde\rho}{16\pi G_N}\int \sqrt{-g^{(2)}}\,R^{(2)} - \frac{1}{16\pi G_N}\int \frac{\sinh\frac{2(\rho_0+\tilde\rho)}{l}}{l\cosh^2\frac{\rho_0}{l}}\sqrt{-g^{(2)}}, \tag{10}$$

where Λ = −1/l² and

$$g^{(2)}_{ab} = l^2\cosh^2\frac{\rho_0}{l}\,\bar h_{ab}. \tag{11}$$

The precise reduction of the bulk Ricci scalar,

$$\sqrt{-g}\,R = \sqrt{-g^{(2)}}\,R^{(2)} - \frac{2\left(3\cosh^2\frac{\rho}{l} - 1\right)}{l^2\cosh^2\frac{\rho_0}{l}}\sqrt{-g^{(2)}}, \tag{12}$$

has been used in eq. (10). Notice that the brane Q is located at

$$\rho = \rho_0 + \tilde\rho, \tag{13}$$

where ρ̃ is a small fluctuation away from ρ_0, i.e. ρ̃/ρ_0 ≪ 1. The fluctuation ρ̃ is a function of the brane world coordinates and therefore should be treated as a field on the brane.

Next we consider the Gibbons-Hawking term and the brane tension term. The extrinsic curvature of the brane is

$$K_{ab} = \frac{1}{l}\tanh\frac{\rho_0 + \tilde\rho}{l}\,h_{ab}, \tag{14}$$

where h_ab is the induced metric on the brane,

$$h_{ab} = l^2\cosh^2\frac{\rho}{l}\,\bar h_{ab}. \tag{15}$$

The tension of the brane remains the constant (6), since the fluctuation of the ρ coordinate does not affect the intrinsic brane tension. The Gibbons-Hawking term plus the brane tension term is then given by

$$\frac{1}{8\pi G_N}\int_Q \sqrt{-h}\,(K - T) = \frac{1}{8\pi G_N}\int \frac{\sinh\frac{2\rho_0 + 2\tilde\rho}{l}}{l\cosh^2\frac{\rho_0}{l}}\sqrt{-g^{(2)}} - \frac{1}{8\pi G_N}\int \frac{\tanh\frac{\rho_0}{l}\cosh^2\frac{\rho_0+\tilde\rho}{l}}{l\cosh^2\frac{\rho_0}{l}}\sqrt{-g^{(2)}}. \tag{16}$$

Adding (10) and (16) together and expanding in small ρ̃/ρ_0, we get the action of the 2d effective theory after partial dimension reduction,

$$I_{tot} = \frac{\rho_0}{16\pi G_N}\int \sqrt{-g^{(2)}}\,\frac{\tilde\rho}{\rho_0}\left(R^{(2)} + \frac{2}{l^2\cosh^2\frac{\rho_0}{l}}\right) + \frac{\rho_0}{16\pi G_N}\int \sqrt{-g^{(2)}}\,R^{(2)} + O\!\left(\frac{\tilde\rho^2}{\rho_0^2}\right). \tag{17}$$

Neglecting the O(ρ̃²/ρ_0²) terms, we see that the action is precisely the JT action, with ρ̃/ρ_0 identified as the dilaton field in JT gravity. If we vary with respect to ρ̃/ρ_0, we get the scalar curvature

$$R^{(2)} = -\frac{2}{l^2\cosh^2\frac{\rho_0}{l}}, \tag{18}$$

which is the correct scalar curvature of the AdS_2 brane world.
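The expansion leading to eq. (17) can be checked symbolically. Below is our own SymPy sketch (not from the paper): stripping the overall 1/(16πG_N) and the 2d volume element, we add the integrands of (10) and (16), treat R^(2) as a free symbol, and verify that the zeroth order in ρ̃ is the topological term ρ_0 R^(2) while the first order is exactly the JT dilaton coupling of (17):

```python
import sympy as sp

rho0, rhot, l = sp.symbols('rho0 rhotilde l', positive=True)
R2 = sp.Symbol('R2')  # stands for the 2d curvature R^(2), kept generic
c0 = sp.cosh(rho0 / l)**2

# integrand of eq. (10): bulk Einstein-Hilbert after reduction, 1/(16 pi G_N) stripped
bulk = (rho0 + rhot) * R2 - sp.sinh(2 * (rho0 + rhot) / l) / (l * c0)

# integrand of eq. (16): Gibbons-Hawking + tension; its 1/(8 pi G_N) gives a factor 2 here
brane = 2 * (sp.sinh(2 * (rho0 + rhot) / l) / (l * c0)
             - sp.tanh(rho0 / l) * sp.cosh((rho0 + rhot) / l)**2 / (l * c0))

total = bulk + brane

# zeroth order in the fluctuation: the purely topological piece rho0 * R^(2)
assert sp.simplify(total.subs(rhot, 0) - rho0 * R2) == 0

# first order: the dilaton coupling rhotilde * (R^(2) + 2/(l^2 cosh^2(rho0/l))) of eq. (17)
first = sp.diff(total, rhot).subs(rhot, 0)
assert sp.simplify(first - (R2 + 2 / (l**2 * c0))) == 0
print("partial reduction reproduces the JT action to first order in the fluctuation")
```

The cancellation at first order rests on the hyperbolic identity cosh(2ρ_0/l) − tanh(ρ_0/l) sinh(2ρ_0/l) = 1, which the assertion verifies implicitly.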
To fully recover the JT action, we also need to reproduce the boundary term. We now show that the JT boundary action can be obtained by dimension reduction of the Gibbons-Hawking term on the bulk cutoff surface [52]. As shown in FIG. 3, the cutoff surface is denoted by Σ. Near the asymptotic boundary, the metric is taken to be

$$ds^2 = d\rho^2 + l^2\cosh^2\frac{\rho}{l}\,\frac{-dt^2 + dy^2}{y^2}. \tag{19}$$

Consider a generic cutoff surface Σ parameterized by (y, t) = (y(u), t(u)), where u is the time on the cutoff boundary. The induced metric on Σ is

$$ds^2 = d\rho^2 + l^2\cosh^2\frac{\rho}{l}\,\frac{y'^2 - t'^2}{y^2}\,du^2, \tag{21}$$

where we denote y' = dy/du and t' = dt/du. One can compute the extrinsic curvature of Σ from K = g^{μν}∇_μ n_ν, with the result

$$K_\Sigma = \frac{1}{l\cosh\frac{\rho}{l}}\,\frac{t'^3 + y y'' t' - t' y'^2 - t'' y y'}{(t'^2 - y'^2)^{3/2}} = \frac{1}{l\cosh\frac{\rho}{l}}\,K_\Omega, \tag{22}$$

where $K_\Omega = \frac{t'^3 + y y'' t' - t' y'^2 - t'' y y'}{(t'^2 - y'^2)^{3/2}}$ is the extrinsic curvature of the boundary Ω, which is the intersection between Σ and the EOW brane. By doing dimension reduction of the Gibbons-Hawking term on Σ, the action on Ω is obtained,

$$I_{bJT} = \frac{1}{8\pi G_N}\int_\Sigma \sqrt{-\gamma}\,K_\Sigma = \frac{\rho_0}{8\pi G_N}\int du\,\frac{\sqrt{t'^2 - y'^2}}{y^2}\left(1 + \frac{\tilde\rho}{\rho_0}\Big|_{bdy}\right)K_\Omega. \tag{23}$$

This is precisely the boundary term of JT gravity when ρ̃/ρ_0 is identified as the dilaton field. At the intersection Ω, employing the same trick as in [53], one can fix the induced metric

$$g|_{bdy} = -\frac{l^2\cosh^2\frac{\rho}{l}}{\epsilon^2}. \tag{24}$$

By identifying the induced metric in (21) with (24), y can be solved to leading order in ε, y = εt' + O(ε²). Computing K_Ω directly, we find

$$K_\Omega = 1 - \epsilon^2\,\mathrm{Sch}(t(u), u). \tag{25}$$

If we neglect the field independent divergent term, the boundary term is a Schwarzian [54]. We leave the computation details to appendix A.
Thus after doing the partial dimension reduction for wedge W 1 +W and using standard AdS/CFT for wedge W 2 , we obtain the 2d effective theory to be a JT gravity together with the brane CFT, glued to a nongravitational CFT bath through transparent boundary conditions, which is precisely the original set up to motivate the island formula. See FIG. 2 for an illustration of this procedure. Since we have obtained the 2d effective description in terms of a JT gravity glued with a CFT bath, we can use island formula to calculate the von Neumann entropy for an interval [0, L] on the asymptotic boundary S = min X ext X Area(X) 4G N + S semi-cl (Σ X ) ,(26) where X = ∂I and I is the island, Σ X is the associated region including the island and S semi−cl (Σ X ) is the von-Neumann entropy of the quantum fields on Σ X in the semi-classical description. Introducing the 2d Newton constant G (2) N = G N ρ 0 ,(27) the action of 2d effective theory becomes I 2d = 1 16πG (2) N −g (2) R (2) + 1 16πG (2) N −g (2)ρ ρ 0 R (2) + 2 l 2 cosh 2 ρ0 l + I CFT .(28) Varying with respect to the 2d metric one can solve the dilaton to beρ ρ 0 = −φ r y .(29) Now we employ island formula to calculate the entropy of a single interval [0, L]. For simplicity we focus on static cases. The generalized entropy is given by The extremization condition ∂ a S gen (a) = 0 determines the boundary of island a to be a = L 2 36µ 2 + 36µ + 1 + 6µ + 1 ,(31) where µ is defined as µ = ρ0φr 6lL , which can also be expressed as 1 6 ·ρ (a) l · a L because of eq. (29). It characterizes the amplitude of the brane transverse fluctuation at location a, since a L measures the distance from the origin andρ l measures the fluctuation in angular direction. In fact one can work out a simple relation between extremal point a andρ(a) a L = 1 +ρ (a) l 1 −ρ (a) l .(32) By plugging (31) into S gen (a) one can get the entropy S island computed from island formula. 
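Equations (31) and (32) carry the same information: inserting ρ̃(a)/l = 6µL/a (from the definition of µ) into Eq. (32) and solving the resulting quadratic a² − (6µ+1)La − 6µL² = 0 reproduces Eq. (31). A quick numerical confirmation (the values of µ are illustrative):

```python
import math

def a_island(L, mu):
    """Extremal island boundary, Eq. (31)."""
    return L / 2 * (math.sqrt(36 * mu**2 + 36 * mu + 1) + 6 * mu + 1)

L = 1.0
for mu in (0.05, 0.3, 1.7):
    a = a_island(L, mu)
    rho_over_l = 6 * mu * L / a                 # from mu = (1/6)(rho/l)(a/L)
    assert abs(a / L - (1 + rho_over_l) / (1 - rho_over_l)) < 1e-10  # Eq. (32)
```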
Entanglement entropy from bulk DES The defect extremal surface formula is the defect corrected Ryu-Takayanagi formula. For defect D in the AdS bulk, the entanglement entropy is computed following S DES = min Γ ext Γ,X Area(Γ) 4G N + S defect [D] ,(33) where X = Γ ∩ D and Γ is the codimension two RT surface. We consider the interval [0, L] on the asymptotic boundary and use DES formula to calculate its entropy. First we compute the RT surface Area(Γ) 4G N by using embedding coordinates. Consideringρ ρ0 1, the leading order result is Area(Γ) 4G N = l 4G N arccosh (L + a sin θ 0 ) 2 + a 2 cos 2 θ 0 2a cos θ 0 + ρ 0φr 4G N a .(34) We leave the details to appendix B. The brane CFT also contributes to the entanglement entropy S defect (D) = c 6 log 2l y cos θ 0 .(35) Then the total generalized entropy is S gen (a) = c 6 log (L + a sin θ 0 ) 2 + a 2 cos 2 θ 0 a cos θ 0 + ρ 0φr 4G N a + c 6 log 2l y cos θ 0 ,(36) where c = 3l 2G N is used and c is the central charge of the brane CFT. To compare with the result obtained from boundary island formula, we take c = c. From ∂ a S gen (a) = 0, one obtains the extremal position of a to be a = (3) 2 3 L 12µ 2 + 12µ sin θ 0 + 1 3ν + 2µL + Lν 3 2 3 ,(37) where ν = 3 72µ 3 + 1 6 √ γ + 108µ 2 sin θ 0 + 36µ γ = 46656µ 2 2µ 2 + 3µ sin θ 0 + 1 2 − 108 12µ 2 + 12µ sin θ 0 + 1 3 . Plugging (37) into S gen (a), we have the entropy result calculated from DES. Comparison between DES and island formula Now we compare the entropy result computed from DES and that from island formula. We first consider ρ 0 /l 1 with the small fluctuation conditionρ/ρ 0 1 satisfied. This is the limit that the brane is nearly parallel to the asymptotic boundary. In this limit, the extremal point (31) and (37) coincide with each other and both DES and island formula give the same entropy. To see this, let us fix µ and set θ 0 = π 2 −ω [55]. 
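Even without the closed form (37), the near-parallel limit can be checked directly from Eq. (36): extremize the generalized entropy numerically at θ_0 = π/2 − ω for small ω and compare with the island-formula result (31). The sketch below (our own helper functions) uses c = 3l/(2G_N), so the overall factor c/6 drops out of the extremization condition and ρ_0 φ_r/(4G_N) = (c/6) · 6µL; it also verifies the identity arctanh(sin θ_0) = log[(1 + sin θ_0)/cos θ_0], which is why the θ_0-dependent constants in Eqs. (44) and (45) agree.

```python
import math

def a_island(L, mu):
    """Island-formula extremum, Eq. (31)."""
    return L / 2 * (math.sqrt(36 * mu**2 + 36 * mu + 1) + 6 * mu + 1)

def dSgen(a, L, mu, theta):
    """d/da of the generalized entropy (36), divided by c/6."""
    s, co = math.sin(theta), math.cos(theta)
    D = (L + a * s) ** 2 + (a * co) ** 2
    return (2 * (L + a * s) * s + 2 * a * co**2) / D - 1 / a - 6 * mu * L / a**2

def a_des(L, mu, theta):
    lo, hi = 1.000001 * L, 100 * L      # dSgen < 0 at lo, > 0 at hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dSgen(mid, L, mu, theta) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

L, mu, omega = 1.0, 0.5, 1e-3
a1 = a_des(L, mu, math.pi / 2 - omega)  # brane almost parallel to the boundary
a2 = a_island(L, mu)
assert abs(a1 - a2) < 1e-3              # agreement up to O(omega^2), Eq. (39)
for th in (0.3, 0.7, 1.2):              # constant terms of Eqs. (44)/(45)
    assert abs(math.atanh(math.sin(th)) - math.log((1 + math.sin(th)) / math.cos(th))) < 1e-12
```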
Expanding the extremal point and the entropy around ω = 0, we get a DES = L 2 36µ 2 + 36µ + 1 + 6µ + 1 + O(ω 2 ) = a island + O(ω 2 ),(39) and S DES = 2cµ 6µ + 1 + η + c 6 log (6µ + 3 + η) 2 L (6µ + 1 + η) + c 6 log l ω 2 y + O(ω 2 ) = S island + O(ω 2 ),(40) where η = 36µ 2 + 36µ + 1. Furthermore, we can consider a generic brane location, i.e. ρ 0 /l is finite. By the small fluctuation conditioñ ρ ρ0 1, we have thatρ/l is small, which implies that µ = 1 6 ·ρ l · a L 1 provided that a/L is order one. In this limit we can expand with small µ. The extremal point of bulk DES becomes a = L + 6(1 + sin θ 0 )Lµ + O(µ 2 ), (42) and the extremal point of island formula is a = L + 12Lµ + O(µ 2 ).(43) The entropy from bulk DES is S DES = c 6 log 2L + c 6 log 1 + sin θ 0 cos θ 0 + c 6 log 2l y cos θ 0 + cµ + O(µ 2 ),(44) while the entropy from boundary island formula is S island = c 6 log 2L + c 6 arctanh(sin θ 0 ) + c 6 log 2l y cos θ 0 + cµ + O(µ 2 ).(45) Comparing the above two results, we find that the entanglement entropy for single interval [0, L] from DES and from island formula precisely match for small brane fluctuations. E. Conclusion and Discussion In this paper we constructed the three-dimensional bulk dual for Jackiw-Teitelboim gravity coupled to CFT 2 bath based on partial reduction. The bulk dual is classical AdS gravity with a defect brane which has small fluctuation in transverse direction. We obtain full Jackiw-Teitelboim gravity action by identifying the transverse fluctuation as a dilaton field on brane world. We further demonstrated that the fine grained entropy computed from island formula precisely agrees with that computed from defect extremal surface. There are a few interesting future questions: First, using our construction to understand JT gravity/ensemble relation. 
In our construction the JT gravity from partial reduction is dual to a defect theory in the asymptotic boundary, and it would be interesting to check the relation between this defect theory and the ensemble of quantum mechanics discussed in [56]. Second, generalize our construction to higher dimensions. In higher dimensions, the dilaton field is known as the radion field in the original Randall-Sundrum model [57]. It is quite interesting to work out the full brane world theory, including the dilaton field, from partial reduction. Last but not least, it is interesting to test our construction by other entanglement measures and to explore the physical implications of the dilaton in brane world cosmology [49].

Note added: After this work was finished, [58] appeared on arXiv, where the authors consider wedge holography with two finite tension branes.

From Appendix B: the geodesic distance is

s = l arccosh[ (−(t_2 − t_1)² + (x_2 − x_1)² + z_1² + z_2²) / (2 z_1 z_2) ]. (B2)

Plugging in the two points A = (t, a cos θ, −a sin θ) and B = (t, ε, L), where A is the intersection point of the DES and the EOW brane and B is the right boundary of the interval, one gets

Area(Γ)/(4G_N) = (l/(4G_N)) arccosh[ ((L + a sin θ)² + a² cos² θ) / (2 a ε cos θ) ]. (B3)

Considering ρ̃/ρ_0 ≪ 1, we can expand the RT result to first order,

Area(Γ)/(4G_N) = (l/(4G_N)) arccosh[ ((L + a sin θ_0)² + a² cos² θ_0) / (2 a ε cos θ_0) ] + ρ_0 φ_r/(4G_N a), (B4)

where 1/cos θ = cosh((ρ_0 + ρ̃)/l) and the solution (29) is used.

From the setup section: one can determine that the brane is located at ρ = ρ_0 = arctanh(sin θ_0), where ρ_0 is a positive constant. See FIG. 1 for an illustration.

FIG. 1. The set up of AdS/BCFT where the brane tension is a constant.
FIG. 2. Effective description from Partial Dimension Reduction plus AdS/CFT.
FIG. 3. Dynamical UV cut off for the wedge W_1 + W̃.

S_gen(a) = S_area(y = −a) + S_matter([−a, …

D. Entanglement entropy for an interval
1. Entanglement entropy from island formula

ACKNOWLEDGMENTS

We are grateful for the useful discussions with our group members in Fudan University.
This work is supported by NSFC grant 11905033. YZ is also supported by NSFC 11947301 through Peng Huanwu Center for Fundamental Theory.

Appendix A: Schwarzian theory on the JT boundary

In this appendix we show further details of how to compute the JT boundary term and recognize that it is a Schwarzian theory. We consider the UV cutoff surface Σ; the tangent vector and normal vector are … The extrinsic curvature is computed as … By using y = ε t′ and expanding K to O(ε²), we get …

Appendix B: Computation of bulk RT surface contribution

In this appendix, we compute the entropy contribution of the RT surface by using embedding coordinates. The embedding coordinates are … where X_0² + X_1² − X_2² − X_3² = l². Using these, the geodesic distance s between two points (t_1, z_1, x_1) and (t_2, z_2, x_2) is obtained as given in Eq. (B2).

[1] S. W. Hawking, Breakdown of Predictability in Gravitational Collapse, Phys. Rev. D 14, 2460 (1976).
[2] G. Penington, Entanglement Wedge Reconstruction and the Information Paradox, JHEP 09, 002, arXiv:1905.08255 [hep-th].
[3] A. Almheiri, N. Engelhardt, D. Marolf, and H. Maxfield, The entropy of bulk quantum fields and the entanglement wedge of an evaporating black hole, JHEP 12, 063, arXiv:1905.08762 [hep-th].
[4] A. Almheiri, R. Mahajan, J. Maldacena, and Y. Zhao, The Page curve of Hawking radiation from semiclassical geometry, JHEP 03, 149, arXiv:1908.10996 [hep-th].
[5] S. W. Hawking, Particle Creation by Black Holes, Commun. Math. Phys. 43, 199 (1975), [Erratum: Commun. Math. Phys. 46, 206 (1976)].
[6] D. N. Page, Information in black hole radiation, Phys. Rev. Lett. 71, 3743 (1993), arXiv:hep-th/9306083.
[7] N. Engelhardt and A. C. Wall, Quantum Extremal Surfaces: Holographic Entanglement Entropy beyond the Classical Regime, JHEP 01, 073, arXiv:1408.3203 [hep-th].
[8] S. Ryu and T. Takayanagi, Holographic derivation of entanglement entropy from AdS/CFT, Phys. Rev. Lett. 96, 181602 (2006), arXiv:hep-th/0603001.
[9] V. E. Hubeny, M. Rangamani, and T. Takayanagi, A Covariant holographic entanglement entropy proposal, JHEP 07, 062, arXiv:0705.0016 [hep-th].
[10] T. Faulkner, A. Lewkowycz, and J. Maldacena, Quantum corrections to holographic entanglement entropy, JHEP 11, 074, arXiv:1307.2892 [hep-th].
[11] A. Almheiri, T. Hartman, J. Maldacena, E. Shaghoulian, and A. Tajdini, Replica Wormholes and the Entropy of Hawking Radiation, JHEP 05, 013, arXiv:1911.12333 [hep-th].
[12] G. Penington, S. H. Shenker, D. Stanford, and Z. Yang, Replica wormholes and the black hole interior, JHEP 03, 205, arXiv:1911.11977 [hep-th].
[13] E. Witten and S.-T. Yau, Connectedness of the boundary in the AdS / CFT correspondence, Adv. Theor. Math. Phys. 3, 1635 (1999), arXiv:hep-th/9910245.
[14] J. M. Maldacena and L. Maoz, Wormholes in AdS, JHEP 02, 053, arXiv:hep-th/0401024.
[15] Y. Chen, V. Gorbenko, and J. Maldacena, Bra-ket wormholes in gravitationally prepared states, JHEP 02, 009, arXiv:2007.16091 [hep-th].
[16] T. Hartman, Y. Jiang, and E. Shaghoulian, Islands in cosmology, JHEP 11, 111, arXiv:2008.01022 [hep-th].
[17] T. Hartman, E. Shaghoulian, and A. Strominger, Islands in Asymptotically Flat 2D Gravity, JHEP 07, 022, arXiv:2004.13857 [hep-th].
[18] C. Krishnan, V. Patil, and J. Pereira, Page Curve and the Information Paradox in Flat Space, (2020), arXiv:2005.02993 [hep-th].
[19] Y. Chen, Pulling Out the Island with Modular Flow, JHEP 03, 033, arXiv:1912.02210 [hep-th].
[20] Y. Chen, X.-L. Qi, and P. Zhang, Replica wormhole and information retrieval in the SYK model coupled to Majorana chains, JHEP 06, 121, arXiv:2003.13147 [hep-th].
[21] J. Sully, M. V. Raamsdonk, and D. Wakeham, BCFT entanglement entropy at large central charge and the black hole interior, JHEP 03, 167, arXiv:2004.13088 [hep-th].
[22] T. J. Hollowood and S. P. Kumar, Islands and Page Curves for Evaporating Black Holes in JT Gravity, JHEP 08, 094, arXiv:2004.14944 [hep-th].
[23] H. Geng and A. Karch, Massive islands, JHEP 09, 121, arXiv:2006.02438 [hep-th].
[24] H. Z. Chen, Z. Fisher, J. Hernandez, R. C. Myers, and S.-M. Ruan, Information Flow in Black Hole Evaporation, JHEP 03, 152, arXiv:1911.03402 [hep-th].
[25] T. Li, J. Chu, and Y. Zhou, Reflected Entropy for an Evaporating Black Hole, JHEP 11, 155, arXiv:2006.10846 [hep-th].
[26] V. Chandrasekaran, M. Miyaji, and P. Rath, Including contributions from entanglement islands to the reflected entropy, Phys. Rev. D 102, 086009 (2020), arXiv:2006.10754 [hep-th].
[27] X. Dong, X.-L. Qi, Z. Shangnan, and Z. Yang, Effective entropy of quantum fields coupled with gravity, JHEP 10, 052, arXiv:2007.02987 [hep-th].
[28] H. Z. Chen, Z. Fisher, J. Hernandez, R. C. Myers, and S.-M. Ruan, Evaporating Black Holes Coupled to a Thermal Bath, JHEP 01, 065, arXiv:2007.11658 [hep-th].
[29] V. Balasubramanian, A. Kar, and T. Ugajin, Islands in de Sitter space, JHEP 02, 072, arXiv:2008.05275 [hep-th].
[30] H. Z. Chen, R. C. Myers, D. Neuenfeld, I. A. Reyes, and J. Sandor, Quantum Extremal Islands Made Easy, Part I: Entanglement on the Brane, JHEP 10, 166, arXiv:2006.04851 [hep-th].
[31] H. Z. Chen, R. C. Myers, D. Neuenfeld, I. A. Reyes, and J. Sandor, Quantum Extremal Islands Made Easy, Part II: Black Holes on the Brane, JHEP 12, 025, arXiv:2010.00018 [hep-th].
[32] Y. Ling, Y. Liu, and Z.-Y. Xian, Island in Charged Black Holes, JHEP 03, 251, arXiv:2010.00037 [hep-th].
[33] D. Harlow and E. Shaghoulian, Global symmetry, Euclidean gravity, and the black hole information problem, JHEP 04, 175, arXiv:2010.10539 [hep-th].
[34] J. Hernandez, R. C. Myers, and S.-M. Ruan, Quantum extremal islands made easy. Part III. Complexity on the brane, JHEP 02, 173, arXiv:2010.16398 [hep-th].
[35] Y. Chen and H. W. Lin, Signatures of global symmetry violation in relative entropies and replica wormholes, JHEP 03, 040, arXiv:2011.06005 [hep-th].
[36] I. Akal, Y. Kusuki, T. Takayanagi, and Z. Wei, Codimension two holography for wedges, Phys. Rev. D 102, 126007 (2020), arXiv:2007.06800 [hep-th].
[37] M. Rozali, J. Sully, M. Van Raamsdonk, C. Waddell, and D. Wakeham, Information radiation in BCFT models of black holes, JHEP 05, 004, arXiv:1910.12836 [hep-th].
[38] K. Suzuki and T. Takayanagi, BCFT and Islands in Two Dimensions, (2022), arXiv:2202.08462 [hep-th].
[39] R. Bousso and E. Wildenhain, Islands in closed and open universes, Phys. Rev. D 105, 086012 (2022), arXiv:2202.05278 [hep-th].
[40] M. Miyaji, Island for gravitationally prepared state and pseudo entanglement wedge, JHEP 12, 013, arXiv:2109.03830 [hep-th].
[41] D. Neuenfeld, Homology conditions for RT surfaces in double holography, Class. Quant. Grav. 39, 075009 (2022), arXiv:2105.01130 [hep-th].
[42] E. Verheijden and E. Verlinde, From the BTZ black hole to JT gravity: geometrizing the island, JHEP 11, 092, arXiv:2102.00922 [hep-th].
[43] I. Akal, Y. Kusuki, N. Shiba, T. Takayanagi, and Z. Wei, Entanglement Entropy in a Holographic Moving Mirror and the Page Curve, Phys. Rev. Lett. 126, 061604 (2021), arXiv:2011.12005 [hep-th].
[44] A. Almheiri, R. Mahajan, and J. E. Santos, Entanglement islands in higher dimensions, SciPost Phys. 9, 001 (2020), arXiv:1911.09666 [hep-th].
[45] F. Deng, J. Chu, and Y. Zhou, Defect extremal surface as the holographic counterpart of Island formula, JHEP 03, 008, arXiv:2012.07612 [hep-th].
[46] T. Takayanagi, Holographic Dual of BCFT, Phys. Rev. Lett. 107, 101602 (2011), arXiv:1105.5165 [hep-th].
[47] T. Li, M.-K. Yuan, and Y. Zhou, Defect extremal surface for reflected entropy, JHEP 01, 018, arXiv:2108.08544 [hep-th].
[48] J. Chu, F. Deng, and Y. Zhou, Page curve from defect extremal surface and island in higher dimensions, JHEP 10, 149, arXiv:2105.09106 [hep-th].
[49] Z. Wang, Z. Xu, S. Zhou, and Y. Zhou, Partial reduction and cosmology at defect brane, JHEP 05, 049, arXiv:2112.13782 [hep-th].
[50] Y. Shao, M.-K. Yuan, and Y. Zhou, Entanglement Negativity and Defect Extremal Surface, (2022), arXiv:2206.05951 [hep-th].
[51] D. Basu, H. Parihar, V. Raj, and G. Sengupta, Defect extremal surfaces for entanglement negativity, (2022), arXiv:2205.07905 [hep-th].
[52] This part of the calculation is closely related to the calculation in [36].
[53] J. Maldacena, D. Stanford, and Z. Yang, Conformal symmetry and its breaking in two dimensional Nearly Anti-de-Sitter space, PTEP 2016, 12C104 (2016), arXiv:1606.01857 [hep-th].
[54] In [36], the authors also obtained Schwarzian theory by doing dimension reduction for the UV cutoff given by y = g(t).
[55] Although ρ0 is very large, µ can be fixed.
[56] P. Saad, S. H. Shenker, and D. Stanford, JT gravity as a matrix integral, (2019), arXiv:1903.11115 [hep-th].
[57] L. Randall and R. Sundrum, A Large mass hierarchy from a small extra dimension, Phys. Rev. Lett. 83, 3370 (1999), arXiv:hep-ph/9905221.
[58] H. Geng, A. Karch, C. Perez-Pardavila, S. Raju, L. Randall, M. Riojas, and S. Shashi, Jackiw-Teitelboim Gravity from the Karch-Randall Braneworld, (2022), arXiv:2206.04695 [hep-th].
[]
[ "Q-balls in polynomial potentials", "Q-balls in polynomial potentials" ]
[ "Julian Heeck \nDepartment of Physics\nUniversity of Virginia\n22904-4714CharlottesvilleVirginiaUSA\n", "Mikheil Sokhashvili \nDepartment of Physics\nUniversity of Virginia\n22904-4714CharlottesvilleVirginiaUSA\n" ]
[ "Department of Physics\nUniversity of Virginia\n22904-4714CharlottesvilleVirginiaUSA", "Department of Physics\nUniversity of Virginia\n22904-4714CharlottesvilleVirginiaUSA" ]
[]
Bosons carrying a conserved charge can form stable bound states if their Lagrangian contains attractive self-interactions. Bound-state configurations with a large charge Q can be described classically and are denoted as Q-balls, their properties encoded in a non-linear differential equation. Here, we study Q-balls in arbitrary polynomial single-scalar-field potentials both numerically and via various analytical approximations. We highlight some surprising universal features of Q-balls that barely depend on the details of the potential. The polynomial potentials studied here can be realized in renormalizable models involving additional heavy or light scalars, as we illustrate with several examples. *
10.1103/physrevd.107.016006
[ "https://export.arxiv.org/pdf/2211.00021v2.pdf" ]
253,244,136
2211.00021
9c71f91147cda7dc8d8516d9916b79dcff590754
Q-balls in polynomial potentials Julian Heeck Department of Physics University of Virginia 22904-4714CharlottesvilleVirginiaUSA Mikheil Sokhashvili Department of Physics University of Virginia 22904-4714CharlottesvilleVirginiaUSA Q-balls in polynomial potentials Bosons carrying a conserved charge can form stable bound states if their Lagrangian contains attractive self-interactions. Bound-state configurations with a large charge Q can be described classically and are denoted as Q-balls, their properties encoded in a non-linear differential equation. Here, we study Q-balls in arbitrary polynomial single-scalar-field potentials both numerically and via various analytical approximations. We highlight some surprising universal features of Q-balls that barely depend on the details of the potential. The polynomial potentials studied here can be realized in renormalizable models involving additional heavy or light scalars, as we illustrate with several examples. * I. INTRODUCTION Q-balls are interesting examples of large bound states, in the simplest scenario consisting of Q 1 complex scalars φ with conserved global charge Q(φ) = 1. Assuming an attractive force between these scalars, Q-balls form the lowest-energy configuration for a fixed charge Q and are hence stable [1]. Due to the large amount of scalars residing in the Q-ball, it can be described classically as a spherically-symmetric solution to the non-linear Lagrange equations, also known as a non-topological soliton [2]. As emphasized already by Coleman in his seminal paper on these objects [1], a renormalizable quantum field theory for φ by itself does not provide the required attractive interactions, but it is possible to construct multifield models that lead to the required terms in the scalar potential [1][2][3][4][5][6]. 
Even in effective single-field potentials it is typically impossible to analytically solve the underlying non-linear differential equations, save for some special and often unphysical examples [7][8][9][10][11][12]. In general, we have to satisfy ourselves with numerical solutions or analytical approximations, which include Coleman's thin-wall approximation [1] (valid for very large Q-balls with thin surface region) and Kusenko's thick-wall approximation [13] (valid for small but dilute Q-balls). These approximations allow for an improved understanding of Q-balls that is difficult to obtain from numerical scans and in particular cover the regions of parameter space that are challenging to investigate numerically [14,15]. The simplest consistent realization of Coleman's large Q-balls [1] requires a scalar potential for φ with a mass term m 2 φ |φ| 2 , an attractive interaction term ∝ −|φ| p , and a term that stabilizes the potential at large field values ∝ +|φ| q , with 2 < p < q. We will study Q-balls in such potentials as a function of p and q, mostly restricted to integer exponents. We provide analytical approximations, including the thin-and thick-wall limits, and compare them to numerical solutions. In the thin-wall, or large Q, regime, we find remarkably simple analytical solutions for arbitrary p and q. Our model and notation is set up in Sec. II. Sec. III generalizes the thin-wall approximation of Ref. [15] to arbitrary p and q, including the particularly easy-to-solve cases of equidistant exponents. Sec. IV collects thick-wall results in our notation, restricted to the case p = 3, as is this the only integer value for p that gives stable Q-balls in the thick-wall regime. In Sec. V we introduce a novel Q-ball approximation that is valid for p, q 2 irrespective of the wall thickness. In Sec. VI we study some simple renormalizable multi-field models and discuss when and how they can be described by our effective polynomial potentials. 
We discuss our results and conclude in Sec. VII. App. A gives an alternative derivation of some results of Sec. VI.

II. MODEL

Using the mostly-minus Minkowski metric, we study single-field Q-balls [1] with a Lagrangian

L = |∂_µ φ|² − U(|φ|)   (1)

for the complex scalar φ that is invariant under a global U(1) symmetry φ → e^{iα} φ, with constant α ∈ R, leading via Noether's theorem to a conserved charge Q, normalized here to Q(φ) = 1. Q therefore counts the number of φ particles in a given field configuration. The Euler-Lagrange equation takes the form

∂_µ ∂^µ φ + ∂U/∂φ* = 0,   (2)

for which we will discuss a particular set of solutions. If the potential U contains attractive interactions, a bound-state solution with charge Q is possible that has the lowest energy among all configurations with the same charge, and is hence stable [1]. The potential needs to fulfill

dU/d|φ| |_{φ=0} = 0,   d²U/(dφ dφ*) |_{φ=0} ≡ m_φ² > 0,   (3)

so that the vacuum φ = 0 is stable and the U(1) unbroken, and U(|φ|)/|φ|² has to have a minimum at
The above form for U should cover a large part of physically motivated potentials, at least approximately. While p and q should be even integers for potentials obtained within effective field theory (see Sec. VI), most of our mathematical analysis holds for arbitrary integer or even real exponents satisfying 2 < p < q. We will show in Sec. VI that multi-field scenarios involving additional light fields can lead to odd and even fractional p and q. For the above potential, we can calculate the parameters relevant for Eq. (4) as φ 0 = √ 2 (p − 2)β (q − 2)ξ 1 q−p , ω 0 = m 2 φ − q − p q − 2 p − 2 q − 2 p−2 q−p β q−2 ξ p−2 1 q−p ,(7) allowing us to replace the (generally dimensionful) couplings β and ξ with the physically relevant φ 0 and ω 0 . For 2 < p < q, β and ξ both need to be positive. The β coupling provides the attractive force that enables the bound state and the ξ term keeps the potential bounded from below. The case p = 4, q = 6 has been discussed extensively in the literature, especially in Ref. [15]. Using the ansatz φ(x, t) = f (|x|)e iωt φ 0 / √ 2 and rescal- ing x → x m 2 φ − ω 2 0 in Eq. (2) leads to the equation of motion for the dimensionless function f (ρ), f (ρ) + 2 ρ f (ρ) + d df V (f ) = 0 ,(8) ρ being the dimensionless radial coordinate, with effective potential and boundary conditions f (0) = 0 and f (ρ → ∞) = 0. We will restrict our analysis to solutions of Eq. (8) with monotonically decreasing non-negative f , as these describe the Q-ball ground state configurations [3,[17][18][19]. Due to the rescaling, the differential equation (8) only depends on p, q, and the parameter V (f ) = (p − q)(κ 2 − 1)f 2 − (q − 2)f p + (p − 2)f q 2(p − q) (9) p =3, q = 6 p =3, q = 8 p =5, q = 6 p =5, q = 8 p =20, q = 22 p =150, q =κ 2 ≡ ω 2 − ω 2 0 m 2 φ − ω 2 0 ,(10) which is restricted to 0 < κ < 1 from Eq. (5) and ultimately determines the Q-ball radius R [15]. 
The macroscopic Q-ball properties of most interest to us, charge Q and energy E, are given by [15]

Q = [4πφ₀²ω / (m_φ² − ω₀²)^{3/2}] ∫₀^∞ dρ ρ² f², (11)
E = ωQ + [4πφ₀² / (3√(m_φ² − ω₀²))] ∫₀^∞ dρ ρ² f′², (12)

and thus require knowledge of two dimensionless integrals that are functions of p, q, and κ (or the radius). Equation (8) can be interpreted as a one-dimensional mechanics problem of a particle with position f moving in the potential V, the radial coordinate ρ playing the role of time [1]. In this interpretation, the f′/ρ term corresponds to time-dependent friction. The potential V is illustrated in Fig. 1 for several values of p and q. For 0 < κ < 1, the potential has three extrema in the region f ≥ 0: one local maximum at f = 0, one local minimum at f = f₋ > 0, and a global maximum at f = f₊ > f₋. The particle starts at rest at a value f ∈ (f₋, f₊) and then rolls toward f = 0, which it reaches after an infinite amount of time, i.e. for ρ → ∞. For small κ, f₊ ≈ 1 and V(f₊) ≈ κ²/2, not much larger than V(0) = 0; the particle therefore needs to start very close to f₊ and wait until the friction term is sufficiently suppressed to roll, almost frictionless, to f = 0. This small-κ limit is called the thin-wall limit, where f resembles a step function [1] and the Q-ball radius is large. This leading-order approximation was investigated in Ref. [16] as a function of p and q. Following Ref. [15], we will provide improved approximations for this regime.

III. THIN-WALL LIMIT

Neglecting friction in the small-κ regime simplifies the equation of motion (8) to

f″(ρ) + dV/df |_{κ=0} = 0, (13)

which is equivalent to the first-order differential equation

(1/2) f′² + V(f)|_{κ=0} = 0 (14)

upon using energy conservation in the classical-mechanics analogy [15]. The profile f(ρ) is then determined via direct integration as [2]

df / √(−2V(f))|_{κ=0} = −dρ. (15)

Following Ref.
[15], we denote this solution as the transition profile, which is strictly speaking only expected to be valid for small κ and around ρ = R, but practically provides an excellent approximation for all ρ and even for large κ, to be specified below. We define the Q-ball radius R via f(R) = (2/3) f(0), with f(0) ≈ f₊ ≈ 1 in the thin-wall regime. This proves a more convenient radius definition than that of Ref. [15] as it turns Eq. (15) into a definite integral that can be calculated numerically with ease to obtain ρ(f) [2]:

ρ(f) = R − ∫_{2/3}^{f} df′ / √(−2V(f′))|_{κ=0}. (16)

To estimate the radius R of a Q-ball in the small-κ regime we return to the mechanical analogy discussed above. The particle starts at f ≈ f₊ ≈ 1 with potential energy V(f₊) ≈ κ²/2 and ends at f = 0 with potential energy V(0) = 0. The difference in energies, κ²/2, must equal the energy lost through friction [18], i.e.

κ²/2 = −∫₀¹ df (2/ρ) f′(ρ). (17)

Since f′ is only non-zero around ρ = R, we can approximate 1/ρ ≈ 1/R in the integrand; f′ can then be replaced by the potential using Eq. (14), giving the relation

κ²/2 ≈ (2/R) ∫₀¹ df √(−2V(f))|_{κ=0}. (18)

For small κ, the Q-ball radius is hence of the form

R ≈ η/κ² ≡ [4 ∫₀¹ df √(−2V(f))|_{κ=0}] / κ². (19)

As expected from the mechanics analogy, the Q-ball radius becomes larger for decreasing κ. The prefactor η is determined by a simple integral over the potential, which is an O(1) number with small p and q dependence. Using Eq. (9) we can see that the integrand of Eq. (19) becomes larger with increasing p and q. The smallest allowed integers are p = 3 and q = 4, which lead to the lower bound η_min = 2/3. To determine the upper bound of η, let us take q to infinity first, which leaves us with

lim_{q→∞} η = 4 ∫₀¹ df √(f² − f^p) (20)
= √π Γ(p/(p−2)) / Γ(3/2 + 2/(p−2)) (21)
= 2 − 2.45/p + O(p⁻²), (22)

with the Gamma function Γ(x). From Eq. (22) it follows that the upper bound is η_max = 2. We conclude that for integer exponents 2/3 ≤ η < 2.
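The η integral of Eq. (19) is easy to evaluate numerically; the sketch below (our own illustration) checks it against the equidistant closed form η = 2n/(2+n) of Eq. (29) and against the quoted bounds 2/3 ≤ η < 2:

```python
import numpy as np
from scipy.integrate import quad

def minus2V_k0(f, p, q):
    # -2 V(f)|_{kappa=0}, from Eq. (9)
    return ((p - q) * f**2 + (q - 2) * f**p - (p - 2) * f**q) / (p - q)

def eta(p, q):
    # eta = 4 * int_0^1 df sqrt(-2V(f))|_{kappa=0}, Eq. (19)
    integrand = lambda f: 4.0 * np.sqrt(max(minus2V_k0(f, p, q), 0.0))
    return quad(integrand, 0.0, 1.0)[0]
```

For the equidistant pairs (p, q) = (3, 4) and (4, 6) this reproduces η = 2/3 and η = 1, i.e. 2n/(2+n) with n = 1 and n = 2.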
Since the potential (9) is symmetric under p ↔ q, subsequent equations should also be symmetric; this leads us to a better approximation for η:

η ≈ √π Γ(p/(p−2)) / Γ(3/2 + 2/(p−2)) + √π Γ(q/(q−2)) / Γ(3/2 + 2/(q−2)) − 2, (23)

valid for large p and q. This deviates from the numerical integral of Eq. (19) by less than 8% for integer p > 3 and q > 6 and is therefore a useful approximation for most exponents. In the thin-wall limit, κ ≪ 1, the Q-ball radius is hence obtained from Eq. (19) and the profile f(ρ) (or rather ρ(f)) from Eq. (16), which can then be used to obtain Q-ball charge and energy from Eqs. (11) and (12) using the integrals

∫₀^∞ dρ ρ² f² ≈ ∫₀^{ρ(1−ε)} dρ ρ² + ∫₀^{1−ε} df ρ(f)² f² / √(−2V(f))|_{κ=0}, (24)
∫₀^∞ dρ ρ² f′² ≈ ∫₀¹ df ρ(f)² √(−2V(f))|_{κ=0}, (25)

where the f² integral is split to avoid the singularity in the second term for ε → 0. Any ε ≪ 1 gives a good approximation here. This procedure is trivial to perform numerically for any p and q and is far simpler than solving the original differential equation, especially since the latter becomes numerically difficult for minuscule κ. For some cases of p and q, all integrals can even be performed analytically, leading to particularly simple descriptions of thin-wall Q-balls, as shown below. To compare our analytic approximations with the exact solutions, we solve the differential equation (8) numerically via the shooting method [1], which is straightforward at least for small p and q and κ not too close to 0 or 1. Since the differential equation including boundary conditions is identical to the bounce equation of vacuum decay in three dimensions [20-22], we can borrow codes dedicated to that problem to find Q-ball profiles. In addition to our own implementation of the shooting method, we also use AnyBubble [23] in our analysis.

A. Equidistant exponents: p = 2 + n, q = 2 + 2n

Analytic approximations of the thin-wall Q-ball equation are easiest to obtain when the exponents in the potential are equidistant, i.e.
p − 2 = q − p ≡ n, where n is positive and typically an even integer. Special cases include n = 2, discussed in Ref. [15], and n = 1, discussed in Ref. [13]. In this case, the potential V reaches its global maximum at

f₊ = [(2 + n + √(n² + 4κ² + 4nκ²)) / (2 + 2n)]^{1/n} (26)
= 1 + κ²/n² − (1 + 3n)κ⁴/(2n⁴) + O(κ⁶). (27)

The magnitude of the potential at this point is

V(f₊) = (κ²/2) [1 + κ²/n² − κ⁴/n³ + O(κ⁶)]. (28)

The radius integral (19) can be performed analytically to give the radius at small κ:

R ≈ 2n / ((2 + n)κ²),  η = 2n/(2 + n). (29)

Restricting ourselves to integer n, the coefficient η ranges from 2/3 (n = 1) to η = 2 (n → ∞), increasing monotonically. This happens to coincide with the η range for arbitrary p and q, as shown above. In Fig. 2, we compare the prediction from Eq. (29) with numerical results for several n and find excellent agreement even for κ as large as 0.8 (numerical data are supplied as ancillary files on the arXiv [24]). The only exception is the n = 1 case, which is special in many ways and will be discussed in more detail below. The analytical transition function of Eq. (16) takes the simple form

f(ρ) = [1 + ((3/2)^n − 1) e^{n(ρ−R)}]^{−1/n}. (30)

(For the radius definition of Ref. [15], f″(R) = 0, we would instead have f(ρ) = (1 + n e^{n(ρ−R)})^{−1/n}.) Rather than using this transition profile f(ρ) directly, we modify it slightly to take into account that the particle does not start at f = 1 but rather f₊ ≈ 1 + κ²/n², and define

F(ρ) ≡ (1 + κ²/n²) [1 + ((3/2)^n − 1) e^{n(ρ−R)}]^{−1/n}. (31)

This ansatz F(ρ) is equally valid as f(ρ) but leads to a slightly better agreement with numerical results for larger κ. In Fig. 3 we can see how well this approximation describes the exact profiles. The transition profiles become better for smaller κ, as expected, as well as for larger n. The latter can be understood by noting that our thin-wall approximations f₊ ≈ 1 and V(f₊) ≈ κ²/2 become increasingly better for larger n, as can be seen in Eqs.
(27) and (28). With radius and transition profiles at our disposal it is straightforward to calculate Q-ball charge and energy, as determined by the two integrals F 2 ρ 2 dρ and F 2 ρ 2 dρ. Expanding in small κ or large radius, we find [F (ρ)] 2 ρ 2 dρ R 3 3 1 + 1 n(n + 2)R −3(n + 2) log 3 2 n − 1 − 3(n + 2) ψ (0) 2 n + γ + 4 + 1 2n 2 R 2 π 2 + 6γ γ − 4 n + 2 + 12 log 3 2 n − 1 + γ − 2 n + 2 ψ (0) 2 n + 6 log 3 2 n − 1 2γ(n + 2) + (n + 2) log 3 2 n − 1 − 4 n + 2 + 8 (n + 2) 2 + 6ψ (0) 2 n 2 + 6ψ (1) 2 n ,(32)[F (ρ)] 2 ρ 2 dρ nR 2 2n + 4 1 + 4 − 2(n + 2) log 3 2 n − 1 + ψ (0) 2 n + γ − 1 n(n + 2)R + 1 6n 2 (n + 2) 2 R 2 48(n + 2) log 3 2 n − 1 + ψ (0) 2 n + γ − 1 + 24 + (n + 2) 2 6ψ (1) 2 n + π 2 + 6(γ − 2)γ + 6 log 3 2 n − 1 log 3 2 n − 1 + 2γ − 2 +6ψ (0) 2 n 2 log 3 2 n − 1 + γ − 1 + ψ (0) 2 n ,(33) where γ 0.577 is the Euler-Mascheroni constant and ψ (1) (x) is the first derivative of the Digamma function ψ (0) (x) ≡ Γ (x)/Γ(x). In figure 4 we compare these integrals to the numerical solutions for various n. Clearly our analytical approximations are excellent even outside the thin-wall limit. For n > 1, they are good up to κ 0.8 and become better for increasing n. The case n = 1 is once again special and will be discussed in more detail below in section IV. The two integrals allow us to determine the charge Q and energy E of Q-balls. To lowest non-trivial order we have E ω 0 Q + n 2 + n 9πφ 2 0 2ω 2 0 1/3 m 2 φ − ω 2 0 Q 2/3 (34) in the thin-wall or large-Q limit ω ω 0 , assuming ω 0 = 0. 2 We see that the n-dependence of the Q-ball energy for a fixed charge Q is very mild, merely an O(1) factor in front of the surface energy. Of particular interest is the ratio E/(m φ Q), which has to be smaller than unity to ensure Q-ball stability against decay into Q free 2 For ω 0 = 0, we have E 5 2 [n/(2 + n)] 3/5 (π/3) 1/5 φ 2/5 0 m 3/5 φ Q 4/5 . 
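The quadrature construction of the profile can be sanity-checked against the closed-form transition function: for equidistant exponents, integrating Eq. (16) must reproduce Eq. (30) exactly. A minimal sketch (our own, using n = 2, i.e. p = 4, q = 6, and an arbitrary R):

```python
import numpy as np
from scipy.integrate import quad

n, R = 2, 10.0  # equidistant case p = 4, q = 6; R chosen arbitrarily

def rho_of_f(f):
    # Eq. (16): rho(f) = R - int_{2/3}^f df' / sqrt(-2V)|_{kappa=0},
    # where sqrt(-2V)|_{kappa=0} = f (1 - f^n) for equidistant exponents
    val, _ = quad(lambda x: 1.0 / (x * (1.0 - x**n)), 2.0 / 3.0, f)
    return R - val

def f_closed(rho):
    # closed-form transition profile, Eq. (30)
    return (1.0 + ((3.0 / 2.0)**n - 1.0) * np.exp(n * (rho - R)))**(-1.0 / n)

fs = np.linspace(0.05, 0.95, 19)
err = max(abs(f_closed(rho_of_f(f)) - f) for f in fs)
```

The round trip f → ρ(f) → f_closed(ρ) agrees to numerical quadrature precision, which illustrates why the integral route of Eqs. (24)-(25) is reliable.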
particles [25]: E m φ Q = κ 2 + ω 0 m φ 2 (1 − κ 2 ) + 1 − ω0 m φ 2 3 κ 2 + ω0 m φ 2 (1 − κ 2 ) [f (ρ)] 2 ρ 2 dρ [f (ρ)] 2 ρ 2 dρ .(35) The stability criterion E/(m φ Q) < 1 depends on κ, ω 0 /m φ , and the ratio of the two integrals. In the small κ expansion, the ratio of integrals takes the following form [F (ρ)] 2 ρ 2 dρ [F (ρ)] 2 ρ 2 dρ 3n 2(2 + n)R × 1 + 1 R 2 + γ + ln 3 2 n − 1 + ψ (0) ( 2 n ) n + 1 3n 2 R 2 × 3 ln 3 2 n − 1 4 + 2γ + ln 3 2 n − 1 + 6ψ (0) 2 n 2 + γ + ln 3 2 n − 1 + ψ (0) 2 n 2 +3γ(4 + γ) − π 2 − 6ψ (1) 2 n .(36) For small κ or large R, the ratio of integrals goes to zero as 3κ 2 /4 and E → ω 0 Q < m φ Q for ω 0 > 0. Stability against decay into Q free particles is hence guaranteed in the thin-wall limit, as shown long ago by Coleman [1]. This holds for all n. For larger κ, on the other hand, it is not clear that E remains below m φ Q; indeed, our analytical thin-wall results imply E > m φ Q for κ 0.8 for all integer n. Unfortunately, this κ region is just at the edge of viability for our thin-wall results and hence not fully trustworthy, at least for small n. Instead, we have checked this region numerically; for n ≥ 2, Q-balls indeed become unstable for κ ≥ κ critical ∼ 0.8, illustrated in Fig. 5. For n ≥ 3, a regular pattern emerges where κ critical increases with n. This eventually converges toward the black n → ∞ line in Fig. 5, which is derived in Sec. V and does not rely on the thin-wall approximation. Ultimately, κ critical always lies between 0.8 and 0.85 for n ≥ 2, showing a rather mild dependence on n and ω 0 /m φ . Since Q ∝ dρ ρ 2 f 2 is a monotonic function of κ for κ < κ critical (see Fig. 4), the stability criterion is equivalent to a minimal charge Q a stable Q-ball needs to have. For integer n, we are left with the special case n = 1 (or p = 3, q = 4), which does not have a κ critical , i.e. leads to Q-balls with E < m φ Q for all κ ∈ (0, 1), see Fig. 6. The stability of these Q-balls around κ ∼ 1 was proven already in Refs. 
[13, 14, 16, 26]; here we show numerically that the Q-balls are also stable in the intermediate κ regime between the thin- and thick-wall limits. Analytical approximations are difficult to obtain in this intermediate regime.

B. General exponents

Equidistant exponents in the potential lead to simple analytical expressions for thin-wall Q-ball properties, but clearly only cover part of the possible parameter space. Let us briefly discuss general p and q exponents. Notice that even though our original Lagrangian requires p < q, the rescaled differential equation and effective potential V(f) are symmetric under p ↔ q and thus equally valid for p > q; even the limit q = p is well defined. Eq. (16) cannot be solved analytically for arbitrary p and q, but we can try to find an effective equidistance parameter n(p, q) which generates the profile most similar to the one generated by p and q. To find this n, we note that both radius and profile shape are determined by integrals of the function √(−2V(f))|_{κ=0}, see Eqs. (19) and (16). This function √(−2V(f))|_{κ=0} is fairly simple: it vanishes at f = 0 and f = 1 and has a (p, q)-dependent maximum at an f ∈ (1/2, 1). For any p and q we can try to describe this function approximately using the equidistant expression √(−2V(f))|_{κ=0} = f(1 − f^n). A numerical fit would lead to the optimal n, but to obtain an analytic approximation we simply match the potentials at the radius, i.e. at f = 2/3:

V(f = 2/3)|_{p=2+n, q=2+2n} = V(f = 2/3)|_{p,q}, (37)

which provides the effective n(p, q)

n(p, q) = log[1 − (3/2)√((2/3)^q (p−2)/(q−p) + (2/3)^p (q−2)/(p−q) + 4/9)] / log(2/3), (38)

manifestly symmetric under p ↔ q. This ansatz for n(p, q) can now be used with Eq. (31) to predict the profile f(ρ) for arbitrary p and q. We stress that the so-obtained f(ρ) will always be approximate, unlike the equidistant cases that correspond to actual asymptotic solutions to the differential equation.
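Eq. (38), as we read it (with an overall square root inside the logarithm), can be checked by verifying that it returns n exactly for equidistant exponents; a short sketch of our own:

```python
import numpy as np

def n_eff(p, q):
    # effective equidistance parameter, Eq. (38)
    inner = (2 / 3)**q * (p - 2) / (q - p) + (2 / 3)**p * (q - 2) / (p - q) + 4 / 9
    return np.log(1.0 - 1.5 * np.sqrt(inner)) / np.log(2 / 3)
```

For p = 2 + n, q = 2 + 2n the matching condition (37) is trivially satisfied, so n_eff must return n itself, and the expression is symmetric under p ↔ q.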
Nevertheless, the profile obtained using this effective n(p, q) is a good approximation of the actual potential V (f, p, q), especially for p q. Profiles generated using this n(p, q) prediction and Eq. (31) can be seen in Fig. 7. The one-parameter set of profiles of Eq. (31) is apparently (and surprisingly) sufficient to capture all possible profile shapes for general exponents p and q! This n(p, q) prediction naturally allows us to apply every analytical formula that we have already derived for the equidistant case in Sec. III A to the general case of arbitrary (p, q). For example, with the help of equations (29) and (38) we can predict radii of Q-balls for arbitrary p and q. In Fig. 8 we can see that we predict radii with high accuracy all the way up to κ 0.8, at least for p > 3. The region beyond κ ≈ 0.86 is unstable anyway in all cases except p = 3, to be discussed in detail in Sec. IV. It is worth noting that R(κ) calculated in this way and using the previous prediction (Eq. (19)) agree to better than 7% for all integer p and q, the largest deviation being 6.3% for the p = 5, q → ∞ case. With radius and profile for arbitrary p and q at our disposal, we can also calculate the two integrals relevant for Q-ball energy and charge. The integrals are simply Eqs. (32) and (33), with the radius replaced by Eq. (29) and n by the effective n(p, q) from Eq. (38). The comparison to numerical results is shown in Fig. 9 and is very good for small κ and p > 3. We can see that [F (ρ)] 2 ρ 2 dρ (bottom) works extremely well for κ 0.86. [F (ρ)] 2 ρ 2 dρ (top) properly fits numerical results only for κ 0.75. The p = 3 case is special as Q-balls remain stable for all κ and also shows the largest deviation with our prediction. This case is discussed in more details in Sec. IV. Just like in the case of equidistant exponents, our thinwall results predict E > m φ Q for κ 0.8. 
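The numerical profiles referred to throughout are obtained by solving Eq. (8) with a shooting method. A minimal sketch of such a solver (our own illustration, not the authors' implementation or AnyBubble), for the example p = 4, q = 6, κ = 0.5, bisecting on f(0) between undershooting and overshooting trajectories:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

P, Q, KAP = 4, 6, 0.5  # example exponents and kappa

def dV(f, p=P, q=Q, k2=KAP**2):
    # dV/df for the effective potential of Eq. (9)
    return (2 * (p - q) * (k2 - 1) * f - p * (q - 2) * f**(p - 1)
            + q * (p - 2) * f**(q - 1)) / (2 * (p - q))

def rhs(rho, y):
    f, df = y
    return [df, -2.0 / rho * df - dV(f)]

def solve(f0, rho_max=40.0):
    return solve_ivp(rhs, (1e-8, rho_max), [f0, 0.0],
                     max_step=0.05, rtol=1e-10, atol=1e-12)

def overshoots(f0):
    sol = solve(f0)
    f, df = sol.y
    ineg = np.argmax(f < 0.0) if (f < 0.0).any() else np.inf
    ipos = np.argmax(df > 1e-8) if (df > 1e-8).any() else np.inf
    return ineg < ipos  # f crossed zero before turning around

f_plus = brentq(dV, 0.9, 1.2)                      # global maximum of V
lo, hi = (1 - KAP)**(1 / (P - 2)), f_plus - 1e-6   # under-/overshooting bracket
for _ in range(45):                                # bisect on f(0)
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if overshoots(mid) else (mid, hi)

sol = solve(lo)                 # undershooting profile: f stays positive
f = sol.y[0]
i = np.argmax(f < 2 / 3 * f[0]) # radius defined by f(R) = (2/3) f(0)
R = sol.t[i]
```

For these parameters the extracted radius should be close to the thin-wall estimate R ≈ 2n/((2+n)κ²) = 4 from Eq. (29).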
For large p and q, this region is still covered by our thin-wall approximation and hence qualitatively correct. For small p and q, we have to rely on numerical data to investigate Q-ball stability in this region. As shown already in Refs. [14, 16, 26], only the cases with p = 3 are stable near κ = 1, and are actually stable for all κ, as argued below in Sec. IV. All integer cases with p > 3, on the other hand, become unstable beyond some κ_critical ∼ 0.8. We provide many examples for κ_critical in Fig. 10. Our numerical calculations show that the case p = 4 & q = 5 has the largest κ_critical of all integer exponents. As we increase both p and q, κ_critical first decreases; as we keep increasing p and q, at some point it starts increasing again. Out of all integer exponents, the case p = 6 & q = 7 has the smallest κ_critical. In this case and in all cases with larger p, increasing q increases κ_critical as well. For larger exponents the curves follow a more consistent shape and approach our p → ∞ prediction, which we derived using Eq. (50) and Eq. (35).

IV. THICK-WALL LIMIT AND THE p = 3 CASE

As shown above, solutions to the differential equation (8) can be well approximated in the small-κ regime using transition functions. For integer p > 3, these approximations are sufficiently accurate over the entire κ region that leads to stable Q-balls. Only the cases with p = 3 motivate us to consider larger κ, as they still allow for stable Q-balls [13, 14, 16, 26]. Near κ ∼ 1 we can find the profiles using the thick-wall approximation [13, 27], based on the fact that f(0) becomes smaller and smaller as κ → 1, which is clear from the shape of the potential V.
For small f, one can then neglect the f^q term in the potential, seeing as it is the most suppressed term in the small-f limit. In the classical-mechanics analogy, the particle does not start near the maximum as in the thin-wall limit, so the f^q term that generates this maximum can be neglected. Notice that we still have a q dependence in our potential despite neglecting the f^q term, due to our definitions of e.g. ω₀; q of course drops out of physical quantities in the thick-wall limit. Setting p = 3 and omitting f^q allows for a useful rescaling of the differential equation [14]: we write

f(ρ) = [2(q−3)/(3(q−2))] (1 − κ²) g(√(1−κ²) ρ) (39)

with a function g(x) that is determined by the parameterless differential equation

g″(x) + (2/x) g′(x) − g(x) + g(x)² = 0, (40)

easily solved numerically and well approximated by the function

g(x) ≈ (4.20 − 0.10x − 0.85x² + 0.30x³) e^{−0.31x²}. (41)

The Q-ball radius in the thick-wall limit then diverges as

R ≈ 0.91/√(1−κ²)  ⇒  R_Q-ball ≈ 0.91/√(m_φ² − ω²) (42)

and the integrals take the simple form

∫ [f′(ρ)]² ρ² dρ = (1 − κ²) ∫ [f(ρ)]² ρ² dρ (43)
= [4(q−3)²/(9(q−2)²)] (1 − κ²)^{3/2} ∫ dx x² g², (44)

with ∫ dx x² g² ≈ 10.42. These thick-wall predictions are shown in Figs. 2, 4, 8, and 9 and match numerical data very well for κ close to 1, especially for q ≫ 3. For the stability ratio we then find

E/(m_φQ) = 1 − [(m_φ² − ω₀²)/(3m_φ²)] (1 − κ²) + O((1−κ²)²), (45)

rendering these p = 3 thick Q-balls stable for κ → 1, albeit much more weakly bound than thin-wall Q-balls. The charge Q is manifestly q-independent and actually approaches zero in the thick-wall limit ω → m_φ, despite the diverging radius:

Q ≈ [32πω/(9β²)] √(m_φ² − ω²) ∫₀^∞ dx x² g(x)², (46)

with the β from Eq. (6). p = 3 Q-balls thus become more and more dilute while carrying less and less charge and energy as κ → 1, eventually approaching the vacuum solution φ = 0.
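The quoted normalization ∫dx x²g² ≈ 10.42 can be checked directly with the fit (41), without solving Eq. (40); a short sketch of our own (the percent-level deviation is due to using the fit rather than the exact g):

```python
import numpy as np
from scipy.integrate import quad

def g_approx(x):
    # polynomial-Gaussian fit to the solution of Eq. (40), Eq. (41)
    return (4.20 - 0.10 * x - 0.85 * x**2 + 0.30 * x**3) * np.exp(-0.31 * x**2)

# normalization integral entering the thick-wall charge and energy
I, _ = quad(lambda x: x**2 * g_approx(x)**2, 0.0, np.inf)
```

The Gaussian factor guarantees convergence of the improper integral, and the result lands within a few percent of 10.42.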
However, our classical-field description of these φ bound states eventually breaks down at small Q and needs to be replaced by a quantum-mechanical picture [13, 28, 29]. So far we have only shown that p = 3 Q-balls are stable for small κ (thin wall) and near κ = 1 (thick wall). For q > 4 our analytical descriptions are accurate enough to prove stability, i.e. E < m_φQ, over the entire κ range. For q = 4 we have checked numerically that E < m_φQ holds for all κ, as illustrated in Fig. 6. p = 3 Q-balls with any integer q > 3 thus have E < m_φQ for any κ ∈ (0, 1). We have restricted our discussion so far to integer p and q, for which indeed p = 3 < q is the only case with stable Q-balls near κ ∼ 1. For non-integer exponents, Refs. [14, 16] have shown that Q-balls with 2 < p < 10/3 are stable near κ ∼ 1. Our results suggest that those Q-balls are actually stable over the entire range 0 < κ < 1.

V. LIMIT OF LARGE EXPONENTS

The case of large exponents, 2 ≪ p < q, allows for a qualitatively different approximation than the thin-wall limit above. As can be seen from Fig. 1, large p and q lead to a very narrow maximum of V(f), positioned at f ≈ 1, no matter the value of κ. The particle then falls down the almost vertical cliff, all the while remaining at f ≈ 1, until it reaches the potential minimum. The motion from the minimum to f = 0 is subsequently described by the easy-to-solve differential equation

f″(ρ) + (2/ρ) f′(ρ) − f(ρ)(1 − κ²) = 0, (47)

where we neglected any f^p or f^q terms since they are highly suppressed in the f < 1 region. The large-exponent profile is then simply

f_large-p = { 1, ρ < R̄;  (R̄/ρ) exp[√(1−κ²)(R̄−ρ)], ρ ≥ R̄ }, (48)

demanding continuity at the point ρ = R̄ (which is related to the radius by R ≈ R̄ + 1/3 in the stable κ regime). This ansatz is valid for all κ in the 2 ≪ p < q limit. To find the remaining R̄(κ) relation, we can use Eq. (17); notice that the left-hand side of Eq. (17), V(f₊), is κ²/2 for p, q → ∞, just like in the small-κ limit.
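With the piecewise profile (48) the charge integral becomes elementary, which provides a direct check of the closed forms quoted below in Eq. (50). A sketch of our own (κ = 0.5 is an arbitrary example):

```python
import numpy as np
from scipy.integrate import quad

kap = 0.5                      # arbitrary example value of kappa
s = np.sqrt(1 - kap**2)
Rbar = (1 + s) / kap**2        # Eq. (49)

def f(rho):
    # large-exponent profile of Eq. (48)
    return 1.0 if rho < Rbar else (Rbar / rho) * np.exp(s * (Rbar - rho))

# direct integration of int rho^2 f^2 drho, split at the kink at Rbar
inner, _ = quad(lambda r: r**2 * f(r)**2, 0.0, Rbar)
outer, _ = quad(lambda r: r**2 * f(r)**2, Rbar, np.inf)
num = inner + outer

# closed form quoted in Eq. (50), equal to Rbar^3/3 + Rbar^2/(2s)
closed = (8 - kap**4 - 4 * kap**2 + 8 * s) / (6 * kap**6 * s)
```

The inner piece is just the volume term R̄³/3 and the exponential tail contributes R̄²/(2√(1−κ²)), which together reproduce the closed form exactly.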
This gives R = 1 + √ 1 − κ 2 κ 2 ,(49) valid again for all κ in the 2 p < q limit. The large-p radius is shown in Figs. 2 and 8. The integrals then take the simple forms [f (ρ)] 2 ρ 2 dρ = √ 1 − κ 2 + 1 κ 2 + √ 1 − κ 2 + 1 2κ 4 , [f (ρ)] 2 ρ 2 dρ = 8 − κ 4 − 4κ 2 + 8 √ 1 − κ 2 6κ 6 √ 1 − κ 2 ,(50) illustrated in Figs. 4 and 9. Solutions to our differential equation in the large exponent limit show a simple and universal behavior. This does not imply that Q-ball energy and charge become independent of p and q in this limit, as the φ 0 and ω 0 in Eq. (7) depend on the exponents. The integrals can also be used in Eq. (35) to find the stability constraint in the limit of large exponents, shown in Figs. 5 and 10. This confirms that the critical κ, satisfying E = m φ Q, lies in the narrow finite range (0.8, 0.86) for all integer p > 3. VI. UV COMPLETION So far we have worked with the potential U (|φ|) from Eq. (6), which contains non-renormalizable terms for even exponents and charge-breaking terms for odd exponents. In this section, we will show how these operators can be obtained in UV-complete models. We restrict ourselves to the simplest UV completion, consisting of a U (1)-charged complex scalar φ and a real neutral scalar ψ, which have the Lagrangian L = ∂ µ φ∂ µ φ * + 1 2 ∂ µ ψ∂ µ ψ − U [φ, ψ](51) with U (1)-symmetric scalar potential U [φ, ψ] = m 2 φ |φ| 2 + 1 2 m 2 ψ ψ 2 + b|φ| 4 + c|φ| 2 ψ + d|φ| 2 ψ 2 + eψ 3 + aψ 4 .(52) Here, m ψ and m φ are the particle masses and a, b, c, d, and e are real constants. 3 The Euler-Lagrange equations for the fields are ∂ µ ∂ µ φ + ∂U ∂φ * = 0 , ∂ µ ∂ µ ψ + ∂U ∂ψ = 0 .(53) Once again we are looking for spherically-symmetric localized solutions to these equations, with time dependence φ(x, t) ∝ e iωt for φ and a time-independent ψ. The leading-order thin-wall limit for such a system was recently analyzed in Ref. [6], where it was shown that the multi-field generalizations of Eq. 
(4) are ∂U (|φ|, ψ) ∂|φ| − 2 U (|φ|, ψ) |φ| |φ|= φ 0 √ 2 , ψ=ψ0 = 0 , ∂U (|φ|, ψ) ∂ψ |φ|= φ 0 √ 2 , ψ=ψ0 = 0 ,(54) which give the Q-ball energy E ω 0 Q , with ω 0 = U (φ 0 / √ 2, ψ 0 ) (φ 0 / √ 2) 2(55) to leading order in large Q. The Q-ball properties beyond this thin-wall approximation have to be obtained numerically by solving the coupled non-linear differential equations. Below, we show that for some special cases of the potential U [φ, ψ] the two-field system can be mapped onto a one-field system of the form discussed in the previous sections, severely simplifying if not even solving the problem. Just like in the one-field case explored in the main part of this article, it proves convenient to rescale the fields φ and ψ by their thin-wall values φ 0 and ψ 0 as determined by Eq. (54): φ(x, t) = e iωt φ 0 √ 2 f (|x|) , ψ(x, t) = ψ 0 h(|x|) ,(56) where f (|x|) and h(|x|) are dimensionless functions that are ≤ O(1) for all x. We furthermore perform the same coordinate transformation as before, x → x m 2 φ − ω 2 0 , with ω 0 from Eq. (55). ω is replaced by κ as in Eq. (10). All of this ensures that the equation of motion for f (ρ) resembles that of our one-field scenario as closely as possible, except, of course, for the presence of h(ρ). Since the rescaling is difficult for the general potential U [φ, ψ] we will only show the resulting differential equations for f and h for some special examples below. The Q-ball charge Q is determined entirely by the charged field φ and is again given by our equation (11) upon using the definitions we set forth. The Q-ball energy, on the other hand, contains a contribution from the neutral field ψ: E = d 3 x φ 2 0 (f ) 2 2 + ψ 2 0 (h ) 2 2 + ω 2 φ 2 0 f 2 2 + U [φ, ψ] = ωQ + 4π 3 m 2 φ − ω 2 0 dρ ρ 2 φ 2 0 (f ) 2 + ψ 2 0 (h ) 2 ,(57) where in the second line we used the virial theorem, e.g. Refs. [3,33]. A. Massive ψ Assuming m ψ m φ we can neglect the kinetic term in the Euler-Lagrangian equation associated with the field ψ. 
Thus we end up with ∂U[φ, ψ]/∂ψ = 0. We can solve the latter order by order in large m_ψ with the following ansatz

ψ = x₁/m_ψ² + x₂/m_ψ⁴ + x₃/m_ψ⁶ + …, (58)

with coefficients x_j that depend on |φ|² and the coefficients in the two-field potential. The ψ field is hence suppressed compared to the φ field in this expansion, which suppresses ψ's contribution to the Q-ball energy. After solving ψ's equation of motion and plugging the resulting ψ back into the potential we get the potential for φ:

U[φ] = m_φ²|φ|² + (b − c²/(2m_ψ²))|φ|⁴ + (c²d/m_ψ⁴ − c³e/m_ψ⁶)|φ|⁶ + ((c⁴a + 6c³de)/m_ψ⁸ − 2c²d²/m_ψ⁶)|φ|⁸ + (4c²d³/m_ψ⁸)|φ|¹⁰ + O(1/m_ψ¹⁰). (59)

As expected for an effective field theory at tree level, we find higher-dimensional operators in |φ|² suppressed by powers of m_ψ². Below we show some examples that can be approximately described by our one-field potential from Eq. (6). An alternative derivation of the same cases that highlights the proper expansion parameter is deferred to App. A for the curious reader. Setting a = e = 0 and only keeping the terms up to O(1/m_ψ⁴), we find

U[φ] ≈ m_φ²|φ|² + (b − c²/(2m_ψ²))|φ|⁴ + (c²d/m_ψ⁴)|φ|⁶. (60)

This corresponds to the p = 4, q = 6 case of Eq. (6), with β = c²/(2m_ψ²) − b and ξ = c²d/m_ψ⁴. Setting instead d = e = 0 and keeping terms up to O(1/m_ψ⁸) gives

U[φ] ≈ m_φ²|φ|² + (b − c²/(2m_ψ²))|φ|⁴ + (c⁴a/m_ψ⁸)|φ|⁸, (61)

i.e. the p = 4, q = 8 case. The heavy-ψ framework from above unsurprisingly generates even exponents p and q. Considering instead a massless ψ can give odd or even rational exponents, as we will show below. For these scenarios we work with the equations of motion from Eq. (53) rather than the potential. We go through two simple cases below.

1. m_ψ = d = e = 0

Performing the above-mentioned rescaling for the case m_ψ = d = e = 0 yields the following simple equations of motion for f(ρ) and h(ρ), whose first equation can be solved algebraically to h(ρ) = f(ρ)^{2/3} when a/b is large. Plugging this into the second equation we recover for f(ρ) exactly the single-field equation (8) with p = 8/3 and q = 4.
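The coefficients of Eq. (59), as we read them, can be cross-checked symbolically by iterating ψ's equation of motion. The following is our own sympy sketch (not the authors' derivation), with ε ≡ 1/m_ψ² and x ≡ |φ|², verifying the |φ|⁴, |φ|⁶, and the leading |φ|⁸ coefficients:

```python
import sympy as sp

eps, x, a, b, c, d, e = sp.symbols("epsilon x a b c d e")

def trunc(expr, nmax):
    # drop monomials of order eps^nmax or higher
    expr = sp.expand(expr)
    return sp.Add(*[t for t in expr.as_ordered_terms()
                    if sp.degree(t, eps) < nmax])

# iterate dU/dpsi = 0, i.e. psi = -eps (c x + 2 d x psi + 3 e psi^2 + 4 a psi^3)
psi = sp.Integer(0)
for _ in range(5):
    psi = trunc(-eps * (c * x + 2 * d * x * psi + 3 * e * psi**2 + 4 * a * psi**3), 6)

# psi-dependent part of Eq. (52) plus b|phi|^4, evaluated on the solution
U = trunc(b * x**2 + psi**2 / (2 * eps) + c * x * psi
          + d * x * psi**2 + e * psi**3 + a * psi**4, 5)
```

Reading off the coefficient of x^k order by order in ε reproduces b − c²ε/2 for |φ|⁴, c²dε² − c³eε³ for |φ|⁶, and −2c²d²ε³ as the leading |φ|⁸ piece, in agreement with Eq. (59).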
h″(ρ) + (2/ρ) h′(ρ) + 2√(a/b) [f(ρ)² − h(ρ)³] = 0, (63)
f″(ρ) + (2/ρ) f′(ρ) + f(ρ)[κ² − 1 + 2h(ρ)] − f(ρ)³ = 0.

The relation h(ρ) = f(ρ)^{2/3} holds for all κ, allowing us to solve the two-field system by simply solving the single-field equation (8). Of course, for small or large κ we can even approximate f(ρ) and hence h(ρ) using our analytical results from above. As illustrated in Fig. 11, the two profiles (and radii) are indeed very well described by our transition profile from Eq. (30) using n(8/3, 4) ≈ 0.8 from Eq. (38), at least for small κ and large a/b. This also allows us to obtain analytic approximations for Q-ball energy and charge. Notice that ψ₀/φ₀ ∝ (b/a)^{1/4} here, so ψ's contribution to the Q-ball energy (57) is suppressed in the limit of interest. Since these Q-balls are then approximately single-field Q-balls with p = 8/3 < 10/3, they have stable thin- and thick-wall limits [14, 16] and are hence stable for all κ, allowing for arbitrarily large or small charge Q. This case therefore provides one of the simplest renormalizable realizations of a Q-ball that can grow naturally via accumulation of particles without requiring a minimal threshold charge. Of course, for small Q our classical analysis needs to be replaced by a quantum one.

2. m_ψ = d = a = 0

Next, let us consider m_ψ = d = a = 0. This gives the system of equations

h″(ρ) + (2/ρ) h′(ρ) + (9e/c) [f(ρ)² − h(ρ)²] = 0, (64)
f″(ρ) + (2/ρ) f′(ρ) + f(ρ)[−2f(ρ)² + κ² + 3h(ρ) − 1] = 0.

Now, by choosing e ≫ c we see from the first equation that the two profiles will approximately coincide: h(ρ) = f(ρ). After plugging this into the second equation we recover Kusenko's single-field case with p = 3 and q = 4 for f(ρ) [13]. The field ratio ψ₀/φ₀ ∝ |c/e| is again suppressed in the limit of interest. Just like in the previous case, we hence find a simple renormalizable realization of a stable Q-ball with arbitrary charge.

VII.
DISCUSSION AND CONCLUSION

Q-balls are simple examples of bound states consisting of scalars φ. Assuming an attractive self-interaction in the scalar potential, these objects can contain a large number of particles, allowing for a classical description. Q-balls were conceived many decades ago, but their description outside of the simplest of limits has proven challenging, owing to the non-linear nature of their underlying field equation. In this article, we performed an exhaustive study of Q-balls generated by three-term potentials of the form U(|φ|) = m_φ²|φ|² − β|φ|^p + ξ|φ|^q. For 2 < p < q and positive β and ξ, these are the simplest potentials that can give large Q-balls à la Coleman. We have provided analytical approximations that describe stable Q-balls for all exponents p and q, in part by generalizing the procedure of Ref. [15]. We find a surprisingly universal Q-ball behavior that depends only weakly on the integers p and q: i) the instability threshold where E = m_φQ falls in the narrow range κ ∈ (0.80, 0.86) for all p > 3; ii) the volume energy does not depend on p and q, and even the surface energy shows only a mild dependence; iii) radii of stable Q-balls with p > 3 scale with 1/κ² up to an O(1) prefactor that depends on p and q. Furthermore, all stable Q-balls have radii R > 1, or, in terms of the actual dimensionful Q-ball radius,

R_Q-ball > 1/√(m_φ² − ω₀²). (65)

In particular, R_Q-ball > 1/m_φ, in perfect agreement with the bound-state conjecture of Ref. [34] for the radius of any stable bound state. The discussion of single-field Q-balls is unavoidably an effective one, as there are no values of p and q that lead to a renormalizable charge-conserving potential that is bounded from below. To highlight that our analysis is nevertheless useful, we studied a simple renormalizable two-field model that can be effectively described by our one-field scenario with several p and q, including (quite surprisingly) odd and fractional exponents.
Repeating this analysis for models with more fields would undoubtedly allow us to generate potentials with an even wider range of exponents. Finally, our results for the ground-state profiles of global Q-balls can be generalized to excited states [19] as well as gauged and Proca Q-balls via the mapping relations of Refs. [35, 36].

FIG. 1: Plot of the effective potential V(f) from Eq. (9) for κ = 0.4 and various integer p and q.

FIG. 2: R(κ) dependence for various equidistant exponents, p = 2 + n, q = 2 + 2n. Dots correspond to numerical values and solid lines to the R ∝ 1/κ² thin-wall prediction of Eq. (29). The dashed line represents the n = 1 thick-wall prediction of Sec. IV and the black line the n → ∞ limit from Sec. V.

FIG. 3: Comparison between profiles generated numerically (dashed) and analytically (solid) via Eqs. (31) and (29).

FIG. 4: ∫[F(ρ)]²ρ²dρ (top) and ∫[F′(ρ)]²ρ²dρ (bottom) as functions of κ for various n. Dots correspond to the numerical values, solid lines show our thin-wall approximations from Eqs. (32) and (33). The dashed line represents the n = 1 thick-wall prediction of Sec. IV and the black line the n → ∞ limit from Sec. V.

FIG. 5: Values of κ = κ_critical that lead to E = m_φQ for several n (see Sec. IV for details). Dotted curves are generated numerically, the solid black line represents our theoretical estimate of the n → ∞ case from Sec. V.

FIG. 6: E/(m_φQ) dependence on κ for n = 1 for various ω_0²/m_φ². The dots represent numerical data, the lines our thin-wall results.

FIG. 7: Profile behavior in the vicinity of the surface for R = 50. Solid lines correspond to the theoretical prediction of Eq. (31) with effective n(p, q) from Eq. (38), while dots come from numerical computation.

FIG. 8: R(κ) dependence for various p and q using the n(p, q) prediction. Dots correspond to the numerical values, solid lines show R = 2n/(2 + n)/κ² with n(p, q) from Eq. (38). The dashed line represents the thick-wall prediction for p = 3 derived in Sec. IV and the black line the n → ∞ limit from Sec. V.

FIG. 9: ∫[F(ρ)]²ρ²dρ (top) and ∫[F′(ρ)]²ρ²dρ (bottom) dependence on κ for various p and q using the n(p, q) prediction (legend: p = 3, q = 8, n = 1.55; p = 3, q = 12, n = 1.76; p = 4, q = 9, n = 2.47; p = 4, q = 13, n = 2.79; p = 20, q = 22, n = 15.…). Dots correspond to the numerical values, while solid lines denote the predictions from Eqs. (33) and (32). The dashed lines represent the thick-wall predictions for p = 3 derived in Sec. IV and the black lines the n → ∞ limit from Sec. V.

FIG. 10: κ_critical (where E = m_φQ) for several p and q.

1. Large m_ψ, a = e = 0. Since β is required to be positive, we have to assume that b is of order O(1/m_ψ²). Both β and ξ are hence suppressed in this expansion, with ξ being of order β². Eq. (7) shows that m_φ² − ω_0² = β²/(4ξ) is of order O(m_ψ⁰), so ω_0 can naturally take any value between 0 and m_φ.

2. […] is of order O(m_ψ) and hence large. Generically we then expect Q-balls with ω_0 ≈ m_φ.

3. Large m_ψ, d = 0, b = c²/(2m_φ). Setting d = 0 and b = c²/… […] of order O(1/m_ψ²) and hence small, so ω_0 should be of order m_φ.

B. Massless ψ

… which depend only on two parameters: κ and a/b. If we choose a ≫ b we can neglect the derivatives in the h equation and solve the equation of motion algebraically.

FIG. 11: Profiles for f(ρ) and h(ρ) solving Eq. (63) numerically for κ = 0.05 and 2a/b = 1000. The dashed black lines show our thin-wall predictions, i.e. f(ρ) from Eq. (30) with n(8/3, 4) ≈ 0.8 and h(ρ) = f(ρ)^{2/3}.

The Wick-Cutkosky [30-32] and Friedberg-Lee-Sirlin [3] models are notable special cases of this Lagrangian that are not covered by our analysis below because they do not have a thin-wall limit in the sense of Coleman, despite allowing for Q-ball-like solutions. Although it is not difficult to keep the contribution: in the thin-wall limit, the relevant integral for h(ρ) = f(ρ)^k is ∫dρ ρ²(h′)² ≈ nkR²/(2n + 4k).

Acknowledgements

We thank Chris Verhaaren and Arvind Rajaraman for comments on the manuscript. This work was supported in part by the National Science Foundation under Grant PHY-2210428.

Appendix A: Alternative derivation of the heavy ψ cases

If the expansion in large m_ψ in Sec. VI A did not seem convincing, we provide an alternative derivation here that follows the procedure of Sec. VI B by first rescaling the two fields. We start with the case a = e = 0. It proves convenient to replace c = √2 m_ψ √(b + β), with β defined just as below Eq. (60). All the rescaling can be performed exactly, but since the limit of interest will be small β we only show the equations in that limit here. Eq. (54) can be solved to give […]. In particular, ψ is suppressed compared to φ by β/d, so ψ's contribution to the Q-ball energy will be small. To leading order in small β, the equation of motion for h(ρ) takes the form […] and thus fixes h(ρ) = f(ρ)² as long as β² ≪ bd. The equation of motion for f with h(ρ) = f(ρ)² then matches the single-field Eq. (8) with p = 4 and p = 6, plus terms that are suppressed by β/b. This matches the conclusion of Sec. VI A 1 but highlights that the expansion parameter is not really large m_ψ but rather small β. Of course, we have identified β as being of order m_ψ⁻² above, so this is consistent. The discussion of the case d = e = 0 is analogous.

For small β, the equation of motion for ψ gives h(ρ) = f(ρ)². The differential equation for f(ρ) matches our single-field equation with p = 4, q = 8 up to terms suppressed by β/b. Finally, the case with b = c²/(2m_ψ²), d = 0 is slightly more laborious but analogous. We expand in small e, which is equivalent to small β. The field ratio ψ_0/φ_0 ∝ ec/(√a m_ψ) is suppressed again, and again we find h(ρ) = f(ρ)² for small e. The differential equation for f(ρ) matches the p = 6, q = 8 case plus terms suppressed by e²/(am_ψ²).
We again replace c by β and go to the small β limit, which gives ψ_0/φ_0 ∝ (β/a)^{1/4}, so ψ is again suppressed compared to φ.

References

[1] S. R. Coleman, "Q-balls," Nucl. Phys. B 262 (1985) 263. [Addendum: Nucl. Phys. B 269, 744 (1986)].
[2] T. D. Lee and Y. Pang, "Nontopological solitons," Phys. Rept. 221 (1992) 251-350.
[3] R. Friedberg, T. D. Lee, and A. Sirlin, "A Class of Scalar-Field Soliton Solutions in Three Space Dimensions," Phys. Rev. D 13 (1976) 2739-2761.
[4] A. Kusenko, "Solitons in the supersymmetric extensions of the standard model," Phys. Lett. B 405 (1997) 108, [hep-ph/9704273].
[5] E. Y. Nugaev and A. V. Shkerin, "Review of Nontopological Solitons in Theories with U(1)-Symmetry," J. Exp. Theor. Phys. 130 no. 2 (2020) 301-320, [1905.05146].
[6] O. Lennon, "Multi-Field Q-balls with Real Scalars," [2112.14263].
[7] G. Rosen, "Particlelike Solutions to Nonlinear Complex Scalar Field Theories with Positive-Definite Energy Densities," J. Math. Phys. 9 (1968) 996.
[8] G. Rosen, "Dilatation covariance and exact solutions in local relativistic field theories," Phys. Rev. 183 (1969) 1186-1188.
[9] S. Theodorakis, "Analytic Q ball solutions in a parabolic-type potential," Phys. Rev. D 61 (2000) 047701.
[10] R. B. MacKenzie and M. B. Paranjape, "From Q walls to Q balls," JHEP 08 (2001) 003, [hep-th/0104084].
[11] H. Arodz and J. Lis, "Compact Q-balls in the complex signum-Gordon model," Phys. Rev. D 77 (2008) 107702, [0803.1566].
[12] I. E. Gulamov, E. Y. Nugaev, and M. N. Smolyakov, "Analytic Q-ball solutions and their stability in a piecewise parabolic potential," Phys. Rev. D 87 (2013) 085043, [1303.1173].
[13] A. Kusenko, "Small Q balls," Phys. Lett. B 404 (1997) 285, [hep-th/9704073].
[14] F. Paccetti Correia and M. G. Schmidt, "Q-balls: Some analytical results," Eur. Phys. J. C 21 (2001) 181-191, [hep-th/0103189].
[15] J. Heeck, A. Rajaraman, R. Riley, and C. B. Verhaaren, "Understanding Q-Balls Beyond the Thin-Wall Limit," Phys. Rev. D 103 (2021) 045008, [2009.08462].
[16] O. Lennon, "Non-Canonical Q-balls," [2112.12547].
[17] M. S. Volkov and E. Wohnert, "Spinning Q balls," Phys. Rev. D 66 (2002) 085003, [hep-th/0205157].
[18] M. Mai and P. Schweitzer, "Radial excitations of Q-balls, and their D-term," Phys. Rev. D 86 (2012) 096002, [1206.2930].
[19] Y. Almumin, J. Heeck, A. Rajaraman, and C. B. Verhaaren, "Excited Q-balls," Eur. Phys. J. C 82 (2022) 801, [2112.00657].
[20] S. R. Coleman, "The Fate of the False Vacuum. 1. Semiclassical Theory," Phys. Rev. D 15 (1977) 2929-2936. [Erratum: Phys. Rev. D 16, 1248 (1977)].
[21] C. G. Callan, Jr. and S. R. Coleman, "The Fate of the False Vacuum. 2. First Quantum Corrections," Phys. Rev. D 16 (1977) 1762-1768.
[22] S. R. Coleman, V. Glaser, and A. Martin, "Action Minima Among Solutions to a Class of Euclidean Scalar Field Equations," Commun. Math. Phys. 58 (1978) 211-221.
[23] A. Masoumi, K. D. Olum, and B. Shlaer, "Efficient numerical solution to vacuum decay with many fields," JCAP 01 (2017) 051, [1610.06594].
[24] J. Heeck and M. Sokhashvili, "Q-balls in polynomial potentials," [2211.00021].
[25] M. I. Tsumagari, E. J. Copeland, and P. M. Saffin, "Some stationary properties of a Q-ball in arbitrary space dimensions," Phys. Rev. D 78 (2008) 065021, [0805.3233].
[26] N. Sakai and M. Sasaki, "Stability of Q-balls and Catastrophe," Prog. Theor. Phys. 119 (2008) 929-937, [0712.1450].
[27] A. D. Linde, "Decay of the False Vacuum at Finite Temperature," Nucl. Phys. B 216 (1983) 421. [Erratum: Nucl. Phys. B 223, 544 (1983)].
[28] N. Graham, "Quantum corrections to Q-balls," Phys. Lett. B 513 (2001) 112-118, [hep-th/0105009].
[29] M. Postma, "Solitosynthesis of Q-balls," Phys. Rev. D 65 (2002) 085035, [hep-ph/0110199].
[30] G. C. Wick, "Properties of Bethe-Salpeter Wave Functions," Phys. Rev. 96 (1954) 1124-1134.
[31] R. E. Cutkosky, "Solutions of a Bethe-Salpeter equation," Phys. Rev. 96 (1954) 1135-1141.
[32] E. Y. Nugaev and M. N. Smolyakov, "Q-balls in the Wick-Cutkosky model," Eur. Phys. J. C 77 (2017) 118, [1605.02056].
[33] G. H. Derrick, "Comments on nonlinear wave equations as models for elementary particles," J. Math. Phys. 5 (1964) 1252-1254.
[34] B. Freivogel, T. Gasenzer, A. Hebecker, and S. Leonhardt, "A Conjecture on the Minimal Size of Bound States," SciPost Phys. 8 no. 4 (2020) 058, [1912.09485].
[35] J. Heeck, A. Rajaraman, R. Riley, and C. B. Verhaaren, "Mapping Gauged Q-Balls," Phys. Rev. D 103 (2021) 116004, [2103.06905].
[36] J. Heeck, A. Rajaraman, R. Riley, and C. B. Verhaaren, "Proca Q-balls and Q-shells," JHEP 10 (2021) 103, [2107.10280].
[ "Observed galaxy power spectrum in cubic Galileon model" ]
[ "Bikash R Dinda \nCentre for Theoretical Physics\nJamia Millia Islamia, New Delhi-110025, India\n", "Md Wali Hossain \nAsia Pacific Center for Theoretical Physics\n37673 Pohang, Korea\n", "Anjan A Sen \nCentre for Theoretical Physics\nJamia Millia Islamia, New Delhi-110025, India\n" ]
[ "Centre for Theoretical Physics\nJamia Millia Islamia, New Delhi-110025, India", "Asia Pacific Center for Theoretical Physics\n37673 Pohang, Korea", "Centre for Theoretical Physics\nJamia Millia Islamia, New Delhi-110025, India" ]
[]
In this paper, we study the effects of general relativistic corrections on the observed galaxy power spectrum in the thawing class of cubic Galileon models with a linear potential that preserves the shift symmetry. In this scenario, the observed galaxy power spectrum differs from the standard matter power spectrum mainly due to the redshift space distortion (RSD) factor and relativistic effects. The RSD term enhances the matter power spectrum on both larger and smaller scales, whereas the relativistic terms further enhance the matter power spectrum only on larger scales. In comparison with ΛCDM, the observed galaxy power spectrum is always suppressed on large scales in this scenario, although this suppression is always small compared to the canonical quintessence scenario.
10.1088/1475-7516/2018/01/045
[ "https://arxiv.org/pdf/1706.00567v1.pdf" ]
119,373,628
1706.00567
f00c9fcfb9389ef3eb66782494aa49cb0a17d171
Observed galaxy power spectrum in cubic Galileon model

2 Jun 2017

Bikash R Dinda (Centre for Theoretical Physics, Jamia Millia Islamia, New Delhi-110025, India), Md Wali Hossain (Asia Pacific Center for Theoretical Physics, 37673 Pohang, Korea), and Anjan A Sen (Centre for Theoretical Physics, Jamia Millia Islamia, New Delhi-110025, India)

In this paper, we study the effects of general relativistic corrections on the observed galaxy power spectrum in the thawing class of cubic Galileon models with a linear potential that preserves the shift symmetry. In this scenario, the observed galaxy power spectrum differs from the standard matter power spectrum mainly due to the redshift space distortion (RSD) factor and relativistic effects. The RSD term enhances the matter power spectrum on both larger and smaller scales, whereas the relativistic terms further enhance the matter power spectrum only on larger scales. In comparison with ΛCDM, the observed galaxy power spectrum is always suppressed on large scales in this scenario, although this suppression is always small compared to the canonical quintessence scenario.
I. INTRODUCTION

Observational cosmology is currently passing through a revolutionary phase. It all started in 1998, when we first discovered using Supernova Type-Ia observations that our Universe is going through an accelerated phase of expansion [1, 2]. Since then, a wide variety of cosmological observations related to the cosmic microwave background radiation anisotropy [3-6] and baryon acoustic oscillation measurements in the galaxy power spectrum [7, 8], with unprecedented accuracies, have confirmed this acceleration. All these observations also confirm that we live in a Universe with flat spatial sections, having 25% of the energy budget in the form of cold dark matter (cdm) and 5% in baryons [5]. They also confirm another accelerated phase of expansion at a very early stage of the cosmological evolution, termed inflation [9-13]. The confirmation of the accelerated expansion of our Universe defies our understanding of gravity, which is attractive and, within the general theory of relativity, can only produce decelerated expansion. To get this accelerated expansion, we either need to add an unknown form of matter with a repulsive gravitational effect [14-17], which has no direct observational detection to date, or we need to modify Einstein gravity on the large cosmological scales [9, 18-33] where this accelerated expansion has been observed. One possibility for obtaining this repulsive gravity had already been prescribed by Einstein himself, although in a different context. This is the cosmological constant Λ, with an equation of state (EoS) p = −ρ. Together with the presence of cold dark matter, this gives the concordance ΛCDM model, which is the simplest way to explain the late time acceleration of the Universe.
At late times, the energy content in Λ has to be around 70% of the total energy density of the Universe to produce the observed acceleration. To achieve this, the value of Λ required is embarrassingly low compared with what we expect from our current understanding of particle physics, which results in the fine tuning problem [34]. The constant Λ also demands that the accelerated expansion start precisely at the present epoch, making this epoch a very special one in the entire cosmological evolution. Theoreticians describe this as the cosmic coincidence problem [35]. The fine tuning is unavoidable as far as our current understanding of particle physics and cosmology is concerned. But we can ameliorate the cosmic coincidence problem by replacing Λ with an unknown component with negative pressure that is not constant but evolves with cosmic time. We call this dark energy [14]. Motivation for considering dark energy also comes from the inconsistency of the ΛCDM model with a couple of recent observational results [36-39]. Similar to the inflaton, we can model dark energy as a scalar field slowly rolling over a sufficiently flat potential around the present time [40-42]. These scalar field dark energy models can further be classified into tracker [35] and thawing [43, 44] models, depending on the form of the potential and the subsequent evolution. Late time acceleration with scalar field dark energy models has been studied extensively by different authors [35, …]. A large scale modification of gravity in the higher dimensional brane world scenario has been proposed by Dvali, Gabadadze and Porrati (DGP) [24] to explain the late time cosmic acceleration. In the decoupling limit of the DGP model, one can obtain a scalar degree of freedom containing a higher derivative term like (∇φ)²□φ [64]. The Lagrangian for such a scalar degree of freedom respects the Galilean shift symmetry and is known as the Galileon [27]. Apart from the cubic Galileon and the usual kinetic terms, the full Galileon Lagrangian has two more terms with higher derivatives [27, 65].
Despite the presence of higher derivative terms in the Lagrangian, the equation of motion of the Galileon field is second order [27, 65] and the theory is free from Ostrogradsky ghosts [66]. The Vainshtein mechanism [67], first proposed to overcome the problem of the van Dam-Veltman-Zakharov (vDVZ) discontinuity [68, 69] in the linear theory of massive gravity [70], can also be implemented in Galileon theory [27] to preserve the local physics. To date, there have been a plethora of investigations to constrain the background evolution of scalar field dark energy models, including the Galileon models [14, 29-32, 71-82]. The inhomogeneity in the dark energy field has not been constrained to date, as dark energy perturbations are only relevant on horizon scales and beyond, and accurate measurements of observed galaxy power spectra on these very large scales have not yet been made. But with future optical and infrared/radio surveys such as LSST and SKA, we shall have the opportunity to probe our Universe on horizon scales and beyond, which in turn will enable us to probe dark energy inhomogeneities. Since the cosmological constant does not contain any inhomogeneity, whereas any other evolving dark energy does, any detection (or non-detection) of dark energy inhomogeneity can decisively settle the issue of whether dark energy is a cosmological constant or not. To study structure formation on horizon scales and beyond, one needs the full general relativistic (GR) treatment. There are also a number of general relativistic corrections in the observed galaxy power spectrum related to the gravitational potential and the peculiar velocity. The observed galaxy power spectrum including the necessary GR corrections has been studied earlier for the ΛCDM model [83-85] as well as for tracking [86] and thawing scalar field models [87].
In the present work, we extend this study to the cubic Galileon models with a linear potential, which preserves the shift symmetry. Later, we also consider other phenomenological potentials that break the shift symmetry. The paper is organised as follows: in section II, we briefly describe the background evolution of the Universe in the cubic Galileon model; in section III, we study first order general relativistic perturbations of the Galileon field and study the deviations from the ΛCDM model in some important perturbed quantities; in section IV, we study the observed galaxy power spectrum and its deviation in the cubic Galileon model from ΛCDM; finally, in section V, we present our conclusions.

II. BACKGROUND EVOLUTION

We consider the lowest nontrivial order of the Galileon action, i.e., the cubic Galileon action along with a potential [80-82],

S = ∫ d⁴x √−g [ (M_pl²/2) R + (1/2)(∇φ)² (1 + (α/M³) □φ) − V(φ) ] + S_m,   (1)

where M_pl = (8πG)^{−1/2} is the reduced Planck mass. α is a dimensionless constant; for α = 0 this action (1) reduces to that of a standard quintessence action [40-42]. V(φ) is the potential. V(φ) = c₁φ preserves the shift symmetry, and we mainly consider this potential in our subsequent study. S_m is the action for the matter field. M is a constant of mass dimension one; by a redefinition of the parameter α, we can fix M = M_pl. Action (1) can also be thought of as a particular form of the Kinetic Gravity Braiding action [32]. Varying the action (1) with respect to (w.r.t.) the metric tensor g_µν and assuming a flat Friedmann-Robertson-Walker (FRW) spacetime with scale factor a(t), we get the Einstein equations

3M_pl² H² = ρ_m + (φ̇²/2)[1 − 6(α/M_pl³) H φ̇] + V(φ),   (2)

M_pl²(2Ḣ + 3H²) = −(φ̇²/2)[1 + 2(α/M_pl³) φ̈] + V(φ),   (3)

where an overdot denotes a derivative w.r.t. time and H is the Hubble parameter.
Varying the action w.r.t. the field φ, we get the equation of motion for the field,

φ̈ + 3Hφ̇ − 3(α/M_pl³) φ̇ (3H²φ̇ + Ḣφ̇ + 2Hφ̈) + V_φ = 0,   (4)

where the subscript φ denotes a derivative w.r.t. the field φ.

III. RELATIVISTIC PERTURBATIONS WITH THE GALILEON FIELD

In this paper we are mainly interested in the observed galaxy power spectrum on scales where the perturbations can be assumed to be linear, i.e., we can use linear perturbation theory with the full general relativistic treatment. In linear perturbation theory the scalar, vector and tensor perturbations evolve independently. So, we can study the linear scalar perturbations independently with two scalar degrees of freedom. Here we work in the conformal Newtonian gauge, where the perturbed space-time is given by

ds² = (1 + 2Ψ)dt² − a(t)²(1 − 2Φ)dr·dr,   (5)

where r is the comoving coordinate and Φ is the gravitational potential; for simplicity we choose the anisotropic stress to be zero, which corresponds to Ψ = Φ. So, we are left with one scalar degree of freedom, which is Φ. In this perturbed space-time the linearized Einstein equations become [88]:

∇²Φ − 3a²H(Φ̇ + HΦ) = 4πGa² Σ_i δρ_i,   (6)

Φ̇ + HΦ = 4πGa Σ_i (ρ̄_i + P̄_i) v_i,   (7)

Φ̈ + 4HΦ̇ + (2Ḣ + 3H²)Φ = 4πG Σ_i δP_i,   (8)

where the summation index i stands for either 'm' for matter or 'φ' for the Galileon field, H is the conformal Hubble parameter (H = aH), a bar denotes the unperturbed quantity of the individual fluid i, and δρ_i, δP_i and v_i are the perturbations of the individual component's energy density, pressure and velocity field, respectively. Combining Eqs. (6) and (7) we get the relativistic Poisson equation, which is given by

∇²Φ = 4πGa² Σ_i ρ̄_i ∆_i,   (9)

where we have introduced a quantity ∆_i for each individual component, given by ∆_i = δ_i + 3H(1 + w_i)v_i, where δ_i is defined through δρ_i = ρ̄_i δ_i. This gauge invariant quantity is the comoving energy density contrast for a particular component, i.e.
either for the matter or for the Galileon field. Working in the space-time (5), we can calculate components of the energy momentum tensor from the action (1). The first order perturbed energy density, pressure and velocity for the Galileon field φ are respectively given by [82] δρ φ = (1 − 9βHφ)φδ φ + βφ 2 ∇ 2 δφ a 2 − (1 − 12βHφ)φ 2 Φ + 3βφ 3Φ + V φ δφ,(10)δP φ = βφ 2δ φ + (1 + 2βφ)φδ φ − (1 + 4βφ)φ 2 Φ − βφ 3Φ − V φ δφ,(11)a(ρ φ +P φ )v φ =φ βφδ φ + (1 − 3βHφ)δφ − βφ 2 Φ ,(12) where δφ is the first order perturbation to the background field φ and β = α We now introduce following dimensionless quantities [43,80,81,89] x = dφ dN √ 6M P l , y = √ V √ 3HM P l , λ = −M P l V φ V , Γ = V V φφ V 2 φ , ǫ = −6βH 2 dφ dN , q = (δφ)/ dφ dN .(13) where N = ln(a) is the number of e-foldings. Using these dimensionless quantities, we form the following autonomous system of equations [43,80,81]: dx dN = 3x 3 2 + 5ǫ + ǫ 2 − 3x 2 − ǫ + y 2 (2 + 3ǫ) + 2 √ 6y 2 λ − √ 6x 2 y 2 ǫλ 4 + 4ǫ + x 2 ǫ 2 dy dN = − y 12 −1 + y 2 (1 + ǫ) − 6x 2 2 + 4ǫ + ǫ 2 + √ 6x 3 ǫ 2 λ + 2 √ 6x 2 + 2 + y 2 ǫ λ 8 + 8ǫ + 2x 2 ǫ 2 , dǫ dN = − ǫ −3x −3 + y 2 (2 + ǫ) + 3x 3 2 + 3ǫ + ǫ 2 − 2 √ 6y 2 λ − √ 6x 2 y 2 ǫλ x (4 + 4ǫ + x 2 ǫ 2 ) , dλ dN = √ 6xλ 2 (1 − Γ), dH dN = − 1 2 (1 + 3w φ Ω φ )H, dΦ dN = Φ 1 , dq dN = q 1 , dΦ 1 dN = A −1 2 [x 2 (ǫ(4ǫ 2 (−2(J − 3)x 2 + L − 3) + 4ǫ(−4J + L + 6x 2 − 6) + Lx 2 ǫ 3 − 48) −12Q 2 (ǫ(ǫ(x 2 (2ǫ + 3) + 4) + 8) + 4))]Φ −A −1 1 [2(ǫ + 1) A 4 x 2 ǫ − 2A 3 ]q −A −1 2 [2x 4 ǫ 2 ǫ(J + 2ǫ) + 3Q 2 − 3 + 2x 2 ǫ 2 (8J + 10ǫ − 11) + 4(J − 6)ǫ + 12Q 2 (ǫ + 1) 2 − 12 + 40(ǫ + 1) 2 ]Φ 1 +A −1 2 [2x 2 ǫ 2J ǫ x 2 ǫ − 2 − 4 + 3ǫ x 2 Q 2 (3ǫ + 4) − 2(ǫ + 1) + 3ǫ + 20 + 84 + 24 ]q 1 , dq 1 dN = A −1 2 [8J ǫ 3x 2 ǫ + 8 + 4 − 2x 2 ǫ 3 L + 6Q 2 − 3 x 2 + 3 − 8ǫ 2 L + 3 Q 2 + 2 x 2 − 8ǫ L + 3x 2 + 9 ]Φ +A −1 1 [2A 3 ǫ + 4A 4 (ǫ + 1)]q +A −1 2 [ǫ 16J + ǫ 2x 2 −6Q 2 + 7ǫ + 16 + x 4 ǫ 2 + 28 + 56 + 64]Φ 1 +A −1 2 [2J ǫ x 2 ǫ x 2 ǫ − 8 + 4 − 24 − 16 − 3x 4 ǫ 2 −2Q 2 (3ǫ + 1) + ǫ(ǫ + 6) + 2 +6x 2 Q 2 
6ǫ 2 + 8ǫ + 4 + ǫ −ǫ 2 + ǫ − 8 − 4 − 12((ǫ − 4)ǫ − 2)]q 1 ,(14) where, L = k 2 3H 2 and Q = y x J = 3 2 λ y 2 x ω φ = p φ ρ φ = ǫ(3(ǫ + 8) − 4J) − 12Q 2 (ǫ + 1) + 12 3 (Q 2 + ǫ + 1) (ǫ (x 2 ǫ + 4) + 4) Ω φ = x 2 Q 2 + ǫ + 1 A 1 = 4 + ǫ(4 + x 2 ǫ) A 2 = A 2 1 A 3 = −Q −2 A −3 1 x 2 [Q 2 (4J 2 ǫ(ǫ(x 6 ǫ 3 + 4x 4 ǫ(ǫ + 1) − 4x 2 (7ǫ + 6) + 8) + 16) +6J(ǫ(−x 6 ǫ 3 (5ǫ + 4) + x 4 ǫ(ǫ((ǫ − 24)ǫ − 40) − 16) + 16x 2 (ǫ + 1)(2ǫ(ǫ + 6) + 5) − 8(ǫ(ǫ + 16) + 26)) − 64) +9(x 6 ǫ 3 (3ǫ(ǫ + 2) 2 + 4) + x 4 ǫ(ǫ(ǫ(ǫ(23ǫ + 112) + 156) + 80) + 16) − x 2 (ǫ(ǫ(ǫ(ǫ(9ǫ + 94) + 380) + 480) + 208) + 32) −2ǫ 3 (3ǫ + 26) + 96ǫ + 32)) + 2ΓJ 2 ǫ(x 2 ǫ − 2)(ǫ(x 2 ǫ + 4) + 4) 2 +3Q 4 x 2 (ǫ(8J(ǫ(x 2 (ǫ + 1)(ǫ(x 2 ǫ + 8) + 4) − 2(7ǫ + 12)) − 8) −3x 4 ǫ 2 (3ǫ(ǫ + 2)(ǫ + 3) + 8) −A 4 = Q −2 (1 + ǫ) −1 A −3 1 [−2J 2 x 2 ǫ(2Q 2 (ǫ(x 2 (ǫ(ǫ + 2)(x 2 ǫ + 8) + 8) − 44ǫ − 80) − 32) +Γ(3ǫ + 2)(ǫ(x 2 ǫ + 4) + 4) 2 ) − 4JQ 2 (x 4 ǫ(ǫ(ǫ(ǫ((L + 21)ǫ − 45) − 192) − 168) + 6Q 2 (ǫ(ǫ(13ǫ + 34) + 28) + 8) − 48) +2x 2 (ǫ(ǫ(ǫ(4L(ǫ + 1) + 75ǫ + 390) + 612) + 360) − 24Q 2 (2ǫ + 1)(ǫ + 1) 2 + 72) + 16(ǫ + 1) 2 ((L + 6)ǫ + 3) +3x 6 ǫ 3 (Q 2 (ǫ + 1)(ǫ + 4) + ǫ((ǫ − 1)ǫ − 7) − 4)) + Q 2 (9(x 2 (16Q 2 (3ǫ 2 + ǫ + 2)(ǫ + 1) 2 +ǫ(ǫ(ǫ(3ǫ(ǫ + 16) + 284) + 456) + 240) + 32) + x 6 ǫ 2 (Q 4 (3ǫ 3 − 12ǫ − 8) + Q 2 (3ǫ + 2)(ǫ(ǫ(ǫ + 3) + 8) + 8) −2(ǫ + 1)(ǫ(ǫ(2ǫ + 7) + 10) + 4)) − x 4 (16Q 4 (ǫ + 1) 2 (ǫ(3ǫ + 4) + 2) −2Q 2 (ǫ(ǫ(ǫ(ǫ(27ǫ + 184) + 384) + 352) + 160) + 32) +ǫ(ǫ(ǫ(ǫ(ǫ + 7)(3ǫ + 50) + 624) + 496) + 192) + 32) + 48ǫ(ǫ + 1) 2 ) −L(ǫ(x 2 ǫ + 4) + 4) 2 (ǫ(ǫ(x 2 (−3Q 2 + 2ǫ + 6) + 5) + 8) + 12))].(15) Note that for simplicity of the notations, in the above set of equations, we have kept the same notations for Φ and q in the Fourier space corresponding to the same quantities in the real space. By putting Eq. (10) into Eq. 
(6) and going to Fourier space, we get the matter density contrast,

δ_m = −(1/Ω_m) [ (2 − x²ǫ) dΦ/dN + 2(1 + L − x²(1 + 2ǫ)) Φ + x²(2 + 3ǫ) dq/dN + x²((2 + 3ǫ)A − 2J + Lǫ) q ].   (17)

Similarly, by putting Eq. (12) into Eq. (7) and going to Fourier space, we get the peculiar velocity of the matter,

y_m = 3Hv_m = (1/Ω_m) [ 2 dΦ/dN + (2 − x²ǫ) Φ + x²ǫ dq/dN − x²(6 + ǫ(3 − A)) q ],   (18)

where

A = (d²φ/dN²)/(dφ/dN) = [−3Bǫ − 2B + 2J + 6ǫ] / [2(ǫ + 1)],   (19)

with B = 1.5(1 − w_φΩ_φ). Now we can calculate the comoving matter energy density contrast from Eqs. (17) and (18) by using the definition ∆_m = δ_m + y_m.

A. Initial conditions

To solve the autonomous system (14), we need initial conditions for the background quantities (x, y, ǫ, λ, H) as well as for the perturbed quantities (Φ, dΦ/dN, q, dq/dN). We fix the initial conditions at z = 1000, in the early matter dominated era, where the dark energy contribution is negligible. In this work, we focus on the thawing class of Galileon models, in which the Galileon field φ is initially frozen at w_φ ∼ −1 in the early matter dominated era due to the large Hubble friction. The condition w_φ ∼ −1 automatically translates into the condition x_i ∼ 0 through Eq. (13). So, we fix x_i = 10⁻⁸. The solutions of the system of evolution equations, Eq. (14), are not sensitive to the initial value of x as long as x_i ≪ 1. Since the dark energy density parameter Ω_φ is related to x and y (see Eq. (15)), we can relate the initial condition on y to the boundary condition on Ω_φ. So, we fix y_i in such a way that the value of Ω_φ at present becomes 0.72. The initial slope of the potential is determined by the initial value of λ; λ_i determines the evolution of the equation of state (EoS) of the Galileon field. For λ_i ≪ 1, the EoS of the Galileon field does not deviate much from its initial frozen value −1 and always stays very close to the cosmological constant behaviour.
For large values of λ_i, the Galileon field thaws away sufficiently from its initial frozen state and can deviate appreciably from the cosmological constant behaviour. For all the models we fix λ_i = 0.7. Next, the initial condition H_i is chosen in such a way that at present H_0 = 100h km/s/Mpc with h = 0.7. The initial condition ǫ_i remains a free parameter (note that this parameter is related to the parameter α in the action (1) and represents the contribution from the Galileon term). Initially, at redshift z = 1000, there is hardly any contribution from the Galileon field in the evolution equations. So, we set q_i = 0 and q₁|_i = (dq/dN)|_i = 0. Next, we can find the initial condition for dΦ/dN using the fact that during the matter dominated era Φ is constant, i.e., Φ₁|_i = (dΦ/dN)|_i = 0. Also, during the matter dominated era one finds ∆_m ∼ a, and using the Poisson equation we get the initial condition on Φ, which is given by

Φ_i = −(3/2)(H_i²/k²) a_i.   (20)

B. Behaviour of different cosmological parameters

Using the above mentioned initial conditions, we solve the system of autonomous equations given in Eq. (14) for three different initial conditions (ǫ_i = 0, 20 and 50) with the linear potential and study various cosmological parameters. In all subsequent sections, we study these three cases, except for the last figure, where two other polynomial potentials (squared and inverse-squared) have been introduced to see the differences between different potentials. In Fig. 1, we show the evolution of the EoS for these three cases. As we consider the thawing class of Galileon models, the EoS in all three cases initially starts from nearly −1 and slowly increases towards higher values at late times. At present (z = 0), the EoS of the models with ǫ_i = 0, 20 and 50 reaches values of nearly −0.9, −0.94 and −0.96, respectively. It shows that the models with higher ǫ_i values deviate less from the ΛCDM behaviour.
So, with similar initial conditions, Galileon models are closer to ΛCDM than the standard quintessence models. In Fig. 2, we study the deviations in the gravitational potential Φ for the three initial conditions and compare them with the ΛCDM model. In this plot and in all subsequent plots, we define %∆X = (X_de/X_ΛCDM − 1) × 100 for any quantity X. At lower redshifts the deviations are enhanced on larger scales, whereas they are suppressed on smaller scales. At higher redshifts the deviations are always suppressed, and the suppression decreases with increasing redshift. The differences in the deviations between larger and smaller scales decrease with increasing redshift, which means the scale dependence of the deviations decreases with increasing redshift. This behaviour is not surprising, because the dark energy perturbation is only relevant on large scales and at lower redshifts; whatever deviation is present on smaller scales is due to differences in the background expansion only. Similarly, at higher redshifts the effect of dark energy is negligible, and since the matter perturbation is scale independent, the deviation from ΛCDM at higher redshifts is also scale independent. In Fig. 3, we study the deviations in ∆_m for the three cases compared to ΛCDM. There is always suppression in the deviations for all the models compared to ΛCDM, on all scales and at all redshifts, and this suppression decreases with increasing redshift. Moreover, the suppressions are always smaller on larger scales than on smaller scales, and these differences between the two scales decrease with increasing redshift for the same reason: the scale dependence comes only through the dark energy perturbation, which plays an extra role only on large scales. Next, we introduce the quantity f = −k² v_m/(H ∆_m), which is related to the velocity perturbation and gives rise to the redshift space distortion [90].
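The percentage-deviation measure %∆X used throughout the figures is simple to compute; a minimal sketch, with the input values purely illustrative:

```python
import numpy as np

def percent_deviation(X_de, X_lcdm):
    """%DX = (X_de / X_LCDM - 1) * 100, elementwise for arrays."""
    return (np.asarray(X_de) / np.asarray(X_lcdm) - 1.0) * 100.0

# a quantity suppressed by 2% relative to LCDM gives roughly -2.0
dev = percent_deviation(0.98, 1.00)
```

The same function applies to whole arrays of Φ, ∆_m, f or P(k) sampled over k and z.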
The reason to introduce this quantity is that it plays an important role in the observed galaxy power spectrum, which is discussed in the next section; before discussing the observed galaxy power spectrum it is therefore important to study the behaviour of f. In Fig. 4, we study the deviations in f for all the models compared to ΛCDM. The deviations are always suppressed, and the suppressions are almost scale independent except at very low redshifts, again because of the dark energy perturbation discussed above. One interesting point to notice is that the suppressions at first increase with increasing redshift, reach a maximum at redshift z ∼ 0.5, and then decrease with increasing redshift for z ≳ 0.5. We should stress that this maximum occurring at z ∼ 0.5 is due to our particular choice of parameters; for other choices the redshift at which the maximum occurs will change, but there will always be a maximum deviation at some particular redshift.

IV. THE OBSERVED GALAXY POWER SPECTRUM

To describe the inhomogeneous Universe and its evolution, the main quantity of interest is the matter density perturbation, whose evolution we study through cosmological perturbation theory (here linear perturbation theory, using Eq. (17)). However, we cannot directly measure the matter perturbation; we actually observe tracers of the matter inhomogeneity, such as galaxies. By studying the distribution of galaxies in the Universe we can probe the underlying structure formation history of the Universe. Since the fluctuation in the galaxy number density is related to the matter density perturbation, we can study the underlying dark matter density fluctuation on different scales by observing various features in the galaxy distribution. Because dark energy also plays an important role in structure formation, we can also use the galaxy distribution to distinguish different dark energy or modified gravity models.
Theoretically, the galaxy density contrast δ_g and the matter density contrast δ_m can be related by the simple relation δ_g = b δ_m by introducing a bias parameter b; however, this relation is gauge dependent on super-Hubble or near super-Hubble scales. So, to have a physical bias, we have to use the comoving density perturbation in the bias relation. On large scales the rest frames of the dark matter and the galaxies coincide, and in this frame we use the gauge independent relation ∆_g(k, z) = b(z) ∆_m(k, z), assuming a linear bias with Gaussian initial conditions; this relation is valid on all linear scales. However, ∆_g is not an observable quantity on large scales because of some extra relativistic effects, such as light cone and redshift effects [85, 86, 90, 91]. In the late eighties, Kaiser [92] showed that we do not observe the galaxy distribution in real space but in redshift space: in addition to the matter density perturbation, the peculiar velocities of the galaxies also affect the galaxy distribution in redshift space. This effect is known as the Kaiser redshift space distortion, which is a measure of the large scale velocity fields; the Kaiser redshift space distortion term contains valuable information about large scale structure formation. In addition, the gravitational potential in the metric can affect the photon geodesics, integrated along the path. This effect is known as the magnification bias [93], i.e. the observed galaxy distribution is also affected by gravitational lensing; the lensing magnification also allows us to detect faint galaxies. In recent years it has been shown that the observed galaxy distribution contains, on large scales, some further effects which are purely general relativistic: the effects of the gravitational potential, velocity fields and matter density perturbations on the observed number density of galaxies on large scales [83-85, 90, 94-98].
These effects are negligible in the sub-Hubble limit compared to other effects like the Kaiser redshift space distortion. Since we cannot neglect these general relativistic effects on large scales, they can be important for probing the dark energy perturbation as well as for distinguishing different dark energy models. Incorporating all the above effects, the galaxy number overdensity ∆_obs (across the sky and at different redshifts and angles) can be written as [85, 86, 90, 91]

∆_obs = [ b + f µ² + A (H/k)² + iµ B (H/k) ] ∆_m ,  (21)

where b is the bias parameter relating the galaxy density contrast to the underlying dark matter density contrast, f is the redshift space distortion parameter introduced in the previous section, and µ = −n̂·k̂, with n̂ the direction of observation and k the wave vector of magnitude k. The parameters A and B are given by

A = 3f + (k/H)² [ 3 + H′/H² + Φ′/(HΦ) ] Φ/∆_m ,  (22)

B = −[ 2 + H′/H² ] f .  (23)

Here we have assumed a scale independent bias, which is a valid assumption on the large scales where we use linear perturbation theory. We have also assumed a constant comoving galaxy number density, so that the galaxy evolution bias is absent, and we have taken unit magnification bias [86]. We have neglected other terms like the time-delay, ISW and weak lensing integrated terms, and for simplicity we put b = 1 throughout all subsequent calculations. The right hand side of Eq. (21) contains four terms: the first is related to the galaxy bias, the second is the Kaiser redshift space distortion term, and the other two are entirely due to general relativistic effects. The quantity A in the third term is related to the peculiar velocity fields and the gravitational potential, while the quantity B in the fourth term is related to the Doppler effect. Using the definition of the power spectrum and Eq.
(21), we can relate the matter power spectrum to the observed galaxy overdensity power spectrum P_g (the real part), given by [86, 90, 91, 95]

P_g(k, z) = [ (b + f µ²)² + 2(b + f µ²) A/Y² + A²/Y⁴ + µ² B²/Y² ] P_m(k, z) ,  (24)

where Y = k/H and P_m is the matter power spectrum, given by

P_m(k, z) = A k^{n_s−4} T(k)² |∆_m(k, z)|²/|Φ(k, 0)|² .  (25)

In Fig. 5, we plot the line of sight (µ = 1) observed galaxy power spectrum for the linear potential at z = 0 using Eq. (24). In all subsequent plots we put µ = 1, and by P_k we mean the observed galaxy power spectrum keeping only the bias and Kaiser redshift space distortion terms, i.e. without the A and B terms:

P_k(k, z) = (b + f µ²)² P_m(k, z) .  (26)

From Fig. 5 we see that the observed galaxy power spectrum without GR corrections (i.e. P_k) is enhanced relative to the standard matter power spectrum P_m on all scales, i.e. P_k is shifted by an almost constant factor to higher values compared to P_m. When the GR corrections (the A and B terms) are included, the full observed galaxy power spectrum P_g is enhanced further relative to P_k on larger scales only, as expected, because the relativistic effects are negligible on sub-Hubble scales. In Fig. 5, the vertical line marks the horizon scale (k = aH) at z = 0. Next, we study the deviations in P_m, P_k and P_g for Galileon models from ΛCDM in Fig. 6. Firstly, the deviations in P_m for the different models from ΛCDM come through ∆_m(k, z) and Φ(k, 0) in Eq. (25), so the deviation in P_m from ΛCDM is due to these two competing terms. In Fig. 3 we have already shown that the deviation in ∆_m is not substantial, so the main contribution comes from the difference in the gravitational potential Φ(k, 0).
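The bracket in Eq. (24) is the squared modulus of the kernel in Eq. (21) written out term by term, and the A = B = 0 limit recovers the Kaiser-only spectrum of Eq. (26). A short numerical sketch of these two checks, using purely illustrative values of b, f, A, B, µ and Y (not values from the paper's actual solutions):

```python
import numpy as np

def pg_prefactor(b, f, A, B, mu, Y):
    """Bracket multiplying P_m in Eq. (24)."""
    K = b + f * mu**2
    return K**2 + 2.0 * K * A / Y**2 + A**2 / Y**4 + mu**2 * B**2 / Y**2

def kernel(b, f, A, B, mu, Y):
    """Complex kernel of Eq. (21), with A (H/k)^2 = A/Y^2 and B (H/k) = B/Y."""
    return b + f * mu**2 + A / Y**2 + 1j * mu * B / Y

b, f, A, B, mu, Y = 1.0, 0.5, 0.3, -1.2, 1.0, 2.0   # illustrative numbers

# consistency check: the Eq. (24) prefactor equals |kernel|^2
assert np.isclose(pg_prefactor(b, f, A, B, mu, Y),
                  abs(kernel(b, f, A, B, mu, Y))**2)

# Kaiser limit: with A = B = 0 the prefactor reduces to (b + f mu^2)^2, Eq. (26)
assert np.isclose(pg_prefactor(b, f, 0.0, 0.0, mu, Y), (b + f * mu**2)**2)
```

Multiplying this prefactor into a tabulated P_m(k, z) then gives P_g(k, z) directly.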
On large scales, Φ has an extra contribution from the dark energy perturbation, and hence this results in the suppression of P_m on large scales relative to the ΛCDM model. This is shown in the leftmost panel of Fig. 6. Compared to the deviations in P_m, the deviations in P_k receive an extra contribution from f. In Fig. 4 we have already seen that the deviations in f are marginal, have a maximum at z ≈ 0.5, and, except at very low redshifts, are almost scale independent. Hence the deviation in P_k is mostly similar to that in P_m; only around z ∼ 0.5 is it a bit higher than for P_m, due to the maximum contribution from f. This is shown in the middle panel of Fig. 6. In Fig. 5 we have seen that, due to the extra GR corrections, P_g deviates from P_k only on larger scales, the two being almost the same on smaller scales. So the deviations in P_g follow exactly the deviations in P_k on smaller scales, which is clear from the middle and right panels of Fig. 6. On larger scales, however, there is an extra effect due to the GR correction terms A and B, which leads to a large suppression relative to ΛCDM on large scales and at smaller redshifts. This is shown in the right panel of Fig. 6. One can also notice that in all the figures the deviations are always higher for ǫ_i = 0 than for nonzero values of ǫ_i. Given that nonzero ǫ_i represents Galileon models while ǫ_i = 0 represents standard quintessence, one can conclude that Galileon models are harder to distinguish from the ΛCDM model than standard quintessence. Finally, in Fig. 7 we consider other phenomenological potentials, such as the squared and inverse-squared potentials, and compare them with the linear potential. The linear potential shows a marginally higher deviation from ΛCDM than the other potentials.

V. SUMMARY AND CONCLUSION

In this paper, we study the observed galaxy power spectrum in the cubic Galileon model with a linear potential, which preserves the shift symmetry. In this scenario the potential is responsible for the late time acceleration. Although there is a higher derivative term in the action, the equation of motion is still second order and the theory is free from ghosts. We consider the thawing dynamics of the Galileon field and form a single autonomous system involving both the background evolution and the linear perturbation equations for matter and dark energy. We show that the deviation from ΛCDM in the comoving matter density contrast ∆_m and in the growth rate f is not substantial for cubic Galileon models. The gravitational potential gets slightly enhanced on large scales compared to ΛCDM due to the added contribution from the perturbed Galileon field. The observed galaxy power spectrum contains several correction terms related to the redshift space distortion, as well as other relativistic corrections that are present on large scales only. Due to the presence of these terms, there are substantial deviations from the ΛCDM model in the observed galaxy power spectrum P_g on large scales; but compared to standard quintessence, these deviations are smaller in the Galileon model. This makes Galileon models hard to distinguish from ΛCDM even on larger scales. We also consider phenomenological potentials like the squared and inverse-squared potentials, which break the shift symmetry, and show that the deviations from ΛCDM in the observed galaxy power spectrum for these potentials are always smaller than for the linear potential, which preserves the shift symmetry. In the future, we aim to extend this study to massive gravity [30] and to generalized Proca theories of gravity [99].

VI. ACKNOWLEDGEMENTS

B.R.D. thanks CSIR, Govt. of India for financial support through the SRF scheme (No:09/466(0157)/2012-EMR-I).
We also acknowledge the usage of HOPE, a Python Just-In-Time compiler for astrophysical computations [100].

Putting Eq. (11) into Eq. (8), we get the evolution equation for the gravitational potential Φ. By varying the action (1) we can calculate the Euler-Lagrange equations order by order, and at first order in perturbations we get the evolution equation for δφ.

FIG. 1: Behaviour of the equation of state of the Galileon field w_φ as a function of redshift for different ǫ_i with the linear potential, with Ω_m0 = 0.28.

FIG. 2: Percentage deviation in Φ from the ΛCDM model for different ǫ_i with the linear potential.

FIG. 3: Percentage deviation in the comoving density contrast ∆_m from the ΛCDM model for different ǫ_i with the linear potential.

FIG. 4: Percentage deviation in f from the ΛCDM model for different ǫ_i with the linear potential.

FIG. 5: Dashed-dotted, dashed and continuous lines are for the usual matter power spectrum P_m (Eq. (25)), the galaxy power spectrum taking only the Kaiser term, P_k (Eq. (26)), and the full observed galaxy power spectrum P_g (Eq. (24)), respectively, for ǫ_i = 20. The vertical blue line is the horizon scale at z = 0.

Eq. (25) is valid on all scales. One can check that it reduces to the standard definition of the matter power spectrum on sub-Hubble scales, P_m(k, z) ∝ k^{n_s} T(k)² |δ_m(k, z)|². The constant A is determined by the σ_8 normalisation. Here we use the Eisenstein-Hu transfer function for T(k). In the σ_8 normalisation we put the scalar spectral index of the primordial power spectrum n_s = 0.96, σ_8 = 0.8, h = 0.7, Ω_m0 = 0.28 and Ω_b0 = 0.05.

FIG. 6: Percentage deviation in P(k) from the ΛCDM model for different ǫ_i with the linear potential, as a function of k; negative values on the y-axis mean they are all suppressed relative to ΛCDM. The leftmost plots are for the standard matter power spectrum P_m given by Eq.
(25); the middle plots are for the power spectra with the Kaiser redshift space distortion term included, and the right ones are for the full observed galaxy power spectra P_g given by Eq. (24) with GR corrections.

FIG. 7: Percentage deviation in P(k) from the ΛCDM model for different potentials as a function of k, at z = 0 and for ǫ_i = 20; negative values on the y-axis mean they are all suppressed relative to ΛCDM.

References

[1] A. G. Riess et al. (Supernova Search Team), Astron. J. 116, 1009 (1998), astro-ph/9805201.
[2] S. Perlmutter et al. (Supernova Cosmology Project), Astrophys. J. 517, 565 (1999), astro-ph/9812133.
[3] D. N. Spergel et al. (WMAP), Astrophys. J. Suppl. 148, 175 (2003), astro-ph/0302209.
[4] G. Hinshaw et al. (WMAP), Astrophys. J. Suppl. 148, 135 (2003), astro-ph/0302217.
[5] P. A. R. Ade et al. (Planck), ArXiv e-prints (2015), 1502.01589.
[6] P. A. R. Ade et al. (Planck), Astron. Astrophys. 594, A20 (2016), 1502.02114.
[7] T. Delubac, J. E. Bautista, N. G. Busca, J. Rich, D. Kirkby, S. Bailey, A. Font-Ribera, A. Slosar, K.-G. Lee, M. M. Pieri, et al., Astron. Astrophys. 574, A59 (2015), 1404.1801.
[8] M. Ata et al. (2017), 1705.06373.
[9] A. A. Starobinsky, Phys. Lett. B91, 99 (1980).
[10] A. H. Guth, Phys. Rev. D23, 347 (1981).
[11] A. D. Linde, Phys. Lett. B129, 177 (1983).
[12] A. D. Linde, Phys. Lett. B108, 389 (1982).
[13] A. R. Liddle, in Proceedings, Summer School in High-energy physics and cosmology: Trieste, Italy, June 29-July 17, 1998 (1999), pp. 260-295, astro-ph/9901124, URL http://alice.cern.ch/format/showfull?sysnb=0301651.
[14] E. J. Copeland, M. Sami, and S. Tsujikawa, Int. J. Mod. Phys. D15, 1753 (2006), hep-th/0603057.
[15] V. Sahni and A. A. Starobinsky, Int. J. Mod. Phys. D9, 373 (2000), astro-ph/9904398.
[16] T. Padmanabhan, AIP Conf. Proc. 861, 179 (2006), astro-ph/0603114.
[17] J. Frieman, M. Turner, and D. Huterer, Ann. Rev. Astron. Astrophys. 46, 385 (2008), 0803.0982.
[18] T. Clifton, P. G. Ferreira, A. Padilla, and C. Skordis, Phys. Rept. 513, 1 (2012), 1106.2476.
[19] K. Hinterbichler, Rev. Mod. Phys. 84, 671 (2012), 1105.3735.
[20] A. De Felice and S. Tsujikawa, Living Rev. Rel. 13, 3 (2010), 1002.4928.
[21] C. de Rham, Living Rev. Rel. 17, 7 (2014), 1401.4173.
[22] C. de Rham, Comptes Rendus Physique 13, 666 (2012), 1204.5492.
[23] G. W. Horndeski, Int. J. Theor. Phys. 10, 363 (1974).
[24] G. R. Dvali, G. Gabadadze, and M. Porrati, Phys. Lett. B485, 208 (2000), hep-th/0005016.
[25] W. Hu and I. Sawicki, Phys. Rev. D76, 064004 (2007), 0705.1158.
[26] L. Amendola, R. Gannouji, D. Polarski, and S. Tsujikawa, Phys. Rev. D75, 083504 (2007), gr-qc/0612180.
[27] A. Nicolis, R. Rattazzi, and E. Trincherini, Phys. Rev. D79, 064036 (2009), 0811.2197.
[28] C. de Rham, G. Gabadadze, and A. J. Tolley, Phys. Rev. Lett. 106, 231101 (2011), 1011.1232.
[29] C. de Rham, G. Gabadadze, L. Heisenberg, and D. Pirtskhalava, Phys. Rev. D83, 103516 (2011), 1010.1780.
[30] C. de Rham and L. Heisenberg, Phys. Rev. D84, 043503 (2011), 1106.3312.
[31] L. Heisenberg, R. Kimura, and K. Yamamoto, Phys. Rev. D89, 103008 (2014), 1403.2049.
[32] C. Deffayet, O. Pujolas, I. Sawicki, and A. Vikman, JCAP 1010, 026 (2010), 1008.0048.
[33] A. De Felice and S. Tsujikawa, Phys. Rev. D84, 124029 (2011), 1008.4236.
[34] J. Martin, Comptes Rendus Physique 13, 566 (2012), 1205.3365.
[35] P. J. Steinhardt, L.-M. Wang, and I. Zlatev, Phys. Rev. D59, 123504 (1999), astro-ph/9812313.
[36] A. G. Riess et al., ArXiv e-prints (2016), 1604.01424.
[37] H. Hildebrandt et al., Mon. Not. Roy. Astron. Soc. 465, 1454 (2017), 1606.05338.
[38] C. Heymans et al., Mon. Not. Roy. Astron. Soc. 432, 2433 (2013), 1303.1808.
[39] V. Bonvin et al., ArXiv e-prints (2016), 1607.01790.
[40] C. Wetterich, Nucl. Phys.
B302, 645 (1988).
[41] C. Wetterich, Nucl. Phys. B302, 668 (1988).
[42] B. Ratra and P. J. E. Peebles, Phys. Rev. D37, 3406 (1988).
[43] R. J. Scherrer and A. A. Sen, Phys. Rev. D77, 083515 (2008), 0712.3450.
[44] T. Chiba, Phys. Rev. D79, 083517 (2009) [Erratum: Phys. Rev. D80, 109902 (2009)], 0902.4037.
[45] R. R. Caldwell, R. Dave, and P. J. Steinhardt, Phys. Rev. Lett. 80, 1582 (1998), astro-ph/9708069.
[46] I. Zlatev, L.-M. Wang, and P. J. Steinhardt, Phys. Rev. Lett. 82, 896 (1999), astro-ph/9807002.
[47] L. Amendola, Phys. Rev. D62, 043511 (2000), astro-ph/9908023.
[48] V. Sahni and L.-M. Wang, Phys. Rev. D62, 103517 (2000), astro-ph/9910097.
[49] F. Perrotta, C. Baccigalupi, and S. Matarrese, Phys. Rev. D61, 023507 (1999), astro-ph/9906066.
[50] V. Sahni, M. Sami, and T. Souradeep, Phys. Rev. D65, 023518 (2002), gr-qc/0105121.
[51] M. Wali Hossain, R. Myrzakulov, M. Sami, and E. N. Saridakis, Int. J. Mod. Phys. D24, 1530014 (2015), 1410.6100.
[52] R. R. Caldwell, Phys. Lett. B545, 23 (2002), astro-ph/9908168.
[53] E. Elizalde, S. Nojiri, and S. D. Odintsov, Phys. Rev. D70, 043539 (2004), hep-th/0405034.
[54] A. Sen, JHEP 04, 048 (2002), hep-th/0203211.
[55] A. Sen, JHEP 07, 065 (2002), hep-th/0203265.
[56] G. W. Gibbons, Phys. Lett. B537, 1 (2002), hep-th/0204008.
[57] M. R. Garousi, M. Sami, and S. Tsujikawa, Phys. Rev. D70, 043536 (2004), hep-th/0402075.
[58] E. J. Copeland, M. R. Garousi, M. Sami, and S. Tsujikawa, Phys. Rev. D71, 043003 (2005), hep-th/0411192.
[59] C. Armendariz-Picon, V. F. Mukhanov, and P. J. Steinhardt, Phys. Rev. Lett. 85, 4438 (2000), astro-ph/0004134.
[60] C. Armendariz-Picon, V. F. Mukhanov, and P. J. Steinhardt, Phys. Rev. D63, 103510 (2001), astro-ph/0006373.
[61] A. D. Rendall, Class. Quant. Grav. 23, 1557 (2006), gr-qc/0511158.
[62] M. C. Bento, O. Bertolami, and A. A. Sen, Phys. Rev. D66, 043507 (2002), gr-qc/0202064.
[63] M. d. C. Bento, O. Bertolami, and A. A. Sen, Phys. Rev. D67, 063003 (2003), astro-ph/0210468.
[64] M. A. Luty, M. Porrati, and R. Rattazzi, JHEP 09, 029 (2003), hep-th/0303116.
[65] C. Deffayet, G. Esposito-Farese, and A. Vikman, Phys. Rev. D79, 084003 (2009), 0901.1314.
[66] R. P. Woodard, Lect. Notes Phys. 720, 403 (2007), astro-ph/0601672.
[67] A. I. Vainshtein, Phys. Lett. B39, 393 (1972).
[68] H. van Dam and M. J. G. Veltman, Nucl. Phys. B22, 397 (1970).
[69] V. I. Zakharov, Pisma Zh. Eksp.
Teor. Fiz. 12, 447 (1970); JETP Lett. 12, 312 (1970).
[70] M. Fierz and W. Pauli, Proc. Roy. Soc. Lond. A173, 211 (1939).
[71] N. Chow and J. Khoury, Phys. Rev. D80, 024037 (2009), 0905.1325.
[72] F. P. Silva and K. Koyama, Phys. Rev. D80, 121301 (2009), 0909.4538.
[73] T. Kobayashi, Phys. Rev. D81, 103533 (2010), 1003.3281.
[74] T. Kobayashi, H. Tashiro, and D. Suzuki, Phys. Rev. D81, 063513 (2010), 0912.4641.
[75] R. Gannouji and M. Sami, Phys. Rev. D82, 024011 (2010), 1004.2808.
[76] A. De Felice, S. Mukohyama, and S. Tsujikawa, Phys. Rev. D82, 023524 (2010), 1006.0281.
[77] A. De Felice and S. Tsujikawa, Phys. Rev. Lett. 105, 111301 (2010), 1007.2700.
[78] A. Ali, R. Gannouji, and M. Sami, Phys. Rev. D82, 103015 (2010), 1008.1588.
[79] D. F. Mota, M. Sandstad, and T. Zlosnik, JHEP 12, 051 (2010), 1009.6151.
[80] A. Ali, R. Gannouji, M. W. Hossain, and M. Sami, Phys. Lett. B718, 5 (2012), 1207.3959.
[81] M. W. Hossain and A. A. Sen, Phys. Lett. B713, 140 (2012), 1201.6192.
[82] M. W. Hossain, ArXiv e-prints (2017), 1704.07956.
[83] J. Yoo, A. L. Fitzpatrick, and M. Zaldarriaga, Phys. Rev. D80, 083514 (2009), 0907.0707.
[84] C. Bonvin and R. Durrer, Phys. Rev. D84, 063505 (2011), 1105.5280.
[85] A. Challinor and A. Lewis, Phys. Rev. D84, 043516 (2011), 1105.5292.
[86] D. G. A. Duniya, D. Bertacca, and R. Maartens, Phys. Rev. D91, 063530 (2015), 1502.06424.
[87] B. R. Dinda and A. A. Sen (2016), 1607.05123.
[88] S. Unnikrishnan, H. K. Jassal, and T. R. Seshadri, Phys. Rev. D 78, 123504 (2008), 0801.2017.
[89] B. R. Dinda and A. A. Sen, ArXiv e-prints (2016), 1607.05123.
[90] D. Duniya, ArXiv e-prints (2016), 1606.00712.
[91] D. Duniya, D. Bertacca, and R. Maartens, JCAP 1310, 015 (2013), 1305.4509.
[92] N. Kaiser, Mon. Not. Roy. Astron. Soc. 227, 1 (1987).
[93] R. Moessner, B. Jain, and J. V. Villumsen, Mon. Not. Roy. Astron. Soc. 294, 291 (1998), astro-ph/9708271.
[94] C. Bonvin, Class. Quant. Grav. 31, 234002 (2014), 1409.2224.
[95] D. Jeong, F. Schmidt, and C. M. Hirata, Phys. Rev. D85, 023504 (2012), 1107.5427.
[96] J. Yoo, N. Hamaus, U. Seljak, and M. Zaldarriaga, Phys. Rev. D86, 063514 (2012), 1206.5809.
[97] D. Bertacca, R. Maartens, A. Raccanelli, and C. Clarkson, JCAP 1210, 025 (2012), 1205.5221.
[98] D. Duniya, Gen. Rel. Grav. 48, 52 (2016), 1505.03436.
[99] A. De Felice, L. Heisenberg, R. Kase, S. Mukohyama, S. Tsujikawa, and Y.-l. Zhang, JCAP 1606, 048 (2016), 1603.05806.
[100] J. Akeret, L. Gamper, A. Amara, and A. Refregier, Astronomy and Computing 10, 1 (2015), 1410.4345.
[]
[ "Cartography for Martian Trojans", "Cartography for Martian Trojans" ]
[ "Serge Tabachnik \nTheoretical Physics\nDepartment of Physics\n1 Keble RdOX1 3NPOxfordUK\n", "N Wyn Evans \nTheoretical Physics\nDepartment of Physics\n1 Keble RdOX1 3NPOxfordUK\n" ]
[ "Theoretical Physics\nDepartment of Physics\n1 Keble RdOX1 3NPOxfordUK", "Theoretical Physics\nDepartment of Physics\n1 Keble RdOX1 3NPOxfordUK" ]
[]
The last few months have seen the discovery of a second Martian Trojan (1998 VF31), as well as two further possible candidates (1998 QH56 and 1998 SD4). Together with the previously discovered Martian satellite 5261 Eureka, these are the only known possible solar system Trojan asteroids not associated with Jupiter. Here, maps of the locations of the stable Trojan trajectories of Mars are presented. These are constructed by integrating an ensemble of in-plane and inclined orbits in the vicinity of the Martian Lagrange points for between 25 million and 60 million years. The survivors occupy a band of inclinations between 15° and 40° and longitudes between 240° and 330° at the L5 Lagrange point. Around the L4 point, stable Trojans inhabit two bands of inclinations (15° < i < 30° and 32° < i < 40°) with longitudes restricted between 25° and 120°. Both 5261 Eureka and 1998 VF31 lie deep within one of the stable zones, which suggests they may be of primordial origin. Around Mars, the number of such undiscovered primordial objects with sizes greater than 1 km may be as high as ∼ 50. The two candidates 1998 QH56 and 1998 SD4 are not presently on Trojan orbits and will enter the sphere of influence of Mars within half a million years.
10.1086/312019
[ "https://arxiv.org/pdf/astro-ph/9904085v1.pdf" ]
119,375,891
astro-ph/9904085
2e51b83f6f52a76291e2b4e05b2987fc83fcadaa
Cartography for Martian Trojans

Serge Tabachnik and N. Wyn Evans
Theoretical Physics, Department of Physics, 1 Keble Rd, Oxford OX1 3NP, UK

arXiv:astro-ph/9904085v1, 7 Apr 1999

Subject headings: Solar System: general - planets and satellites: general - minor

As Mars possesses Trojans, the existence of such objects around the larger terrestrial planets also merits very serious attention. There are Trojan orbits associated with Venus and the Earth that survive for tens of millions of years (e.g., Tabachnik & Evans 1998). If objects populating such orbits exist, they must be small else they would have been found by now.
Saha & Tremaine (1992) have taken the symplectic integrators developed by Wisdom & Holman (1991) and added individual planetary timesteps to provide a fast code that is tailor-made for long numerical integrations of low eccentricity orbits in a nearly Keplerian force field. The timesteps of the planets increase from Mercury moving outwards, so that Neptune has a timestep of 2.5 years. The Trojan test particles all have the same timestep as Mercury. These values were chosen after some experimentation to ensure that the relative energy error has a peak amplitude of ≈ 10^-6 over the tens of million year integration timespans. After each timestep, the Trojan test particles are examined to see whether their orbits have become hyperbolic or if they have entered the planet's sphere of influence (defined as r_s = a_p M_p^{2/5}, where a_p and M_p are the semimajor axis and mass of the planet). If so, they are terminated. In methodology, our calculations are very similar to the magisterial work on the Trojan problem for the four giant planets by Holman & Wisdom (1993). The earlier calculations of Mikkola & Innanen (1994, 1995) on the Trojans of Mars for timespans of between tens of thousands and 6 million years have also proved influential. Our integrations of Trojan orbits are pursued for durations ranging from 25 to 60 million years, the longest integration periods currently available. Nonetheless, the orbits have been followed for only a tiny fraction of the age of the Solar System (∼ 4.5 Gigayears), so it is wise to remain a little cautious about our results. On the basis of 4 million year timespan integrations, Mikkola & Innanen claim that stable Martian Trojans have inclinations between 15° and 30° and between 32° and 44° with respect to Jupiter's orbit. Our longer integrations seem to suggest a more complex picture. MARTIAN TROJANS. Mikkola & Innanen's instability strip between 30° and 32° can be detected in Figure 1, but only for objects near L4 with initial longitudes ≲ 60°.
In particular, this instability strip does not exist around L5 and here Trojans with starting inclinations 30° < i < 32° seem to be stable - as is also evidenced by the recent discovery of 1998 VF31. Marked on the figure are the instantaneous positions of the two certain Martian Trojans, namely 5261 Eureka (marked as a red circle) and 1998 VF31 (a green circle), as well as the two candidates 1998 QH56 (a blue circle) and 1998 SD4 (a yellow circle). It is delightful to see that the two securely established Trojans lie within the stable zone, which was computed by Tabachnik & Evans (1998) before the discovery of 1998 VF31. In fact, they live deep within the heart of the zone, suggesting that they may even be primordial. The two candidates (1998 QH56 and 1998 SD4) lie closer to the rim. Let us finally note that Trojans starting off in or near the plane of Mars' orbit are unstable. This has been confirmed by an extensive survey of in-plane Martian Trojans. On integrating 792 test particles with vanishing inclination but with a range of longitudes and semimajor axes, we found that all are unstable on timescales of 60 million years. Martian Trojans with low inclinations are not expected. It is useful to an observer hoping to discover further Trojans to provide plots of the probability density. Accordingly, let us re-simulate the stable zones with much greater resolution. This is accomplished by placing a total of 746 test particles every 1° in initial inclination and every 5° in initial longitude so as to span completely the stable regions. This ensemble of orbits is then integrated and the orbital elements are sampled every 2.5 years to provide the plots displayed in Figure 2. The upper panel shows the meshed surface of the probability density as a function of both inclination to the invariable plane and longitude with respect to the planet. The asymmetry between the two Lagrange points is evident.
The lower panels show the projections of the meshed surface onto the principal planes - in particular, for the inclination plot, we have shown the contribution at each Lagrange point separately. There are a number of interesting conclusions to be drawn from the plots. First, as shown by the dotted line, the probability density is bimodal at L4. It possesses a flattish maximum at inclinations between 15° and 30° and then falls sharply, before rising to a second maximum at 36°. At L5, all inclinations between 15° and 40° carry a significant probability, though the smaller inclinations in this band are most favored. It is within these inclination windows that the observational effort should be most concentrated. Second, the probability density is peaked at longitudes of ∼60° (L4) and ∼300° (L5). The most likely place to observe one of these Trojans is indeed at the classical locations of the Lagrange points. This is not intuitively obvious, as any individual Trojan is most likely to be seen at the turning points of its longitudinal libration. There are two reasons why this effect is not evident in our probability density plots. First, our figures refer to an ensemble of Trojans uniformly populating the stable zone. So, the shape of the stable zone also plays an important role in controlling the position of the maximum of the probability density. Second, the positions of the Lagrange points themselves are oscillating and so the turning points of the longitudinal libration do not occur at the same locations, thus smearing out the enhancement effect. Table 1 lists the orbital elements of the two secure Martian Trojans and the two candidates, as recorded by the Minor Planet Center. From the instantaneous elements, it is straightforward to simulate the trajectories of the objects. Figure 3 shows the orbits plotted in the plane of longitude (with respect to Mars) versus semimajor axis.
As the figures illustrate, both 5261 Eureka and 1998 VF31 are stable and maintain their tadpole character (see e.g., Garfinkel 1977) for durations of 50 million years. Based on preliminary orbital elements, Mikkola et al. (1994) integrated the orbit of 5261 Eureka and found that its longitudinal libration was large, quoting 297° ± 26° as the typical range in the longitudinal angle. Our orbit of 5261 Eureka, based on the latest orbital elements, seems to show a smaller libration of 285°-314°. The remaining two objects that have been suggested as Martian Trojans, 1998 QH56 and 1998 SD4, both enter the sphere of influence of Mars - in the former case after ∼500 000 years, in the latter case after ∼100 000 years. Although the orbits are Mars crossing, their eccentricities remain low and their inclinations oscillate tightly about mean values until Mars' sphere of influence is entered. It is possible that these objects were once Trojans and have been ejected from the stable zones, a possibility that receives some support from their locations in Figure 1 at the fringes of the stable zones. Of course, another possibility is that they are ejected asteroids from the Main Belt. The fact that both confirmed Martian Trojans lie deep within the stable zones in Figure 1 suggests that these objects may be primordial. If so, we can get a crude estimate of possible numbers by extrapolation from the number of Main Belt asteroids (c.f., Holman 1997, Evans & Tabachnik 1999). The number of Main Belt asteroids N_MB satisfies N_MB ≲ Σ_MB A_MB f. In our simulations, the model of the Solar System consists of the eight planets from Mercury to Neptune, together with test particles starting near the Lagrange points. The effect of Pluto on the evolution of Martian Trojans is quite negligible. Of course, the Trojan test particles are perturbed by the Sun and planets but do not themselves exert any gravitational forces.
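The termination criterion described above, entry into the planet's sphere of influence r_s = a_p M_p^{2/5} (with the mass in solar units), is simple to evaluate. A minimal sketch follows; the Mars values used below are standard reference numbers, not quantities taken from this paper:

```python
# Sphere-of-influence radius used as a termination criterion:
# r_s = a_p * M_p**(2/5), with the planet mass M_p in solar masses.
def sphere_of_influence(a_p_au, m_p_solar):
    """Return r_s in AU for a planet with semimajor axis a_p_au (AU)
    and mass m_p_solar (solar masses)."""
    return a_p_au * m_p_solar ** 0.4

# Assumed reference values for Mars (illustrative, not from the paper):
a_mars = 1.5237          # semimajor axis in AU
m_mars = 3.227e-7        # mass in solar masses
r_s = sphere_of_influence(a_mars, m_mars)
print(f"Mars sphere of influence: {r_s:.4f} AU")  # roughly 0.004 AU
```

The resulting radius of a few thousandths of an AU is consistent with the ≲0.0025 AU half-width quoted for the Martian Trojan belt later in the text.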
The initial positions and velocities of the planets, as well as their masses, are provided by the JPL Planetary and Lunar Ephemerides DE405 and the starting epoch is JD 2440400.5 (28 June 1969). All our simulations include the most important post-Newtonian corrections, as well as the effects of the Moon. Individual timesteps are invaluable for this work, as orbital periods are much smaller in the Inner Solar System than the Outer. For all the computations described in this Letter, the timestep for Mercury is 14.27 days. The timesteps of the planets are in the ratio 1 : 2 : 2 : 4 : 8 : 8 : 64 : 64. Figure 1 shows the results of our first experiment. Here, the orbits of 1080 Trojan test particles around Mars are integrated for 25 million years. The initial inclinations of the test particles (with respect to the plane of Mars' orbit) are spaced every 2° from 0° to 90° and the initial longitudes (again with respect to Mars) are spaced every 15° from 0° to 360°. The starting semimajor axes and the eccentricities of the Trojans are the same as the parent planet. Only the test particles surviving till the end of the 25 million year integration are marked on the Figure. The survivors occupy a band of inclinations between 10° and 40° and longitudes between 30° and 120° (the L4 Lagrange point) or 240° and 330° (the L5 point). Here A_MB is the area of the Main Belt, Σ_MB is the surface density of the proto-planetary disk and f is the fraction of primordial objects that survive ejection (which we assume to be a universal constant). Let us take the Main Belt to be centered on 2.75 AU with a width of 1.5 AU. The belt of Martian Trojans is centered on 1.52 AU and has a width of ≲ 0.0025 AU. If the primordial surface density falls off inversely proportional to distance, then the number of Martian Trojans N_MT follows from the same formula applied to the Martian belt. The number of known Main Belt asteroids with diameters ≳ 1 km is ≳ 40000, which suggests that the number of Martian Trojans is ≳ 50.
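The extrapolation above can be sketched numerically. Assuming, as in the text, a primordial surface density Σ ∝ 1/r, the expected count in an annulus of radius r and width Δr scales as Σ(r)·2πr·Δr ∝ Δr, so the Trojan-to-Main-Belt ratio reduces to the ratio of belt widths. The script below is a sketch under that assumption:

```python
import math

# Number in an annulus: N ∝ Σ(r) * 2πr * Δr. With Σ(r) ∝ 1/r this is ∝ Δr.
def annulus_count(r_au, dr_au):
    """Relative number of objects for a surface density Σ ∝ 1/r (arbitrary units)."""
    sigma = 1.0 / r_au
    return sigma * 2.0 * math.pi * r_au * dr_au   # = 2π Δr, independent of r

n_mb = 40000                                      # Main Belt asteroids with D ≳ 1 km
ratio = annulus_count(1.52, 0.0025) / annulus_count(2.75, 1.5)
n_mt = n_mb * ratio
print(f"Estimated Martian Trojans with D ≳ 1 km: ~{n_mt:.0f}")  # a few tens
```

This crude scaling gives roughly 67 objects, the same order of magnitude as the ∼50 quoted in the text.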
Prompted by the recent discovery of a new Martian Trojan (1998 VF31), as well as further possible candidates (1998 QH56, 1998 SD4), this paper has provided maps of the stable zones for Martian Trojans and estimates of the numbers of undiscovered objects. For Mars, the observational effort should be concentrated at inclinations satisfying 15° < i < 30° and 32° < i < 40° for the L4 Lagrange point and between 15° and 40° for L5. These are the spots where the probability density is significant (see Figure 2), though the lower inclinations in these bands are slightly more favored than the higher. Trojans in or close to the orbital plane of Mars are unstable. Crude estimates suggest there may be as many as ∼50 undiscovered Martian Trojans with sizes ≳ 1 km. The orbits of 5261 Eureka and 1998 VF31 remain Trojan-like for durations of at least 50 million years. The other candidates, 1998 QH56 and 1998 SD4, are not currently Trojans, though it is conceivable that they may once have been; both will probably enter the sphere of influence of Mars within ≲ 0.5 million years. NWE is supported by the Royal Society, while ST acknowledges financial help from the European Community. We wish to thank John Chambers, Luke Dones, Seppo Mikkola, Prasenjit Saha and Scott Tremaine for many helpful comments and suggestions. We are also grateful for the remarkable service to the academic community provided by the Minor Planet Center. The anonymous referee helpfully provided improved orbital elements for the Trojan candidates for our integrations. Fig. 1.- This figure shows the stability zones of the inclined Trojans of Mars. The horizontal axis marks the longitude measured from Mars and the vertical axis the inclination with respect to Mars of the starting positions of test particles. At outset, the array of particles has inclinations spaced every 2° and longitudes spaced every 15°. The initial semimajor axes and eccentricities of the Trojans are the same as Mars. Only the particles surviving till the end of the 25 million year integration are marked on the figure, which provides a map of the stable regions.
All the objects starting in-plane do not persist and only the inclined Trojans are stable. Also marked on the figure are the instantaneous positions of the two Martian Trojans, namely 5261 Eureka (marked as a red circle) and 1998 VF31 (a green circle), as well as the asteroids 1998 QH56 (a blue circle) and 1998 SD4 (a yellow circle). Fig. 2.- These figures show the most likely places to observe new Martian Trojans. They display the two-dimensional probability density as a function of the inclination with respect to the invariable plane and the longitude with respect to Mars (upper panel) together with the projections onto the principal planes (lower panels). The figures are constructed by re-simulating the stable regions displayed in Figure 1 at much greater resolution. 746 test particles are placed every 1° in inclination and every 5° in longitude so as to span the stable region and the trajectories are sampled every 2.5 years for 50 000 years. The overall normalisation of the probability density is arbitrary. In the inclination plots, the contributions from the Lagrange points are separated - broken lines refer to L4 and unbroken lines refer to L5. Fig. 3.- Plots of the longitude versus semimajor axis are shown for the orbits of 5261 Eureka (upper panel) and 1998 VF31 (lower panel). The orbits are integrated for 50 million years and are sampled every 10 000 years. Table 1: This table lists some of the properties of the two definite Martian Trojans, as well as two suggested candidates. These include the instantaneous semimajor axis a, eccentricity e, inclination from the J2000 plane i and longitude measured from Mars λ. The epoch is JD 2451200.5 (22 January 1999). The magnitude H and the approximate diameter of the object (inferred using albedos of 0.05 - 0.25) are also given.
Most of this information is abstracted from Minor Planet Circulars 30250 and 33085 (Eureka and 1998 QH56) and Minor Planet Electronic Circulars 1998-W04 and 1998-S20 (1998 VF31 and 1998 SD4). Table 2: This table lists some of the properties of the orbits of the two confirmed Martian Trojans, inferred from numerical integrations. The table gives the maximum variation during the entirety of the 50 million year integration timespan in the semimajor axis ∆a, in the eccentricity ∆e, in the inclination ∆i and in the longitude measured from Mars ∆λ. Both Trojans oscillate around the Lagrange point L5 and the superscript and subscript indicate the extent of the angular libration. Part of this includes the oscillation of the Lagrange point itself, so the final column D is the peak to peak angular libration measured from the Lagrange point. (Figure axis residue: Semimajor Axis [AU]; panel title "Remaining Test Particles near Mars (25 Myr)".) References: Danby J.M.A. 1988, Fundamentals of Celestial Mechanics, Willmann-Bell, Richmond. Erdi B. 1997, Cel. Mech., 65, 149. Evans N.W., Tabachnik S. 1999, Nature, in press. Garfinkel B. 1977, AJ, 82, 368. Holman M.J., Wisdom J. 1993, AJ, 105, 1987. Holman M.J. 1997, Nature, 387, 785. Kowal C. 1971, in "Physical Studies of Minor Planets", ed. T. Gehrels, NASA SP-267, p. 185. Mikkola S., Innanen K. 1990, AJ, 100, 290. Mikkola S., Innanen K. 1992, AJ, 104, 1641. Mikkola S., Innanen K. 1994, AJ, 107, 1879.
Mikkola S., Innanen K., Muinonen K., Bowell E. 1994, Cel. Mech., 58, 53. Mikkola S., Innanen K. 1995, Earth, Moon & Planets, 71, 195. Saha P., Tremaine S.D. 1992, AJ, 104, 1633. Saha P., Tremaine S.D. 1994, AJ, 108, 1962. Tabachnik S., Evans N.W. 1998, poster paper presented at "Protostars and Planets IV", University of California at Santa Barbara. Tombaugh C.W. 1961, in "Planets and Satellites", ed. G.P. Kuiper & B.M. Middlehurst, University of Chicago Press, p. 12.
[]
[ "Ghost Imaging with Blackbody Radiation", "Ghost Imaging with Blackbody Radiation" ]
[ "Yangjian Cai ", "Shi-Yao Zhu ", "\nDepartment of Physics\nInstitute of Optics\nDepartment of Physics\nHong Kong Baptist University\nHong KongChina\n", "\nZheJiang University\n310027HangzhouChina\n" ]
[ "Department of Physics\nInstitute of Optics\nDepartment of Physics\nHong Kong Baptist University\nHong KongChina", "ZheJiang University\n310027HangzhouChina" ]
[]
We present a theoretical study of ghost imaging using a blackbody radiation source. A Gaussian thin lens equation for the ghost imaging, which depends on both paths, is derived. The dependences of the visibility and quality of the image on the transverse size and temperature of the blackbody are studied. The main differences between ghost imaging with blackbody radiation and with entangled photon pairs lie in the image-forming equation and in the visibility and quality of the image.
null
[ "https://export.arxiv.org/pdf/quant-ph/0407240v1.pdf" ]
6,475,084
quant-ph/0407240
ac42d3e189b4039e2bb0e0de1f9e705c14ef1635
Ghost Imaging with Blackbody Radiation. 29 Jul 2004. Yangjian Cai and Shi-Yao Zhu, Department of Physics, Hong Kong Baptist University, Hong Kong, China; Institute of Optics, Department of Physics, ZheJiang University, Hangzhou 310027, China. We present a theoretical study of ghost imaging by using a blackbody radiation source. A Gaussian thin lens equation for the ghost imaging, which depends on both paths, is derived. The dependences of the visibility and quality of the image on the transverse size and temperature of the blackbody are studied. The main differences between the ghost imaging by using the blackbody radiation and by using the entangled photon pairs are the image-forming equation, and the visibility and quality of the image. The coincidence counting rate is governed by the fourth-order correlation function

G(u1, u2) = ⟨E*(u1) E*(u2) E(u2) E(u1)⟩ = ⟨I(u1)⟩⟨I(u2)⟩ + |Γ(u1, u2)|²,  (1)

where h1(x1, u1) and h2(x2, u2) are the response functions of the two paths through which the blackbody radiation passes, and

⟨I(ui)⟩ = ∫∫ hi*(x1, ui) hi(x2, ui) ⟨E*(x1) E(x2)⟩ dx1 dx2,  i = 1, 2,  (2)

Γ(u1, u2) = ∫∫ h1*(x1, u1) h2(x2, u2) ⟨E*(x1) E(x2)⟩ dx1 dx2.  (3)

⟨I(ui)⟩ is the second order correlation function at the same space point (the intensity at the i-th detector), and depends only on the i-th path. Γ(u1, u2) is the second order cross correlation function at two different points, which is related to both detectors and depends on both paths. Both ⟨I(u1)⟩⟨I(u2)⟩ and Γ(u1, u2) contribute to the coincidence counting rate. In Eq. (1), the fourth order correlation is reduced to second order correlations under the condition ⟨E(xi)⟩ = 0 for the blackbody radiation [17].
For the blackbody radiation the second order correlation function takes the following form [17]:

⟨Ei*(r1) Ej(r2)⟩ = (1 / 8π³) ∫ (ħω / 2ε0) [exp(ħω / kB T) − 1]⁻¹ (δij − ki kj / k²) exp[−i k·(r1 − r2)] d³k,  (4)

where T is the blackbody's temperature and kB is Boltzmann's constant. For simplicity, we consider the source in one dimension. The numerical calculation shows that ⟨E*(x1) E(x2)⟩ is a quasi-Gaussian distribution over (x1 − x2) with a temperature dependent width [17]. Therefore, we can approximately write the second order correlation function as

⟨E*(x1) E(x2)⟩ ≈ I0(σg) exp[−(x1² + x2²) / (4σI²)] exp[−(x1 − x2)² / (2σg²)],  (5)

where σg is the correlation length and is temperature dependent. As the temperature goes to infinity, σg approaches zero and I0(σg) approaches infinity. Here we introduced the Gaussian function exp[−(x1² + x2²)/(4σI²)] to take into account the finite surface size of the blackbody, with σI the transverse size. With the help of Collins' formula and the detailed forms of the two response functions h1(x1, u1) and h2(x2, u2) [18], we obtain ⟨I(u1)⟩ as a fourfold Fresnel integral over the source coordinates (x1, x2) and the object-plane coordinates (v1, v2), involving the source correlation ⟨E*(x1) E(x2)⟩, the object transmission functions H*(v1) H(v2), and quadratic phase factors for the two propagation stages of path one (Eq. (6)). For a fixed u1, ⟨I(u1)⟩ is a constant, and it is easy to show that ⟨I(u2)⟩ = B0 = const.
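The approximate source correlation of Eq. (5) is a Gaussian Schell-model form and is easy to evaluate directly. A minimal sketch follows; the values of I0, σI, and σg below are illustrative assumptions, not the paper's parameters:

```python
import math

def source_correlation(x1, x2, i0=1.0, sigma_i=1.0, sigma_g=0.1):
    """Second-order correlation of Eq. (5): a Gaussian intensity envelope of
    width sigma_i multiplied by a Gaussian coherence factor of width sigma_g."""
    envelope = math.exp(-(x1**2 + x2**2) / (4.0 * sigma_i**2))
    coherence = math.exp(-(x1 - x2)**2 / (2.0 * sigma_g**2))
    return i0 * envelope * coherence

# The correlation peaks at coincident points (x1 = x2 = 0, value i0)
# and decays rapidly with the separation |x1 - x2| on the scale sigma_g.
print(source_correlation(0.0, 0.0))
print(source_correlation(0.2, 0.0))
```

Shrinking sigma_g mimics raising the temperature: the coherence factor narrows, which (as the text goes on to argue) improves image quality at the cost of visibility.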
The image pattern is determined by the cross correlation function Γ(u1, u2), which is likewise a multiple Fresnel integral over the source coordinates (x1, x2) and the object-plane coordinate, involving ⟨E*(x1) E(x2)⟩, the object transmission function, and the quadratic phase factors of both paths (Eq. (7)). If we set

1/(l1 − z1) + 1/l2 = 1/f,  (9)

Eq. (8) reduces, in the limit σg → 0 and σI → ∞, to

Γ(u1, u2) ∝ H(−u2/a),  (11)

that is, a perfect image of the object with an amplification a. When σI and σg are finite, or Eq. (9) is not satisfied, the image needs to be obtained numerically. In Fig. 2, we plot the evolution from the ghost image into an interference pattern of a double slit with slit width a and separation d between the two slits, when we vary l2 from satisfying to not satisfying Eq. (9). The transmission function of the double slit is H(v) = 1 for −d/2 − a/2 < v < −d/2 + a/2 and for d/2 − a/2 < v < d/2 + a/2, and H(v) = 0 otherwise. We define the visibility of the image in terms of the maximum of the cross-correlation contribution relative to the total coincidence counting rate (Eq. (12)). Under the condition for obtaining Eq. (11), σg → 0 and σI → ∞, the visibility of the image is zero. The images of the double slit aperture for different transverse sizes and transverse coherence widths (temperatures) of the blackbody are shown in Fig. 3 and Fig. 4, respectively. From Fig. 3, we find that when the source's transverse size σI increases, the quality of the image increases, while the visibility decreases. From Fig. 4, we find that when the source's transverse coherence width σg decreases, the quality of the image increases, while the visibility decreases. A small Q value corresponds to high image quality. The dependences of the visibility and quality of the image of the double slit aperture on the transverse coherence width are shown in Fig. 5 and Fig. 6, respectively.
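The double-slit transmission function used for the simulated images can be written directly in code. The slit width a and separation d below are illustrative values, not those used in the paper's figures:

```python
def double_slit(v, a=0.1, d=0.5):
    """Transmission function of the double slit: H(v) = 1 inside either slit
    of width a centered at -d/2 or +d/2, and H(v) = 0 otherwise."""
    in_left = (-d/2 - a/2) < v < (-d/2 + a/2)
    in_right = (d/2 - a/2) < v < (d/2 + a/2)
    return 1.0 if (in_left or in_right) else 0.0

print(double_slit(0.25))   # center of the right slit -> 1.0
print(double_slit(0.0))    # opaque midpoint between the slits -> 0.0
```

Sampling this function on a grid gives the object that the cross-correlation Γ(u1, u2) is expected to reproduce when the imaging condition holds.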
High quality is accompanied by poor visibility, and good visibility by low quality. In order to observe the classical ghost image with blackbody radiation, the selection of a suitable transverse size and transverse coherence width is essential. The nature of the ghost imaging is due to the entanglement in the entangled photon pair in the quantum case, and is due to the Hanbury Brown-Twiss effect (low coherence with fluctuations, but not completely no coherence) in the blackbody radiation case. Fig. 2: The image pattern of a double slit in the scheme of Fig. 1. Fig. 3: The image of the double slit aperture for different source transverse sizes σI. Fig. 4: The image of the double slit aperture for different source transverse coherence widths σg. Fig. 5: Evolution of the visibility of the image of a double slit aperture versus transverse coherence width σg for different transverse sizes of the source, σI. Fig. 6: Evolution of the quality of the image of a double slit aperture versus transverse coherence width σg for different transverse sizes of the source, σI. Now, we ask ourselves whether the classical ghost image is the same as the quantum ghost image with entangled photon pairs. Comparing Eq. (9) with the corresponding equations for the quantum ghost image [2], the following differences are found: (1) l1 + z1 in the image formation equation for the quantum case is replaced by l1 − z1 in the classical case.
If a 50% phase conjugate mirror is used for the beam splitting, the same form of imaging equation holds, with z0 the distance between the blackbody and the beam splitter. If the lens is in path one, a corresponding Gaussian thin lens condition is obtained. In the quantum case there is no background noise and consequently the visibility is high; in the classical case the background noise limits the visibility. If we take out the lens in path two (see Fig. 1), we have ghost interference. With the same calculation as above, we can obtain the same interference fringes as from the quantum ghost interference, except for the replacement of −z1 by +z1. In conclusion, we have investigated the ghost image created with blackbody radiation by using optical coherence theory. The ghost image formation depends on both paths. To obtain the ghost image, a Gaussian thin lens equation must be satisfied. The ghost image is gradually blurred out when the temperature (or the size) decreases. The quality of the ghost image increases with increasing temperature and size of the blackbody. As the temperature and the size both increase to infinity, we will have a perfect image, but the visibility goes to zero. References: [1] T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, Phys. Rev. A 52, R3429 (1995). [2] P. H. S. Ribeiro and G. A. Barbosa, Phys. Rev. A 54, 3489 (1996). [3] T. B. Pittman, D. V. Strekalov, D. N. Klyshko, M. H. Rubin, A. V. Sergienko, and Y. H. Shih, Phys. Rev. A 53, 2804 (1996). [4] A. Gatti, E. Brambilla, and L. A. Lugiato, Phys. Rev. Lett. 83, 1763 (1999). [5] A. F. Abouraddy, M. B. Nasr, B. E. A. Saleh, A. V. Sergienko, and M. C. Teich, Phys. Rev. A 63, 063803 (2001).
[6] B. E. A. Saleh, A. F. Abouraddy, A. V. Sergienko, and M. C. Teich, Phys. Rev. A 62, 043816 (2001). [7] M. D'Angelo, M. V. Chekhova, and Y. H. Shih, Phys. Rev. Lett. 87, 013602 (2001). [8] D. P. Caetano and P. H. S. Ribeiro, Phys. Rev. A 68, 023805 (2003). [9] G. Brida, E. Cagliero, G. Falzetta, M. Genovese, M. Gramegna, and E. Predazzi, Phys. Rev. A 68, 033803 (2003). [10] A. Gatti, E. Brambilla, and L. A. Lugiato, Phys. Rev. Lett. 90, 133603 (2003). [11] A. F. Abouraddy, B. E. A. Saleh, A. V. Sergienko, and M. C. Teich, Phys. Rev. Lett. 87, 123602 (2001). [12] R. S. Bennink, S. J. Bentley, and R. W. Boyd, Phys. Rev. Lett. 89, 113601 (2002). [13] R. S. Bennink, S. J. Bentley, R. W. Boyd, and J. C. Howell, Phys. Rev. Lett. 92, 033601 (2004). [14] M. D'Angelo and Y. H. Shih, quant-ph/0302146. [15] A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, quant-ph/0307187. [16] J. Chen and S. Han, Phys. Rev. Lett. 92, 093903 (2004). [17] L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge, New York, 1995), Chaps. 8 and 13. [18] S. A. Collins, J. Opt. Soc. Am. 60, 1168 (1970).
[]
[ "Fano Resonant Optical coatings platform for Full Gamut and High Purity Structural Colors", "Fano Resonant Optical coatings platform for Full Gamut and High Purity Structural Colors" ]
[ "Mohamed Elkabbash \nThe Institute of Optics\nUniversity of Rochester\n14627RochesterNYUSA\n\nCurrent address: Research Laboratory of Electronics\nMIT\n02139CambridgeMAUSA\n", "Nathaniel Hoffman \nDepartment of Physics\nCase Western Reserve University\n10600 Euclid Avenue44106ClevelandOhioUSA\n", "Andrew R Lininger \nDepartment of Physics\nCase Western Reserve University\n10600 Euclid Avenue44106ClevelandOhioUSA\n", "Sohail A Jalil \nThe Institute of Optics\nUniversity of Rochester\n14627RochesterNYUSA\n", "Theodore Letsou \nDepartment of Physics\nCase Western Reserve University\n10600 Euclid Avenue44106ClevelandOhioUSA\n", "Michael Hinczewski \nDepartment of Physics\nCase Western Reserve University\n10600 Euclid Avenue44106ClevelandOhioUSA\n", "Giuseppe Strangi \nDepartment of Physics\nCase Western Reserve University\n10600 Euclid Avenue44106ClevelandOhioUSA\n", "Chunlei Guo \nThe Institute of Optics\nUniversity of Rochester\n14627RochesterNYUSA\n" ]
[ "The Institute of Optics\nUniversity of Rochester\n14627RochesterNYUSA", "Current address: Research Laboratory of Electronics\nMIT\n02139CambridgeMAUSA", "Department of Physics\nCase Western Reserve University\n10600 Euclid Avenue44106ClevelandOhioUSA", "Department of Physics\nCase Western Reserve University\n10600 Euclid Avenue44106ClevelandOhioUSA", "The Institute of Optics\nUniversity of Rochester\n14627RochesterNYUSA", "Department of Physics\nCase Western Reserve University\n10600 Euclid Avenue44106ClevelandOhioUSA", "Department of Physics\nCase Western Reserve University\n10600 Euclid Avenue44106ClevelandOhioUSA", "Department of Physics\nCase Western Reserve University\n10600 Euclid Avenue44106ClevelandOhioUSA", "The Institute of Optics\nUniversity of Rochester\n14627RochesterNYUSA" ]
[]
Structural coloring is a photostable and environmentally friendly coloring approach that harnesses optical interference and nanophotonic resonances to obtain colors with a range of applications including steganography, décor, data storage, and anticounterfeiting measures. We show that optical coatings exhibiting the photonic Fano resonance are an ideal platform for structural coloring: they provide full color access, high color purity, high brightness, controlled iridescence, and scalable manufacturability. We show that an additional oxide film deposited on Fano resonant optical coatings (FROCs) increases the color purity (up to 97%) and the color gamut coverage range (> 99% coverage of the sRGB and Adobe color spaces). For coloring applications that do not require high spatial resolution, FROCs provide a significant advantage over existing structural coloring schemes. Introduction: In nature, colors are mostly produced either through pigments or structures. While the former come from molecular absorption, the latter, structural coloring (SC), is the result of optical interference from a structured surface. Structural colors offer several advantages over pigments: they are photostable, immune to chemical degradation, and environmentally friendly. In addition, a wide range of colors can be produced [1] and dynamically reconfigured [2] using the same material. Furthermore, structured surfaces can have multiple functionalities, e.g., creating hydrophobic or antibacterial colored metals [3,4]. Several structural coloring schemes have been previously introduced, including multilayer films [5], thin film nanocavities [6], plasmonic nanostructures [1], dielectric nanostructures [7], and photonic crystals [8], with applications in decoration [9], colorimetric sensing [10], data storage [11], anticounterfeiting [12], display technologies [13], and colorful photovoltaic cells [14], among others [15].
An ideal structural coloring platform should span a wide color gamut, producing colors with high and controllable purity (how monochromatic or pure the color is) and brightness (the relative intensity of the reflected color), and should allow control over the colors' angle dependence. For many applications, it should also be scalable and inexpensive to fabricate. However, no existing scheme can satisfy all the above qualities simultaneously 1,7,9,14. Figure 1| Spectral properties of Fano Resonant Optical Coatings: (a) A FROC consists of two coupled light absorbers; a broadband absorber and a narrowband (Fabry-Perot) absorber. A FROC exhibits a reflection peak at the Fabry-Perot cavity resonance. (b) The measured reflection from a Fabry-Perot cavity with different dielectric thickness. (c) The measured reflection from the same Fabry-Perot cavities shown in (b) after depositing a 15 nm Ge film to create a FROC. The incidence angle in (b) and (c) is 15°. Recently, we proposed a new type of optical coating that exhibits the photonic Fano resonance effect 16. Fano Resonant Optical Coatings (FROCs) enjoy unique optical properties that cannot be reproduced with existing optical coatings such as metallic films, anti-reflective coatings, transmission filters, light absorbers, and dielectric mirrors. Figure 1a describes the composition of FROCs. FROCs are produced by coupling a broadband nanocavity (representing the continuum) with a narrowband Fabry-Perot nanocavity (representing a discrete state). The resonant interference between the nanocavities produces the well-known asymmetric Fano resonance line-shape.
null
[ "https://export.arxiv.org/pdf/2208.03777v1.pdf" ]
251,403,187
2208.03777
2517d24c2309a91be6a93eed542dbbe545b868f6
Fano Resonant Optical coatings platform for Full Gamut and High Purity Structural Colors. Mohamed Elkabbash, The Institute of Optics, University of Rochester, 14627 Rochester, NY, USA; current address: Research Laboratory of Electronics, MIT, 02139 Cambridge, MA, USA. Nathaniel Hoffman, Department of Physics, Case Western Reserve University, 10600 Euclid Avenue, 44106 Cleveland, Ohio, USA. Andrew R Lininger, Department of Physics, Case Western Reserve University, 10600 Euclid Avenue, 44106 Cleveland, Ohio, USA. Sohail A Jalil, The Institute of Optics, University of Rochester, 14627 Rochester, NY, USA. Theodore Letsou, Department of Physics, Case Western Reserve University, 10600 Euclid Avenue, 44106 Cleveland, Ohio, USA. Michael Hinczewski, Department of Physics, Case Western Reserve University, 10600 Euclid Avenue, 44106 Cleveland, Ohio, USA. Giuseppe Strangi, Department of Physics, Case Western Reserve University, 10600 Euclid Avenue, 44106 Cleveland, Ohio, USA. Chunlei Guo, The Institute of Optics, University of Rochester, 14627 Rochester, NY, USA. 1 (G.S.), [email protected] (C.G.). †These authors contributed equally. Structural coloring is a photostable and environmentally friendly coloring approach that harnesses optical interference and nanophotonic resonances to obtain colors with a range of applications including steganography, décor, data storage, and anticounterfeiting measures. We show that optical coatings exhibiting the photonic Fano resonance are an ideal platform for structural coloring: they provide full color access, high color purity, high brightness, controlled iridescence, and scalable manufacturability. We show that an additional oxide film deposited on Fano resonant optical coatings (FROCs) increases the color purity (up to 97%) and color gamut coverage range (> 99% coverage of the sRGB and Adobe color spaces).
For coloring applications that do not require high spatial resolution, FROCs provide a significant advantage over existing structural coloring schemes. Introduction: In nature, colors are mostly produced either through pigments or structures. While the former comes from molecular absorption, the latter, structural coloring (SC), is the result of optical interference from a structured surface. Structural colors offer several advantages over pigments: they are photostable, immune to chemical degradation, and environmentally friendly. In addition, a wide range of colors can be produced 1 and dynamically reconfigured 2 using the same material. Furthermore, structured surfaces can have multiple functionalities, e.g., creating hydrophobic or antibacterial colored metals 3,4. Several structural coloring schemes have been previously introduced, including multilayer films 5, thin film nanocavities 6, plasmonic nanostructures 1, dielectric nanostructures 7, and photonic crystals 8, with applications in decoration 9, colorimetric sensing 10, data storage 11, anticounterfeiting 12, display technologies 13, colorful photovoltaic cells 14, among others 15. An ideal structural coloring platform should span a wide color gamut, producing colors with high and controllable purity (how monochromatic or pure the color is) and brightness (the relative intensity of the reflected color), and should allow control over the colors' angle dependence. For many applications, it should also be scalable and inexpensive to fabricate. However, no existing scheme can satisfy all the above qualities simultaneously 1,7,9,14. Figure 1| Spectral properties of Fano Resonant Optical Coatings: (a) A FROC consists of two coupled light absorbers; a broadband absorber and a narrowband (Fabry-Perot) absorber. A FROC exhibits a reflection peak at the Fabry-Perot cavity resonance. (b) The measured reflection from a Fabry-Perot cavity with different dielectric thickness.
(c) The measured reflection from the same Fabry-Perot cavities shown in (b) after depositing a 15 nm Ge film to create a FROC. The incidence angle in (b) and (c) is 15°. Recently, we proposed a new type of optical coating that exhibits the photonic Fano resonance effect 16. Fano Resonant Optical Coatings (FROCs) enjoy unique optical properties that cannot be reproduced with existing optical coatings such as metallic films, anti-reflective coatings, transmission filters, light absorbers, and dielectric mirrors. Figure 1a describes the composition of FROCs. FROCs are produced by coupling a broadband nanocavity (representing the continuum) with a narrowband Fabry-Perot nanocavity (representing a discrete state). The resonant interference between the nanocavities produces the well-known asymmetric Fano resonance line-shape. In this work, we develop a class of reflective FROCs by using a reflective and opaque material as the substrate, which leads to a highly reflective resonant peak that corresponds to the narrowband nanocavity's resonance. We investigate the color properties of FROCs numerically and experimentally. We show that reflective FROCs are an ideal platform for structural coloring, producing colors spanning a wide color gamut with high brightness and high purity. The dependence of the color on the incident angle can be controlled through the cavity material, making FROCs suitable for a variety of applications that demand angle independence, e.g., decoration, or angle dependence, e.g., anti-counterfeit measures. Structural coloring with FROCs can find new applications where strong and broadband optical absorption and high purity colors are required, for example, colorful solar thermal generation panels, colorful photovoltaic panels 8, colorful thermophotovoltaic panels, and colorful solar thermoelectric generators. These renewable energy sources often have the same color, black or dark blue, which makes them aesthetically unappealing.
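The asymmetric line-shape mentioned above can be given quick intuition with the textbook two-parameter Fano formula. This is a generic sketch: the reduced detuning eps and asymmetry parameter q here are illustrative scattering-theory quantities, not values fitted to FROC data (whose response the paper obtains from full transfer-matrix simulation).

```python
def fano_lineshape(eps, q):
    """Textbook Fano profile F(eps) = (q + eps)^2 / (1 + eps^2), where eps
    is the reduced detuning from resonance and q the asymmetry parameter.
    F vanishes at eps = -q (the antiresonance) and peaks at eps = 1/q with
    maximum value 1 + q^2, giving the characteristic asymmetric shape."""
    return (q + eps) ** 2 / (1 + eps ** 2)
```

The interplay of the zero at eps = -q and the nearby peak at eps = 1/q is what makes the reflection line both narrow and strongly asymmetric.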
Results and Discussion: Throughout this work, we compare the coloring performance of FROCs to Fabry-Perot (FP) thin film nanocavities, since they are the closest platform to FROCs in terms of structure and physics 16. Fabry-Perot nanocavities produce colors through selective absorption, mainly reflecting all colors except for the specific cavity resonance wavelength, whereas FROCs produce colors through selective reflection (Fig. 1a). Figure 1b shows the measured p-polarized reflection spectrum of Fabry-Perot nanocavities consisting of Ag (20 nm)-TiO2-Ag (100 nm) obtained by varying the thickness of the TiO2 film from 35 nm to 70 nm. Figure 1c shows the measured reflection spectrum of the same Fabry-Perot nanocavities after adding a 15 nm Ge layer to convert them into FROCs. The reflection lines produced by the FROCs can span the visible spectrum and target a relatively narrow range of wavelengths for each individual FROC by changing the cavity's thickness. Images of the reflected color for FROC and MDM structures with a TiO2 thickness ranging from 35 nm to 150 nm are shown in Fig. 3a and Fig. 3b, respectively. FROCs are capable of reflecting blue, green, and red colors by simply increasing the dielectric thickness. Figure 3c shows a photograph of two FP cavities with CWRU and U of R letters "printed" on them by depositing a 15 nm Ge layer and converting these regions to a FROC. By controlling the spatial distribution of the deposited layer, this printing method could be adapted for optical archival data storage and encrypting messages. Experimental reflection lines from different FP cavities (blue dots) and FROCs (red dots) are presented in the CIE 1931 color space (see Methods) that links distributions of electromagnetic wavelengths to visually perceived colors (Fig. 3d). The experimental results agree with the numerical calculations of colors produced by FROCs vs. FP cavities (see Supplementary Information Fig. S1, Fig. S2, and Fig. S3).
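The reflection spectra above are the kind of output a transfer matrix calculation produces; the paper's Methods use such a model with measured, dispersive optical constants. A minimal normal-incidence sketch, assuming made-up, wavelength-independent refractive indices and the N = n - ik sign convention for absorbing layers:

```python
import cmath

def reflectance(layers, n_in, n_sub, wavelength_nm):
    """Characteristic-matrix (transfer matrix) reflectance of a thin-film
    stack at normal incidence.  `layers` lists (complex index, thickness in
    nm) pairs from the incidence side down to the opaque substrate."""
    m00, m01, m10, m11 = 1, 0, 0, 1                    # identity matrix
    for n, d in layers:
        delta = 2 * cmath.pi * n * d / wavelength_nm   # phase thickness
        c, s = cmath.cos(delta), cmath.sin(delta)
        a00, a01, a10, a11 = c, 1j * s / n, 1j * n * s, c
        m00, m01, m10, m11 = (m00 * a00 + m01 * a10, m00 * a01 + m01 * a11,
                              m10 * a00 + m11 * a10, m10 * a01 + m11 * a11)
    b = m00 + m01 * n_sub                              # admittance terms
    c = m10 + m11 * n_sub
    r = (n_in * b - c) / (n_in * b + c)                # amplitude reflection
    return abs(r) ** 2

# Toy Ag-TiO2 stack on an opaque Ag substrate.  Indices are illustrative
# constants (real silver is strongly dispersive); absorbing media use the
# N = n - ik convention, hence the negative imaginary part.
n_ag, n_tio2 = 0.05 - 3.0j, 2.4 + 0j
R = reflectance([(n_ag, 20), (n_tio2, 55)], n_in=1.0, n_sub=n_ag,
                wavelength_nm=550)
```

Sweeping the dielectric thickness (as in Fig. 1b-c) amounts to recomputing R over wavelength for each thickness; with realistic dispersive constants this reproduces the narrow FROC reflection lines.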
High purity green colors were difficult to obtain using conventional FROCs; they were instead obtained using silica capped FROCs (red stars), as we will discuss below. High purity green and red colors are difficult to obtain because doing so requires high reflectance at longer wavelengths; this agrees with the purity results and the corresponding colors of FROCs shown in Fig. 2c and Fig. 2d. However, the measured reflection from the Ge-Ag broadband absorber is > 0.15 at short wavelengths (< 500 nm) (Fig. 4a). Consequently, the color purity drops, since other colors are also reflected. To suppress the stronger reflection at shorter wavelengths, a 50 nm silicon dioxide capping layer was added, as shown in Fig. 4a. Figure 4b shows the measured reflection for a FROC that produces a green color with and without the SiO2 capping layer. The SiO2 capped FROC exhibits significantly reduced reflection at short wavelengths and a relatively small change near the resonance peak. This combination leads to an overall increase in the colorimetric purity. Fig. 4c and Fig. 4d show the swatch array and corresponding color purity of the SiO2 capped FROC design, respectively. By comparing Fig. 4d and Fig. 2d, a wider range of colors and greater purities can be obtained as compared to the original FROC structure. A purity level of > 97% can be reached in the yellow region of the spectrum (Supplementary Information, Fig. S4 and Fig. S5), which is significantly higher than other ultrahigh purity structural color platforms 7. Another important metric is brightness, i.e., the reflected intensity compared to the incident light at the peak resonance wavelength. The calculated reflectance from FROCs is > 0.9 (Supplementary Information, Fig. S2) and the measured reflectance ranges from 0.63 to 0.85, as shown in Figure 5b. The lower reflectance in the measured films can be improved by depositing higher quality silver films and precisely controlling thickness 19.
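Colorimetric purity as quoted here is the excitation purity defined in the Methods, p = |s - w| / |d - w| on the CIE 1931 (x, y) diagram. A minimal sketch with hypothetical coordinates: the white point is D65, while the sample and dominant-wavelength points are illustrative, not measured FROC data.

```python
import math

def excitation_purity(s, w, d):
    """Excitation purity p = |s - w| / |d - w|, with s, w, d the CIE 1931
    (x, y) coordinates of the sample, the white point, and the
    dominant-wavelength point on the spectral locus, respectively."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(s, w) / dist(d, w)

white = (0.3127, 0.3290)    # D65 white point
dominant = (0.170, 0.797)   # hypothetical point on the green spectral locus
# A sample halfway between the white point and the locus has purity 0.5;
# a sample on the locus itself would have purity 1 (monochromatic).
sample = ((white[0] + dominant[0]) / 2, (white[1] + dominant[1]) / 2)
p = excitation_purity(sample, white, dominant)
```

The > 97% figure above thus means the measured chromaticity sits almost on the spectral locus along its dominant-wavelength line.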
Finally, we assess the color range covered by silica capped FROCs (Supplementary Information, Fig. S7). Access to a wide color gamut with a similar reflection profile has been realized recently with multipolar metasurfaces, which, in contrast with FROCs, require intense nanolithography 7. Conclusions and Outlook: Figure 5d compares the structural coloring performance metrics of FROCs, dielectric nanostructures, plasmonic nanostructures, and photonic crystals. Indeed, FROCs outperform existing structural coloring methods by simultaneously offering high purity, access to a wide range of colors, high brightness, angular control, and cheap and scalable fabrication 1. Because thin film structural coloring in general has lower spatial resolution compared to nanostructure-based structural coloring, the latter remains advantageous for high density coloring. Durable FROCs can be made with ceramic materials 20. We believe that FROCs are particularly suitable for colored solar thermal panels as they are efficient in absorbing the solar spectrum 16 while reflecting an on-demand, narrowband color. In addition, by using amorphous Si instead of Ge and utilizing the metal films in the FROC to act as electrodes, it is possible to realize colorful photovoltaic cells 14 using well-established thin film deposition technologies. Methods: Sample fabrication: The FROC films were deposited on a glass substrate (Micro slides, Corning) using electron-beam evaporation for Ge (3 Å s -1 ) and TiO2 (1 Å s -1 ) pellets and thermal evaporation for Ag (20 Å s -1 ), with the deposition rates specified for each material. The silica capped FROC films were deposited on a glass substrate (2948, Corning) using electron-beam evaporation for Ge (0.5 Å s -1 ), TiO2 (1 Å s -1 ), and SiO2 (0.8 Å s -1 ), and DC magnetron sputtering for Ag (2 Å s -1 ). All deposition materials were purchased from Kurt J. Lesker. Deposited layer thicknesses were measured with spectroscopic ellipsometry (J. A. Woollam).
Numerical calculation of the reflection and absorption spectra: Numerical reflectance and absorbance spectra were generated using a transfer matrix-based simulation model written in Mathematica and Python. Spectral optical constants for the materials were obtained variously from the Brendel-Bormann model (Ag), fits to the experimental materials (SiO2, TiO2), and an amorphous experimental model for Ge. Transmittance was zero for all structures. Absorbance was calculated as complementary to reflectance, A = 1 - R. Reflection measurements: Experimental angular reflectance measurements were performed using a variable-angle high-resolution spectroscopic ellipsometer (V-VASE, J. A. Woollam). Sample transmittance was zero for all angles and wavelengths. Color analysis: The conversion of reflectance spectra to the CIE 1931 xyz color space was performed in Python utilizing interpolations of the standard observer distributions 21,22. The XYZ tristimulus values are given as X = k∫ I(λ) R(λ) x̄(λ) dλ, Y = k∫ I(λ) R(λ) ȳ(λ) dλ, and Z = k∫ I(λ) R(λ) z̄(λ) dλ. Color swatch arrays were generated by transforming the calculated chromaticity values into their sRGB equivalents using a matrix transform calculated from reference primaries with the D65 reference white and sRGB companding (IEC 61966-2-1 standard). Colors were generated with matplotlib in Python. Excitation purity is calculated as p = |s - w| / |d - w|, where s, w, and d are the CIE 1931 (x, y) coordinates of the measured spectral point, white point, and dominant wavelength point, respectively. Total CIE x-y space coverage is calculated as the area of the smallest convex hull encompassing all of the desired CIE (x, y) points. Area is presented relative to the area of the full visible light color gamut. Sufficient resolution was obtained in the numerical simulations to approach a smooth hull. Figure 2| Colorimetric properties of FROCs: Swatch array and the corresponding color purity for Fabry-Perot (FP) nanocavities (a) and (b) and FROCs (c) and (d).
(e) CIE 1931 color space showing the colors corresponding to the calculated reflection spectrum of FP nanocavities (black circles) and FROCs (blue circles) with varying cavity thicknesses. To examine the structural coloring properties of FROCs vs. Fabry-Perot (FP) cavities, we calculate the colors produced from FP cavities vs. FROCs by varying the top metal film thickness and the cavity thickness. Figure 2a shows a swatch array for FP cavities consisting of a metal-dielectric-metal stack [top to bottom: Ag (Y nm)-TiO2 (X nm)-Ag (100 nm)] as a function of the top Ag film thickness and the TiO2 dielectric cavity thickness. A swatch is the perceived color for a person viewing the sample. The produced colors are Cyan-Magenta-Yellow (CMY) colors, since FP cavities are selective absorbers and CMY colors are subtractive colors 17. Note the restricted color palette from FP nanocavities. Figure 2b shows the corresponding purity of the FP cavities (see Methods for more details). The purity of FP cavities is limited, since all the colors are reflected except within the absorbed wavelength range, making near-monochromatic reflection impossible. Figure 2c shows a swatch array of FROCs that are identical to the FP cavities with an added absorbing thin film [Ge (15 nm)] to generate the desired Fano resonance. The selective reflection from FROCs enables access to a wide range of hues from blue to red, including green. The colors enjoy significantly high purity, as shown in Figure 2d 18. The white point corresponds to the spectrum of the illuminant, i.e., white light. FROCs access the huge color gamut since they provide selective reflection at different wavelengths by simply changing the dielectric thickness. Figure 3| Structural coloring with FROCs: (a) and (b) show photographs of fabricated FP nanocavities and FROCs. The color purity of FROCs is evident in (c), where the letters of U of R and CWRU are printed on an MDM cavity by depositing a 15 nm Ge layer.
(d) The CIE 1931 color space showing the colors corresponding to the measured reflection spectrum of FP nanocavities (blue circles) and FROCs (red circles). FROCs demonstrate higher purity as they are further away from the white point (black dot). Silica capped FROCs (red stars) show higher purity in the green region of the color space, as we discuss later in the manuscript. Figure 4| Silica capped FROCs: (a) Measured reflectance of a Ge-Ag broadband absorber (red line) vs. a silica capped broadband absorber (black line). Adding a silica film reduces the overall reflectance from the broadband absorber. (b) Measured reflectance of a FROC with a Fano resonance peak within the green wavelength range with and without a silica cap. The suppressed reflectance at shorter wavelengths is evident. (c) and (d) show a swatch array and color purity for silica capped FROCs (SiO2, 50 nm) for different thicknesses of the nanocavity and top Ag film. Figure 5| Angle dependence, brightness, and accessible color gamut of FROCs: (a) Swatch array of a silica capped FROC for a constant optical length, obtained by varying the refractive index and incidence angle. The angle dependence can be controlled by the refractive index of the dielectric cavity. (b) Measured reflectance from different FROCs showing a peak reflectance ranging from 0.63 to 0.85. The high reflectance corresponds to high brightness, a desirable property in structural colors. (c) CIE 1931 chromaticity diagram of silica capped FROCs for different dielectric cavity thicknesses at normal (0°) incidence. The sRGB and Adobe RGB subspaces are shown for comparison. The FROC obtains 43% coverage of the total CIE 1931 diagram. The white point is shown as an open circle. (d) Comparison between the coloring performance of FROCs vs. other structural coloring platforms. In addition, a FROC's iridescence can be controlled by simply changing the refractive index of the dielectric cavity. Figure 5a shows a swatch array (angle of incidence vs.
cavity refractive index) for a range of FROCs with the same optical length L = nd = 700 nm, where n is the dielectric index and d is its thickness, as a function of incidence angle. For high refractive indices, FROCs retain their color over nearly the entire range of incident angles. Note that angle independence of the colors may not be a desired property in some applications, e.g., anti-counterfeit measures. We assess the color range covered by silica capped FROCs. We obtain the CIE 1931 chromaticity coordinates of a stack of SiO2 (50 nm)-Ge (15 nm)-Ag (48 nm)-TiO2 (x nm)-Ag (100 nm) and vary the thickness of the TiO2 layer. A total coverage of 42.9% is achieved at 0° viewing angle (Figure 5c). Adding a silica capping layer improves the covered color gamut of the bare FROC, which is only 28% at 0° viewing angle. At 55° viewing angle, the covered percentage of the CIE color space is 60% (Supplementary Information, Fig. S6). We also calculate the coverage of the sRGB color space (the standard color space for the web) and the Adobe RGB color space (the color space that encompasses most of the colors achievable on CMYK color printers). The theoretical silica capped FROCs exhibit high coverage of both color spaces, with 89.3% of the sRGB color space and 85.5% of the Adobe color space at normal incidence. The coverage percentage increases to > 99% coverage of both color spaces at 55° viewing angle. In the tristimulus integrals, I denotes the illuminant spectrum, R the spectral reflectance, k a constant factor, and x̄, ȳ, z̄ the standard observer functions; the integration is over the visible spectrum. The CIE 1931 xyz values are then given as x = X/S, y = Y/S, and z = 1 - x - y, for S = X + Y + Z. Note that at constant luminance, the chromaticity is defined by x and y. Authors Contribution: M.E. developed the approach and designed the project. C.G., G. S., and M. H. supervised the project. N. H. and A. L. performed color analysis. A. L. and S. A. J. fabricated the samples. A. L., M. E., T. L. performed reflection measurements. M. E. wrote the manuscript with inputs from N. H. and A.
L. All authors discussed the results. Ethics declarations: Figure S2| Colors from FROCs: (a) The calculated colors corresponding to FROC cavities obtained by varying the dielectric (SiO2) thickness from 80 nm to 210 nm. The narrow reflection lines shown in (b) lead to high purity colors. Figure S3| CIE 1931 color space showing the colors corresponding to the calculated reflection spectrum of FP nanocavities (black circles) and FROCs (blue circles) with varying cavity thicknesses. Figure S4| The color purity of silica capped FROCs consisting of SiO2 (50 nm)-Ge (15 nm)-Ag (24 nm)-TiO2 (x nm)-Ag (100 nm). The TiO2 thickness is varied. Purity levels > 97% can be achieved over a wide angular range. Figure S5| Maximum purity for color-optimized FROCs over a range of visible wavelengths: a FROCs (Ge (15 nm)-Ag (x nm)-TiO2 (x nm)-Ag (100 nm)), b silica capped FROCs (with SiO2 (50 nm)), and c Fabry-Perot cavities (Ag (x nm)-TiO2 (x nm)-Ag (100 nm)), for 0° and 55° viewing angles. Figure S6| CIE 1931 chromaticity space percent coverage for the simulated spectra of a FROCs (Ge (15 nm)-Ag (24 nm)-TiO2 (x nm)-Ag (100 nm)) and b silica capped FROCs (with SiO2 (50 nm)) for a range of viewing angles. The thickness of the TiO2 layer is varied. The maximum area coverage occurs at 55° viewing angle. The silica capped FROC has a higher overall CIE area coverage. Figure S7| CIE 1931 chromaticity coordinates for the simulated spectra of silica capped FROCs with SiO2 (50 nm)-Ge (15 nm)-Ag (24 nm)-TiO2 (x nm)-Ag (100 nm) at 0° and 55° viewing angles. The thickness of the TiO2 layer is varied. The sRGB and Adobe RGB color spaces are shown for comparison. FROCs exhibit high coverage of both color spaces through a wide range of viewing angles.
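The gamut-coverage percentages quoted in these figures follow the Methods' recipe: the area of the smallest convex hull of the chromaticity points, relative to the full visible gamut. A self-contained sketch; the points are the sRGB primaries plus the white point, used purely for illustration, and the full-gamut area of about 0.335 is an approximate constant rather than the paper's numerically integrated value.

```python
def convex_hull(points):
    """Andrew's monotone-chain algorithm: smallest convex hull of a set of
    2-D points, returned in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for chain in (pts, reversed(pts)):       # lower hull, then upper hull
        part = []
        for p in chain:
            while len(part) >= 2 and cross(part[-2], part[-1], p) <= 0:
                part.pop()
            part.append(p)
        hull += part[:-1]
    return hull

def polygon_area(poly):
    """Shoelace formula for the area of a simple polygon."""
    n = len(poly)
    return abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                   - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))) / 2

FULL_GAMUT_AREA = 0.335   # approximate area of the visible gamut in CIE x-y
pts = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06),   # sRGB primaries
       (0.3127, 0.3290)]                           # white point (interior)
coverage = polygon_area(convex_hull(pts)) / FULL_GAMUT_AREA
```

The interior white point is discarded by the hull, so coverage here is simply the sRGB triangle area over the full-gamut area, about a third, which matches the usual statement that sRGB covers roughly 35% of visible chromaticities.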
Competing interests: A patent application has been filed on the Fano resonance optical coating scheme in this work. Data availability: Data are available upon request from the corresponding authors. Supplementary Information: Fano Resonant Optical coatings platform for Full Gamut and High Purity Structural Colors. Mohamed ElKabbash 1,2,†,*, Nathaniel Hoffman 3,†, Andrew R Lininger 3,†, Sohail A. Jalil 1, Theodore Letsou 3, Michael Hinczewski 3,*, Giuseppe Strangi 3,*, and Chunlei Guo 1,*. 1. Kristensen, A. et al. Plasmonic colour generation. Nature Reviews Materials 2, 1-14 (2016). 2. Sreekanth, K. V. et al. Dynamic Color Generation with Electrically Tunable Thin Film Optical Coatings. Nano Letters 21, 10070-10075 (2021). 3. Jalil, S. A. et al. Creating superhydrophobic and antibacterial surfaces on gold by femtosecond laser pulses. Applied Surface Science 506, 144952 (2020). 4. Vorobyev, A. & Guo, C. Multifunctional surfaces produced by femtosecond laser pulses. Journal of Applied Physics 117, 033103 (2015). 5. Vigneron, J.-P. & Simonis, P. in Advances in Insect Physiology Vol. 38, 181-218 (Elsevier, 2010). 6. ElKabbash, M. et al. Iridescence-free and narrowband perfect light absorption in critically coupled metal high-index dielectric cavities. Opt. Lett. 42, 3598-3601, doi:10.1364/OL.42.003598 (2017).
7. Yang, B. et al. Ultrahighly saturated structural colors enhanced by multipolar-modulated metasurfaces. Nano Letters 19, 4221-4228 (2019). 8. Liu, T., VanSaders, B., Glotzer, S. C. & Solomon, M. J. Effect of defective microstructure and film thickness on the reflective structural color of self-assembled colloidal crystals. ACS Applied Materials & Interfaces 12, 9842-9850 (2020). 9. Kumar, K. et al. Printing colour at the optical diffraction limit. Nature Nanotechnology 7, 557-561 (2012). 10. Talukdar, T. H., McCoy, B., Timmins, S. K., Khan, T. & Ryckman, J. D. Hyperchromatic structural color for perceptually enhanced sensing by the naked eye. Proceedings of the National Academy of Sciences 117, 30107-30117 (2020). 11. Rezaei, M., Jiang, H., Qarehbaghi, R., Naghshineh, M. & Kaminska, B. in Advanced Fabrication Technologies for Micro/Nano Optics and Photonics VIII, 93740O (International Society for Optics and Photonics). 12. Dobrowolski, J., Ho, F. & Waldorf, A. Research on thin film anticounterfeiting coatings at the National Research Council of Canada. Appl. Opt.
28, 2702-2717 (1989). 13. Tittl, A. Tunable structural colors on display. Light: Science & Applications 11, 1-2 (2022). 14. Shen, Y. et al. Structural colors from Fano resonances. ACS Photonics 2, 27-32 (2015). 15. Sreekanth, K. et al. (Springer, 2019). 16. ElKabbash, M. et al. Fano-resonant ultrathin film optical coatings. Nature Nanotechnology 16, 440-446 (2021). 17. Zhao, J. et al. Defining Deep-Subwavelength-Resolution, Wide-Color-Gamut, and Large-Viewing-Angle Flexible Subtractive Colors with an Ultrathin Asymmetric Fabry-Perot Lossy Cavity. Advanced Optical Materials 7, 1900646 (2019). 18. Guay, J.-M. et al. Laser-induced plasmonic colours on metals. Nature Communications 8, 16095, doi:10.1038/ncomms16095. 19. McPeak, K. M. et al. Plasmonic films can easily be better: rules and recipes. ACS Photonics 2, 326-333 (2015). 20. Geng, J. et al. Wear-resistant surface coloring by ultrathin optical coatings. PhotoniX 3, 1-11 (2022). 1. The Institute of Optics, University of Rochester, Rochester, NY 14627, USA. 2.
Current address: Research Laboratory of Electronics, MIT, Cambridge, MA, 02139, USA. 3. Department of Physics, Case Western Reserve University, 10600 Euclid Avenue, Cleveland, Ohio 44106, USA. Figure S1| Colors from Fabry-Perot MDM cavities: (a) The calculated colors corresponding to MDM cavities by varying the dielectric (SiO2) thickness from 80 nm to 210 nm. The narrow absorption lines shown in (b) lead to low purity colors.
[]
[ "Approximate Optimality and the Risk/Reward Tradeoff in a Class of Bandit Problems *", "Approximate Optimality and the Risk/Reward Tradeoff in a Class of Bandit Problems *" ]
[ "Zengjing Chen ", "Larry G Epstein ", "Guodong Zhang " ]
[]
[]
This paper studies a multi-armed bandit problem where payoff distributions are known but where the riskiness of payoffs matters. The decision-maker is assumed to pursue strategies that are approximately optimal for large horizons. By exploiting the tractability afforded by asymptotics, analytical results regarding the risk/reward tradeoff are derived. The key technical tool is a new central limit theorem.
null
[ "https://export.arxiv.org/pdf/2210.08077v1.pdf" ]
252,918,710
2210.08077
03df9f982b98e61adb4c54778dcb9e283001acb1
Approximate Optimality and the Risk/Reward Tradeoff in a Class of Bandit Problems * 14 Oct 2022; October 18, 2022. Zengjing Chen, Larry G Epstein, Guodong Zhang. Keywords: multi-armed bandit; risk/reward tradeoff; large-horizon approximations; central limit theorem; semivariance; asymptotics. This paper studies a multi-armed bandit problem where payoff distributions are known but where the riskiness of payoffs matters. The decision-maker is assumed to pursue strategies that are approximately optimal for large horizons. By exploiting the tractability afforded by asymptotics, analytical results regarding the risk/reward tradeoff are derived. The key technical tool is a new central limit theorem. Introduction We study the following sequential choice problem, a version of the bandit problem. There are K arms (or actions), each yielding a random payoff. Payoffs are independent across arms and, for a given arm, across distinct trials. At each stage i = 1, 2, ..., n, the decision-maker (DM) chooses one arm, knowing both the realized payoffs from previously chosen arms and the distribution of the payoff for each arm. She chooses a strategy ex ante to maximize expected utility. Because we are interested in varying horizons, we define a strategy for an infinite horizon and then use its truncation for any given finite horizon. We refer to a strategy as asymptotically optimal if the expected utility it implies in the limit as horizon n → ∞ is at least as large as that implied by any other strategy; or equivalently, if it is approximately optimal for large horizons. We study large-horizon approximations to the value (indirect utility) of the bandit problem and corresponding asymptotically optimal strategies.
The bandit framework has spawned many applications, many of which are covered, for example, in Berry and Fristedt (1985), in the more recent textbook-like treatment of the literature by Slivkins (2022), and, for economic applications, in Bergemann and Välimäki (2008). Consider three concrete settings that fit our model well. Gambling: a gambler chooses sequentially which of several given slot machines to play. News site: each visitor to a site decides whether to click depending on the news header presented to her. The website (DM) chooses the header (arm), with clicks being the payoffs. Users are drawn independently from a fixed distribution. Ad selection: a website (DM) displays an ad (arm) for each visitor, who is an i.i.d. draw as above. If she clicks, the payoff to the website is a predetermined price, depending on the ad and paid by the advertiser. Importantly for the fit with our model, in all three settings payoffs are realized quickly after an arm is chosen, and plausibly a large number of trials occur in a relatively short period of time. We have two related reasons for studying asymptotics. First, from the modeler's perspective, it promotes tractability and the derivation of analytical results. Bandit problems are notoriously difficult to solve analytically, as opposed to numerically, given the nonindifference to risk that is our focus here. Most of the literature assumes (a finite horizon and) that choices are driven by expected total rewards. Studies that explicitly address risk attitudes include Sani, Lazaric and Munos (2013), Zimin, Ibsen-Jensen and Chatterjee (2014), Vakili and Zhao (2016), and Cassel, Manor and Zeevi (2021). They assume regret minimization rather than expected utility maximization, and focus on computational algorithms rather than on qualitative theoretical results.
Further, they are motivated by the nature of learning about unknown payoff distributions, and thus the exploration/exploitation tradeoff, while we assume known distributions and focus instead on the risk/reward tradeoff. Theorem 3 gives analytical results on the latter tradeoff by exploiting the advantages of large-horizon approximations. A second reason for studying asymptotics is that tractability may be a concern also for the decision-maker within the model, who cannot fully comprehend her extremely complicated large (but finite) horizon optimization problem. Thus, she seeks a strategy that is approximately optimal if her horizon is sufficiently long. (Accordingly, our analysis should be viewed as more descriptive than prescriptive.) The presumption that a large-horizon heuristic can alleviate cognitive limitations is supported by two of our results: (i) asymptotic optimality depends on payoff distributions and the values they induce only through their means and variances (Theorem 1), that is, DM need not know more about the distributions; and (ii) by the relative simplicity of the explicit asymptotically optimal strategies in some cases (Theorem 3). The focus on asymptotics leads to other noteworthy features of our analysis. First, unsurprisingly, it leads to our exploiting limit theorems, most notably a central limit theorem (CLT). The classical CLT considers a sequence (X_i) of identically and independently distributed random variables, hence having a fixed mean and variance, assumptions which are adequate for evaluating the repeated play of a single arm. However, in the bandit problem we are interested in evaluating strategies which, in general, permit switching arms, and hence also payoff distributions, at any stage. Accordingly, in our CLT the means and variances of (X_i) can vary with i, subject only to the restriction that they lie in a fixed set. The CLT (Proposition 6) is the key technical result underlying our results about bandits.
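The feature that distinguishes this CLT setting from the classical one can be illustrated numerically. In the sketch below (ours, with illustrative numbers), a history-dependent switching rule changes the arm, and hence the step mean and variance, at every stage; the variance of the normalized deviation sum nevertheless stays between the smallest and largest arm variances, because each step's conditional variance lies in a fixed set.

```python
import random

# Our illustration (not from the paper): under any history-dependent switching
# rule, the variance of (1/sqrt(n)) * sum(X_i - E[X_i | past]) is an average of
# conditional variances, so it lies between the smallest and largest arm variances.
random.seed(5)

arms = [(0.0, 1.0), (0.5, 4.0)]  # hypothetical (mean, variance) pairs

def normalized_dev(n):
    s, prev = 0.0, 0.0
    for _ in range(n):
        mu, var = arms[0] if prev >= 0 else arms[1]  # switch on last deviation's sign
        x = random.gauss(mu, var ** 0.5)
        prev = x - mu
        s += prev
    return s / n ** 0.5

n, paths = 200, 5000
devs = [normalized_dev(n) for _ in range(paths)]
mean_dev = sum(devs) / paths                     # ~ 0 (martingale differences)
var_dev = sum(d * d for d in devs) / paths       # between 1.0 and 4.0
```

Here the empirical variance lands strictly inside [1, 4], even though no single i.i.d. normal approximation applies.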
The central role played by limit theorems is reflected also in our specification of the von Neumann-Morgenstern (vNM) utility index u. Two attributes of random payoff streams are assumed to be important. Accordingly, u : R² → R has two arguments, namely the sample average and the √n-weighted average of deviations from conditional means, exactly the statistics whose limiting distributions are the focus of the LLN (law of large numbers) and CLT respectively. The function u itself is restricted only by technical conditions. Nevertheless, the resulting model is both tractable and flexible enough to accommodate interesting special cases (for example, a form of mean-variance, and another specification where variance is replaced by semivariance). The bandit model and main results follow in the next section. Most proofs are provided in Section 3. Proofs of remaining details are collected in the Supplementary Appendix.

The bandit model

Preliminaries

Let (Ω, F, P) be the probability space on which all subsequent random variables are defined. The random variables X_k, 1 ≤ k ≤ K, represent the random rewards from the K arms, and {X_{k,n} : n ≥ 1} denote their independent and identically distributed copies. We assume that each X_k has a finite mean and variance, denoted by

µ_k := E_P[X_k], σ²_k := Var_P[X_k], 1 ≤ k ≤ K.  (1)

The largest and smallest means and variances are given by

µ̄ = max{µ_1, ..., µ_K}, µ̲ = min{µ_1, ..., µ_K},  (2)
σ̄² = max{σ²_1, ..., σ²_K}, σ̲² = min{σ²_1, ..., σ²_K}.

The set of mean-variance pairs is

A = {(µ_k, σ²_k) : 1 ≤ k ≤ K}.  (3)

The convex hull of A, denoted co(A), is a convex polygon. Denote by A_ext its set of extreme points. A strategy θ is a sequence of {1, ..., K}-valued random variables, θ = (θ_1, ..., θ_n, ...). θ selects arm k at round n in states for which θ_n = k.
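As a small concrete illustration of the sets A and A_ext just defined (the numbers and function names below are our own, not from the paper), the extreme points of co(A) can be computed with a standard monotone-chain convex hull. An arm lying on the segment joining two others is not an extreme point:

```python
# Sketch (our illustration): extreme points of co(A) for mean-variance pairs,
# via Andrew's monotone chain convex hull. Collinear points are discarded,
# so an arm on the segment between two others is not extreme.
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Three hypothetical arms: the middle one lies on the segment joining the
# other two and is therefore redundant (Theorem 1(iii)).
A = [(0.0, 1.0), (0.5, 2.0), (1.0, 3.0)]
A_ext = convex_hull(A)
```

In this example A_ext contains only the two endpoint arms.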
Thus the corresponding reward is Z^θ_n, given by

Z^θ_n = X_{k,n} where θ_n = k.  (4)

The strategy θ is admissible if θ_n is H^θ_{n−1}-measurable for all n ≥ 1, where H^θ_{n−1} = σ{Z^θ_1, ..., Z^θ_{n−1}} for n > 1, and H^θ_0 = {∅, Ω}. The dependence of H^θ_{n−1} on the strategy captures the fact that the relevant history at any stage consists not only of past payoffs but also of which arms were chosen. As an example, the strategy of alternating between arms 1 and 2, as in Theorem 3(iv), is thus rendered admissible. The set of all admissible strategies is Θ. (All strategies considered below will be admissible, even where not specified explicitly.)

Utility

For each horizon n, we specify the expected utility function U_n used to evaluate strategies θ and the payoff streams that they generate. Let u : R² → R be the corresponding von Neumann-Morgenstern (vNM) utility index and define U_n by

U_n(θ) = E_P[ u( (1/n) Σ_{i=1}^n Z^θ_i , (1/√n) Σ_{i=1}^n ( Z^θ_i − E_P[Z^θ_i | H^θ_{i−1}] ) ) ].  (5)

The two arguments of u correspond to the two attributes or characteristics of a random payoff stream that are taken into account. The first argument of u is the sample average outcome under strategy θ, and the second, the √n-weighted average of deviations from conditional means, represents sample volatility. The presence of conditional rather than unconditional means reflects the sequential nature of the setting. As for the √n-weighting: as is familiar from discussions of the classical LLN and CLT, scaling by 1/n would imply "too little" weight for finite samples, particularly when considering volatility. Observe that the second argument has zero expected value relative to the measure P. Though one might have expected the volatility term to be replaced by its square or by its absolute value, the important point is that its evaluation be nonlinear, and here nonlinearity enters via u. Remark. The specification (5) is ad hoc in the sense of (currently) lacking axiomatic foundations.
We propose it because it seems plausible and it delivers novel results. In addition, we are not aware of any other model of preference over random payoff streams of arbitrary finite length that has axiomatic foundations and that has something interesting to say in our context. The special case of (5) where u is constant in its second argument can be axiomatized, but it imposes a priori that only means matter when choosing between arms and hence is too special (Theorem 3(iv)). Take the further special case where u is linear but where payoffs are denominated in utils. This is the expected additive utility model (discounting can be added) that is the workhorse model in economics. However, it does not work well in our setting, for example, in the applied contexts in the introduction. We take the underlying payoffs or rewards at each stage to be objective quantities, such as the number of clicks or of dollars. In all these cases, the relevant payoff when choosing a strategy is the sum of single-stage payoffs, e.g. the total number of clicks; in more formal terms, stage payoffs are perfect substitutes. However, discounted expected utility with a nonlinear stage utility index models them as imperfect substitutes. Utility has a particularly transparent form when θ = θ_{µ,σ} specifies choosing an arm described by the pair (µ, σ²) repeatedly, regardless of previous outcomes. In this case payoffs are i.i.d. with mean µ and variance σ². Thus the conditional expectation appearing in (5) equals µ, and the classical LLN and CLT imply that in the large-horizon limit risk is described by the normal distribution N(0, σ²) and

lim_{n→∞} U_n(θ_{µ,σ}) = ∫ u(µ, ·) dN(0, σ²).  (6)

Consequently, if u(µ, ·) is concave, then (asymptotic) risk aversion is indicated in the sense that

lim_{n→∞} U_n(θ_{µ,σ}) ≤ u(µ, 0).

Here are examples of utility indices u and the implied utility functions U_n that will be referred to again in the sequel. Example (utility indices). (u.1) u(x, y) = ϕ(x) + αy.
Then

U_n(θ) = E_P[ ϕ( (1/n) Σ_{i=1}^n Z^θ_i ) ].

(u.2) u(x, y) = ϕ((1 − α)x + αy), where 0 < α ≤ 1. Then

U_n(θ) = E_P[ ϕ( (1 − α)(1/n) Σ_{i=1}^n Z^θ_i + α(1/√n) Σ_{i=1}^n ( Z^θ_i − E_P[Z^θ_i | H^θ_{i−1}] ) ) ].

(u.3) (Mean-variance) u(x, y) = x − αy², where α > 0. Then

U_n(θ) = (1/n) E_P[ Σ_{i=1}^n Z^θ_i ] − α(1/n) Var_P[ Σ_{i=1}^n ( Z^θ_i − E_P[Z^θ_i | H^θ_{i−1}] ) ]  (7)
       = (1/n) Σ_{i=1}^n ( E_P[Z^θ_i] − α Var_P[ Z^θ_i − E_P[Z^θ_i | H^θ_{i−1}] ] ),

which is a form of the classic mean-variance specification for our setting. For any arm (µ, σ²) that is played repeatedly, U_n(θ_{µ,σ}) = µ − ασ² for every n.

(u.4) (Mean-semivariance)

u(x, y) = x − αy² I_{(−∞,0)}(y).  (8)

Only negative cumulative deviations from (conditional) means are penalized. Then, given θ and letting Y = Σ_{i=1}^n ( Z^θ_i − E_P[Z^θ_i | H^θ_{i−1}] ), Var_P[Y] in (7) is replaced by the semivariance E_P[Y² I_{Y<0}]. If θ = θ_{µ,σ}, then

U_n(θ_{µ,σ}) → µ − α ∫_{−∞}^0 y² dN(0, σ²) = µ − ασ²/2 as n → ∞.

(u.5) u(x, y) = x − α I_{(−∞,0)}(y). Then only the existence of a shortfall, and not its size, matters. For instance,

U_n(θ_{µ,σ}) = µ − α P( (1/√n) Σ_{i=1}^n ( Z^{θ_{µ,σ}}_i − E_P[Z^{θ_{µ,σ}}_i | H^{θ_{µ,σ}}_{i−1}] ) < 0 )  (9)
            → µ − α N(0, σ²)(−∞, 0) = µ − α/2 as n → ∞.

Optimization and the value of a set of arms

Given a horizon of length n, DM solves the following optimization problem:

V_n ≡ sup_{θ∈Θ} E_P U_n(θ).  (10)

The finite-horizon problem is generally not tractable, even when u has the special form (u.1). For reasons of tractability, Bayesian models in the literature typically take ϕ in (u.1) to be linear, reducing the problem to maximization of expected total rewards, but at the cost of assuming risk neutrality. Instead, we consider large horizons and approximate optimality (see the next subsection). Then we can accommodate a much more general class of utility indices. The first step in developing asymptotics is to define

V ≡ lim_{n→∞} V_n.  (11)

Our first theorem proves that V is well-defined, that is, values have a limit, and more.
Footnotes. (3) The second equality follows from the fact that, for i ≠ j, Z^θ_i − E_P[Z^θ_i | H^θ_{i−1}] and Z^θ_j − E_P[Z^θ_j | H^θ_{j−1}] have zero covariance under P. (4) It has often been argued, including by Markowitz (1959), that investors are more concerned with downside risk than with variance, and hence that semivariance is a better measure of the relevant risk. (5) Below, ‖(x, y)‖ denotes the Euclidean norm.

Theorem 1. Let u ∈ C(R²) and let the payoffs to the K arms satisfy (1). Suppose further that there exists g ≥ 1 such that u satisfies the growth condition |u(x, y)| ≤ c(1 + ‖(x, y)‖^{g−1}), and that payoffs satisfy sup_{1≤k≤K} E_P[|X_k|^g] < ∞. Let σ̲ ≥ 0, that is, the existence of an arm with zero variance is allowed. Then:

(i) Values have a limit: lim_{n→∞} V_n exists.

(ii) Only means and variances matter: Consider another set of arms, described by the random payoffs X′_k, 1 ≤ k ≤ K′, and denote the corresponding set of mean-variance pairs by A′ and the corresponding values by V′_n and V′. Let the mean-variance pairs (µ′_k, σ′²_k) be defined by the obvious counterpart of (1). Then A′ = A implies V′ = V. Thus we can write V = V(A) = V({(µ_k, σ²_k) : 1 ≤ k ≤ K}).

(iii) Extreme arms are enough:

V(A) = V(A_ext).  (12)

Remark. The assumption that u is continuous rules out example (u.5). However, because such functions can be approximated by continuous functions, the CLT (Proposition 6), and subsequently the above theorem, can be extended to cover them as well. (See our paper (2022, section A.3), for example, where we extend from continuous functions to indicators.) Similarly for results below. Because the details are standard, we will ignore the discontinuity of (u.5). Section 3 provides a proof of (i), based largely on our CLT (Proposition 6), and also gives two alternative expressions for the limit V.
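The fixed-arm values quoted in the examples, and the means-and-variances-only claim of Theorem 1(ii), can be checked by a short Monte Carlo sketch (ours, with illustrative numbers). For (u.3) it uses uniform, not normal, payoffs, so only the arm's mean and variance enter; for (u.4) and (u.5) it verifies the N(0, σ²) integrals σ²/2 and 1/2.

```python
import random

# Our Monte Carlo sketch of the fixed-arm values; all numbers are illustrative.
random.seed(1)

# (u.3): play one arm repeatedly with payoffs uniform on [0, 2], so mu = 1 and
# sigma^2 = 1/3. U_n should be close to mu - alpha*sigma^2, normality not needed.
alpha, mu, sigma2 = 0.5, 1.0, 1.0 / 3.0

def U_n_mean_variance(n, paths=4000):
    total = 0.0
    for _ in range(paths):
        xs = [random.uniform(0.0, 2.0) for _ in range(n)]
        avg = sum(xs) / n
        dev = sum(x - mu for x in xs) / n ** 0.5
        total += avg - alpha * dev * dev
    return total / paths

u_mv = U_n_mean_variance(n=200)
u_mv_exact = mu - alpha * sigma2

# (u.4) and (u.5): under N(0, s^2), the semivariance integral equals s^2/2 and
# the shortfall probability equals 1/2, giving mu - alpha*s^2/2 and mu - alpha/2.
s = 2.0
ys = [random.gauss(0.0, s) for _ in range(200_000)]
semivar = sum(y * y for y in ys if y < 0) / len(ys)   # ~ s^2 / 2
shortfall = sum(1 for y in ys if y < 0) / len(ys)     # ~ 1/2
```

The uniform-payoff run matching the normal-based formula is exactly the content of "only means and variances matter".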
(ii) describes a simplification for the decision-maker afforded by adoption of the infinite-horizon heuristic: she need only know and take into account the means and variances of each arm. In addition, it permits identifying an arm with its mean-variance pair; thus we will often refer to a pair (µ, σ²) as an arm. (iii) describes a further possible simplification for DM: she need only consider "extreme arms", that is, the extreme points of co(A), the polygon generated by A. All other arms are redundant. For example, given two arms (µ_1, σ²_1) and (µ_2, σ²_2), any arm lying on the straight line between them has no value (asymptotically), even if it moderates large differences in the mean-variance characteristics of the two given arms. For another implication of (iii), note that A is contained in the rectangle with vertices (µ̲, σ̲²), (µ̲, σ̄²), (µ̄, σ̲²), (µ̄, σ̄²); one obtains that V(A) ≤ V({(µ̲, σ̲²), (µ̲, σ̄²), (µ̄, σ̲²), (µ̄, σ̄²)}). Moreover, note that both (ii) and (iii) are true under weak (nonparametric) assumptions on u, for example, without any assumptions about monotonicity or risk attitudes. Therefore, they accommodate situations that feature targets, aspiration levels, loss aversion, and other deviations from the common assumption of global monotonicity and risk aversion. The sufficiency of means and variances might be expected from the classic CLT, and arises here for similar reasons. We turn to intuition for (iii). Consider the evaluation of arm k in the context of making the contingent decision for stage i. If the horizon n is large, then the payoff to arm k contributes little to the averages determining overall utility. Accordingly, a second-order Taylor series expansion provides a good approximation to the incremental benefit from arm k, which expansion, to order O(n^{−1}), is linear in (µ_k, σ²_k).
Therefore, the value when maximizing over the K arms (asymptotically) equals that when maximizing over the convex hull co(A), or over its set of extreme points A_ext, as asserted in (12). In more economic terms, extreme arms are sufficient because switching suitably between them across stages can, in the infinite-horizon limit, replicate or improve upon the payoff distribution achievable by any one of the K arms.

Strategies and the risk/reward tradeoff

Turn to strategies. Given the K arms corresponding to A, the strategy θ* is asymptotically optimal if

lim_{n→∞} E_P U_n(θ*) = V(A).

It follows that θ* is approximately optimal for large horizons in that: for every ε > 0, there exists n* such that |U_n(θ*) − V_n| < ε if n > n*. Say that (µ, σ²) is feasible if it lies in A. Theorem 1(iii) states that DM can limit herself to strategies that choose between extreme arms. More can be said under added assumptions on the utility index and on what is feasible, as illustrated by the next result.

Theorem 2. Adopt the assumptions in Theorem 1, and assume that σ̲ > 0. If u(x, y) is increasing in x and concave in y, and if (µ̄, σ̲²) is feasible, then the strategy of always choosing an arm exhibiting (µ̄, σ̲²) is asymptotically optimal, and the corresponding limiting value, defined in (11), is given by

V = E_P[u(µ̄, σ̲ B_1)] = ∫ u(µ̄, ·) dN(0, σ̲²).

Here (B_t) denotes a standard Brownian motion on the probability space (Ω, F, P). Intuition argues for the choice of (µ̄, σ̲²) at stage n if there are no later trials remaining, but may seem myopic more generally. Notably, the strategy of always choosing the high-mean/low-variance pair is not in general optimal given a finite horizon (even apart from the fact that arms may not be adequately characterized by mean and variance alone). That it is asymptotically optimal demonstrates a simplifying feature of the long-horizon heuristic.
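The replication intuition for extreme arms can be made concrete (our sketch; the three arms are the hypothetical ones used earlier): alternating between two extreme arms with equal frequency reproduces the long-run mean and average conditional variance of an arm lying at the midpoint of the segment joining them.

```python
import random

# Our sketch of the replication intuition behind (12): alternating between the
# extreme arms (0,1) and (1,3) reproduces the long-run mean and average
# conditional variance of a hypothetical middle arm (0.5, 2.0), which is
# therefore redundant. All numbers are illustrative.
random.seed(3)

arm1, arm2 = (0.0, 1.0), (1.0, 3.0)   # (mean, variance)
n = 200_000

payoff_sum = mean_sum = var_sum = 0.0
for i in range(n):
    mu, var = arm1 if i % 2 == 0 else arm2   # alternate, frequency 1/2 each
    payoff_sum += random.gauss(mu, var ** 0.5)
    mean_sum += mu
    var_sum += var

avg_payoff = payoff_sum / n   # ~ 0.5, the middle arm's mean
avg_mean = mean_sum / n       # exactly 0.5
avg_var = var_sum / n         # exactly 2.0, the middle arm's variance
```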
An additional comment is that one can similarly consider three other possible combinations of monotonicity and curvature assumptions for u, where each property is assumed to hold globally. For example, if u(x, y) is decreasing in x and concave (convex) in y, then it is asymptotically optimal to always choose an arm exhibiting (µ̲, σ̲²) (respectively (µ̲, σ̄²)) if it is feasible. However, the theorem does not provide any insight into the risk/reward tradeoff that is at the core of decision-making under uncertainty. Under common assumptions about monotonicity and risk aversion, the tradeoff concerns the increase in mean reward needed to compensate the individual for facing an increase in risk (for example, a larger variance). But Theorem 2 assumes that there exists an arm having both the largest mean and the smallest variance, thus ruling out the need for DM to make such a tradeoff. Next we investigate asymptotic optimality when the risk/reward tradeoff is integral. For greater clarity, we do so in a canonical setting where there are 2 arms (K = 2), where only (µ_1, σ²_1) and (µ_2, σ²_2) are feasible, and where

µ_1 > µ_2, σ_1 > σ_2 > 0.  (13)

Parts (i) and (ii) describe conditions under which it is asymptotically optimal to specialize in one arm, that is, to choose that arm always (at every stage and history). The remaining parts give conditions under which specializing in one arm is not asymptotically optimal (that is, not even approximately optimal for large horizons). Some results are limited to utility specifications in the Example.

Theorem 3. Adopt the assumptions in Theorem 1 and consider the 2-arm case above. Then, for each of the following specifications of u, the indicated strategy is asymptotically optimal and V denotes the corresponding limiting value defined in (11).

(i) Let u : R² → R be twice continuously differentiable.
Suppose that

∂_x u(x, y)(µ_1 − µ_2) + (1/2) ∂²_yy u(x, y)(σ²_1 − σ²_2) ≥ 0 for all (x, y) ∈ R².  (14)

Then specializing in arm 1 always is asymptotically optimal and, by (6),

V = ∫ u(µ_1, ·) dN(0, σ²_1).

If ∂_x u is everywhere positive, then (14) is equivalent to

−(1/2) ∂²_yy u(x, y) / ∂_x u(x, y) ≤ (µ_1 − µ_2)/(σ²_1 − σ²_2) for all (x, y) ∈ R².  (15)

When the inequality in (14) is reversed, it is asymptotically optimal to specialize in arm 2. (By Theorem 1, results would be unaffected if there were other arms lying on the straight line joining (µ_1, σ²_1) and (µ_2, σ²_2). Extensions to K > 2 arms are outlined briefly in the remark near the end of this section.)

(ii) Adopt the conditions on u in (i), and assume that ∂_x u(x, y) > 0 for all (x, y) ∈ R². Suppose further that

−(1/2) ∂²_yy u / ∂_x u = α > 0 for all (x, y) ∈ R².  (16)

Then specializing in arm 1 (arm 2) is asymptotically optimal if

α ≤ (≥) (µ_1 − µ_2)/(σ²_1 − σ²_2).  (17)

Both strategies are asymptotically optimal when there is equality in (17).

(iii) Let u(x, y) = x − αy² I_{(−∞,0)}(y), α > 0. Observe that

(µ_1 − µ_2)/(σ²_1 − σ²_2) < α̲ < ᾱ,

where the critical values α̲ and ᾱ are given by

α̲ ≡ 2(µ_1 − µ_2) / ((σ_1 + 2σ_2)(σ_1 − σ_2)), ᾱ ≡ 2(µ_1 − µ_2) / (σ_2(σ_1 − σ_2)).

If α ≤ (µ_1 − µ_2)/(σ²_1 − σ²_2), then specializing in arm 1 is asymptotically optimal. If α̲ < α (respectively α < ᾱ), then specializing in arm 1 (arm 2) is NOT asymptotically optimal.

(iv) Let u(x, y) = x − α I_{(−∞,0)}(y), α > 0. Specializing in arm 2 is not asymptotically optimal for any α, and, if

α′ ≡ 2(µ_1 − µ_2)σ_1 / (σ_1 − σ_2) < α,

then neither is specializing in arm 1.

(v) Let u(x, y) = ϕ(x) + αy, ϕ ∈ C(R) and α ∈ R. Fix x* ∈ arg max_{µ_2 ≤ x ≤ µ_1} ϕ(x), and let λ ∈ [0, 1] be such that x* = λµ_1 + (1 − λ)µ_2. Denote by ψ_i the number of times that arm 1 is chosen in the first i stages.
Let the strategy θ* choose arm 1 at stage 1, and also at stage i + 1 (i ≥ 1) if and only if

ψ_i / i ≤ λ.

Then θ* is asymptotically optimal and

V = max_{µ_2 ≤ x ≤ µ_1} ϕ(x).

Further, specializing in one arm is asymptotically optimal if and only if max{ϕ(µ_1), ϕ(µ_2)} = max_{µ_2 ≤ x ≤ µ_1} ϕ(x).

We discuss each part in turn.

(i) Focus on (15). Intuition derives from the interpretation of −∂²_yy u / ∂_x u as a (local) measure of risk aversion that is a (slight) variant of the Arrow-Pratt measure (Pratt, 1964). The relatively small degree of risk aversion indicated in (15) implies that the larger mean for arm 1 more than compensates for its larger variance. Moreover, this is true contingently at each stage, regardless of history, because the inequality in (15) is satisfied globally. Though the Arrow-Pratt argument is well known and applies also here (with the minor extension to risks with two attributes), it might be worthwhile to couch it in our context. To do so, fix (x, y), and let DM use the utility index u(x + ·, y + ·). Consider the arm (ε²µ, ε²σ²), where ε > 0 has the effect, when small, of scaling down both the mean and variance of payoffs by ε². By (6), the limiting expected utility of using this arm repeatedly equals

v(ε, x, y) = E_P[u(x + ε²µ, y + εσB_1)].

Set

µ = −(1/2) (∂²_yy u(x, y) / ∂_x u(x, y)) σ².  (18)

Then v(ε, x, y) = u(x, y) up to second order in a Taylor series expansion about ε = 0 (hence up to first order in ε², or in the corresponding variance). In that sense, −∂²_yy u(x, y)/∂_x u(x, y) gives twice the mean-variance ratio needed to render a small risk about (x, y) asymptotically neutral.

(ii) This is an immediate consequence of (i) that we include in the statement because the consequence of the indicated constancy warrants emphasis.
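The constancy condition (16) is easy to check numerically for a given index. The sketch below (ours; the function names and test points are illustrative) evaluates −(1/2)∂²_yy u / ∂_x u by finite differences and confirms that, for the mean-variance index u(x, y) = x − a y², it equals a at every point.

```python
# Our finite-difference sketch of the risk-aversion measure in (16):
# -(1/2) * (d^2 u / dy^2) / (du / dx). For u(x, y) = x - a*y^2 it is the
# constant a everywhere, which is the case covered by Theorem 3(ii).
def risk_measure(u, x, y, h=1e-4):
    du_dx = (u(x + h, y) - u(x - h, y)) / (2 * h)
    d2u_dy2 = (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / (h * h)
    return -0.5 * d2u_dy2 / du_dx

a = 0.7
u_mv = lambda x, y: x - a * y * y
vals = [risk_measure(u_mv, x, y)
        for x in (-1.0, 0.0, 2.0) for y in (-1.5, 0.0, 1.0)]
```

An index with a non-constant measure (for example, semivariance) would fail this check at points with y < 0 versus y > 0, which is exactly where part (iii) departs from part (ii).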
Two examples of functions u covered by (ii) are the mean-variance model (u.3) and Example (u.2) when ϕ is an exponential. At first glance, the implication regarding the unimportance of diversification might seem surprising, especially given its central role in portfolio theory. Of course, diversification in portfolio theory refers to the simultaneous holding of several assets, which, interpreting each arm as an asset, is excluded here. But diversification over time is permitted, and that is its meaning here. The result that specialization in one arm over time is asymptotically optimal given (16) can be understood as follows. Considering the factors that might lead to different arms being chosen at two different stages, note first that the payoff distribution for each arm is unchanged by assumption. Second, though a finite horizon induces a nonstationarity that can affect choices, our decision-maker is, roughly speaking, acting as if solving an infinite-horizon problem. That leaves only the variation of risk attitude with past outcomes, which is excluded if −∂²_yy u / ∂_x u is constant. (iii) The mean-semivariance model agrees partially with the mean-variance model in that for both models (17) implies the asymptotic optimality of choosing the (high mean, high variance) arm 1 throughout. However, their agreement ends there. In particular, for α̲ < α < ᾱ, specializing in one arm is not asymptotically optimal. Here is some intuition. Since only negative deviations are penalized, it is as though DM faces, or perceives, less risk than what is measured by σ². Alternatively, in our preferred interpretation, for any given risk measured by variance, DM is less averse to that risk in the present model, as if her effective α is smaller than its nominal magnitude. Moreover, risk aversion varies across stages.
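The critical values in (iii) and their ordering relative to the mean-variance threshold in (17) are simple to compute. The sketch below (ours, with illustrative arm parameters) checks that the mean-variance threshold lies strictly below the band (α̲, ᾱ) in which specializing in either arm fails.

```python
# Our numerical sketch of the Theorem 3(iii) critical values for the
# mean-semivariance index; mu1 > mu2 and s1 > s2 > 0 as in (13).
def thresholds(mu1, mu2, s1, s2):
    assert mu1 > mu2 and s1 > s2 > 0
    mv = (mu1 - mu2) / (s1 ** 2 - s2 ** 2)                 # threshold in (17)
    a_lo = 2 * (mu1 - mu2) / ((s1 + 2 * s2) * (s1 - s2))   # alpha-underbar
    a_hi = 2 * (mu1 - mu2) / (s2 * (s1 - s2))              # alpha-bar
    return mv, a_lo, a_hi

# Illustrative arms: mu1=1, mu2=0.5, s1=2, s2=1.
mv, a_lo, a_hi = thresholds(mu1=1.0, mu2=0.5, s1=2.0, s2=1.0)
```

For these numbers, mv = 1/6, a_lo = 1/4, a_hi = 1, so the claimed strict ordering mv < α̲ < ᾱ holds.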
For example, contingent on cumulative past deviations being positive (negative) at stage m, it is relatively unlikely (likely) that future choices will lead later to negative cumulative deviations, and thus variance is less (more) of a concern. Such endogenous changes in risk aversion can lead to specialization in a single arm being dominated. Thus, for example, such specialization is not even approximately optimal in large horizons if α̲ < α < ᾱ. In finance, it has been argued (Nantell and Price, 1979; Klebaner et al., 2017) that the change from variance to semivariance has limited consequences for received asset market theory. In contrast, a similar change in the bandit problem context leads to qualitative differences regarding the importance of diversification. (iv) For this utility specification, it is never asymptotically optimal to specialize in the low mean, low variance arm. Indeed, by (9), specializing in the high mean, high variance arm is superior for large horizons, and this is true given only the ordinal assumption (13) about their means and variances. However, the latter strategy is also not asymptotically optimal for large enough α, and the set of parameter values (α′, ∞) on which asymptotic optimality of arm 1 fails depends on the numerical values of means and variances. For example, the set grows as σ_1 increases (keeping µ_1, µ_2 and σ_2 fixed): a larger variance makes it more likely that repeated choice of arm 1 will produce a cumulative shortfall, which is tolerable only if the associated penalty parameter α is even smaller. (v) Condition (14) suggests that either nonmonotonicity (e.g. a change in the sign of ∂_x u) or variable risk aversion (e.g. a change in the sign or magnitude of ∂²_yy u) might lead to the asymptotic optimality of switching between arms. This case illustrates the former factor, with the interpretation that DM is targeting x*, a maximizer of ϕ, while being indifferent to risk.
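The targeting strategy θ* of part (v) is simple enough to simulate directly (our sketch; arm parameters are illustrative): playing arm 1 exactly when its running frequency is at most λ drives the frequency to λ and the sample average of payoffs to the target x* = λµ_1 + (1 − λ)µ_2.

```python
import random

# Our simulation sketch of theta* from Theorem 3(v): choose arm 1 at stage 1,
# and at stage i+1 iff psi_i / i <= lam. The long-run frequency of arm-1 pulls
# approaches lam and the sample average approaches x*.
random.seed(4)

mu1, mu2, s1, s2 = 1.0, 0.0, 1.0, 0.5   # illustrative arms satisfying (13)
lam = 0.3
n = 100_000

pulls1, total = 0, 0.0
for i in range(1, n + 1):
    use_arm1 = (i == 1) or (pulls1 / (i - 1) <= lam)
    if use_arm1:
        total += random.gauss(mu1, s1)
        pulls1 += 1
    else:
        total += random.gauss(mu2, s2)

x_star = lam * mu1 + (1 - lam) * mu2
avg = total / n
freq1 = pulls1 / n
```

Note that the variances s1, s2 play no role in the limit here, in line with the linearity of u(x, ·) in its second argument.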
Because of the linearity of u(x, ·), variances do not matter. For example, when ϕ is increasing, arm 1 is chosen always because of its larger mean, regardless of how risky it is. Nonlinearity of ϕ does not matter asymptotically, as in the classic LLN.

Remark. It is straightforward to extend the theorem to an arbitrary set of K arms. For example, in (i), with ∂_x u everywhere positive, specializing in arm j is asymptotically optimal if j ∈ arg max_{k=1,...,K} { µ_k − (−(1/2) ∂²_yy u(x, y)/∂_x u) σ²_k } for all (x, y), which simplifies in the obvious way under the constancy condition (16). In conclusion, we emphasize that payoff distributions are unrestricted in our model: they are not assumed to be adequately summarized by means and variances. That is a result (Theorem 1). Accordingly, it is only because of our asymptotic analysis that the conditions in the above theorem giving information about the risk/reward tradeoff take on such a simple form.

Proofs

We remind the reader of the following notation used in this section: µ̄, µ̲ and σ̄², σ̲² are the bounds on means and variances given in (2), A denotes the set of mean-variance pairs of all K arms, and A_ext ⊂ A denotes the set of extreme points of co(A). Pairs consisting of mean and standard deviation (rather than variance) will also be important, and thus it is convenient to define

[A] = {(µ, σ) : (µ, σ²) ∈ A}, and [A]_ext = {(µ, σ) : (µ, σ²) ∈ A_ext}.

Let B = {B_t = (B^(1)_t, B^(2)_t) : t ≥ 0} be a two-dimensional standard Brownian motion. The following lemma gives properties of {Z^θ_n} that will be used repeatedly.

Lemma 4. The rewards {Z^θ_n : n ≥ 1} defined in (4) satisfy the following:

(1) For any n ≥ 1,

µ̄ = ess sup_{θ∈Θ} E_P[Z^θ_n | H^θ_{n−1}], µ̲ = ess inf_{θ∈Θ} E_P[Z^θ_n | H^θ_{n−1}],
σ̄² = ess sup_{θ∈Θ} E_P[(Z^θ_n − E_P[Z^θ_n | H^θ_{n−1}])² | H^θ_{n−1}],
σ̲² = ess inf_{θ∈Θ} E_P[(Z^θ_n − E_P[Z^θ_n | H^θ_{n−1}])² | H^θ_{n−1}].

(2) For any θ ∈ Θ and n ≥ 1, let U^θ_{n−1} be any θ-dependent (dependent only on (θ_1, ..., θ_{n−1})) and H^θ_{n−1}-measurable random variable.
For any bounded measurable functions f_0, f_1 and f_2 on R, let ψ(x, y) = f_0(x) + f_1(x)y + f_2(x)y², (x, y) ∈ R². Then

sup_{θ∈Θ} E_P[ψ(U^θ_{n−1}, Z^θ_n)] = sup_{θ∈Θ} E_P[max_{1≤k≤K} ψ_k(U^θ_{n−1})],

where, for all x ∈ R and 1 ≤ k ≤ K,

ψ_k(x) = E_P[ψ(x, X_{k,n})] = f_0(x) + µ_k f_1(x) + (µ²_k + σ²_k) f_2(x).  (19)

Proof: (1) The {Z^θ_n} satisfy, for any θ ∈ Θ and n ≥ 1,

E_P[Z^θ_n | H^θ_{n−1}] = Σ_{k=1}^K I_{θ_n=k} E_P[X_{k,n} | H^θ_{n−1}] = Σ_{k=1}^K I_{θ_n=k} E_P[X_{k,n}] = Σ_{k=1}^K I_{θ_n=k} µ_k.

Combine this with the definitions of µ̄ and µ̲ in (2) to derive

ess sup_{θ∈Θ} E_P[Z^θ_n | H^θ_{n−1}] = µ̄, ess inf_{θ∈Θ} E_P[Z^θ_n | H^θ_{n−1}] = µ̲.

The other two equalities can be proven similarly.

(2) For any θ ∈ Θ and n ≥ 1, let U^θ_{n−1} be an H^θ_{n−1}-measurable random variable, which thus depends on (θ_1, ..., θ_{n−1}). By direct calculation we obtain

sup_{θ∈Θ} E_P[ψ(U^θ_{n−1}, Z^θ_n)] = sup_{θ∈Θ} E_P[ Σ_{k=1}^K I_{θ_n=k} E_P[ψ(U^θ_{n−1}, X_{k,n}) | H^θ_{n−1}] ] = sup_{θ∈Θ} E_P[max_{1≤k≤K} ψ_k(U^θ_{n−1})],

where ψ_k is given in (19).

Following Peng (2019), our arguments make use of nonlinear partial differential equations (PDEs) and viscosity solutions. The following is taken from Theorems 2.1.2, C.3.4 and C.4.5 in Peng's book.

Lemma 5. For given T > 0, consider the following PDE:

∂_t v(t, x, y) + G(∂_x v(t, x, y), ∂²_yy v(t, x, y)) = 0, (t, x, y) ∈ [0, T) × R²,
v(T, x, y) = u(x, y),  (20)

where u ∈ C(R²). Suppose that G is continuous on R² and satisfies the following conditions, for all (p, q), (p′, q′) ∈ R²:

G(p, q) ≤ G(p, q′) whenever q ≤ q′,  (21)
G(p, q) − G(p′, q′) ≤ G(p − p′, q − q′),  (22)
G(λp, λq) = λG(p, q) for λ ≥ 0.  (23)

Then, for any u ∈ C(R²) satisfying a polynomial growth condition, there exists a unique v ∈ C([0, T] × R²) such that v is a viscosity solution of the PDE (20).
Moreover, if there exists λ > 0 such that, for all p, q, q′ ∈ R,

G(p, q) − G(p, q′) ≥ λ(q − q′),

and if the initial condition u is uniformly bounded, then for each 0 < ε < T there exists β ∈ (0, 1) such that

‖v‖_{C^{1+β/2, 2+β}([0, T−ε] × R²)} < ∞.  (24)

Here ‖·‖_{C^{1+β/2, 2+β}([0, T−ε] × R²)} is a norm on C^{1+β/2, 2+β}([0, T−ε] × R²), the set of (continuous and) suitably differentiable functions on [0, T−ε] × R². (The condition (24) is due to Krylov (1987); see also Peng (2019, Ch. 2.1). Some detail is provided in the Appendix.)

Proof of Theorem 1

We first prove a nonlinear central limit theorem for the bandit problem. The values V_n and V are defined in (10) and (11) respectively.

Proposition 6 (CLT). Let u ∈ C_{b,Lip}(R²), the class of all bounded and Lipschitz continuous functions on R², and adopt all other assumptions and the notation in Theorem 1. Assume that σ̲ > 0. Then

lim_{n→∞} V_n = V = sup_{a ∈ [A](0,1)} E_P[ u( ∫_0^1 a^(1)_s ds, ∫_0^1 a^(2)_s dB^(2)_s ) ]  (25)
                    = sup_{a ∈ [A]_ext(0,1)} E_P[ u( ∫_0^1 a^(1)_s ds, ∫_0^1 a^(2)_s dB^(2)_s ) ].  (26)

Lemma 11 in the Appendix shows that the Proposition is valid for all u ∈ C(R²) satisfying a growth condition. The following immediate corollary is used frequently in the later proofs of Theorems 2 and 3 (the Appendix contains a proof).

Corollary 7. For all u ∈ C(R²) satisfying a polynomial growth condition, the limit in (25) can be described also by the solution of a PDE. Specifically,

V = v(0, 0, 0),  (27)

where v is the solution of PDE (20), with the function G given by

G(p, q) = sup_{(µ,σ²)∈A} { µp + (1/2) σ²q }, (p, q) ∈ R².  (28)

Remark. There is related literature on CLTs. Chen and Epstein (2022) and Chen, Epstein and Zhang (2022) have nonlinear CLTs which, when translated into the bandit context, restrict differences between arms by assuming either that they all have the identical variance (in the former paper) or the identical mean (in the latter paper). These restrictions preclude study of the risk/reward tradeoff.
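The Hamiltonian G in (28) is a pointwise maximum of functions that are linear in (µ, σ²), which is why passing from A to A_ext in (26) loses nothing. A small numeric sketch (ours; the arms are the hypothetical ones used earlier) makes this concrete:

```python
# Our sketch of the Hamiltonian G from (28): G(p, q) = max over arms of
# mu*p + 0.5*sigma^2*q. Because each term is linear in (mu, sigma^2), the
# maximum over A equals the maximum over the extreme points of co(A).
def G(p, q, arms):
    return max(mu * p + 0.5 * var * q for mu, var in arms)

A = [(0.0, 1.0), (0.5, 2.0), (1.0, 3.0)]   # middle arm lies on a segment
A_ext = [(0.0, 1.0), (1.0, 3.0)]

checks = [(p, q) for p in (-1.0, 0.0, 1.0) for q in (-2.0, 0.5, 3.0)]
same = all(abs(G(p, q, A) - G(p, q, A_ext)) < 1e-12 for p, q in checks)
```

For instance, G(1, 0, A) picks out the largest mean (here 1.0) and G(0, 1, A) half the largest variance (here 1.5), never the redundant middle arm.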
In addition, their objective is to obtain simple closed-form expressions for the limit (what we denote by V), and for that purpose they adopt specific functional forms for u, special cases of Example (u.2). In contrast, Proposition 6 and its corollary apply to a much more general class of utility indices. Moreover, as this paper shows, in spite of the complexity of the expression for V, it is the basis for a range of results about the bandit problem even allowing unrestricted heterogeneity across arms. It is to be acknowledged, however, that, to our knowledge, our earlier paper (2022) is the first, and only other paper, to apply a nonlinear CLT to study bandit problems, though subject to the restrictions noted above ((2019), for example). The connection to sequential decision-making is not addressed there; for example, strategies do not appear in their formulation. Another difference is their adoption of a "sublinear expectation space" framework, while we work within a standard and more familiar probability space framework.

Next we proceed with lemmas that will lead to a proof of the CLT. They assume u ∈ C³_b(R²) and relate to the functions {H_t}_{t∈[0,1]} defined by, for all (x, y) ∈ R²,

H_t(x, y) = sup_{a∈[A](t,1+h)} E_P[u(x + ∫_t^{1+h} a^{(1)}_s ds, y + ∫_t^{1+h} a^{(2)}_s dB^{(2)}_s)],   (29)

where h > 0 is fixed and dependence on h is suppressed notationally. In addition, we often write z = (z_1, z_2) = (x, y) and define |z − z′|^β = |z_1 − z_1′|^β + |z_2 − z_2′|^β.

Lemma 8. The functions {H_t}_{t∈[0,1]} satisfy the following properties:

(1) H_t ∈ C²_b(R²) and the first and second derivatives of H_t are uniformly bounded for all t ∈ [0, 1].

(2) There exist constants L > 0 and β ∈ (0, 1), independent of t, such that for any (z_1, z_2), (z_1′, z_2′) ∈ R²,

|∂²_{z_i z_j} H_t(z_1, z_2) − ∂²_{z_i z_j} H_t(z_1′, z_2′)| ≤ L(|z_1 − z_1′|^β + |z_2 − z_2′|^β), i, j = 1, 2.
(3) Dynamic programming principle: For any δ ∈ [0, 1 + h − t],

H_t(x, y) = sup_{a∈[A](t,t+δ)} E_P[H_{t+δ}(x + ∫_t^{t+δ} a^{(1)}_s ds, y + ∫_t^{t+δ} a^{(2)}_s dB^{(2)}_s)], (x, y) ∈ R².

(5) There exists a constant C_0 > 0 such that

sup_{(x,y)∈R²} |H_1(x, y) − u(x, y)| ≤ C_0 h and sup_{(x,y)∈R²} |H_0(x, y) − ψ(x, y)| ≤ C_0 h,

where

ψ(x, y) = sup_{a∈[A](0,1)} E_P[u(x + ∫₀¹ a^{(1)}_s ds, y + ∫₀¹ a^{(2)}_s dB^{(2)}_s)].

Proof: For any t ∈ [0, 1 + h] and (x, y) ∈ R², we define the function v(t, x, y) = H_t(x, y). Then v is the solution of the HJB-equation (20) with function G given in (28) (Yong and Zhou (1999, Theorem 5.2, Ch. 4)). By Lemma 5, there exists β ∈ (0, 1) such that ||v||_{C^{1+β/2, 2+β}([0,1] × R²)} < ∞. (For the reader's convenience, we include the definition of the norm in the Appendix.) This proves both (1) and (2). (3) follows directly from the classical dynamic programming principle (Yong and Zhou (1999, Theorem 3.3, Ch. 4)). Prove (4), where C is a constant that depends only on μ̄, μ̲, σ̄², the uniform bound of ∂²_{xx} H_t and ∂²_{xy} H_t, and the constant L in (2). Prove (5): Use Ito's formula to check that

sup_{(x,y)∈R²} |H_1(x, y) − u(x, y)|
= sup_{(x,y)∈R²} | sup_{a∈[A](1,1+h)} E_P[ ∫_1^{1+h} ∂_x u(x + ∫_1^s a^{(1)}_r dr, y + ∫_1^s a^{(2)}_r dB^{(2)}_r) a^{(1)}_s ds
+ ½ ∫_1^{1+h} ∂²_{yy} u(x + ∫_1^s a^{(1)}_r dr, y + ∫_1^s a^{(2)}_r dB^{(2)}_r) (a^{(2)}_s)² ds ] |
≤ C_0 h,

where the constant C_0 depends only on μ̄, μ̲, σ̄² and the uniform bound of ∂_x u and ∂²_{yy} u. Similarly, we can prove that sup_{(x,y)∈R²} |H_0(x, y) − ψ(x, y)| ≤ C_0 h.

Lemma 9. Take G to be the function defined in (28), let {H_t}_{t∈[0,1]} be the functions defined in (29), and define {L_{m,n}}_{m=1}^n by

L_{m,n}(z) = H_{m/n}(z) + (1/n) G(∂_{z_1} H_{m/n}(z), ∂²_{z_2 z_2} H_{m/n}(z)), z ∈ R².   (30)

For any θ ∈ Θ and n ≥ 1, define S^θ_n = Σ_{i=1}^n Z^θ_i, S̄^θ_n = Σ_{i=1}^n Z̄^θ_i, and Z̄^θ_n = Z^θ_n − E_P[Z^θ_n | H^θ_{n−1}].
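The quantities S^θ_n and S̄^θ_n just defined determine the finite-horizon value V_n = sup_{θ∈Θ} E_P[u(S^θ_n/n, S̄^θ_n/√n)]. As a numerical sanity check (not from the paper), this supremum over adaptive strategies can be computed exactly by backward induction when the arm distributions have small integer supports, since the state (m, S_m, S̄_m) then lives on a finite lattice. The arms, the loss-averse index u, and the horizon below are illustrative assumptions.

```python
from functools import lru_cache
import math

# Exact dynamic programming for V_n = sup_theta E[u(S_n/n, Sbar_n/sqrt(n))].
# Illustrative arms with integer supports (prob 1/2 each outcome):
#   arm 0: X in {0, 2}   -> mean 1, variance 1
#   arm 1: X in {-2, 2}  -> mean 0, variance 4
ARMS = [((0, 2), 1.0), ((-2, 2), 0.0)]   # (support, mean)
ALPHA = 0.5                              # loss-aversion weight in u
N = 12                                   # horizon

def u(x, y):
    return x - ALPHA * min(y, 0.0) ** 2  # increasing in x, penalises downside of y

@lru_cache(maxsize=None)
def value(m, s, sbar, forced_arm):
    # s = S_m (sum of rewards); sbar = Sbar_m (sum of centred rewards Z - E[Z | past]).
    if m == N:
        return u(s / N, sbar / math.sqrt(N))
    arms = ARMS if forced_arm is None else [ARMS[forced_arm]]
    return max(0.5 * sum(value(m + 1, s + x, sbar + x - mu, forced_arm) for x in support)
               for support, mu in arms)

V_adaptive = value(0, 0, 0, None)   # sup over all adaptive strategies
V_arm1 = value(0, 0, 0, 0)          # always pull arm 0
V_arm2 = value(0, 0, 0, 1)          # always pull arm 1
print(V_adaptive, V_arm1, V_arm2)
```

Because the adaptive supremum ranges over a superset of the single-arm strategies, V_adaptive must dominate both fixed-arm values, which the computation confirms.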
where e(m, n) is given by

e(m, n) = sup_{θ∈Θ} E_P[ H_{m/n}(S^θ_{m−1}/n, S̄^θ_{m−1}/√n) + ∂_{z_1} H_{m/n}(S^θ_{m−1}/n, S̄^θ_{m−1}/√n) (Z^θ_m/n)
+ ∂_{z_2} H_{m/n}(S^θ_{m−1}/n, S̄^θ_{m−1}/√n) (Z̄^θ_m/√n) + ∂²_{z_2 z_2} H_{m/n}(S^θ_{m−1}/n, S̄^θ_{m−1}/√n) ((Z̄^θ_m)²/(2n)) ].

By Lemma 8, parts (1) and (2), there exist C > 0 and β ∈ (0, 1) such that

sup_{t∈[0,1]} sup_{z∈R²} |∂²_{z_i z_j} H_t(z)| ≤ C,
sup_{t∈[0,1]} sup_{z,z′∈R², z≠z′} |∂²_{z_i z_j} H_t(z) − ∂²_{z_i z_j} H_t(z′)| / |z − z′|^β ≤ C, i, j = 1, 2.

It follows from Taylor's expansion that for every ε > 0 there exists δ > 0 (depending only on C and ε) such that, for all z, z′ ∈ R² and all t ∈ [0, 1],

|H_t(z + z′) − H_t(z) − D_z H_t(z) z′ − ½ tr(z′^⊤ D²_z H_t(z) z′)| ≤ ε|z′|² I_{{|z′|<δ}} + 2C|z′|² I_{{|z′|≥δ}},   (34)

where D_z := (∂_{z_i})_{i=1}^2 and D²_z := (∂²_{z_i z_j})_{i,j=1}^2. Set z = (S^θ_{m−1}/n, S̄^θ_{m−1}/√n) and z′ = (Z^θ_m/n, Z̄^θ_m/√n). The convergence is due to the finiteness of μ̄, μ̲ and σ̄. This proves (32). Combine with Lemma 4 and show that

e(m, n) = sup_{θ∈Θ} E_P[ H_{m/n}(S^θ_{m−1}/n, S̄^θ_{m−1}/√n) + ∂_{z_1} H_{m/n}(S^θ_{m−1}/n, S̄^θ_{m−1}/√n) (Z^θ_m/n) + ∂²_{z_2 z_2} H_{m/n}(S^θ_{m−1}/n, S̄^θ_{m−1}/√n) ((Z̄^θ_m)²/(2n)) ]
= sup_{θ∈Θ} E_P[ H_{m/n}(S^θ_{m−1}/n, S̄^θ_{m−1}/√n) + max_{1≤k≤K} ( ∂_{z_1} H_{m/n}(S^θ_{m−1}/n, S̄^θ_{m−1}/√n) (μ_k/n) + ∂²_{z_2 z_2} H_{m/n}(S^θ_{m−1}/n, S̄^θ_{m−1}/√n) (σ_k²/(2n)) ) ]
= sup_{θ∈Θ} E_P[ L_{m,n}(S^θ_{m−1}/n, S̄^θ_{m−1}/√n) ].

This proves (33), and completes the proof of (31).

Proof of Proposition 6: We prove it for u ∈ C^∞_b(R²). This suffices because any u ∈ C_{b,Lip}(R²) can be approximated uniformly by a sequence of functions in C^∞_b(R²) (see the Approximation Lemma in Feller (1971, Ch. VIII)). For small enough h > 0, we continue to use {H_t(x, y)}_{t∈[0,1+h]} as defined in (29), and {L_{m,n}(x, y)}_{m=1}^n as defined in (30). Combine the latter with Lemma 8, part (5), to obtain

|V − sup_{a∈[A](0,1)} E_P[u(∫₀¹ a^{(1)}_s ds, ∫₀¹ a^{(2)}_s dB^{(2)}_s)]|
= lim_{n→∞} |sup_{θ∈Θ} E_P[u(S^θ_n/n, S̄^θ_n/√n)] − sup_{a∈[A](0,1)} E_P[u(∫₀¹ a^{(1)}_s ds, ∫₀¹ a^{(2)}_s dB^{(2)}_s)]|
≤ lim_{n→∞} |sup_{θ∈Θ} E_P[u(S^θ_n/n, S̄^θ_n/√n)] − sup_{θ∈Θ} E_P[H_1(S^θ_n/n, S̄^θ_n/√n)]|
+ lim_{n→∞} |sup_{θ∈Θ} E_P[H_1(S^θ_n/n, S̄^θ_n/√n)] − H_0(0, 0)|
+ |H_0(0, 0) − sup_{a∈[A](0,1)} E_P[u(∫₀¹ a^{(1)}_s ds, ∫₀¹ a^{(2)}_s dB^{(2)}_s)]|
≤ C_0 h,

where the constant C_0 depends only on μ̄, μ̲, σ̄ and the uniform bound of ∂_x u and ∂²_{yy} u.
By the arbitrariness of h, the proof of (25) is completed. Finally, prove (26). Let G be defined by (28), and define, for all (p, q) ∈ R²,

G_ext(p, q) = sup_{(μ,σ²)∈A_ext} (μp + ½σ²q).

Then

G(p, q) = G_ext(p, q) for all (p, q) ∈ R².   (35)

The proof is completed by applying a Comparison Theorem (Peng (2019, Theorem C.2.5)).

Proof of Theorem 1: All the results can be obtained from Proposition 6 and Lemma 10. That u need only satisfy continuity and the stated growth condition is implied by Lemma 2.4.12 and Exercise 2.5.7 in Peng (2019) (or by Rosenthal's inequality in Zhang (2016)). For the convenience of readers, we provide a proof in the Appendix (Lemma 11).

Proof of Theorem 2

We are given that u(x, y) is increasing in x and concave in y, and (μ̄, σ̲²) ∈ A. For any t ∈ [0, 1] and (x, y) ∈ R², define the function

v(t, x, y) = E_P[u(x + (1 − t)μ̄, y + σ̲(B^{(2)}_1 − B^{(2)}_t))].

Then v(0, 0, 0) = E_P[u(μ̄, σ̲B_1)] = ∫ u(μ̄, ·) dN(0, σ̲²). By the (classic) Feynman-Kac formula (Mao (2008, Theorem 2.8.3)), v is the solution of the (linear parabolic) PDE

∂_t v(t, x, y) + μ̄ ∂_x v(t, x, y) + ½σ̲² ∂²_{yy} v(t, x, y) = 0, (t, x, y) ∈ [0, 1) × R²,
v(1, x, y) = u(x, y).   (36)

Since u(x, y) is increasing in x and concave in y, it follows that v(t, x, y) is increasing in x and concave in y for any t ∈ [0, 1], that is,

∂_x v(t, x, y) ≥ 0 and ∂²_{yy} v(t, x, y) ≤ 0, for all (t, x, y) ∈ [0, 1) × R².

Given also (μ̄, σ̲²) ∈ A, it follows that

sup_{(μ,σ²)∈A} (μ ∂_x v + ½σ² ∂²_{yy} v) = μ̄ ∂_x v + ½σ̲² ∂²_{yy} v,

and hence that v solves the PDE (20). By uniqueness of the solution (Lemma 5), and (27), conclude that

V = v(0, 0, 0) = ∫ u(μ̄, ·) dN(0, σ̲²).

Proof of Theorem 3

Throughout we assume that A = {(μ_1, σ_1²), (μ_2, σ_2²)}.

Proof of (i): The proof consists of three steps.

Step 1: From Theorem 1(i) and (27), it follows that

lim_{n→∞} V_n = lim_{n→∞} sup_{θ∈Θ} E_P[u(S^θ_n/n, S̄^θ_n/√n)] = v(0, 0, 0),

where v(t, x, y) solves the PDE (20).
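Theorem 2's closed form V = ∫ u(μ̄, ·) dN(0, σ̲²) can be checked numerically by solving the HJB equation (20) with a monotone explicit finite-difference scheme, which converges to the viscosity solution. The sketch below is illustrative, not from the paper: it assumes the two-arm set A = {(1, 1), (0.5, 2)} (so (μ̄, σ̲²) = (1, 1) ∈ A) and the loss-averse index u(x, y) = x − α·min(y, 0)², for which the ansatz v(t, x, y) = x + w(t, y) reduces (20) to the one-dimensional equation w_t + max_k(μ_k + ½σ_k² w_yy) = 0 with w(1, y) = −α·min(y, 0)², and V = w(0, 0) = μ̄ − ασ̲²/2 = 0.75 for α = 0.5.

```python
import numpy as np

# Explicit (monotone) finite differences for the reduced HJB equation
#   w_t + max_k (mu_k + 0.5 * sigma_k^2 * w_yy) = 0,  w(1, y) = -alpha*min(y, 0)^2,
# marching backwards from t = 1 to t = 0.  All parameter values are illustrative.
ALPHA = 0.5
ARMS = [(1.0, 1.0), (0.5, 2.0)]          # (mu, sigma^2) pairs in A

dy = 0.05
y = np.arange(-6.0, 6.0 + dy, dy)        # wide grid so boundary errors stay far from y = 0
w = -ALPHA * np.minimum(y, 0.0) ** 2     # terminal condition at t = 1

dt = 0.4 * dy ** 2 / max(s2 for _, s2 in ARMS)   # well inside the explicit stability bound
steps = int(round(1.0 / dt))
dt = 1.0 / steps

for _ in range(steps):
    wyy = np.empty_like(w)
    wyy[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dy ** 2
    wyy[0], wyy[-1] = wyy[1], wyy[-2]    # copy curvature at the artificial boundaries
    hamiltonian = np.max([mu + 0.5 * s2 * wyy for mu, s2 in ARMS], axis=0)
    w = w + dt * hamiltonian             # backward Euler step in reversed time

V_numeric = w[np.argmin(np.abs(y))]      # w(0, 0)
print("V (finite differences) =", V_numeric, "  closed form =", 0.75)
```

Since w_yy stays in [−2α, 0] here, the maximum always selects the (μ̄, σ̲²) arm, exactly the mechanism in the proof above.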
Step 2: Prove that the following function v̄ solves the above PDE:

v̄(t, x, y) = E_P[u(x + (1 − t)μ_1, y + σ_1(B^{(2)}_1 − B^{(2)}_t))]   (37)
= ∫_R u(x + (1 − t)μ_1, y + √(1 − t) σ_1 r) (1/√(2π)) e^{−r²/2} dr.

By the Feynman-Kac formula, v̄ solves

∂_t v̄(t, x, y) + μ_1 ∂_x v̄(t, x, y) + ½σ_1² ∂²_{yy} v̄(t, x, y) = 0, (t, x, y) ∈ [0, 1) × R²,
v̄(1, x, y) = u(x, y).   (38)

From (37) and assumption (14), it follows that, for all (t, x, y) ∈ [0, 1) × R²,

½σ_1² ∂²_{yy} v̄(t, x, y) + μ_1 ∂_x v̄(t, x, y) ≥ ½σ_2² ∂²_{yy} v̄(t, x, y) + μ_2 ∂_x v̄(t, x, y),

that is,

sup_{(μ,σ²)∈A} (μ ∂_x v̄ + ½σ² ∂²_{yy} v̄) = μ_1 ∂_x v̄ + ½σ_1² ∂²_{yy} v̄.   (39)

Thus v̄ solves the PDE (20). By uniqueness of the solution (Lemma 5), conclude that

lim_{n→∞} V_n = v(0, 0, 0) = v̄(0, 0, 0) = ∫ u(μ_1, ·) dN(0, σ_1²).

Step 3: If θ* denotes the strategy of choosing arm 1 always, then, using Step 1,

lim_{n→∞} E_P[u(S^{θ*}_n/n, S̄^{θ*}_n/√n)] = E_P[u(μ_1, σ_1 B^{(2)}_1)] = v̄(0, 0, 0) = V.

Hence θ* is asymptotically optimal.

Proof of (iii): Case 1 (α ≤ (μ_1 − μ_2)/(σ_1² − σ_2²)): Define v̄ by (37). Although u is not twice differentiable, we can calculate ∂_x v̄ and ∂²_{yy} v̄ directly to obtain ∂_x v̄ = 1 and ∂²_{yy} v̄ = −2αΦ(−y/(σ_1√(1 − t))). Therefore,

α < (μ_1 − μ_2)/(σ_1² − σ_2²)
⟹ μ_1 − αΦ(−y/(√(1 − t)σ_1))σ_1² > μ_2 − αΦ(−y/(√(1 − t)σ_1))σ_2²
⟹ μ_1 ∂_x v̄ + ½σ_1² ∂²_{yy} v̄ > μ_2 ∂_x v̄ + ½σ_2² ∂²_{yy} v̄.

Proceed as in the proof of (i). (But, if we assume the reverse inequality in (17), then the corresponding implications fail. For example, if y > 0 is sufficiently large, which would make Φ(−y/(√(1 − t)σ)) close to zero for σ = σ_1, σ_2 and t ≥ 0, then the last two inequalities above could remain valid even though α > (μ_1 − μ_2)/(σ_1² − σ_2²).)

Case 2 (α̲ < α < ᾱ): To prove that single-arm strategies are not asymptotically optimal, it is enough to show that

E_P[u(∫₀¹ â^{(1)}_s ds, ∫₀¹ â^{(2)}_s dB^{(2)}_s)] > max_{i=1,2} E_P[u(μ_i, σ_i B^{(2)}_1)],   (40)

for some â = (â^{(1)}_s, â^{(2)}_s) ∈ [A](0, 1).
Then Proposition 6 implies that

V = sup_{a∈[A](0,1)} E_P[u(∫₀¹ a^{(1)}_s ds, ∫₀¹ a^{(2)}_s dB^{(2)}_s)]
≥ E_P[u(∫₀¹ â^{(1)}_s ds, ∫₀¹ â^{(2)}_s dB^{(2)}_s)]
> max_{i=1,2} E_P[u(μ_i, σ_i B^{(2)}_1)].

Take

(â^{(1)}_s, â^{(2)}_s) = (μ_1, σ_1) I_{{W^{σ_1,σ_2}_s ≥ 0}} + (μ_2, σ_2) I_{{W^{σ_1,σ_2}_s < 0}},   (41)

where W^{σ_1,σ_2}_s is an oscillating Brownian motion, that is, the solution of the stochastic differential equation (SDE)

W^{σ_1,σ_2}_t = ∫₀^t (σ_1 I_{{W^{σ_1,σ_2}_s ≥ 0}} + σ_2 I_{{W^{σ_1,σ_2}_s < 0}}) dB^{(2)}_s.

By Keilson and Wellner (1978, Theorem 1), the probability density of W^{σ_1,σ_2}_t is q(t, ·), where

q(t, y) = q*(y; σ_1² t) · (2σ_2/(σ_1 + σ_2)) for y ≥ 0,
q(t, y) = q*(y; σ_2² t) · (2σ_1/(σ_1 + σ_2)) for y < 0,   (42)

and q*(y; σ²) = (1/(√(2π)σ)) exp(−(y/σ)²/2) is the pdf for N(0, σ²). Using this pdf, we can calculate

E_P[u(∫₀¹ â^{(1)}_s ds, ∫₀¹ â^{(2)}_s dB^{(2)}_s)]
= E_P[∫₀¹ (μ_1 I_{{W^{σ_1,σ_2}_s ≥ 0}} + μ_2 I_{{W^{σ_1,σ_2}_s < 0}}) ds] − α E_P[(W^{σ_1,σ_2}_1)² I_{{W^{σ_1,σ_2}_1 ≤ 0}}]
= μ_1 ∫₀¹ P(W^{σ_1,σ_2}_s ≥ 0) ds + μ_2 ∫₀¹ P(W^{σ_1,σ_2}_s < 0) ds − α ∫_{−∞}^0 y² q(1, y) dy
= μ_1 ∫₀¹ ∫₀^∞ q(s, y) dy ds + μ_2 ∫₀¹ ∫_{−∞}^0 q(s, y) dy ds − α ∫_{−∞}^0 y² q(1, y) dy
= μ_1 σ_2/(σ_1 + σ_2) + μ_2 σ_1/(σ_1 + σ_2) − α σ_1 σ_2²/(σ_1 + σ_2).

Therefore, (40) is satisfied if and only if

α̲ = 2(μ_1 − μ_2)/((σ_1 + 2σ_2)(σ_1 − σ_2)) < α < 2(μ_1 − μ_2)/(σ_2(σ_1 − σ_2)) = ᾱ.   (43)

Proofs for other assertions regarding the cases α < α̲ and ᾱ < α are apparent from the above.

Proof of (iv): The proof is similar to that for (iii). Specifically, prove that (40) is satisfied for the process (â^{(1)}_s, â^{(2)}_s) if α satisfies the asserted inequality α′ < α, where

(â^{(1)}_s, â^{(2)}_s) = (μ_1, σ_1) I_{{W^{σ_2,σ_1}_s < 0}} + (μ_2, σ_2) I_{{W^{σ_2,σ_1}_s ≥ 0}},

and W^{σ_2,σ_1}_s is the oscillating Brownian motion given by

W^{σ_2,σ_1}_t = ∫₀^t (σ_1 I_{{W^{σ_2,σ_1}_s < 0}} + σ_2 I_{{W^{σ_2,σ_1}_s ≥ 0}}) dB^{(2)}_s.

The process W^{σ_2,σ_1}_t admits a probability density analogous to (42).

Proof of (v): For i ≥ 1, we have Z^{θ*}_i = X_{k,i} where θ*_i = k, and {X_{k,i} : i ≥ 1} are i.i.d.
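Two consequences of the Keilson-Wellner density (42), namely P(W_1 ≥ 0) = σ_2/(σ_1 + σ_2) and E[(W_1⁻)²] = σ_1σ_2²/(σ_1 + σ_2) (the coefficient multiplying α above), can be checked by an Euler-Maruyama simulation of the oscillating Brownian motion. This is an illustrative sketch, not code from the paper; the parameter values, path counts, and tolerances are assumptions.

```python
import numpy as np

# Euler-Maruyama simulation of dW = (sigma1*1{W>=0} + sigma2*1{W<0}) dB on [0, 1].
rng = np.random.default_rng(1)
sigma1, sigma2 = 1.0, 2.0
n_paths, n_steps = 20_000, 1_000
dt = 1.0 / n_steps

W = np.zeros(n_paths)
for _ in range(n_steps):
    vol = np.where(W >= 0.0, sigma1, sigma2)       # state-dependent volatility
    W += vol * rng.normal(0.0, np.sqrt(dt), n_paths)

p_pos = np.mean(W >= 0.0)
m2_neg = np.mean(np.where(W < 0.0, W, 0.0) ** 2)   # E[(W_1^-)^2]
print("P(W_1 >= 0):", p_pos, " target:", sigma2 / (sigma1 + sigma2))
print("E[(W_1^-)^2]:", m2_neg, " target:", sigma1 * sigma2 ** 2 / (sigma1 + sigma2))
```

With σ_1 = 1, σ_2 = 2 the targets are 2/3 and 4/3; the Euler discretization of the discontinuous coefficient introduces a small bias, so only loose agreement should be expected.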
Then

E_P[ϕ((1/n) Σ_{i=1}^n Z^{θ*}_i)] = E_P[ϕ( (ψ_n/n) (1/ψ_n) Σ_{i=1}^{ψ_n} X_{1,i} + ((n − ψ_n)/n) (1/(n − ψ_n)) Σ_{i=1}^{n−ψ_n} X_{2,i} )].

Since ψ_n/n → λ as n → ∞, combine with the classical LLN for {X_{1,i} : i ≥ 1} and {X_{2,i} : i ≥ 1} to obtain

lim_{n→∞} E_P[ϕ((1/n) Σ_{i=1}^n Z^{θ*}_i)] = ϕ(λμ_1 + (1 − λ)μ_2) = ϕ(x*).

Therefore, θ* is asymptotically optimal because, by Proposition 6, lim_{n→∞} V_n ≤ ϕ(x*). The remaining assertion is implied by the fact that lim_{n→∞} U_n(θ^{μ,σ}) = ϕ(μ) for each (μ, σ²).

A Supplementary Appendix

Lemma 10. Proposition 6 still holds if σ̲ = 0.

Proof: As in the proof of Proposition 6, it suffices to take u ∈ C^∞_b(R²). Given σ̲ = 0, we add a perturbation to the random returns of the K arms. For any 1 ≤ k ≤ K and n ≥ 1, let X^ε_{k,n} = X_{k,n} + εζ_n, where ε > 0 is a fixed small constant and {ζ_n} is a sequence of i.i.d. standard normal random variables, independent of {X_{k,n}}. Then, for any θ ∈ Θ and n ≥ 1, the corresponding reward is denoted by Z^{θ,ε}_n = Z^θ_n + εζ_n, and the corresponding set of mean-variance pairs is denoted by A_ε = {(μ_{k,ε}, σ²_{k,ε}) : 1 ≤ k ≤ K}, where μ_{k,ε} = μ_k and σ²_{k,ε} = σ²_k + ε². The corresponding bounds are μ̄_ε, μ̲_ε, σ̄²_ε, and σ̲²_ε > 0. Define

V^ε_n = sup_{θ∈Θ} E_P[u( Σ_{i=1}^n Z^{θ,ε}_i / n, Σ_{i=1}^n (Z^{θ,ε}_i − E_P[Z^{θ,ε}_i | H^θ_{i−1}]) / √n )].

We also have

|V_n − V^ε_n|² ≤ Cε² E_P[ (Σ_{i=1}^n ζ_i / n)² + (Σ_{i=1}^n ζ_i / √n)² ] ≤ 2Cε²,

where the constant C depends only on the bounds of ∂_x u and ∂_y u. Letting ε → 0 in (44), the CLT (25) is proven for σ̲ = 0. Similar arguments show that (26) is also valid.

Lemma 11. Our CLT, Proposition 6, is valid also if u is continuous and, for some g ≥ 1 and c > 0, |u(x, y)| ≤ c(1 + ||(x, y)||^{g−1}) and sup_{1≤k≤K} E_P[|X_k|^g] < ∞.

Since sup_{1≤k≤K} E_P[|X_k|^g] < ∞, (46), and subsequently also the Lemma, are proven.

Proof of Corollary 7: Lemma 11 proves the extension for Proposition 6.
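The law-of-large-numbers limit behind part (v), lim_n E_P[ϕ(n⁻¹ Σ Z^{θ*}_i)] = ϕ(λμ_1 + (1 − λ)μ_2), can be illustrated by simulating the strategy θ* that pulls arm 1 for the first ψ_n = ⌈λn⌉ rounds and arm 2 thereafter. The sketch below is not from the paper; the arm distributions, λ, and n are illustrative assumptions, and only the arm means matter for the limit.

```python
import numpy as np

# Simulate theta*: arm 1 for the first ceil(lambda*n) rounds, arm 2 afterwards.
# The time-average reward should converge to x* = lambda*mu1 + (1-lambda)*mu2.
rng = np.random.default_rng(2)
mu1, mu2, lam, n = 1.0, -0.5, 0.3, 200_000

k = int(np.ceil(lam * n))
rewards = np.concatenate([
    rng.normal(mu1, 1.0, k),                      # arm 1 rounds (mean mu1)
    rng.exponential(1.0, n - k) + (mu2 - 1.0),    # arm 2 rounds, shifted to mean mu2
])
avg = rewards.mean()
x_star = lam * mu1 + (1 - lam) * mu2
print("time-average reward:", avg, "  x* =", x_star)
```

The two arms have different shapes and variances, but the time-average depends only on the means and the switching fraction λ.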
To prove (27): for any t ∈ [0, 1] and (x, y) ∈ R², define the value function v(t, x, y), (x, y) ∈ R². As in the proof of Lemma 8(1), for u ∈ C_{b,Lip}(R²), it can be checked (Yong and Zhou (1999, Theorem 5.2 in Chapter 4)) that v is the unique viscosity solution of the HJB-equation (20) with function G given in (28). Then we have V = v(0, 0, 0). For u ∈ C(R²) with growth condition, the value function is still the unique viscosity solution of the PDE (20) with function G given in (28). Supporting details can be found in Pham (2009, p. 66) or Aivaliotis and Palczewski (2010, Corollary 4.7). Moreover, if v_ε(t, x, y) denotes the solution of PDE (20) with G_ε in place of G, then, by Yong and Zhou (1999, Propn. 5.10, Ch. 4), there exists C′ > 0 such that |v_ε(t, x, y) − v(t, x, y)| ≤ C′√ε for all (t, x, y) ∈ [0, 1) × R².

The Krylov norm: We use the notation in Krylov (1987, Section 1.1); see also Peng (2019, Chapter 2.1). Let Γ be a subset of [0, ∞) × R². C(Γ) denotes all continuous functions v defined on Γ, in the relative topology on Γ, with the finite norm ||v||_{C(Γ)} = sup_{(t,z)∈Γ} |v(t, z)|. Similarly, given α, β ∈ (0, 1),

||v||_{C^{α,β}(Γ)} = ||v||_{C(Γ)} + sup_{(t,z),(t′,z′)∈Γ, (t,z)≠(t′,z′)} |v(t, z) − v(t′, z′)| / (|t − t′|^α + |z − z′|^β),
||v||_{C^{1+α,1+β}(Γ)} = ||v||_{C^{α,β}(Γ)} + ||∂_t v||_{C^{α,β}(Γ)} + Σ_{i=1}^2 ||∂_{z_i} v||_{C^{α,β}(Γ)},
||v||_{C^{1+α,2+β}(Γ)} = ||v||_{C^{1+α,1+β}(Γ)} + Σ_{i,j=1}^2 ||∂²_{z_i z_j} v||_{C^{α,β}(Γ)}.

The corresponding subspaces of C(Γ) in which the relevant derivatives exist and the above norms are finite are denoted respectively by C^{1+α,1+β}(Γ) and C^{1+α,2+β}(Γ). Therefore, the first and second derivatives of v(t, z) with respect to z exist and the related norms are finite. In particular, there exists L > 0 such that

sup_{(t,z),(t,z′)∈Γ, z≠z′} |v(t, z) − v(t, z′)| / |z − z′|^β < L.

In the proof of Lemma 8, we applied the preceding to v(t, z) = H_t(z).

Let {B_t} be a two-dimensional standard Brownian motion defined on (Ω, F, P), and let {F_t} be the natural filtration generated by (B_t). For a fixed T > 0, and any 0 ≤ t ≤ s ≤ T, let [A](t, T) denote the set of all {F_s}-progressively measurable processes a = {a_s = (a^{(1)}_s, a^{(2)}_s)} : [t, T] × Ω → [A] ⊂ R². Finally, [A]_ext(t, T) is defined similarly by restricting the images of each process a to lie in [A]_ext.

Notes: The second and third are adapted from Slivkins. Though it is important to understand both tradeoffs and their interactions, as an initial step we focus on only one in this paper, that being the tradeoff for which there exists very limited theoretical analysis. Note also that we will show that only the means and variances of distributions need be known. Theorem 3(iii)-(v) and their proofs give conditions under which there are gains from switching. Part (v) deals with the special case where variances can be ignored (because DM is indifferent to differences in variances), and hence the extremes are defined by the means alone. Asymptotically optimal strategies are not unique. For example, if λ = 1/2, then alternating between arms (deterministically), that is, choosing arms according to the sequence 121212..., is also asymptotically optimal. B_1 is the time-1 value of a standard Brownian motion, and hence is distributed as N(0, 1). See Lemma 10 for the extension to σ̲ = 0. In particular, they adopt (u.2), with α = 1 and ϕ having the form ϕ(y) = ϕ_1(y − c) if y ≥ c, and = −λ⁻¹ϕ_1(−λ(y − c)) if y < c, for some function ϕ_1 and c ∈ R.
This functional form is motivated by loss aversion, but from the perspective of this paper is very special. Again, z = (z_1, z_2) = (x, y).

Proof of Lemma 11: We prove that (25) remains valid; refer to it as "the CLT." Step 1: Prove the CLT for any u ∈ C_b(R²) with compact support (constant outside a compact subset of R²). In this case, for every ε > 0 there exists û ∈ C_{b,Lip}(R²) such that

References

Aivaliotis, G. and J. Palczewski (2010). Tutorial for viscosity solutions in optimal control of diffusions. Available at SSRN 1582548.
Bergemann, D. and J. Välimäki (2008). Bandit problems. In: The New Palgrave Dictionary of Economics. Palgrave Macmillan, London.
Berry, D. and B. Fristedt (1985). Bandit Problems. Chapman Hall, London.
Cassel, A., S. Mannor and A. Zeevi (2018). A general approach to multi-armed bandits under risk criteria. Proc. Machine Learn. Res. 75:1-12.
Chen, Z. and L.G. Epstein (2022). A central limit theorem for sets of measures. Stoch. Process. Appl. 152, 424-451.
Chen, Z., L.G. Epstein and G. Zhang (2022). A central limit theorem, loss aversion and multi-armed bandits. arXiv:2106.05472v2 [math.PR].
Fang, X., S. Peng, Q.M. Shao and Y. Song (2019). Limit theorems with rate of convergence under sublinear expectations. Bernoulli 25(4A), 2564-2596.
Feller, W. (1971). An Introduction to Probability Theory and its Applications, Vol. II. Second Edition. John Wiley and Sons, New York.
Keilson, J. and J.A. Wellner (1978). Oscillating Brownian motion. J. Appl. Probab. 15(2), 300-310.
Klebaner, F., Z. Landsman, U. Makov and J. Yao (2017). Optimal portfolios with downside risk. Quant. Finan. 17, 315-325.
Krylov, N.V. (1987). Nonlinear Parabolic and Elliptic Equations of the Second Order. Reidel. Original Russian version by Nauka, Moscow (1985).
Mao, X. (2008). Stochastic Differential Equations and Applications. Woodhead Publishing.
Markowitz, H. (1959). Portfolio Selection. Yale U. Press, New Haven.
Nantell, T.J. and B. Price (1979). An analytical comparison of variance and semivariance capital market theories. J. Finan. Quant. Anal. 14, 221-242.
Peng, S. (2007). G-expectation, G-Brownian motion and related stochastic calculus of Itô type. In: Benth, F.E., Di Nunno, G., Lindstrøm, T., Øksendal, B., Zhang, T. (eds), Stochastic Analysis and Applications, Abel Symposia, vol. 2. Springer, Berlin.
Peng, S. (2019). Nonlinear Expectations and Stochastic Calculus under Uncertainty: with Robust CLT and G-Brownian Motion. Springer Nature.
Pham, H. (2009). Continuous-Time Stochastic Control and Optimization with Financial Applications (vol. 61). Springer Science & Business Media.
Pratt, J.W. (1964). Risk aversion in the small and in the large. Econometrica 32(1/2), 122-136.
Sani, A., A. Lazaric and R. Munos (2013). Risk-aversion in multi-armed bandits. arXiv:1301.1936v1 [cs.LG].
Slivkins, A. (2022). Introduction to Multi-Armed Bandits. arXiv:1904.07272v7 [cs.LG].
Vakili, S. and Q. Zhao (2016). Risk-averse multi-armed bandit problems under mean-variance measure. IEEE J. Selected Topics in Signal Processing, DOI 10.1109/JSTSP.2016.2592622.
Yong, J. and X.Y. Zhou (1999). Stochastic Controls: Hamiltonian Systems and HJB Equations (vol. 43). Springer Science & Business Media.
Zhang, L. (2016). Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications. Science China Math. 59(4), 751-768.
Zimin, A., R. Ibsen-Jensen and K. Chatterjee (2014). Generalized risk-aversion in stochastic multi-armed bandits. arXiv:1405.0833 [cs.LG].
Title: The variability and radial velocity of planetary nebulae central stars
Authors: A. Ali (Astronomy, Space Science & Meteorology Department, Faculty of Science, Cairo University, 12613 Giza) and A. Mindil (Department of Physics, College of Science, University of Jeddah, Jeddah, Saudi Arabia)
Venue: Astronomy and Astrophysics
DOI: 10.1088/1674-4527/acbe94
arXiv: 2302.08008

Abstract: The extremely accurate estimates of stellar variability and radial velocity in the Gaia Data Release 3 (Gaia DR3) have enabled us to examine the close binarity and radial velocity (RV) of central stars (CSs) of planetary nebulae (PNe). This study is twofold: (1) searching for new close binary CS candidates to better understand how binarity affects the formation and evolution of PNe; and (2) extending the sample size of known RVs of PNe in order to understand their kinematics and the dynamics of the Milky Way. As a target sample, we used all true, possible, and likely PNe available in the literature. We then looked for their matched Gaia DR3 sources that provide measurements of variability and RV. As a result, we detected the first large collection of trustworthy photometric variability of 26 symbiotic stars (SySts) and 82 CSs. In this CS group, 24 sources are already classified as true close binary CSs in the literature. Hence, we discovered 58 new close binary CS candidates. This close binary (CB) sample represents more than half of what is currently available in the literature. In addition, we identified the radial velocities of 51 PNe. To our knowledge, 24 of these were measured for the first time. The RV measurements predicted by Gaia, based on the Doppler shift of the CS absorption lines, and those derived from nebular emission lines show satisfactory agreement, except for a few extremely high-velocity PNe.
The variability and radial velocity of planetary nebulae central stars

A. Ali (Astronomy, Space Science & Meteorology Department, Faculty of Science, Cairo University, 12613 Giza) and A. Mindil (Department of Physics, College of Science, University of Jeddah, Jeddah, Saudi Arabia)

Astronomy and Astrophysics, February 17, 2023

Keywords: planetary nebulae: stellar variability; radial velocity
INTRODUCTION

Gaia is an ESA project that aims to create a 3-D representation of the Milky Way galaxy. The Gaia Data Release 1 (Gaia DR1) appeared in September 2016. This was followed by Gaia Data Release 2 (Gaia DR2) in April 2018 and Early Data Release 3 (Gaia EDR3) in December 2020. Gaia DR3 was published on June 13, 2022. It provides positions and apparent magnitudes of ∼1.8 billion sources, as well as parallaxes, proper motions, and colors of ∼1.5 billion objects. In comparison to Gaia DR2, Gaia DR3 exhibits significant improvements in astrometric and photometric accuracy, precision, and homogeneity. In addition to updating earlier releases, Gaia DR3 contains new data, such as astrophysical parameters (Creevey et al. 2022), BP/RP spectra (De Angeli et al. 2022), and variability classifications (Eyer et al. 2022). According to Eyer et al. (2022), Gaia DR1 has ∼3000, Gaia DR2 has ∼550,000, and Gaia DR3 has ∼10.5 million variable sources.

The topic of binarity has important implications for our understanding of cataclysmic variables and novae, type Ia supernovae, symbiotic stars, and other phenomena such as the production of astrophysical jets (Boffin & Jones 2019). Binary interactions occur between stars of all sizes and orbital separations, ranging from compact white dwarfs with 5-minute orbital periods to giant stars with hundred-year orbital periods. From an observational and theoretical perspective, all stars with masses ranging from 1 to 8 solar masses (roughly 95% of the galaxy's stellar population) will undergo the PN stage of evolution. As a consequence, studying the binary CSs of PNe is crucial to our understanding of many astrophysical phenomena that have traditionally been attributed to single-star evolution (Aller et al. 2020). The HASH catalog (Parker et al.
2016) contains ∼3500 PNe, with 80% displaying complex morphologies that differ from sphericity, such as elliptical, bipolar, and multipolar PNe, as well as various internal features such as multi-shells, jets, and knots. Currently, there is widespread agreement on the importance of CS binarity in understanding the divergence of PNe from sphericity, where the various morphologies of PNe can no longer be explained by single-star models. Since the number of detected binary CSs has increased significantly over the past decade, it has become obvious that the wide variety of PNe morphologies and some of their unusual chemical properties are the products of binary evolution. A common-envelope event is the best method for generating an axisymmetric PN via binary interaction. The CB fraction is the most rigorous binarity test in PN formation. Despite the challenge of finding CS infrared excesses, De Marco et al. (2013) successfully employed the technique to calculate a binary fraction, obtaining a value of 67-78% based on I-band excess and 100-107% based on J-band excess. Douchin et al. (2015) obtained an I-band fraction of 40±20% and a J-band fraction of 62±30% using an improved method and a larger sample. Estimates of the binary fraction range from 20% for photometrically detectable CBs to 60-80% for those identified using the radial velocity variability and infrared excess approaches. According to Boffin & Jones (2019), bipolar and multipolar PNe are the result of CB stellar evolution. Wesson et al. (2018) identified a link between the high abundance discrepancy factors (adfs) of PNe and the binarity of their CSs. It was found that all PNe of binary CSs with a period of less than 1.15 days had adfs larger than 10 and electron densities of less than 1000 cm⁻³, whereas those with longer periods had lower adfs and significantly higher electron densities. In addition, they noted that any PN with an extreme adf must contain a close binary CS.
In addition to ground-based observations of close binary CSs (e.g. Hillwig et al. (2016), Miszalski et al. (2011b), Jones et al. (2010), Miszalski et al. (2009)), space-based observations, such as those from the Transiting Exoplanet Survey Satellite (TESS; Aller et al. 2020) and the Kepler satellite (Jacoby et al. 2021), have detected a significant number of CB candidates. Most of the photometric variability of these objects can be attributed to the effect of a companion star on the nebular CS.

Gaia DR3 has radial velocities for ∼34 million stars and RV spectrometer spectra for almost a million stars (Katz et al. 2022). Gaia Data Release 4 (Gaia DR4), which will analyze 66 months of data, will extend all RV spectra to a G-magnitude of 16.2 and reveal the RV of ∼100 million stars. In the present article, we aim to uncover the variability and RV of the CSs of PNe using the recent release of the Gaia project. We describe the PNe data sample and the approach used for extracting the variability and RV from the Gaia DR3 database in Section 2. Section 3 contains the results and discussion, whereas Section 4 has the conclusions.

THE VARIABILITY AND RV DATA

The RV spectrometer is a medium-resolution spectrograph (R ≈ 11500) covering the wavelength range 846-870 nm (Cropper et al. 2018). In total, ∼10.5 million objects have been identified as variables in Gaia DR3. Eyer et al. (2022) have reported the presence of 35 types and sub-types of variable objects, where the output of the variability analysis amounts to 17 tables containing a total of 365 parameters. The stellar photometric variability is stored in the gaiadr3.gaia_source table in the field phot_variable_flag. The combined RVs and their formal uncertainties are stored, respectively, in the radial_velocity and radial_velocity_error fields of the gaiadr3.gaia_source table.
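A positional crossmatch against these fields can be expressed as an ADQL cone search. The sketch below (an illustration, not code from the paper) builds such a query; the table and column names follow the Gaia archive schema, while the 1-arcsec match radius and the example coordinates are assumed values.

```python
# Build an ADQL query that tests a PN central-star position against Gaia DR3,
# returning variability flags and radial velocities for matched sources.

def gaia_dr3_variability_query(ra_deg: float, dec_deg: float,
                               radius_arcsec: float = 1.0) -> str:
    radius_deg = radius_arcsec / 3600.0
    return f"""
SELECT source_id, ra, dec, phot_g_mean_mag,
       phot_variable_flag, radial_velocity, radial_velocity_error
FROM gaiadr3.gaia_source
WHERE 1 = CONTAINS(POINT('ICRS', ra, dec),
                   CIRCLE('ICRS', {ra_deg}, {dec_deg}, {radius_deg}))
  AND phot_variable_flag = 'VARIABLE'
""".strip()

query = gaia_dr3_variability_query(122.0, -45.0)   # illustrative coordinates
print(query)
# The string can then be submitted to the Gaia archive, e.g. with astroquery:
#   from astroquery.gaia import Gaia
#   results = Gaia.launch_job(query).get_results()
```

Restricting on phot_variable_flag = 'VARIABLE' reproduces, for a single position, the selection described above for the full HASH sample.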
To achieve the goals of this article, we searched the Gaia DR3 database for all stars whose positions matched those of the PN central stars listed in the HASH catalog as true, possible, and likely PNe. Chornay et al. (2021) detected the variability of these objects not as a direct result of extracting the phot_variable_flag identifier in the gaiadr2.gaia_source module, but according to a method that depends on the flux, magnitude, and color uncertainties of the object (see Chornay et al. 2021 for more details). Using the database sample, which is composed of roughly 3500 PNe, we found 113 CSs showing photometric variability through the phot_variable_flag identifier in the gaiadr3.gaia_source table. Looking for more information on each PN in the SIMBAD database and the HASH catalog, we noticed that 27 PNe have been re-classified as SySts and four as M-type, hot subdwarf, Mira variable, and Wolf-Rayet stars (see Table 2). The remaining 82 CSs are associated with 75 true, 4 likely, and 3 possible PNe (Table 1). From this list, 24 CSs have been documented as possessing CB systems in the literature. As a result, we have detected 58 new close binary CS candidates. This set represents more than half of the known close binary CSs (Boffin & Jones 2019). The binarity of a CS may be inferred from its color. The central star is often blue owing to its enormous UV radiation, but there are also many red CSs. This might be explained by the fact that the visible light of a main-sequence or red-giant companion dominates the CS color. Table 1 shows that approximately 70% of the CSs are red (B-R > 0.0), implying that they are possibly close binaries. In addition, Table 1 lists the periodicity time and the reference for each CS that is considered a true close binary in the literature. Furthermore, we examined the list of variable CSs presented by Chornay et al.
(2021), where we found that only 4 stars (listed in Table 1 in boldface) were explicitly defined as variables using the phot_variable_flag identifier. The morphological type of each PN, retrieved from the HASH catalog, is given in Table 1. As predicted by most current theories, the majority of the suspected close binary CSs (85%) are surrounded by bipolar and elliptical nebulae (Boffin & Jones 2019). Moreover, ∼50% of these nebulae have multiple shells. In addition, Table 1 contains 9 PNe previously identified as having CB central stars and high adfs (Hf 2-2; A66 41; A66 63; K 1-2; Sp 1; HaTr 4; Hen 2-248; NGC 6026; M 3-16). Symbiotic stars have the longest orbital periods of all interacting binaries. A SySt consists of an evolved, cool star transferring mass to a much hotter, brighter, compact companion (Iłkiewicz & Mikołajewska 2017). Because the spectra of SySts are similar to those of PNe and HII regions, all the objects in Table 2 were previously thought to be PNe. It is worth noting that all SySts in Table 2 are red in color.

Radial velocity of PNCSs

The pioneering work on determining the RV of PNe was by Schneider & Terzian (1983), who published the heliocentric RVs of 524 PNe. The next compilation (867 PNe) was reported by Durand et al. (1998). Beaulieu et al. (1999) reported the RVs of 45 PNe lying in the southern galactic bulge. Based on high-dispersion spectra, Richer et al. (2017) reported the RVs of 76 PNe. Numerous other individual RVs are scattered throughout the literature (e.g. Ali et al. 2016; Ali & Dopita 2017). All the above measurements were derived from the Doppler shift of the nebular spectra. The Gaia mission opened a new window for calculating the RV from the spectra of the observed CSs. Gaia determines the RV by measuring the Doppler shift between a template spectrum and the observed spectrum.
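Both the nebular-line velocities and Gaia's template-matching estimate rest on the same nonrelativistic Doppler relation, which can be made concrete with a short sketch (the function name is our own):

```python
# Nonrelativistic Doppler shift: v = c * (lambda_obs - lambda_rest) / lambda_rest.
C_KM_S = 299_792.458  # speed of light in km/s

def radial_velocity_kms(lambda_obs, lambda_rest):
    """RV in km/s from observed and rest wavelengths (same units for both)."""
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest
```

For example, a shift of 0.1 nm on a line near 858 nm (inside the 846-870 nm RVS range quoted above) corresponds to roughly 35 km/s; a blueshifted line gives a negative RV.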
Using the current release, we were able to detect the RV for 51 PNe, including updated values for 14 PNe recorded by Ali et al. (2022). Table 3 lists the radial velocities newly detected by Gaia DR3 together with those obtained by Durand et al. (1998). The estimated median uncertainty of this compilation is 12.2%. In Figure 1, we compare the new RV measurements with those given by Durand et al. (1998). The diagonal line indicates the 1:1 matches. In general, the RVs computed from the spectra of the CSs (Gaia DR3) and those of their associated nebulae are in good agreement. However, there are a few outliers related to high-velocity objects, such as H 2-24, SB 15, and Th 3-14. All the outlier objects have galactic longitudes of 0 to 10 or 350 to 360 degrees and galactic latitudes between -10 and +10 degrees, indicating that they are located in the direction of the galactic bulge. To figure out the cause of this discrepancy, we examined possible physical reasons, such as the interaction between planetary nebulae and the interstellar medium (ISM), the effect of the nebular electron density, and the accuracy of the radial velocity measurements deduced from the PNe and CSs. We found that none of these nebulae interact with the ISM, and the available electron density data for these objects did not provide a reasonable explanation. In addition, the accuracy of the RV measurements derived from the nebular emission lines is adequate. Thus, we examined the parameters that Gaia used to calculate the RV. We extracted two additional parameters relevant to the Gaia RV calculations: rv_nb_transits (the number of transits used to calculate the RV) and rv_visibility_periods_used (the number of visibility periods used to estimate the RV). Table 3 shows these two parameter values in columns 6 and 7. They show that the outliers have fewer transits and fewer visibility periods than the other RV measurements in Table 3.
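The comparison behind Figure 1, and the quoted median uncertainty of the compilation, reduce to two simple computations. In the sketch below the 50 km/s outlier cut is our own illustrative threshold, not the paper's criterion:

```python
from statistics import median

def median_relative_uncertainty_percent(rvs, errors):
    """Median of |error / RV| over entries with nonzero RV, in percent."""
    return 100.0 * median(abs(e / v) for v, e in zip(rvs, errors) if v != 0)

def flag_outliers(pairs, max_diff_kms=50.0):
    """Indices where the Gaia and nebular RVs disagree by more than the cut."""
    return [i for i, (rv_gaia, rv_neb) in enumerate(pairs)
            if abs(rv_gaia - rv_neb) > max_diff_kms]

# H 2-24 from Table 3: Gaia DR3 gives +28.0 km/s, Durand et al. (1998) -198.2 km/s;
# the other two pairs (NGC 1514, NGC 2346) agree well and are not flagged.
outliers = flag_outliers([(28.0, -198.2), (42.3, 59.8), (31.5, 21.8)])
```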
As a result, we may infer that the difference in RV measurements between the Gaia and nebular lines for these outlier objects is due to Gaia's inaccurate RV measurements.
6 Ali & Mindil

CONCLUSIONS

We have discovered 82 planetary nebulae associated with close binary central star candidates. To our knowledge, 58 members of this group have been found for the first time. This group of close binary central stars comprises roughly half of all objects known in the literature. We also discovered photometric variability in 26 symbiotic stars and four stars of different types. Moreover, we detected the radial velocities of 51 planetary nebulae, 27 of which were identified for the first time. With a few exceptions, there is good agreement between the radial velocities measured from the absorption lines of the central stars and those measured from the emission lines of the planetary nebulae. In future work, we plan to extract the available photometric variability identifiers from the Gaia DR3 database to build the light curves for some of the objects mentioned in Table 1. We also plan to use the 74-inch telescope at the Kottamia observatory, Egypt, to perform a time-series photometric study for a few of the detected close binary central star candidates. Simultaneously, we will search the TESS and Kepler sky surveys, as well as the OGLE variable star catalog, for data that will allow us to confirm the binarity of the newly detected objects.

Fig. 1: The RVs derived from the Gaia DR3 against those reported by Durand et al. (1998). The diagonal line refers to the 1:1 correlation. The numbers on the plot refer to the PNe numbers listed in Table 3.

References for Table 1: (1) Jacoby et al. (2021), who reported a list of 58 likely close binary CSs using the photometric data in Gaia DR2; (2) Miszalski et al. (2009); (3) Hillwig et al. (2016); (4) Jones et al. (2010); (5) Pollacco & Bell (1994); (6) Afşar & Ibanoǧlu (2008); (7) Corradi et al.
(2015); (8) Douchin et al. (2015); (9) Miszalski et al. (2011a); (10) Munday et al. (2020); (11) Aller et al. (2020); (12) Exter et al. (2003); (13)Ciardullo et al. (1999). Table 1 : 1The variable CSs in Gaia DR3. The magnitude and color of the CS are indicated by the G and B-R parameters, respectively. The letters T, L, and P stand for true, likely, and possible PNe. Bipolar, elliptical, round, irregular, and quasi-Stellar PN shapes are denoted by the primary morphological keys B, E, R, I, and S, respectively. Multiple shells, point symmetry, ring structure, and asymmetry are denoted by the internal structure symbols m, p, r, and s, respectively.PN Name l b Gaia DR3 designation G B-R PN Status Shape Ref. PN Name l b Gaia DR3 designation G B-R PN Status Shape Ref. PN PC 12 0.17 17.25 4130784921205604736 15.2 0.6 T Bmpr 1 IC 2165 221.32 -12.39 2999839084924027776 17.5 -0.2 T Emrs PN Bl O 0.88 -1.57 4056603178196321792 16.5 1.8 T S PFP 1 222.13 3.91 3058094200264637312 15.8 -0.6 T Rar PPA J1800-2904 1.52 -2.85 4062356711999251328 18.1 T s PN M 3-2 240.37 -7.63 5609860130542365824 16.3 0.5 T Bms PN ShWi 7 1.80 -3.88 4050366645122261504 17.9 1.2 T B 2 PG 1034+001 247.55 47.75 3806885288337214848 13.2 -0.6 T na 11 Terz N 2111 3.96 1.66 4067312696253910272 16.9 4.4 T Ea PN K 1-2 253.58 10.78 5647809392112960000 17.0 0.2 T Baps 12 PN H 2-24 4.33 1.84 4068460105422978048 15.3 4.3 T Ba PN M 3-6 253.97 5.78 5639472001599302528 13.2 0.0 T E PN Hf 2-2 5.14 -8.90 4048497024309080064 17.2 0.0 T Ems 3 LoTr 3 265.11 -4.21 5521499734013833984 13.0 0.7 T Rr PN H 2-22 6.34 3.33 4117062676912301184 18.5 1.6 T B PN Lo 4 274.31 9.11 5414927915911816704 16.6 -0.4 T Ears PN PBOZ 29 6.59 3.41 4118615354715439872 16.1 1.6 L S Wray 16-55 277.62 -1.73 5308685822467307008 12.2 5.9 T S NGC 6629 9.41 -5.05 4089517157442187008 12.7 0.5 T Ems PN G281.1-00.4 281.18 -0.48 5259854002824501248 17.7 3.3 P na PN A66 41 9.66 10.51 4136835641106850432 16.2 0.3 T Bas 4 PN K 1-22 283.67 25.31 
5399388964749811456 16.7 0.4 T Ears 13 PN Sa 3-111 14.27 4.21 4147061232357104384 17.0 3.7 T S DS 1 283.90 9.73 5362804330246457344 12.1 -0.3 T Ims 2 PN M 1-46 16.45 -1.98 4103910524954236928 12.8 0.8 T Rmprs Hen 2-70 293.61 1.20 5335879596943573888 15.7 2.0 T Bamps PTB 43 16.62 -4.05 4102825336944868480 17.3 0.9 T S NGC 4361 294.11 43.63 3519614068578061568 13.1 -0.5 T Emps PN PM 1-308 34.58 -11.75 4210278482327706496 13.1 0.6 T na PN G305.9-01.2 305.93 -1.27 5859151160662602752 19.3 2.9 T B PN G039.0-04.0 39.08 -4.10 4292267621344388864 14.3 1.5 T Emr Hen 2-99 309.00 -4.24 5851865148069389568 13.2 0.4 T Ers VSP 2-30 49.32 2.38 4320639728629291776 13.6 1.2 L S PN SuWt 2 311.05 2.48 5870592987893097984 11.9 0.6 T Eamrs PN A66 63 53.89 -3.03 1820963913284517504 15.0 0.3 T Bps 5,6,7 Hen 2-107 312.61 -1.90 5854138766383247232 14.6 1.2 T Ea NGC 6891 54.20 -12.11 1803234906762692736 12.3 -0.2 T Emrs 8 PN Sp 1 329.08 1.96 5982072132545824128 13.7 0.7 T Ramrs 3,4 PN G054.5+01.8 54.59 1.85 4515887189511585792 18.6 1.4 L E PN Mz 3 331.73 -1.01 5934701559547878144 13.2 1.8 P na PN A66 46 55.41 16.03 4585381817643702528 15.0 -0.2 T Eas 5,6,7 PN HaTr 7 332.51 -16.91 5911656865276078080 14.8 -0.3 T Eas 3 PN K 3-51 56.83 -6.96 1821791540605697152 17.2 -0.1 T R IC 4642 334.39 -9.35 5923374773032038528 15.9 -0.2 T Ems IRAS 19461+2419 60.99 -0.57 2020643612977496704 18.6 2.5 T S PN HaTr 4 335.25 -3.62 5937103069115240192 16.8 0.7 T B NGC 6720 63.17 13.98 2090486618786534784 15.6 -0.5 T Emrs MPA J1637-4911 335.95 -1.35 5940883018248096640 17.2 2.2 L S PN Ps 1 65.02 -27.31 1745948362385436544 14.7 0.3 T s Hen 2-248 341.51 -9.18 5946831685377720576 15.4 -0.3 T S 2 ETHOS 1 68.10 10.99 2050526964622031744 17.2 -0.1 T Bmps 9,10 NGC 6026 341.60 13.70 6011169161583903488 13.1 0.1 T Eas 2 MWP 1 80.36 -10.41 1855295171732158080 13.0 -0.5 T Baps IC 1266 345.24 -8.83 5954912374289120896 11.3 0.0 T Rars PN M 1-77 89.38 -2.27 1971995510535755648 11.9 1.0 T Sm PN Tc1 345.24 -8.83 
5954912374289120896 11.3 0.0 T Rars PN K 1-16 94.03 27.43 2160562927224840576 15.0 -0.6 T B IC 4637 345.48 0.14 5966769881320062208 12.5 0.6 T Eaprs NGC 40 120.02 9.87 537481007814722688 11.5 0.3 T Bms PPA J1747-3435 355.33 -3.21 4041711044735017856 18.8 0.8 T Es 2 PB 9 122.72 70.36 1531053247144552704 18.0 0.7 T Eams PN M 1-27 356.53 -2.39 4053955824662571648 13.9 1.4 T R PB 4 123.11 70.19 1531068915184317568 17.3 0.6 T Emrs PN M 4-4 357.03 2.44 4058620300987916160 16.5 3.8 T Ear 1 PB 1 123.18 70.08 1531072827896228352 17.9 0.3 T Ems PN Al 2-O 358.01 -2.74 4043622756199128064 16.0 2.4 T E 2 NAME TS 01 136.00 55.97 846615127231002880 18.0 -0.3 T Es PN Al 2-R 358.75 -2.76 4055678213978728320 15.4 5.3 T B PN HFG 1 136.38 5.55 468033345145186816 14.0 0.7 T Eamrs 2 PHR J1752-3116 358.77 -2.50 4055698280071328640 16.1 3.4 T S NGC 1501 144.56 6.55 473712872456844544 14.2 0.6 T Ems JaSt 65 358.99 -1.55 4055974360527366272 17.7 2.4 T S LTNF 1 144.81 65.85 786919754746647424 15.1 0.4 T Bas 2 PN M 3-16 359.18 -2.30 4056131006637615488 17.1 1.1 T Em 2 NGC 2371 189.16 19.84 885587110718845568 14.8 -0.4 T Bmps PN PM 1-166 359.24 1.22 4060159376692627840 15.3 3.2 P B PN MaC 2-1 205.87 -26.73 3211200438511961088 15.8 -0.4 T S PN M 3-44 359.39 -1.81 4056355822397882880 16.0 2.1 T B PN A66 30 208.56 33.29 660071056749861888 14.4 -0.2 T Ramrs 1 PN Th 3-35 359.39 1.40 4060214180437611904 19.8 2.8 T S 1 PHR J0650+0013 212.64 -0.07 3113542949606809088 15.2 1.1 T Bmps Terz N 19 359.89 5.25 4109665712365779968 19.0 1.1 T B Table 2 : 2The variable symbiotic stars in Gaia DR3.# target id l b Gaia DR3 designation G B-R Type 1 PN ShWi 5 1.21 -3.90 Gaia DR3 4050209822908746240 15.3 1.2 Symbiotic Star 2 PN H 1-45 2.02 -2.06 Gaia DR3 4062646712567004416 14.3 3.0 Symbiotic Star 3 PN Ap 1-11 3.12 -4.63 Gaia DR3 4050848540419995776 13.2 3.1 Symbiotic Star 4 PN H 2-43 3.49 -4.87 Gaia DR3 4050670827750135040 13.6 1.2 Symbiotic Star 5 IRAS 17554-2628 3.58 -1.22 Gaia DR3 4064034330564300928 19.8 2.8 
Symbiotic Star 6 PN M 3-18 7.57 1.44 Gaia DR3 4070389125449668608 11.9 5.0 Symbiotic Star 7 PN Th 4-4 8.31 3.73 Gaia DR3 4119029875002043392 14.6 3.4 Symbiotic Star 8 PN M 2-9 10.90 18.06 Gaia DR3 4335188603873318656 13.9 1.3 Symbiotic Star 9 PN K 3-9 23.91 -1.54 Gaia DR3 4155672680486693120 15.3 2.6 Symbiotic Star 10 PN Ap 3-1 37.64 -2.97 Gaia DR3 4268140453591785984 14.3 3.9 Symbiotic Star 11 PN M 4-16 61.79 2.11 Gaia DR3 2022052808961769088 16.6 1.3 Symbiotic Star 12 Hen 2-468 75.94 -4.44 Gaia DR3 1870194997404105856 12.6 2.8 Symbiotic Star 13 PN M 1-2 133.12 -8.64 Gaia DR3 360112911622101120 12.6 1.2 Symbiotic Star 14 Hen 2-34 274.19 2.58 Gaia DR3 5409069172514684416 14.7 2.7 Symbiotic Star 15 Hen 2-25 275.22 -3.71 Gaia DR3 5310613021532357632 14.7 0.8 Symbiotic Star 16 Hen 2-106 312.03 -2.03 Gaia DR3 5853777267581362176 13.3 0.8 Symbiotic Star 17 Hen 2-104 315.48 9.46 Gaia DR3 6089564718596906880 13.6 0.8 Symbiotic Star 18 Hen 2-134 319.22 -9.35 Gaia DR3 5822400362454690688 12.1 2.5 Symbiotic Star 19 Hen 2-127 325.54 4.18 Gaia DR3 5889726659221998592 14.5 2.4 Symbiotic Star 20 PN Cn 1-2 326.41 -10.94 Gaia DR3 5818044445302448000 10.6 1.8 Symbiotic Star 21 PN Cn 1-1 330.78 4.15 Gaia DR3 5982979264021123968 10.8 1.1 Symbiotic Star 22 Hen 2-156 338.94 5.36 Gaia DR3 5992529686406981248 12.4 2.1 Symbiotic Star 23 Hen 2-176 339.39 0.74 Gaia DR3 5943382139466094720 13.6 4.1 Symbiotic Star 24 Hen 2-171 346.03 8.55 Gaia DR3 6020686328090453888 14.9 5.5 Symbiotic Star 25 PN H 2-4 352.95 3.93 Gaia DR3 5979902864926562176 14.2 3.2 Symbiotic Star 26 PN M 2-24 356.99 -5.80 Gaia DR3 4042147516455759744 15.1 0.8 Symbiotic Star 27 PN Th 3-20 357.41 2.62 Gaia DR3 4058701527427641472 14.0 2.8 Symbiotic Star 28 PN K 4-26 37.18 -6.85 Gaia DR3 4263728319553777408 13.9 6.7 Mira Variable Candidate 29 PN K 4-36 44.44 -10.38 Gaia DR3 4290522180961855872 12.7 4.7 M star 30 CD-48 6027 283.90 9.73 Gaia DR3 5362804330246457344 12.1 -0.3 Hot Subdwarf 31 Hen 2-58 289.18 -0.70 Gaia DR3 
5338220285385672064 7.3 0.9 Wolf-Rayet Table 3 : 3The CS radial velocity of PNe in Gaia DR3. The symbol (:) in column 4 refers to the RV measurement with high uncertainty.# PN name Galactic coordinate RV (km/s) rv nb transits rv visibility periods used l b Gaia DR3 Durand et al. (1998) 1 MPA J1803-3043 0.4 -4.22 -130.5±6.0 3 2 2 Ap 1-11 3.12 -4.63 62.0±3.9 4 4 3 H 2-24 4.33 1.84 28.0±3.8 -198.2± 4.1 6 3 4 M 1-44 4.97 -4.96 -5.9±0.5 -75±11 4 4 5 SB 15 9.3 -6.53 -16.5 ± 4.7 165 ± 15 5 4 6 PN G009.8-07.5 9.87 -7.56 -23.3 ± 1.1 -32 ± 30 2 2 7 PN V-V 3-4 13.45 -4.25 -16.3 ± 4.7 7 7 8 UCAC4 374-117003 15.54 0.34 21.3 ± 2.2 8 8 9 SS 318 17.02 11.1 -36.7 ± 2.4 19 11 10 K 2-7 19.41 -19.66 -18.4 ± 0.5 14 12 11 PN G019.5-04.9 19.53 -4.96 -20.9 ± 1.4 13 9 12 Pe 1-15 25.91 -2.18 37.7 ± 3.7 6 4 13 IPHASX J191716.4+033447 39.08 -4.1 -4.2 ± 8.6: 9 9 14 K 1-14 45.61 24.32 -19.0 ± 1.0 63 22 15 VSP 2-30 49.32 2.38 -13.4 ± 1.9 27 18 16 Me 1-1 52.54 -2.96 -1.5 ± 11.4: -6 ± 7 10 8 17 NGC 7008 93.41 5.49 -71.9 ± 7.7 -75.7 ± 2.7 18 11 18 IRAS 21282+5050 93.99 -0.12 28.2 ± 7.3 20 15 19 K 1-6 107.04 21.38 -55.7 ± 13.7 13 13 20 A 82 114.07 -4.67 -38.4 ± 2.3 -30.5 ± 3.3 23 16 21 PN M 1-2 133.12 -8.63 -28.4 ± 8.5 -12.1 ± 2 13 11 22 WeBo 1 135.67 1 -25.4 ± 4.6 33 17 23 NGC 1514 165.53 -15.29 42.3 ± 1.3 59.8 ± 4.4 21 10 24 H 3-75 193.65 -9.58 11.6 ± 2.6 22.9 ± 2 26 11 25 NGC 2346 215.7 3.62 31.5 ± 3.0 21.8 ± 0.9 13 10 26 PHR j0701-0749 221 -1.41 44.9 ± 2.9 14 11 27 LoTr 1 228.21 -22.14 15.8 ± 19.5: 31 19 28 PN V-V 1-7 235.44 1.89 38.9 ± 0.7 27 17 29 WRAY 15-158 255.33 -3.64 24.4 ± 2.3 22 19 30 LoTr 3 265.11 -4.21 16.2 ± 5.5 49 ± 3 19 15 31 NGC 3132 272.11 12.4 -11.1 ± 1.6 -16 ± 4.1 18 13 32 Hen 2-36 279.61 -3.19 -0.1 ± 1.5: -7.1 ± 2 18 10 33 Hen 2-51 288.88 -5.22 13.5 ± 3.9 8 ± 3 13 12 34 Al 1 291.1 -39.66 1.1 ± 2.3: 16 15 35 Hen 2-70 293.61 1.2 64.6 ± 4.8 14 13 36 PN A66 35 303.57 40 -38.1 ± 4.6 -6.6 ± 3.8 9 8 37 SuWt 2 311.05 2.48 -15.7 ± 7.5 -40 ± 9 17 11 38 MPA J1508-6455 316.77 -5.8 -39.2 ± 
11.9 43 17 39 Hen 2-134 319.22 -9.35 9.0 ± 0.7 45 21 40 PM 1-89 324.09 3.53 -68.5 ± 3.0 -81 ± 20 13 12 41 Hen 3-1312 (Sast 2-12) 334.84 -7.46 -70.6 ± 1.0 -77 ± -7 25 13 42 LoTr 5 339.89 88.46 -8.3 ± 1.7 21 13 43 Vd 1-1 344.27 4.75 -70.6 ± 1.0 -142.1 ± 2.5 11 10 44 SB 38 352.8 -8.41 -35.0 ± 7.0 59 ± 15 9 6 45 PHR J1711-3210 353.28 4.25 27.2 ± 1.7 10 6 46 PN G354.8+01.6 354.89 1.63 13.6 ± 6.6 13 10 47 Pe 1-11 358.01 -5.16 -11.9 ± 0.2 -130.6 ± 14 12 5 48 M 3-8 358.24 4.29 101.7 ± 3.4 95 ± 11 7 4 49 PHR J1752-3116 358.77 -2.5 -21.7 ± 8.4 2 1 50 Hen 3-1863 359.28 -33.5 4.3 ± 10.2: 19 14 51 Th 3-14 359.3 4.76 26.2 ± 20.4: -239.2 ± 14 7 5

Acknowledgements

The authors would like to thank the reviewer for his or her constructive suggestions that helped enhance the original manuscript.

References

Afşar, M., & Ibanoǧlu, C. 2008, MNRAS, 391, 802
Ali, A., Algarni, E., Mindil, A., & Alghamdi, S. A. 2022, Research in Astronomy and Astrophysics, 22, 085013
Ali, A., & Dopita, M. A. 2017, PASA, 34, e036
Ali, A., & Dopita, M. A. 2019, MNRAS, 484, 3251
Ali, A., Dopita, M. A., Basurah, H. M., et al. 2016, MNRAS, 462, 1393
Aller, A., Lillo-Box, J., Jones, D., Miranda, L. F., & Barceló Forteza, S. 2020, A&A, 635, A128
Beaulieu, S. F., Dopita, M. A., & Freeman, K. C. 1999, ApJ, 515, 610
Boffin, H. M. J., & Jones, D. 2019, The Importance of Binaries in the Formation and Evolution of Planetary Nebulae
Chornay, N., Walton, N. A., Jones, D., et al. 2021, A&A, 648, A95
Ciardullo, R., Bond, H. E., Sipior, M. S., et al. 1999, AJ, 118, 488
Corradi, R. L. M., García-Rojas, J., Jones, D., & Rodríguez-Gil, P. 2015, ApJ, 803, 99
Creevey, O. L., Sordo, R., Pailler, F., et al. 2022, arXiv e-prints, arXiv:2206.05864
Cropper, M., Katz, D., Sartoretti, P., et al. 2018, A&A, 616, A5
De Angeli, F., Weiler, M., Montegriffo, P., et al. 2022, arXiv e-prints, arXiv:2206.06143
De Marco, O., Passy, J.-C., Frew, D. J., Moe, M., & Jacoby, G. H. 2013, MNRAS, 428, 2118
Douchin, D., De Marco, O., Frew, D. J., et al. 2015, MNRAS, 448, 3132
Durand, S., Acker, A., & Zijlstra, A. 1998, A&AS, 132, 13
Exter, K. M., Pollacco, D. L., & Bell, S. A. 2003, MNRAS, 341, 1349
Eyer, L., Audard, M., Holl, B., et al. 2022, arXiv e-prints, arXiv:2206.06416
Hillwig, T. C., Bond, H. E., Frew, D. J., Schaub, S. C., & Bodman, E. H. L. 2016, AJ, 152, 34
Iłkiewicz, K., & Mikołajewska, J. 2017, A&A, 606, A110
Jacoby, G. H., Hillwig, T. C., Jones, D., et al. 2021, MNRAS, 506, 5223
Jones, D., Lloyd, M., Santander-García, M., et al. 2010, MNRAS, 408, 2312
Katz, D., Sartoretti, P., Guerrier, A., et al. 2022, arXiv e-prints, arXiv:2206.05902
Miszalski, B., Acker, A., Moffat, A. F. J., Parker, Q. A., & Udalski, A. 2009, A&A, 496, 813
Miszalski, B., Corradi, R. L. M., Boffin, H. M. J., et al. 2011a, MNRAS, 413, 1264
Miszalski, B., Jones, D., Rodríguez-Gil, P., et al. 2011b, A&A, 531, A158
Munday, J., Jones, D., García-Rojas, J., et al. 2020, MNRAS, 498, 6005
Parker, Q. A., Bojičić, I. S., & Frew, D. J. 2016, Journal of Physics Conference Series, 728, 032008
Pollacco, D. L., & Bell, S. A. 1994, MNRAS, 267, 452
Richer, M. G., Suárez, G., López, J. A., & García Díaz, M. T. 2017, AJ, 153, 140
Schneider, S. E., & Terzian, Y. 1983, ApJ, 274, L61
Wesson, R., Jones, D., García-Rojas, J., Boffin, H. M. J., & Corradi, R. L. M. 2018, MNRAS, 480, 4589
[]
[ "Cooling with fermionic reservoir" ]
[ "Gabriella G Damas \nInstituto de Física\nUniversidade Federal de Goiás\n74.001-970Goiânia -GOBrazil\n", "Rogério J De Assis \nInstituto de Física\nUniversidade Federal de Goiás\n74.001-970Goiânia -GOBrazil\n\nDepartamento de Física\nUniversidade Federal de São Carlos\n13.565-905São Carlos -SPBrazil\n", "Norton G De Almeida \nInstituto de Física\nUniversidade Federal de Goiás\n74.001-970Goiânia -GOBrazil\n" ]
[ "Instituto de Física\nUniversidade Federal de Goiás\n74.001-970Goiânia -GOBrazil", "Departamento de Física\nUniversidade Federal de São Carlos\n13.565-905São Carlos -SPBrazil" ]
[]
Recently, much emphasis has been given to genuinely quantum reservoirs generically called fermionic reservoirs. These reservoirs are characterized by having finite levels, as opposed to bosonic reservoirs, which have infinite levels that can be populated via an increase in temperature. Given this, some studies are being carried out to explore the advantages of using quantum reservoirs, in particular in the operation of heat machines. In this work, we make a comparative study of a thermal refrigerator operating in the presence of either a bosonic or a fermionic reservoir, and we show that fermionic reservoirs have advantages over bosonic ones. We propose an explanation for the origin of these advantages by analyzing both the asymptotic behavior of the states of the qubits and the exchange rates between these qubits and their respective reservoirs.
10.1103/physreve.107.034128
[ "https://export.arxiv.org/pdf/2207.08862v1.pdf" ]
257,620,145
2207.08862
26354789b6a5a2679db53866d351cdf54a94ecb6
Cooling with fermionic reservoir

Gabriella G. Damas (Instituto de Física, Universidade Federal de Goiás, 74.001-970, Goiânia - GO, Brazil)
Rogério J. De Assis (Instituto de Física, Universidade Federal de Goiás, 74.001-970, Goiânia - GO, Brazil; Departamento de Física, Universidade Federal de São Carlos, 13.565-905, São Carlos - SP, Brazil)
Norton G. De Almeida (Instituto de Física, Universidade Federal de Goiás, 74.001-970, Goiânia - GO, Brazil)

PACS numbers: 05.30.-d, 05.20.-y, 05.70.Ln

Abstract: Recently, much emphasis has been given to genuinely quantum reservoirs generically called fermionic reservoirs. These reservoirs are characterized by having finite levels, as opposed to bosonic reservoirs, which have infinite levels that can be populated via an increase in temperature. Given this, some studies are being carried out to explore the advantages of using quantum reservoirs, in particular in the operation of heat machines. In this work, we make a comparative study of a thermal refrigerator operating in the presence of either a bosonic or a fermionic reservoir, and we show that fermionic reservoirs have advantages over bosonic ones. We propose an explanation for the origin of these advantages by analyzing both the asymptotic behavior of the states of the qubits and the exchange rates between these qubits and their respective reservoirs.

I. INTRODUCTION

With the development of quantum thermodynamics [1-3], there is increasing interest in the realization of thermal devices operating at the quantum limit [4-10]. In particular, heat engines whose working substance consists of systems with finite energy levels, such as two-level systems, can absorb from or deliver to their surroundings quantities of energy as small as their corresponding energy gaps. In contrast, bosonic working substances, modeled by quantum harmonic oscillators, have infinitely many levels, where higher and higher levels can be populated by increasing temperature.
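The contrast just drawn between the two kinds of substance is captured by their mean occupation numbers, the Bose-Einstein and Fermi-Dirac distributions used throughout the paper (units with k_B = 1, as in the text); a minimal sketch:

```python
import math

# Mean excitation numbers (k_B = 1):
#   bosonic:   n_B = 1 / (exp(E/T) - 1)   -- unbounded as T grows
#   fermionic: n_F = 1 / (exp(E/T) + 1)   -- stays below 0.5 for any T > 0

def n_bose(E, T):
    return 1.0 / (math.exp(E / T) - 1.0)

def n_fermi(E, T):
    return 1.0 / (math.exp(E / T) + 1.0)
```

For E = 1 and T = 1000, for instance, n_B is already near 10^3 while n_F is still just below 1/2, which is the finite-level saturation the paper exploits.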
Comparative studies exploring the difference between a bosonic and a fermionic working substance demonstrate that there are advantages in considering working substances with finite levels for quantum engines [11-14]. In particular, systems with finite energy levels, such as two-level systems, can exhibit stationary states with population inversion [15-17], giving rise to absolute negative temperatures [18, 19]. The population inversion associated with negative temperatures of the system requires thermal reservoirs built with fermionic substances, as experimentally demonstrated in [15, 16, 20]. This inverted-population effect has been explored in several works [20-22], with remarkable impact on the efficiency of heat engines, as experimentally shown in Refs. [17, 20]. On the other hand, fermionic reservoirs [11, 23-29] built from two-level substances with energy gap E do not necessarily present population inversion, in which case the temperature T remains positive and the average excitation number is given by the Fermi-Dirac distribution $\bar{n} = 1/(e^{E/T} + 1) < 0.5$, in contrast with bosonic substances, for which the average excitation number is given by the Bose-Einstein distribution $\bar{n} = 1/(e^{E/T} - 1)$. Under the condition of positive temperature, one could imagine that there would be no gain in considering fermionic reservoirs. However, in this work we present a case study in which a refrigerator built with two-level substances shows advantages when operating in a fermionic environment as compared to a bosonic one, with both environments at positive temperatures. As the operating conditions are kept the same for both environments, our results emphasize that the advantage presented stems from the quantum nature of the fermionic reservoir.

II. MODEL

In the present work, we consider a self-contained quantum refrigerator (SCQR) composed of three interacting qubits, each in contact with a specific thermal reservoir.
This SCQR was first proposed in Ref. [25], in which the authors took into account only bosonic reservoirs. Recently, we investigated this SCQR operating with one of its reservoirs being a fermionic one at a negative temperature; see Ref. [30]. Here, as in Ref. [25], we consider the case in which qubits 1, 2, and 3 interact, respectively, with a thermal reservoir at a cold temperature T_c > 0, a thermal reservoir at a "room" temperature T_r > 0, and a thermal reservoir at a hot temperature T_h > 0; see the schematic shown in Fig. 1. The device works as a refrigerator when T_1 - T_c < 0, where T_1 is the temperature of qubit 1; in this case, heat flows from the cold reservoir to qubit 1. However, considering the asymptotic state, this only occurs if the relations E_3 = E_2 - E_1, with E_k the energy gap of qubit k (k = 1, 2, 3), and T_c < T_r < T_h are satisfied [25].

Figure 1. Schematic representation of the SCQR refrigerator and its respective thermal reservoirs. The SCQR is composed of three interacting qubits having energy gaps E_1, E_2, and E_3, each in contact with its respective reservoir. Here, T_h is the temperature of the hot reservoir, T_r is the temperature of the "room" reservoir, T_c is the temperature of the cold reservoir, and g is the coupling constant between the qubits.

We assume the weak-coupling limit and the Markovian regime governing the dynamics of the SCQR, such that the master equation is [26, 31]

\frac{d\rho}{dt} = -i\,[H_0 + H_{\mathrm{int}}, \rho]
  + \sum_{k=1}^{3} \Gamma^{\downarrow}_{B(F),k} \left( \sigma_{-,k}\,\rho\,\sigma_{+,k} - \frac{1}{2}\{\sigma_{+,k}\sigma_{-,k}, \rho\} \right)
  + \sum_{k=1}^{3} \Gamma^{\uparrow}_{B(F),k} \left( \sigma_{+,k}\,\rho\,\sigma_{-,k} - \frac{1}{2}\{\sigma_{-,k}\sigma_{+,k}, \rho\} \right).
(1)

Here, the free-qubit Hamiltonian H_0 and the three-body interaction Hamiltonian H_int are given by

H_0 = \frac{1}{2} E_1 \sigma_{z,1} + \frac{1}{2} E_2 \sigma_{z,2} + \frac{1}{2} E_3 \sigma_{z,3} \qquad (2)

and

H_{\mathrm{int}} = g \left( \sigma_{-,1}\sigma_{+,2}\sigma_{-,3} + \sigma_{+,1}\sigma_{-,2}\sigma_{+,3} \right), \qquad (3)

where \sigma_{z,k} is the z Pauli operator for qubit k, g is the coupling constant, and \sigma_{-,k} (\sigma_{+,k}) is the lowering (raising) Pauli operator for qubit k. Note that Eq. (1) governs the dynamics of both bosonic [31] and fermionic [23, 24, 26] thermal reservoirs: if qubit k interacts with a bosonic (fermionic) thermal reservoir, \Gamma^{\downarrow}_{B,k} = \gamma_k (1 + n_{B,k}) (\Gamma^{\downarrow}_{F,k} = \gamma_k (1 - n_{F,k})) and \Gamma^{\uparrow}_{B,k} = \gamma_k n_{B,k} (\Gamma^{\uparrow}_{F,k} = \gamma_k n_{F,k}), where \gamma_k is the dissipation rate and n_{B,k} = 1/(e^{E_k/T_{\phi_k}} - 1) (n_{F,k} = 1/(e^{E_k/T_{\phi_k}} + 1)) is the average excitation number, with \phi_1 = c, \phi_2 = r, and \phi_3 = h. Since for bosons \Gamma^{\downarrow}_{B,k} = \gamma_k (1 + n_{B,k}) and \Gamma^{\uparrow}_{B,k} = \gamma_k n_{B,k}, while for fermions the average excitation number is limited to 0.5 at positive temperatures, \Gamma^{\downarrow}_{B,k} and \Gamma^{\uparrow}_{B,k} are always greater than \Gamma^{\downarrow}_{F,k} and \Gamma^{\uparrow}_{F,k}, respectively. To obtain the asymptotic state of Eq. (1) we used the quantum optics toolbox [32, 33].

III. RESULTS

To compare the SCQR operating in the different configurations involving bosonic and fermionic reservoirs, we start by fixing the energies E_1 = 1, E_2 = 5, and E_3 = 4; the temperatures T_c = 1, 1.5, 2 and T_r = 2; the coupling constant g = 10^{-2}; and the dissipation rates \gamma_1 = \gamma_2 = \gamma_3 = g. Next, we let T_h vary from 10^{-1} to 10^{3}. Fig. 2(a) shows the temperature difference T_1 - T_c versus T_h (on a logarithmic scale) for the SCQR working in a bosonic environment for the cold temperatures T_c = 1 (dotted green line), T_c = 1.5 (dashed red line), and T_c = 2 (solid blue line). As said before, cooling occurs when T_1 - T_c < 0. Similarly, Fig. 2(b) shows T_1 - T_c as a function of T_h for the same cold temperatures, but now the SCQR is surrounded by fermionic reservoirs. In Fig.
2(a), T_1 − T_c decreases to a minimum value and then increases until it stabilizes at a negative value close to zero, while, in Fig. 2(b), T_1 − T_c stabilizes at its minimum value. Thus, the SCQR in the fermionic environment has an advantage over the bosonic one, as its efficiency in cooling qubit 1 does not decrease at higher values of T_h. Furthermore, under fermionic reservoirs, qubit 1 reaches lower minimum temperature values than under bosonic reservoirs, as can be seen from the difference T_1 − T_c, which is more negative for the fermionic environment (compare Figs. 2(a) and 2(b)). According to our numerical simulations, when considering the bosonic environment, the minimum values for T_1 are T_1 = 0.95 (when T_c = 1), T_1 = 1.41 (when T_c = 1.5), and T_1 = 1.87 (when T_c = 2). On the other hand, when considering three fermionic reservoirs, since the values of T_1 continue to decrease with increasing T_h, we take the minimum value of T_1 at T_h = 100. These minimum values are T_1 = 0.82 (when T_c = 1), T_1 = 1.09 (when T_c = 1.5), and T_1 = 1.29 (when T_c = 2). Taking T_c as a reference, we can then calculate the cooling percentage (|T_1 − T_c|/T_c) × 100 to compare how much the fermionic and bosonic reservoirs cool qubit 1 (see Tab. I, where 3B (3F) stands for three bosonic (fermionic) reservoirs). Tab. I shows a significant difference in the cooling percentage for the two sets of reservoirs: it is always higher when using three fermionic reservoirs, clearly showing that the fermionic environment is far more efficient in decreasing the temperature T_1 than the bosonic one. Moreover, this percentage improves with the reference temperature T_c: for T_c = 2, it reaches more than four times the value obtained using only bosonic reservoirs.
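The cooling percentages reported in Tab. I follow directly from the minimum temperatures quoted above. The snippet below is not part of the paper; it is a quick pure-Python check where the T_1 minima are the values given in the text and everything else is arithmetic:

```python
def cooling_percentage(T1_min, Tc):
    # (|T1 - Tc| / Tc) * 100, with the cold temperature Tc as reference
    return abs(T1_min - Tc) / Tc * 100.0

# minimum temperatures of qubit 1 quoted in the text
T1_min_3B = {1.0: 0.95, 1.5: 1.41, 2.0: 1.87}  # three bosonic reservoirs
T1_min_3F = {1.0: 0.82, 1.5: 1.09, 2.0: 1.29}  # three fermionic reservoirs

for Tc in (1.0, 1.5, 2.0):
    pb = cooling_percentage(T1_min_3B[Tc], Tc)
    pf = cooling_percentage(T1_min_3F[Tc], Tc)
    print(f"Tc = {Tc}: 3B {pb:.1f}%, 3F {pf:.1f}%")

# for Tc = 2 the fermionic environment cools more than four times better
assert cooling_percentage(1.29, 2.0) > 4 * cooling_percentage(1.87, 2.0)
```

For T_c = 2 this gives 6.5% (3B) against 35.5% (3F), consistent with the "more than four times" statement in the text.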
For lower values of the cold temperature T_c, the percentage difference decreases, but with fermionic reservoirs the cooling at T_c = 1 is still more than double that obtained with bosonic reservoirs alone. It is worth mentioning that, by fixing the SCQR parameters as we did, there is a limit to cooling qubit 1. As we found numerically, the lowest cooling percentage reached by qubit 1, irrespective of the type of reservoir used, occurs when T_c ≈ 0.48. For temperatures lower than T_c ≈ 0.48 we have T_1 − T_c > 0, meaning that the SCQR no longer works. The cooling percentage also decreases as T_c approaches 0.48 for both types of reservoirs; however, the cooling percentage obtained with fermionic reservoirs remains higher, as shown in Tab. II. As we have seen, for fixed parameters we cannot cool qubit 1 down to absolute zero. However, there is a strategy to keep cooling toward absolute zero, namely to isolate qubit 1 from its environment. This condition, obtained by imposing γ_1 → 0, or equivalently Γ↓_{B(F),1} → 0 and Γ↑_{B(F),1} → 0 in Eq. (1), allows us to obtain the following analytical solution for the temperature of qubit 1:

T_1 = \frac{T_c}{1 + \frac{E_3}{E_1} \left( 1 - \frac{T_c}{T_h} \right)}, \quad (4)

from which we can see that, if we let E_3/E_1 → ∞, then T_1 → 0. This result, obtained in Ref. [25], shows that there is no fundamental limit to cooling down to absolute zero, provided we can perfectly isolate qubit 1. So far we have considered fermionic reservoirs for all qubits in the SCQR. Other possibilities include combinations of bosonic and fermionic reservoirs. In fact, considering the fermionic reservoir as a quantum resource, it may be interesting to consider cases where only one or two fermionic reservoirs are used. For this, it is necessary to specify which qubit each fermionic reservoir is associated with.
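Before moving on, Eq. (4) for the perfectly isolated qubit 1 is easy to probe numerically. A minimal pure-Python sketch (the gaps E_1 = 1, E_3 = 4 are the paper's values; the choice T_h = 100 is illustrative) evaluates T_1 = T_c / [1 + (E_3/E_1)(1 − T_c/T_h)] and checks the E_3/E_1 → ∞ limit:

```python
def T1_isolated(Tc, Th, E1, E3):
    # Eq. (4): temperature of qubit 1 when decoupled from its own reservoir
    return Tc / (1.0 + (E3 / E1) * (1.0 - Tc / Th))

# paper's gaps E1 = 1, E3 = 4, with Tc = 1 and an illustrative Th = 100
assert abs(T1_isolated(1.0, 100.0, 1.0, 4.0) - 1.0 / 4.96) < 1e-12  # ~0.202

# increasing the gap ratio E3/E1 pushes T1 monotonically toward absolute zero
temps = [T1_isolated(1.0, 100.0, 1.0, E3) for E3 in (4.0, 40.0, 400.0, 4000.0)]
assert all(hi > lo for hi, lo in zip(temps, temps[1:]))
assert temps[-1] < 1e-3
```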
Let us use a notation in which B (F) denotes a bosonic (fermionic) reservoir and the position in the sequence indicates which qubit that reservoir is attached to. For example, the sequence BFB indicates that qubit 1 is subjected to a bosonic reservoir, qubit 2 to a fermionic reservoir, and qubit 3 to a bosonic reservoir; likewise, BFF means qubit 1 bound to a bosonic reservoir and qubits 2 and 3 bound to fermionic reservoirs. Next, we investigated all configurations numerically and grouped the results in Tab. III, ordering from highest to lowest cooling percentage and following the same procedure as in the previous tables, i.e., taking the minimum value of T_1. Interestingly, and contrary to what one might think, the best case does not occur when three fermionic reservoirs are used. As Tab. III shows, the greatest cooling range occurs for the FBF case, i.e., when only qubits 1 and 3 are bound to fermionic reservoirs. Although the difference between the FBF and FFF configurations is small, it is still notable that the cooling percentage is higher when only two fermionic reservoirs are used instead of three.

[Table III caption: The highest percentage of cooling occurs for the FBF configuration, meaning that qubit 1 is bound to a fermionic reservoir, qubit 2 to a bosonic reservoir, and qubit 3 to another fermionic reservoir. The FBF configuration presents the smallest exchange rates for qubit 1, which explains why it is more effective for cooling qubit 1 (see main text). The temperatures used are T_c = 2, T_r = 2, and T_h varies so as to minimize T_1, since the SCQR behavior changes as shown in Figs. 2(a) and 2(b). The same pattern occurs for other values of T_c.]

In this regard, note that the FFB and BFF sequences, although each also contains only two fermionic reservoirs and one bosonic reservoir, have a lower cooling percentage than the FBF sequence.
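The exchange-rate argument invoked here can be made quantitative with the rate definitions given below Eq. (1). Two facts matter: (i) both reservoir types enforce the same detailed-balance ratio Γ↓/Γ↑ = e^{E/T}, so either one alone would thermalize a qubit to its reservoir temperature; (ii) the fermionic rates are uniformly smaller (since n_F < 1/2), i.e. a fermionic reservoir couples the qubit more weakly and is closer to the perfect-isolation limit. A minimal sketch, not part of the paper, with parameter values taken from the Tab. IV setup (E_1 = 1, E_2 = 5, E_3 = 4, T_c = T_r = 2, T_h = 10):

```python
import math

def exchange_rates(gamma, E, T, fermionic=False):
    # bosonic: (gamma*(1+n_B), gamma*n_B); fermionic: (gamma*(1-n_F), gamma*n_F)
    x = math.exp(E / T)
    n = 1.0 / (x + 1.0) if fermionic else 1.0 / (x - 1.0)
    down = gamma * (1.0 - n) if fermionic else gamma * (1.0 + n)
    up = gamma * n
    return down, up

g = 1e-2
qubits = [(1.0, 2.0), (5.0, 2.0), (4.0, 10.0)]  # (E_k, reservoir temperature)
for k, (E, T) in enumerate(qubits, start=1):
    bd, bu = exchange_rates(g, E, T)
    fd, fu = exchange_rates(g, E, T, fermionic=True)
    # same detailed-balance ratio, hence the same local equilibrium ...
    assert abs(bd / bu - math.exp(E / T)) < 1e-9
    assert abs(fd / fu - math.exp(E / T)) < 1e-9
    # ... but strictly smaller fermionic rates (weaker effective coupling)
    assert bd > fd and bu > fu
```

The difference between the two environments is thus not where a single qubit would equilibrate, but how strongly it is pinned to its reservoir while competing with the three-body interaction.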
For instance, for T_c = 2, the cooling percentage of FBF is 29.72%, which is higher than that of the sequences FFB (25.09%) and BFF (10.33%). The explanation for this fact is given below. Remembering that the lowest temperature for qubit 1, which is the qubit we want to cool, occurs when it is completely isolated, it is to be expected that the cooling will be most effective when the exchange rates Γ↑_{F(B),1} and Γ↓_{F(B),1} of qubit 1 with its reservoir are as small as possible, with perfect insulation being the best case. In Tab. IV we show the exchange rates Γ↑_{F(B),k} and Γ↓_{F(B),k}, k = 1, 2, 3, of the k-th qubit for the temperatures T_c = 2, T_r = 2, and T_h = 10. From Tab. IV we see that the rates Γ↑_{F(B),1} and Γ↓_{F(B),1} are the smallest whenever the sequence starts with F, and they remain the smallest irrespective of the temperatures used, according to our simulations. Another relevant point, already shown in Fig. 2(a), is that bosonic reservoirs have the disadvantage of making the cooling non-monotonic, and therefore less effective. Thus, obtaining better cooling percentages requires that the last reservoir be fermionic, as we also verified in our numerical simulations. The role of the nature of the second reservoir and its relevance to the cooling percentage is quite complex. In our numerical simulations, we identified that, whenever one bosonic and two fermionic reservoirs are used, the best cooling percentage occurs for the sequence FBF (see Tab. III). Regarding other configurations with lower cooling percentages but still involving two fermionic reservoirs and one bosonic one, note for example that the FFB sequence may have a higher or lower cooling percentage than the BFF sequence depending on whether the temperature T_c is higher or lower than unity.

IV.
CONCLUSION

Recent studies on heat machines have used quantum reservoirs as a resource to obtain better performances both in engines and in refrigerators [12,21,22,34]. For example, fermionic reservoirs have been explored in previous works, especially in their purely quantum characteristic of presenting population inversion [17,20], which, in turn, is associated with negative effective temperatures [3,18]. Here we explore the quantum nature of fermionic reservoirs without taking population inversion into account, so that we remain in the domain of positive temperatures. Using a qubit-based refrigerator model proposed in Ref. [25], we show that, once the operating parameters of the refrigerator are fixed, the use of fermionic reservoirs yields better results, with respect to the cooling capacity, than the use of bosonic reservoirs. We verified, for example, that when the qubit to be cooled cannot be perfectly insulated, using only fermionic reservoirs allows one to reach lower temperatures than using only bosonic reservoirs. In addition, contrary to what might be thought, the cooling can be more effective, in the sense of obtaining a higher cooling percentage, when only two fermionic reservoirs are used instead of three. We show that this somewhat unexpected result is explained by the exchange rates between qubit 1 and its reservoir, as well as by the behavior of the asymptotic cooling of qubit 1 when subjected to different types of reservoirs. In summary, when the condition of perfect insulation cannot be reached, our results unequivocally demonstrate the superiority of the fermionic reservoir in the process of cooling qubits to the lowest possible temperatures.

We acknowledge financial support from the Brazilian agencies: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), financial code 001, National Council for Scientific and Technological Development (CNPq), grants 311612/2021-0 and 301500/2018-5, São Paulo Research Foundation (FAPESP), grant 2021/04672-0, and Goiás State Research Support Foundation (FAPEG). This work was performed as part of the Brazilian National Institute of Science and Technology (INCT) for Quantum Information, grant 465469/2014-0.

Figure 2. Temperature difference T_1 − T_c versus T_h for (a) three bosonic and (b) three fermionic reservoirs, considering three different values of T_c: T_c = 1 (dotted green line), T_c = 1.5 (dashed red line), and T_c = 2 (solid blue line). Refrigeration occurs for T_1 − T_c < 0. Note the difference in behavior in the two figures: while in (a) the temperature T_1 reaches a minimum and then starts to increase, in (b) T_1 decreases monotonically, practically stabilizing for sufficiently high T_h, thus indicating that the lowest temperatures reached by qubit 1 occur for fermionic reservoirs.

Table IV. Exchange rates Γ↓_{B(F),k} and Γ↑_{B(F),k} of qubit k with its respective reservoir. From this table, we see that the lowest (highest) exchange rates occur for fermionic (bosonic) reservoirs.

References

[1] J. Gemmer, M. Michel, and G. Mahler, Quantum Thermodynamics: Emergence of Thermodynamic Behavior Within Composite Quantum Systems, Lecture Notes in Physics (Springer Berlin Heidelberg, 2004).
[2] F. Binder, L. Correa, C. Gogolin, J. Anders, and G.
Adesso, Thermodynamics in the Quantum Regime: Fundamental Aspects and New Directions, Fundamental Theories of Physics (Springer International Publishing, 2019).
[3] P. Strasberg and A. Winter, PRX Quantum 2, 030202 (2021).
[4] O. Abah, J. Roßnagel, G. Jacob, S. Deffner, F. Schmidt-Kaler, K. Singer, and E. Lutz, Phys. Rev. Lett. 109, 203006 (2012).
[5] R. Alicki, Open Systems & Information Dynamics 21, 1440002 (2014).
[6] A. Alecce, F. Galve, N. L. Gullo, L. Dell'Anna, F. Plastina, and R. Zambrini, New Journal of Physics 17, 075007 (2015).
[7] J. Roßnagel, S. T. Dawkins, K. N. Tolazzi, O. Abah, E. Lutz, F. Schmidt-Kaler, and K. Singer, Science 352, 325 (2016).
[8] P. A. Camati, J. F. G. Santos, and R. M. Serra, Phys. Rev. A 99, 062103 (2019).
[9] P. A. Erdman, V. Cavina, R. Fazio, F. Taddei, and V. Giovannetti, New Journal of Physics 21, 103049 (2019).
[10] J.-F. Chen, C.-P. Sun, and H. Dong, Phys. Rev. E 100, 062140 (2019).
[11] M. J. Henrich, F. Rempp, and G. Mahler, The European Physical Journal Special Topics 151, 157 (2007).
[12] R. J. de Assis, J. S. Sales, J. A. R. da Cunha, and N. G. de Almeida, Phys. Rev. E 102, 052131 (2020).
[13] U. Mendes, J. Sales, and N. Almeida, Journal of Physics B: Atomic, Molecular and Optical Physics (2021).
[14] A. El Makouri, A. Slaoui, and M. Daoud, arXiv: Quantum Physics (2022).
[15] L. D. Carr, Science 339, 42 (2013).
[16] S. Braun, J. P. Ronzheimer, M. Schreiber, S. S. Hodgman, T. Rom, I. Bloch, and U. Schneider, Science 339, 52 (2013).
[17] R. J. de Assis, C. J. Villas-Boas, and N. G. de Almeida, Journal of Physics B: Atomic, Molecular and Optical Physics 52, 065501 (2019).
[18] E. Abraham and O. Penrose, Phys. Rev. E 95, 012125 (2017).
[19] H. Struchtrup, Phys. Rev. Lett. 120, 250602 (2018).
[20] T. M. Mendonça, A. M. Souza, R. J. de Assis, N. G. de Almeida, R. S. Sarthour, I. S. Oliveira, and C. J. Villas-Boas, Phys. Rev. Research 2, 043419 (2020).
[21] P. T. Landsberg, R. J. Tykodi, and A. M. Tremblay, Journal of Physics A: Mathematical and General 13, 1063 (1980).
[22] J.-Y. Xi and H.-T. Quan, Communications in Theoretical Physics 68, 347 (2017).
[23] E. Artacho and L. M. Falicov, Phys. Rev. B 47, 1190 (1993).
[24] G. A. Álvarez, A. Ajoy, X. Peng, and D. Suter, Phys. Rev. A 82, 042306 (2010).
[25] N. Linden, S. Popescu, and P. Skrzypczyk, Phys. Rev. Lett. 105, 130401 (2010).
[26] P. Li and B. Jia, Phys. Rev. E 83, 062104 (2011).
[27] A. Nüßeler, I. Dhand, S. F. Huelga, and M. B. Plenio, Phys. Rev. B 101, 155134 (2020).
[28] L. Del Re, B. Rost, A. F. Kemper, and J. K. Freericks, Phys. Rev. B 102, 125112 (2020).
[29] V. A. Mikhailov and N. V. Troshkin, arXiv: Quantum Physics (2020).
[30] G. G. Damas, R. J. de Assis, and N. G. de Almeida, arXiv: Quantum Physics (2022).
[31] H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, 2002).
[32] J. Johansson, P. Nation, and F. Nori, Computer Physics Communications 183, 1760 (2012).
[33] J. Johansson, P. Nation, and F. Nori, Computer Physics Communications 184, 1234 (2013).
[34] R. J. de Assis, T. M. de Mendonça, C. J. Villas-Boas, A. M. de Souza, R. S. Sarthour, I. S. Oliveira, and N. G. de Almeida, Phys. Rev. Lett. 122, 240602 (2019).
[]
[ "Forecasting constraints on deviations from general relativity in f (Q) gravity with standard sirens" ]
[ "Rocco D'Agostino", "Rafael C. Nunes" ]
[ "Scuola Superiore Meridionale (SSM), Largo S. Marcellino 10, I-80138 Napoli, Italy", "Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Napoli, Via Cinthia 9, I-80126 Napoli, Italy", "Instituto de Física, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS 91501-970, Brazil", "Divisão de Astrofísica, Instituto Nacional de Pesquisas Espaciais, Avenida dos Astronautas 1758, 12227-010, São José dos Campos, SP, Brazil" ]
[]
In this work, we explore how modified gravity theories based on the non-metricity scalar, known as f(Q) gravity, affect the propagation of gravitational waves from the inspiral of binary systems. We discuss forecast constraints on f(Q) gravity by considering standard siren events in two contexts: i) simulated sources of gravitational waves as black hole-neutron star binary systems, emitting in the frequency band of the third-generation detector represented by the Einstein Telescope (ET); ii) three standard siren mock catalogs based on the merger of massive black hole binaries that are expected to be observed in the operating frequency band of the Laser Interferometer Space Antenna (LISA). We find that, within the ET sensitivity, in combination with supernova and cosmic chronometer data, it will be possible to test deviations from general relativity at < 3% accuracy in the redshift range 0 < z < 5, while the main free parameter of the theory is globally constrained at 1.6% accuracy within the same range. In light of LISA's forecasts, combined with supernova and cosmic chronometer data, in the best scenario we find that the main free parameter of the theory will be constrained at 1.6% accuracy up to high redshifts. Therefore, we conclude that future gravitational wave observations by ET and LISA will provide a unique way to test, with good accuracy, the nature of gravity up to very large cosmic distances.
10.1103/physrevd.106.124053
[ "https://export.arxiv.org/pdf/2210.11935v2.pdf" ]
253,080,768
2210.11935
25c582a4caf8a3a8e1615e19e805a302eca66da8
Forecasting constraints on deviations from general relativity in f(Q) gravity with standard sirens

I. INTRODUCTION

One of the greatest challenges in contemporary physics is to provide a suitable description of the nature of the dark sector of the Universe, namely, dark matter and dark energy (DE) [1-3], which together constitute approximately 95% of the energy density of the cosmic content. The simplest possible explanation for DE, namely the cosmological constant Λ, relates its nature to the vacuum energy density. Due to its great success in explaining the majority of the observations, the Lambda-Cold Dark Matter (ΛCDM) model is considered the standard model of cosmology. Nonetheless, the cosmological constant leads to serious problems from the theoretical point of view [4-6]. Alternatively to the Λ term, one can consider extra degrees of freedom with a gravitational origin, i.e., arising from a gravitational modification that possesses general relativity (GR) as a particular limit. The modified gravity (MG) scenarios, in fact, may allow for extensions of the ΛCDM model and can drive the accelerated expansion of the Universe at late times, as well as explain various observations at the cosmological and astrophysical levels (see [7-10] for a review). From an observational perspective, looking for new astrophysical sources, through a direct manifestation of gravitational effects, can provide rich physical information about the nature of gravity, which should play a key role to probe new (or rule out) MG or DE models.

Gravitational wave (GW) astronomy provides an unprecedented opportunity to test gravitational physics in that direction. Currently, more than 90 coalescing compact binary events have already been observed during the three running stages of the LIGO/VIRGO mission [11]. One of the most promising prospects is the observation of standard siren (SS) events [12,13]. The latter are the GW analog of the astronomical standard candles and might be a powerful tool in view of constraining cosmological parameters through the information encoded in the luminosity distance provided by these events. To date, one event has been observed through a binary neutron star (BNS) merger at z = 0.01, namely the GW170817 event [14,15]. Preliminary cosmological information and the consequences of this observation are important to the understanding of our Universe locally. These observations were used to measure the Hubble constant [16] and also to impose strong constraints on MG/DE scenarios (see [17] for a review).

On the other hand, the detectability rate of SS events with the current LIGO/VIRGO sensitivity is expected to be very low, and it is difficult to reach large cosmic distances. The central importance of GW astronomy is testified by the plans for the construction of several GW observatories in the future, such as the underground-based interferometers ET [18] and Cosmic Explorer [19], and space-based interferometers such as LISA [20], DECIGO [21] and TianQin [22], among others, to observe GWs in the most diverse frequency bands. The implications of cosmological studies using SS events have motivated focused studies on the nature of DE, MG, dark matter, and several other fundamental questions in modern cosmology. Looking through the geometrical character of gravity, it is pertinent to explore in which equivalent manners gravity can be geometrized. In fact, besides curvature, the other two fundamental quantities associated with the connection of a metric space are torsion and non-metricity [51].
Among several viable candidates for MG theories, it has been proposed to construct scenarios where the gravitational interaction is mediated by non-metricity, while curvature and torsion are vanishing [51-54]. These classes of models are known as f(Q) gravity, where Q is the non-metricity scalar. This approach could be important to describe gravity at a fundamental level because gravity can be dealt with as a gauge theory that does not require a priori the validity of the Equivalence Principle. In the f(Q) gravity context, the main dynamical equations in the presence of matter have been derived in [55]. From this study, modifications in the gravity sector emerge with respect to the ΛCDM model. Furthermore, observational constraints on f(Q) gravity have been obtained using different observational probes for several parameterizations of the f(Q) function [56-69]. The aim of this work is to obtain forecast constraints on f(Q) gravity in light of three mock SS catalogs based on the merger of massive black hole binaries that are expected to be observed in the LISA operating frequency band, as well as from a mock SS catalog of black hole-neutron star mergers within the sensitivity predicted for the ET mission. In [61], a study was carried out to constrain f(Q) gravity through SS events. However, the present work differs from that one in two main aspects. Firstly, we here estimate deviations from GR by means of a different parameterization. Our choice, indeed, is based on a robust model-independent approach that minimizes possible a priori biases towards a particular f(Q) cosmological scenario. As a result, contrary to the aforementioned work, we do not assume a ΛCDM background evolution. Secondly, with regard to the ET perspective, we here use a mock catalog of black hole-neutron star mergers, from which we simulate detections up to redshift z = 5. This paper is structured as follows. In Sec.
II, we introduce the f(Q) gravity framework and specify our theoretical setup. In Sec. III, we present the datasets and the methodology used in our study. In Sec. IV, we show the results of our analysis and discuss the main physical consequences of our findings. Finally, in Sec. V, we outline our final considerations and perspectives.

II. f(Q) GRAVITY AND COSMOLOGY

A fruitful way to obtain new hints on cosmic acceleration and, consequently, test the underlying gravitational theory, is to consider a different geometrical approach with respect to the Riemannian formulation. Specifically, in the present study, we shall explore the features of non-metricity at the cosmological level. For this purpose, we recall the most general form of the affine connection [70]:

\Gamma^\lambda{}_{\mu\nu} = \left\{ {}^{\lambda}_{\mu\nu} \right\} + K^\lambda{}_{\mu\nu} + L^\lambda{}_{\mu\nu} , \quad (1)

where \left\{ {}^{\lambda}_{\mu\nu} \right\} is the Levi-Civita connection:

\left\{ {}^{\lambda}_{\mu\nu} \right\} \equiv \frac{1}{2} g^{\lambda\beta} \left( \partial_\mu g_{\beta\nu} + \partial_\nu g_{\beta\mu} - \partial_\beta g_{\mu\nu} \right) , \quad (2)

with g_{\mu\nu} being the metric tensor. The last two terms of Eq. (1) are the contortion and disformation tensors, respectively:

K^\lambda{}_{\mu\nu} \equiv \frac{1}{2} g^{\lambda\beta} \left( T_{\mu\beta\nu} + T_{\nu\beta\mu} + T_{\beta\mu\nu} \right) , \quad (3)

L^\lambda{}_{\mu\nu} \equiv \frac{1}{2} g^{\lambda\beta} \left( -Q_{\mu\beta\nu} - Q_{\nu\beta\mu} + Q_{\beta\mu\nu} \right) , \quad (4)

where T^\lambda{}_{\mu\nu} \equiv \Gamma^\lambda{}_{\mu\nu} - \Gamma^\lambda{}_{\nu\mu} is the torsion tensor, while the non-metricity tensor reads

Q_{\rho\mu\nu} \equiv \nabla_\rho g_{\mu\nu} = \partial_\rho g_{\mu\nu} - \Gamma^\beta{}_{\rho\mu} g_{\beta\nu} - \Gamma^\beta{}_{\rho\nu} g_{\mu\beta} . \quad (5)

Therefore, the metric-affine spacetime is specified by the choice of the connection. In our study, we assume that geometry is provided by non-metricity, whereas torsion and curvature are both zero. Two independent traces can be associated with the non-metricity tensor depending on the contraction order, namely Q_\mu = Q_\mu{}^\alpha{}_\alpha and \tilde{Q}_\mu = Q^\alpha{}_{\mu\alpha}. It follows that the non-metricity scalar can be expressed as [71]

Q = -\frac{1}{4} Q_{\alpha\beta\mu} Q^{\alpha\beta\mu} + \frac{1}{2} Q_{\alpha\beta\mu} Q^{\beta\mu\alpha} + \frac{1}{4} Q_\alpha Q^\alpha - \frac{1}{2} Q_\alpha \tilde{Q}^\alpha .
(6)

As for the cases of curvature-free or torsionless scenarios, one may consider theories of gravity that are based on a generic function of the non-metricity scalar, the so-called f(Q) theories, whose action is given by

S = \int d^4x \sqrt{-g} \left[ \frac{1}{2} f(Q) + \mathcal{L}_m \right] , \quad (7)

where \mathcal{L}_m is the matter field Lagrangian, and g is the determinant of g_{\mu\nu}. Notice that, up to a total derivative, the above action and the Einstein-Hilbert one are equivalent for f(Q) = Q. Thus, GR is recovered as soon as the connections are globally vanishing and the non-metricity tensor can be written in terms of the metric only [51,72]. Varying action (7) with respect to the metric provides us with the field equations [55]:

\frac{2}{\sqrt{-g}} \nabla_\alpha \left\{ \sqrt{-g}\, g_{\beta\nu} f_Q \left[ -\frac{1}{2} L^{\alpha\mu\beta} - \frac{1}{8} \left( g^{\alpha\mu} Q^\beta + g^{\alpha\beta} Q^\mu \right) + \frac{1}{4} g^{\mu\beta} \left( Q^\alpha - \tilde{Q}^\alpha \right) \right] \right\} + f_Q \left[ -\frac{1}{2} L^{\mu\alpha\beta} - \frac{1}{8} \left( g^{\mu\alpha} Q^\beta + g^{\mu\beta} Q^\alpha \right) + \frac{1}{4} g^{\alpha\beta} \left( Q^\mu - \tilde{Q}^\mu \right) \right] Q_{\nu\alpha\beta} + \frac{1}{2} \delta^\mu_\nu f = T^\mu{}_\nu , \quad (8)

where T_{\mu\nu} = -\frac{2}{\sqrt{-g}} \frac{\delta (\sqrt{-g} \mathcal{L}_m)}{\delta g^{\mu\nu}} is the energy-momentum tensor, and we have defined f_Q \equiv \partial f / \partial Q.

In order to analyze the cosmological features of f(Q) gravity, let us consider the Friedmann-Lemaître-Robertson-Walker (FLRW) metric with zero spatial curvature:

ds^2 = -dt^2 + a(t)^2 \delta_{ij} dx^i dx^j , \quad (9)

where a(t) is the scale factor as a function of the cosmic time t. To avoid trivial solutions that cannot go beyond GR, we assume the coincident gauge [73,74], where the tangent space and spacetime share the same origin. Under this choice, the modified Friedmann equations take the form

6 H^2 f_Q - \frac{1}{2} f = \rho , \quad (10)

\left( 12 H^2 f_{QQ} + f_Q \right) \dot{H} = -\frac{1}{2} (\rho + p) , \quad (11)

where p and ρ represent, respectively, the total pressure and density of the cosmic fluid. Furthermore, the non-metricity scalar is related to the Hubble parameter, H ≡ \dot{a}/a, through [59]

Q = 6 H^2 . \quad (12)

As we focus our analysis on the late stages of the Universe's evolution, we can safely neglect the radiation contribution.
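As a consistency check on Eq. (10): since Q = 6H², setting f(Q) = Q (so f_Q = 1) gives 6H² − Q/2 = 3H² = ρ, the standard Friedmann equation (in units where 8πG = 1). The short sketch below, not part of the paper, verifies this limit, and the cosmological-constant-like role of α > 0, for the power-law family f(Q) = α + βQⁿ adopted in the next paragraph (numerical values are illustrative, in arbitrary units):

```python
def friedmann_lhs(H, alpha, beta, n):
    # left-hand side of Eq. (10), 6 H^2 f_Q - f/2, with Q = 6 H^2
    Q = 6.0 * H**2
    f = alpha + beta * Q**n
    f_Q = n * beta * Q**(n - 1.0)
    return 6.0 * H**2 * f_Q - 0.5 * f

H = 0.7  # illustrative Hubble rate

# GR limit: alpha = 0, beta = n = 1 reproduces rho = 3 H^2
assert abs(friedmann_lhs(H, 0.0, 1.0, 1.0) - 3.0 * H**2) < 1e-12

# alpha > 0 acts as a cosmological constant: rho = 3 H^2 - alpha/2
alpha = 0.1
assert abs(friedmann_lhs(H, alpha, 1.0, 1.0) - (3.0 * H**2 - alpha / 2.0)) < 1e-12
```

For n ≠ 1 the left-hand side is no longer proportional to H², which is precisely the deviation from GR that the analysis below constrains.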
Also, we assume that the cosmic fluid is entirely made of pressureless matter, thus p = 0 and

    ρ = 3 H₀² Ω_{m0} (1 + z)³ ,    (13)

where z ≡ a⁻¹ − 1 is the redshift, and H₀ and Ω_{m0} are the Hubble constant and the current matter density parameter, respectively². To work out the cosmic dynamics in f(Q) gravity, one needs to specify the non-metricity function. A common approach is to assume a priori the form of f(Q) and then check for possible deviations from GR arising from the resulting dynamics. However, such a procedure may lead to misleading conclusions due to possible biases inherent in the chosen model. These issues might be alleviated by resorting to the cosmographic method [75-78], which has proven to be a powerful tool when applied to DE/MG scenarios [79-83]. In the specific case of f(Q) gravity, we adopt the results obtained in the previous work [59], where the functional form of f(Q) was reconstructed by means of a kinematic model-independent analysis of background low-redshift measurements. Thus, in the present study, we consider the function

    f(Q) = α + β Qⁿ ,    (14)

where α, β and n are treated as free parameters. Besides being suggested directly by observations, this test function allows for a simple test of deviations from GR (ΛCDM), which is recovered for β = 1 = n and α = 0 (α > 0). The extra free parameters with respect to the ΛCDM model also affect the cosmological evolution at the perturbation level, as attested by the effective gravitational constant, G_eff ≡ G/f_Q [55]. In particular, taking into account Eqs. (12) and (14), we find

    G_eff(z)/G = (6 H²(z))^{1−n} / (n β) ,    (15)

for β ≠ 0 ≠ n. The effect induced by the effective gravitational constant on the GW propagation is measured through the quantity [84]

    d_GW(z) = √( G_eff(0) / G_eff(z) ) d_L(z) ,    (16)

where d_L(z) is the background luminosity distance,

    d_L(z) = (1 + z) ∫₀^z dz′ / H(z′) .    (17)

Thus, in view of Eq. (15), from Eq.
(16) we obtain

    d_GW(z) = E(z)^{n−1} d_L(z) ,    (18)

where E(z) ≡ H(z)/H₀ is the dimensionless Hubble parameter. It is worth stressing that, as soon as n = 1 = β, G_eff = G as in GR, and the GW propagation recovers the predictions of the ΛCDM model, characterized by

    E_ΛCDM(z) = √( Ω_{m0} (1 + z)³ + 1 − Ω_{m0} ) .    (19)

III. DATASETS AND METHODOLOGY

In light of the main scope of this work, we generate mock data inspired by the possibility of future observations of SS events. In particular, we are here interested in SS events to be detected by two different observatories, namely the ET and LISA. We provide a brief description of our samples in the following.

A. Einstein Telescope

The ET is a third-generation ground-based detector, covering the frequency range 1−10⁴ Hz. The ET is expected to be ten times more sensitive than the current advanced ground-based detectors. We refer the reader to [18] for a presentation of the scientific objectives of the ET observatory. The ET conceptual design study predicts on the order of 10³−10⁷ detections per year. After 10 years of operation, the ET is expected to detect ~1000 GW SS events from black hole-neutron star mergers up to z = 5 [29]. Our goal, thus, is to generate a luminosity distance catalog matching the expected sensitivity of the ET after 10 years of operation. In particular, we generate 1000 triples (z_i, d_L(z_i), σ_i), with z_i being the redshift of the GW source, d_L the measured luminosity distance, and σ_i the uncertainty on the latter. There are three aspects to take into account in the mock data generation process: the fiducial cosmological model enters both z_i (or, more precisely, the redshift distribution of expected sources) and d_L; the expected type of GW sources enters z_i; finally, the instrumental and physical specifications enter σ_i. In our case, we fix the fiducial model to the Planck-ΛCDM baseline parameters [85].
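The chain from Eq. (17) to Eq. (18) can be sketched numerically. The snippet below is an illustrative sketch, not the paper's pipeline: the parameter values and the simple trapezoidal quadrature are our own choices, and the background is the ΛCDM rate of Eq. (19).

```python
import math

def E_lcdm(z, Om0=0.3):
    """Dimensionless Hubble rate E(z) = H(z)/H0 of Eq. (19), flat LCDM."""
    return math.sqrt(Om0 * (1.0 + z) ** 3 + 1.0 - Om0)

def d_L(z, H0=70.0, Om0=0.3, steps=2000):
    """Background luminosity distance of Eq. (17), in Mpc.
    Trapezoidal quadrature of dz'/E(z'); c in km/s, H0 in km/s/Mpc."""
    if z == 0.0:
        return 0.0
    c = 299792.458
    h = z / steps
    s = 0.5 * (1.0 / E_lcdm(0.0, Om0) + 1.0 / E_lcdm(z, Om0))
    for i in range(1, steps):
        s += 1.0 / E_lcdm(i * h, Om0)
    return (1.0 + z) * (c / H0) * s * h

def d_GW(z, n=1.0, H0=70.0, Om0=0.3):
    """GW luminosity distance of Eq. (18): d_GW = E(z)^(n-1) d_L(z).
    n = 1 recovers the GR result d_GW = d_L."""
    return E_lcdm(z, Om0) ** (n - 1.0) * d_L(z, H0, Om0)
```

For n = 1 the two distances coincide, while n < 1 suppresses d_GW relative to d_L at high z, which is the kind of deviation the mock catalogs are designed to probe.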
The ET sensitivity we make use of in this work corresponds to the ET-D curve model³, which includes the most relevant fundamental noise contributions [86]. The whole methodology to generate the mock data is well known and widely used in the literature, and its features are described in detail in previous works, such as [26,29]. We display in Fig. 1 the ET simulated d_L(z) measurements along with the corresponding ΛCDM best fit (see Table I).

B. LISA

LISA will operate in the millihertz band with the objective of being an all-sky GW survey. Science with LISA brings opportunities and challenges in terms of complications arising from its motion around the Earth. Basically, LISA can be thought of as two detectors, and it will be launched as three identical drag-free spacecraft forming an equilateral triangle, with an arm length of about 2.5 × 10⁶ km [87]. Among astrophysical sources, LISA can reach Galactic binaries, stellar-origin black hole binaries, extreme mass-ratio inspirals [88], and massive black hole binaries (MBHBs). See [89] for a presentation of the scientific objectives of the LISA mission. The most probable LISA sources with electromagnetic counterparts are MBHBs. In particular, MBHBs are supposed to merge in gas-rich environments and within the LISA frequency band, allowing for electromagnetic follow-ups to determine their z. Theoretical models and simulations can predict the redshift distribution and merger rate of MBHBs. Depending on the initial conditions for black hole formation at high z, there are two scenarios, namely the light seed and the heavy seed ones. In the light seed scenario, massive black holes are assumed to grow from the remnants of population III (pop III) stars forming at z ∈ [15, 20]. In the heavy seed scenario, on the other hand, massive black holes are assumed to form from the collapse of protogalactic disks.
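The mock-generation logic outlined in Secs. III A and III B can be sketched as follows. This is only an illustrative stand-in for the actual pipeline of [26,29]: the uniform redshift distribution, the Planck-like fiducial values, and the fractional error model σ_i = 0.1 d_L are placeholder assumptions, not the ET or LISA specifications.

```python
import math
import random

def d_L_fid(z, H0=67.4, Om0=0.315, steps=500):
    """Fiducial luminosity distance in Mpc (flat LCDM, Planck-like values)."""
    if z == 0.0:
        return 0.0
    c, h = 299792.458, z / steps
    E = lambda x: math.sqrt(Om0 * (1.0 + x) ** 3 + 1.0 - Om0)
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z)) + sum(1.0 / E(i * h) for i in range(1, steps))
    return (1.0 + z) * (c / H0) * s * h

def make_mock_catalog(n_events=1000, z_max=5.0, frac_err=0.1, seed=42):
    """Draw (z_i, d_L^obs, sigma_i) triples: the fiducial distance is
    scattered with Gaussian noise. Source and error models are placeholders."""
    rng = random.Random(seed)
    catalog = []
    for _ in range(n_events):
        z = rng.uniform(0.05, z_max)     # placeholder source distribution
        dl = d_L_fid(z)
        sigma = frac_err * dl            # placeholder error model
        catalog.append((z, rng.gauss(dl, sigma), sigma))
    return catalog
```

In the real analyses, the redshift distribution follows the expected source populations and σ_i encodes the instrumental sensitivity curves, but the three-step structure (draw z, compute the fiducial d_L, scatter it) is the same.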
The result of these scenarios is three categories of population models, named Pop III, Delay and No Delay [90]. Our catalog is based on the model presented in [90,91]. The redshift distribution of the MBHB SS of our mock sample is displayed in Figs. 1 and 2 of [30]. In this case, we adopt the LISA sensitivity provided in [92], where the full sensitivity curve⁴ is constructed by combining the galactic and the instrumental noises for a 4-year mission lifetime. As in the ET simulation, in the LISA mock data generation we fix the fiducial model to the Planck-ΛCDM baseline parameters [85]. In Fig. 2, we show the simulated measurements of d_L(z) from all the LISA catalogs with the corresponding ΛCDM best fits (see Table I).

IV. RESULTS AND DISCUSSION

In this section, we present and discuss the results obtained from our numerical analysis of cosmological observations. In particular, to complement the GW SS simulated events from the ET and LISA experiments, we considered the low-redshift measurements of type Ia supernovae (SN) and cosmic chronometers (CC). We refer to Appendix A for the details on the SN and CC datasets.

A. Monte Carlo analysis

We test deviations from GR and the ΛCDM model by using the Markov Chain Monte Carlo (MCMC) method to analyze the f(Q) model under consideration in this work. In order to estimate observational constraints on the free parameters, we apply the Metropolis-Hastings algorithm [93], where the likelihood function for the GW SS mock dataset is built in the form

    L_GW ∝ exp{ −(1/2) Σ_{i=1}^{N} [ (d_GW,i^{obs} − d_GW^{th}(z_i)) / σ_{d_GW,i} ]² } ,    (20)

where N is the size of the sample of each SS catalog. In the above equation, d_GW,i^{obs} are the simulated events with their associated uncertainties σ_{d_GW,i}, while d_GW^{th}(z_i) is the theoretical prediction for each i-th event. In a similar way, we build the likelihood functions for the SN and CC data (see Eqs. (A3) and (A5)). As the latter are independent of the GW measurements, they may be combined with each other to obtain tighter constraints on the model parameters.
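The log of Eq. (20) reduces to minus one half of a chi-square. A minimal sketch (the function names and the catalog layout are ours):

```python
def log_like_gw(catalog, d_gw_model):
    """Log of the Gaussian likelihood in Eq. (20), up to an additive constant:
    -1/2 * sum_i [(d_GW,i^obs - d_GW^th(z_i)) / sigma_i]^2.
    `catalog` holds (z_i, d_obs_i, sigma_i) triples; `d_gw_model` maps z -> d_GW^th."""
    chi2 = 0.0
    for z_i, d_obs, sigma in catalog:
        chi2 += ((d_obs - d_gw_model(z_i)) / sigma) ** 2
    return -0.5 * chi2
```

A perfect fit gives 0, and a model off by one sigma on every point gives −N/2, which is the normalization behind the accuracy figures quoted below.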
To compare theoretical predictions with observational evidence, one needs to solve the modified Friedmann equations and find the cosmological dynamics. In our case, in view of Eqs. (12)-(14), Eq. (11) becomes

    6^{n−1} β n (2n − 1) (1 + z) H(z)^{2n−1} H′(z) = (3/2) H₀² Ω_{m0} (1 + z)³ ,    (21)

where we have used the relation Ḣ = −(1 + z) H(z) H′(z) to convert the time derivative into the derivative with respect to the redshift. Thus, solving the first-order differential equation (21) by means of the initial condition H(0) = H₀, we finally obtain

    H(z) = { H₀^{2n} + H₀² 6^{1−n} Ω_{m0} [ (1 + z)³ − 1 ] / [ β (2n − 1) ] }^{1/(2n)} ,    (22)

for β ≠ 0 and n ≠ 0, 1/2. The above solution can then be used to find the theoretical predictions for Eq. (18) with the help of Eq. (17). In the limit β → 1 and n → 1, we recover the ΛCDM model as in Eq. (19). It is worth noticing that Eq. (22) does not involve the additive constant α of Eq. (14). This fact may be better understood by expressing the modified Friedmann equations in light of the model (14). Specifically, from Eq. (10), with the help of Eqs. (12) and (13), one finds

    α + 6 H₀² Ω_{m0} (1 + z)³ = 6ⁿ β (2n − 1) H^{2n} ,    (23)

which, evaluated at the present time, provides

    α = 6ⁿ β (2n − 1) H₀^{2n} − 6 H₀² Ω_{m0} .    (24)

Hence, the constant α does not represent a degree of freedom of our model, as it can always be expressed in terms of the other cosmological parameters. The physical meaning of α is easily revealed in the limit n → 1 and β → 1, when one obtains H² = H₀² Ω_{m0} (1 + z)³ + α/6. Then, recalling our hypothesis of a flat universe, we can immediately interpret α as the cosmological constant. Therefore, the set of free parameters in our fitting procedure is θ = {H₀, Ω_{m0}, β, n}. In particular, the estimates of β and n will quantify the deviations with respect to GR. In the realization of our MCMC analysis, the sampling is done by assuming uniform priors over θ⁵, listed in Eqs. (25a)-(25d). In what follows, we summarize our main results.
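The closed-form solution (22) can be checked directly against the differential equation (21). The following is a numerical sanity check of our own (the test values of β and n are arbitrary), not part of the authors' pipeline:

```python
import math

def H_fq(z, H0=70.0, Om0=0.3, beta=1.2, n=0.98):
    """Closed-form Hubble rate of Eq. (22) for f(Q) = alpha + beta Q^n
    (valid for beta != 0 and n != 0, 1/2)."""
    bracket = (H0 ** (2 * n)
               + H0 ** 2 * 6 ** (1 - n) * Om0 * ((1 + z) ** 3 - 1) / (beta * (2 * n - 1)))
    return bracket ** (1.0 / (2 * n))

def friedmann_residual(z, H0=70.0, Om0=0.3, beta=1.2, n=0.98, eps=1e-6):
    """Relative residual of Eq. (21) evaluated on the solution (22),
    with H'(z) approximated by a central finite difference."""
    Hp = (H_fq(z + eps, H0, Om0, beta, n) - H_fq(z - eps, H0, Om0, beta, n)) / (2 * eps)
    lhs = (6 ** (n - 1) * beta * n * (2 * n - 1) * (1 + z)
           * H_fq(z, H0, Om0, beta, n) ** (2 * n - 1) * Hp)
    rhs = 1.5 * H0 ** 2 * Om0 * (1 + z) ** 3
    return lhs / rhs - 1.0
```

For β = n = 1 the expression reduces to the ΛCDM rate of Eq. (19), and the residual of Eq. (21) vanishes to numerical precision for admissible (β, n), confirming that Eq. (22) is independent of α.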
    H₀ ∈ [50, 100] ,    (25a)
    Ω_{m0} ∈ [0, 1] ,    (25b)
    β ∈ [−10, 0) ∪ (0, 10] ,    (25c)
    n ∈ [−10, 0) ∪ (0, 1/2) ∪ (1/2, 10] .    (25d)

B. Observational constraints

Before proceeding to the forecast constraints on possible deviations from GR, we summarize in Table I the results up to the 2σ confidence level (c.l.) of the statistical analyses of the ΛCDM model. First, we consider individually the four SS mock samples, namely the ET sample and the LISA delay, no delay and pop III samples. As expected, given the total sample size (number of events), the accuracy on the free parameters, i.e., H₀ and Ω_{m0}, is higher from either ET or LISA data than from the SN + CC measurements. In the latter case, we find 2.2% accuracy on H₀, compared with 0.9% accuracy from the ET analysis and 2.2% from the LISA (no delay) analysis. The analyses using the other LISA sources provide results with an accuracy intermediate between the latter cases. Thus, on the one hand, the accuracy on H₀ that will be achievable from SS events and, on the other hand, the fact that SS are independent of late-time probes such as SN, CC and BAO and have different systematic errors compared to the latter, clearly show that SS will be an important complement in solving the H₀ tension in the future⁶. Then, combining the SN + CC measurements with the SS mock events, we find that the accuracy on H₀ improves up to 0.8% using the ET forecasts, and 0.9% using the LISA (no delay) forecasts. Thus, the SS events at very large cosmological distances to be observed in both the ET and LISA bands can improve the current observational constraints in combination with other simple geometrical measurements. The same conclusions apply to the Ω_{m0} parameter (cf. Table I). It is worth noticing that the results of the LISA (delay) sample are systematically different from those of the other two scenarios, which are roughly comparable to each other.

⁶ See the discussion in Section IX.7 of [94] and references therein.
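The Metropolis-Hastings sampling behind these analyses can be sketched in miniature as follows. This is a toy 1-D example of our own: the target is a Gaussian posterior standing in for the full likelihood, with the flat prior bounds of Eq. (25a); all numbers are illustrative.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps=20000, step=2.0, seed=1):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step) and
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    chain, x, lp = [], x0, log_post(x0)
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        delta = lpp - lp
        if delta >= 0.0 or rng.random() < math.exp(delta):
            x, lp = xp, lpp
        chain.append(x)
    return chain

def log_post(x):
    """Toy posterior: Gaussian of mean 70 and width 2 (an H0 stand-in),
    truncated by the flat prior [50, 100] of Eq. (25a)."""
    if not (50.0 <= x <= 100.0):
        return -math.inf
    return -0.5 * ((x - 70.0) / 2.0) ** 2

chain = metropolis_hastings(log_post, x0=60.0)
posterior_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
```

Proposals outside the prior support are always rejected, so the chain respects the bounds of Eq. (25) by construction; in the full analysis the same scheme runs over the four-dimensional space θ = {H₀, Ω_{m0}, β, n}.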
In fact, LISA (delay) provides worse results in terms of accuracy due to the lower number of detectable SS, as also discussed in [90]. It is also important to note that the inclusion of high-z SS events, especially when their number density is low, may induce systematic effects in the cosmological analysis. The main results of the statistical analyses for the f(Q) gravity framework under consideration are summarized in Table II. In this case, we do not report the results from GWs individually, since they are not predictive enough. In fact, the MCMC constraints for the f(Q) model are less stringent due to the presence of additional free parameters compared to the ΛCDM case. However, one can see the impact of considering the SS measurements from the comparison with the results based on SN + CC data only. Due to the enlarged parameter space, the error bars naturally increase compared to the ΛCDM model. When considering the SN + CC joint analysis, we find 4% accuracy on H₀. However, from SN + CC + ET and SN + CC + LISA (no delay) data, we find 0.9% and 1% accuracy, respectively. Once again, the analyses using other LISA sources provide intermediate accuracies. Thus, clearly, the addition of SS events will considerably improve the constraints on H₀ in the context of f(Q) gravity. It is now interesting to turn our attention to the parameters β and n. In light of the SN + CC data, we find 31% and 2.2% accuracy on β and n, respectively. When considering the SS events, from CC + SN + ET data we find 48% and 1.6% accuracy on β and n, respectively. From SN + CC + LISA (no delay), we find 34% and 1.5% accuracy on β and n, respectively. It is worth remarking that the β−n parameter space is statistically degenerate, although the β−n contours show quite round shapes. In this regard, we note that the parameter n is strongly correlated with both H₀ and Ω_{m0} when SN + CC are considered, while β is correlated only with Ω_{m0}.
Here, the main parameter quantifying the model effects is n, which controls the power of the gravitational correction to the GR prediction. We notice that the addition of SS events from both future experiments can improve the constraints on the minimal baseline, i.e., on the parameters Ω_{m0} and H₀. Apart from some statistical fluctuations, the final constraints on β and n are practically the same. Figures 3 and 4 show the 2-dimensional parameter regions at the 68% and 95% c.l. and the 1-dimensional posterior distributions for the f(Q) model resulting from the MCMC analysis of different combinations with SS data. Our results emerging from the SN + CC data analysis indicate no substantial evidence for deviations from GR, as the values of n are consistent with unity at the 1σ c.l. Furthermore, in the left panel of Fig. 5, we show a statistical reconstruction at the 1σ c.l. of the effective luminosity distance, Eq. (16), from the perspective of the ET mock sample. We find an estimate of d_GW/d_L = 1.01 ± 0.03 at z ~ 4.5, with gradually improving precision towards low z, as expected. This means that future measurements from the ET will make it possible to test deviations from GR, within the f(Q) gravity framework, at ~3% accuracy on the d_GW/d_L ratio. Similarly, in the right panel of Fig. 5, we show the effective luminosity distance from the best-fit results using the LISA mock data.

V. OUTLOOK AND FINAL REMARKS

In this paper, we focused on the f(Q) theories of gravity to test possible deviations from GR in light of future GW detections. Specifically, taking into account the sensitivities of the ET and LISA experiments, we simulated mock SS events associated with black hole-neutron star binary systems and mergers of massive black hole binaries to probe the GW propagation in a FLRW Universe, where geometry is described by non-metricity.
Unlike previous approaches to f(Q) gravity, our procedure relies on a robust model-independent method that minimizes possible biases induced by the choice of the underlying cosmology. For our purposes, we considered a two-parameter extension of the ΛCDM model, where the power of the non-metricity scalar quantifies corrections with respect to Einstein's theory. In doing so, we worked out the cosmic dynamics at the background level, as well as at the perturbation level in terms of the effective gravitational constant of the theory. After describing the methodology to generate mock SS measurements up to high redshifts from the perspective of the ET and LISA detectors, we presented the procedure to compare the observational evidence with the theoretical predictions. In particular, a Monte Carlo numerical integration was applied to constrain the free parameters of the model under consideration and test deviations with respect to the standard cosmological scenario. To improve the accuracy of our results, we complemented the simulated SS measurements with typical model-independent data at low redshifts. Our analysis shows that the inclusion of the SS measurements will considerably reduce the uncertainties on the H₀ estimate. More generally, adding the SS mock data up to large distances from both the ET and LISA missions will improve the accuracy over the whole parameter space. Besides, our study indicates no statistically significant deviations with respect to the GR predictions. Finally, adopting the results emerging from our joint analyses, we inferred the behavior of the effective luminosity distance up to very high redshifts. Specifically, when using the ET mock sample in combination with SN and CC data, we found that corrections to the standard luminosity distance could be tested at ~3% accuracy within the f(Q) framework. On the other hand, no deviations bigger than 5% are expected from the LISA perspective when combined with SN and CC measurements.
To conclude, the present study shows that future GW observations by the ET and LISA missions will offer a unique tool to test the nature of gravity up to very large cosmic distances with unprecedented precision.

The sample consists of SN Ia in the redshift range 0.01 < z < 2.3. In this compilation, all the SN are standardized through the SALT2 light-curve fitter, in which the distance modulus is modelled as follows [98]:

    μ = m_B − M + α x₁ − β C + Δ_M + Δ_B ,    (A1)

where m_B is the B-band apparent magnitude of each SN and M is its absolute magnitude, while Δ_M and Δ_B account for the host-galaxy mass and the distance bias corrections, respectively. Moreover, x₁ and C are the stretch and color parameters of each SN light curve, respectively, with their relative coefficients α and β. On the other hand, the distance modulus predicted by a cosmological model is given as

    μ(z) = 5 log₁₀ [ d_L(z) / 1 Mpc ] + 25 .    (A2)

As shown in [99], under the assumption of a flat universe, one can compress the full SN sample into a set of cosmological model-independent measurements of E(z)⁻¹. This approach allows us to properly marginalize over the SN nuisance parameters in the fitting procedure. Thus, taking into account the correlations among the E⁻¹(z) measurements, we can write the likelihood function associated with the SN data as

    L_SN ∝ exp( −(1/2) v^T C_SN⁻¹ v ) ,    (A3)

where v_i = E⁻¹_{obs,i} − E⁻¹_{th}(z_i) quantifies the difference between the measured values and the values predicted by a given cosmological model, and C_SN is the covariance matrix resulting from the correlation matrix given in [99]. The second complementary dataset is built upon the differential age approach developed in [100], which represents a model-independent method to characterize the expansion of the Universe up to z < 2. In this technique, passively evolving red galaxies are used as cosmic chronometers (CC) to measure the age difference (dt) of the universe at two close redshifts (dz).
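Equations (A2) and (A3) can be sketched compactly. The snippet below is illustrative (the function names and the nested-list covariance layout are ours); the real analysis uses the full correlation matrix of [99].

```python
import math

def distance_modulus(d_l_mpc):
    """Distance modulus of Eq. (A2): mu = 5 log10(d_L / 1 Mpc) + 25."""
    return 5.0 * math.log10(d_l_mpc) + 25.0

def log_like_sn(e_inv_obs, e_inv_th, cov_inv):
    """Log of Eq. (A3), up to a constant: -1/2 v^T C_SN^-1 v,
    with v_i = E^-1_obs,i - E^-1_th(z_i); cov_inv is the inverse
    covariance matrix as a nested list."""
    v = [o - t for o, t in zip(e_inv_obs, e_inv_th)]
    return -0.5 * sum(v[i] * cov_inv[i][j] * v[j]
                      for i in range(len(v)) for j in range(len(v)))
```

With an identity inverse covariance the SN log-likelihood reduces to minus half the sum of squared residuals, the same chi-square structure as Eqs. (20) and (A5).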
Thus, one can estimate the Hubble parameter as

    H(z) = − [1 / (1 + z)] dz/dt .    (A4)

In our analysis, we use the compilation of uncorrelated H(z) measurements collected in [101] (see references therein). We can then write the likelihood function relative to the CC data as

    L_CC ∝ exp{ −(1/2) Σ_{i=1}^{N} [ (H_{obs,i} − H_{th}(z_i)) / σ_{H,i} ]² } ,    (A5)

where H_{obs,i} are the observed measurements with their relative uncertainties σ_{H,i}, while H_{th}(z_i) are the theoretical values of the Hubble parameter obtained from a specific cosmological model.

FIG. 1. Simulated luminosity distance measurements with relative 1σ uncertainties from the mock ET catalog. The black curve refers to the best-fitted ΛCDM model.

³ https://www.et-gw.eu/index.php/etsensitivities

FIG. 2. Simulated luminosity distance measurements with relative 1σ uncertainties from the mock LISA Delay (blue), No Delay (orange) and Pop III (green) catalogs. The black curves correspond to the ΛCDM best fits to the LISA Delay (solid), No Delay (dashed) and Pop III (dotted) data.

TABLE II. Summary of the MCMC results at the 68% (95%) c.l. for the f(Q) model under study. For β = n = 1, we recover GR and the ΛCDM cosmological scenario.

FIG. 3. 68% and 95% c.l. marginalized contours, with posterior distributions, as a result of the MCMC analysis using the ET mock data.

FIG. 4. 68% and 95% c.l. marginalized contours, with posterior distributions, as a result of the MCMC analysis using the LISA mock data.

PACS numbers: 98.80.-k, 95.36.+x, 04.50.Kd, 04.30.Nk

TABLE I. Summary of the MCMC results at the 68% (95%) c.l.
for the ΛCDM model.

TABLE II (data; the ΛCDM entries of Table I were not recovered in the extraction):

Dataset                  | H₀                       | Ω_{m0}                       | β                            | n
SN + CC                  | 68.59 +2.69(5.18) −2.69(5.46) | 0.386 +0.148(0.260) −0.144(0.279) | 1.361 +0.498(0.752) −0.349(0.890) | 0.993 +0.022(0.044) −0.022(0.042)
SN + CC + ET             | 67.69 +0.63(1.23) −0.62(0.21) | 0.315 +0.150(0.249) −0.151(0.246) | 1.149 +0.568(0.812) −0.559(0.811) | 0.988 +0.016(0.033) −0.016(0.031)
SN + CC + LISA (delay)   | 66.35 +1.16(2.26) −1.17(2.22) | 0.421 +0.143(0.263) −0.149(0.254) | 1.307 +0.448(0.731) −0.393(0.770) | 0.996 +0.016(0.033) −0.016(0.030)
SN + CC + LISA (no delay)| 67.71 +0.67(1.32) −0.67(1.30) | (remaining entries not recovered in the extraction)

FIG. 5. Effective luminosity distance for the f(Q) model as a result of the MCMC analysis. Left panel: the solid blue curve corresponds to the mean results from SN + CC + ET data, while the area between the dotted curves accounts for the relative 1σ uncertainties. Right panel: the solid green, orange and violet curves correspond to the mean results from SN + CC + LISA (delay), SN + CC + LISA (no delay) and SN + CC + LISA (pop III) data, respectively. The prediction of the ΛCDM paradigm is shown as a black dashed line.

Footnotes:
¹ Here, we use units such that c = 1 = 8πG.
² In our notation, the subscript '0' indicates the present-day values of the cosmological parameters, namely at z = 0.
⁴ https://github.com/eXtremeGravityInstitute/LISA_Sensitivity
⁵ In this paper, H₀ values are expressed in units of km/s/Mpc.
⁷ See also [95,96].

ACKNOWLEDGMENTS

R.D. acknowledges the support of Istituto Nazionale di Fisica Nucleare (INFN), iniziativa specifica QGSKY. The authors would like to thank Angelo Ricciardone and the Cosmology Division of the ET Observational Science Board (OSB) for the useful discussions on the manuscript.
The authors also thank the anonymous referee for his/her valuable comments and suggestions.

Appendix A: SN and CC datasets

In this Appendix, we provide some details of the low-redshift cosmological observables⁷ we use to complement the GW mock data in the statistical analysis of the f(Q) model. The first complementary dataset we employ in our study is the Pantheon sample [97], composed of 1048 SN

P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. 75, 559 (2003), arXiv:astro-ph/0207347.
E. J. Copeland, M. Sami, and S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006), arXiv:hep-th/0603057.
R. D'Agostino, Phys. Rev. D 99, 103524 (2019), arXiv:1903.03836 [gr-qc].
S. Weinberg, Rev. Mod. Phys. 61, 1 (1989).
T. Padmanabhan, Phys. Rept. 380, 235 (2003), arXiv:hep-th/0212290.
R. D'Agostino, O. Luongo, and M. Muccino, Class. Quant. Grav. 39, 195014 (2022), arXiv:2204.02190 [gr-qc].
T. Clifton, P. G. Ferreira, A. Padilla, and C. Skordis, Phys. Rept. 513, 1 (2012), arXiv:1106.2476 [astro-ph.CO].
M. Ishak, Living Rev. Rel. 22, 1 (2019), arXiv:1806.10122 [astro-ph.CO].
S. Nojiri, S. D. Odintsov, and V. K. Oikonomou, Phys. Rept. 692, 1 (2017), arXiv:1705.11098 [gr-qc].
E. N. Saridakis et al. (CANTATA), (2021), arXiv:2105.12582 [gr-qc].
R. Abbott et al. (LIGO Scientific, VIRGO, KAGRA), (2021), arXiv:2111.03606 [gr-qc].
B. F. Schutz, Nature 323, 310 (1986).
D. E. Holz and S. A. Hughes, Astrophys. J. 629, 15 (2005), arXiv:astro-ph/0504616.
B. P. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. Lett. 119, 161101 (2017), arXiv:1710.05832 [gr-qc].
B. P. Abbott et al., Astrophys. J. Lett. 848, L12 (2017), arXiv:1710.05833 [astro-ph.HE].
B. P. Abbott et al. (LIGO Scientific, Virgo, 1M2H, Dark Energy Camera GW-EM, DES, DLT40, Las Cumbres Observatory, VINROUGE, MASTER), Nature 551, 85 (2017), arXiv:1710.05835 [astro-ph.CO].
R. Kase and S. Tsujikawa, Int. J. Mod. Phys. D 28, 1942005 (2019), arXiv:1809.08735 [gr-qc].
M. Maggiore et al., JCAP 03, 050 (2020), arXiv:1912.02622 [astro-ph.CO].
D. Reitze et al., Bull. Am. Astron. Soc. 51, 035 (2019), arXiv:1907.04833 [astro-ph.IM].
P. Amaro-Seoane et al. (LISA), (2017), arXiv:1702.00786 [astro-ph.IM].
S. Kawamura et al., PTEP 2021, 05A105 (2021), arXiv:2006.13545 [gr-qc].
J. Luo et al. (TianQin), Class. Quant. Grav. 33, 035010 (2016), arXiv:1512.02076 [astro-ph.IM].
R.-G. Cai and T. Yang, Phys. Rev. D 95, 044024 (2017), arXiv:1608.08008 [astro-ph.CO].
M. Du, W. Yang, L. Xu, S. Pan, and D. F. Mota, Phys. Rev. D 100, 043535 (2019), arXiv:1812.01440 [astro-ph.CO].
X.-N. Zhang, L.-F. Wang, J.-F. Zhang, and X. Zhang, Phys. Rev. D 99, 063510 (2019), arXiv:1804.08379 [astro-ph.CO].
R. D'Agostino and R. C. Nunes, Phys. Rev. D 100, 044041 (2019), arXiv:1907.05516 [gr-qc].
W. Yang, S. Vagnozzi, E. Di Valentino, R. C. Nunes, S. Pan, and D. F. Mota, JCAP 07, 037 (2019), arXiv:1905.08286 [astro-ph.CO].
X. Fu, L. Zhou, and J. Chen, Phys. Rev. D 99, 083523 (2019), arXiv:1903.09913 [gr-qc].
R.-G. Cai, T.-B. Liu, X.-W. Liu, S.-J. Wang, and T. Yang, Phys. Rev. D 97, 103005 (2018), arXiv:1712.00952 [astro-ph.CO].
A. Allahyari, R. C. Nunes, and D. F. Mota, Mon. Not. Roy. Astron. Soc. 514, 1274 (2022), arXiv:2110.07634 [astro-ph.CO].
E. Belgacem, Y. Dirian, S. Foffa, and M. Maggiore, Phys. Rev. D 97, 104066 (2018), arXiv:1712.08108 [astro-ph.CO].
A. Nishizawa and S. Arai, Phys. Rev. D 99, 104038 (2019), arXiv:1901.08249 [gr-qc].
A. Bonilla, R. D'Agostino, R. C. Nunes, and J. C. N. de Araujo, JCAP 03, 015 (2020), arXiv:1910.05631 [gr-qc].
S. D. Odintsov, V. K. Oikonomou, and R. Myrzakulov, Symmetry 14, 729 (2022), arXiv:2204.00876 [gr-qc].
R.-G. Cai and T. Yang, JCAP 12, 017 (2021), arXiv:2107.13919 [gr-qc].
I. S. Matos, M. O. Calvão, and I. Waga, Phys. Rev. D 103, 104059 (2021), arXiv:2104.10305 [gr-qc].
M. Califano, I. de Martino, D. Vernieri, and S. Capozziello, (2022), arXiv:2205.11221 [astro-ph.CO].
N. Jiang and K. Yagi, Phys. Rev. D 103, 124047 (2021), arXiv:2104.04442 [gr-qc].
Y. Pan, Y. He, J. Qi, J. Li, S. Cao, T. Liu, and J. Wang, Astrophys. J. 911, 135 (2021), arXiv:2103.05212 [astro-ph.CO].
G. Tasinato, A. Garoffolo, D. Bertacca, and S. Matarrese, JCAP 06, 050 (2021), arXiv:2103.00155 [gr-qc].
A. Bonilla, S. Kumar, R. C. Nunes, and S. Pan, Mon. Not. Roy. Astron. Soc. 512, 4231 (2022), arXiv:2102.06149 [astro-ph.CO].
S. Mukherjee, B. D. Wandelt, and J. Silk, Mon. Not. Roy. Astron. Soc. 502, 1136 (2021), arXiv:2012.15316 [astro-ph.CO].
M. Kalomenopoulos, S. Khochfar, J. Gair, and S. Arai, Mon. Not. Roy. Astron. Soc. 503, 3179 (2021), arXiv:2007.15020 [astro-ph.CO].
T. Baker and I. Harrison, JCAP 01, 068 (2021), arXiv:2007.13791 [astro-ph.CO].
S. Mastrogiovanni, D. Steer, and M. Barsuglia, Phys. Rev. D 102, 044009 (2020), arXiv:2004.01632 [gr-qc].
E. Belgacem, S. Foffa, M. Maggiore, and T. Yang, Phys. Rev. D 101, 063505 (2020), arXiv:1911.11497 [astro-ph.CO].
R. C. Nunes, M. E. S. Alves, and J. C. N. de Araujo, Phys. Rev. D 100, 064012 (2019), arXiv:1905.03237 [gr-qc].
M. Califano, I. de Martino, D. Vernieri, and S. Capozziello, (2022), arXiv:2208.13999 [astro-ph.CO].
I. Harry and J. Noller, (2022), arXiv:2207.10096 [gr-qc].
. J M Ezquiaga, W Hu, M Lagos, M.-X Lin, 10.1088/1475-7516/2021/11/048arXiv:2108.10872JCAP. 1148astro-ph.COJ. M. Ezquiaga, W. Hu, M. Lagos, and M.-X. Lin, JCAP 11, 048 (2021), arXiv:2108.10872 [astro-ph.CO]. . J Jiménez, L Heisenberg, T S Koivisto, 10.3390/universe5070173arXiv:1903.068305hep-thJ. Beltrán Jiménez, L. Heisenberg, and T. S. Koivisto, Universe 5, 173 (2019), arXiv:1903.06830 [hep-th]. . F Bajardi, D Vernieri, S Capozziello, 10.1140/epjp/s13360-020-00918-3arXiv:2011.01248Eur. Phys. J. Plus. 135gr-qcF. Bajardi, D. Vernieri, and S. Capozziello, Eur. Phys. J. Plus 135, 912 (2020), arXiv:2011.01248 [gr-qc]. . I Ayuso, R Lazkoz, V Salzano, 10.1103/PhysRevD.103.063505arXiv:2012.00046Phys. Rev. D. 10363505astro-ph.COI. Ayuso, R. Lazkoz, and V. Salzano, Phys. Rev. D 103, 063505 (2021), arXiv:2012.00046 [astro-ph.CO]. . S Capozziello, V De Falco, C Ferrara, arXiv:2208.03011gr-qcS. Capozziello, V. De Falco, and C. Ferrara, (2022), arXiv:2208.03011 [gr-qc]. . J Jiménez, L Heisenberg, T S Koivisto, S Pekar, 10.1103/PhysRevD.101.103507arXiv:1906.10027Phys. Rev. D. 101103507gr-qcJ. Beltrán Jiménez, L. Heisenberg, T. S. Koivisto, and S. Pekar, Phys. Rev. D 101, 103507 (2020), arXiv:1906.10027 [gr-qc]. . R Lazkoz, F S N Lobo, M Ortiz-Baños, V Salzano, 10.1103/PhysRevD.100.104027arXiv:1907.13219Phys. Rev. D. 100104027gr-qcR. Lazkoz, F. S. N. Lobo, M. Ortiz-Baños, and V. Salzano, Phys. Rev. D 100, 104027 (2019), arXiv:1907.13219 [gr-qc]. . N Frusciante, 10.1103/PhysRevD.103.044021arXiv:2101.09242[astro-ph.COPhys. Rev. D. 10344021N. Frusciante, Phys. Rev. D 103, 044021 (2021), arXiv:2101.09242 [astro-ph.CO]. . I S Albuquerque, N Frusciante, 10.1016/j.dark.2022.100980arXiv:2202.04637Phys. Dark Univ. 35100980astro-ph.COI. S. Albuquerque and N. Frusciante, Phys. Dark Univ. 35, 100980 (2022), arXiv:2202.04637 [astro-ph.CO]. . S Capozziello, R , D&apos; Agostino, 10.1016/j.physletb.2022.137229arXiv:2204.01015Phys. Lett. B. 832137229gr-qcS. Capozziello and R. D'Agostino, Phys. 
Lett. B 832, 137229 (2022), arXiv:2204.01015 [gr-qc]. . S A Narawade, L Pati, B Mishra, S K Tripathy, 10.1016/j.dark.2022.101020arXiv:2203.14121Phys. Dark Univ. 36101020gr-qcS. A. Narawade, L. Pati, B. Mishra, and S. K. Tripathy, Phys. Dark Univ. 36, 101020 (2022), arXiv:2203.14121 [gr-qc]. . J Ferreira, T Barreiro, J Mimoso, N J Nunes, 10.1103/PhysRevD.105.123531arXiv:2203.13788Phys. Rev. D. 105123531astro-ph.COJ. Ferreira, T. Barreiro, J. Mimoso, and N. J. Nunes, Phys. Rev. D 105, 123531 (2022), arXiv:2203.13788 [astro-ph.CO]. . W Khyllep, J Dutta, E N Saridakis, K Yesmakhanova, arXiv:2207.02610gr-qcW. Khyllep, J. Dutta, E. N. Saridakis, and K. Yesmakhanova, (2022), arXiv:2207.02610 [gr-qc]. . S Mandal, P K Sahoo, 10.1016/j.physletb.2021.136786arXiv:2111.10511Phys. Lett. B. 823136786gr-qcS. Mandal and P. K. Sahoo, Phys. Lett. B 823, 136786 (2021), arXiv:2111.10511 [gr-qc]. . L Atayde, N Frusciante, 10.1103/PhysRevD.104.064052arXiv:2108.10832astro-ph.COPhys. Rev. D. 10464052L. Atayde and N. Frusciante, Phys. Rev. D 104, 064052 (2021), arXiv:2108.10832 [astro-ph.CO]. . N Dimakis, A Paliathanasis, T Christodoulakis, 10.1088/1361-6382/ac2b09arXiv:2108.01970Class. Quant. Grav. 38225003gr-qcN. Dimakis, A. Paliathanasis, and T. Christodoulakis, Class. Quant. Grav. 38, 225003 (2021), arXiv:2108.01970 [gr-qc]. . F K Anagnostopoulos, S Basilakos, E N Saridakis, 10.1016/j.physletb.2021.136634arXiv:2104.15123Phys. Lett. B. 822136634gr-qcF. K. Anagnostopoulos, S. Basilakos, and E. N. Saridakis, Phys. Lett. B 822, 136634 (2021), arXiv:2104.15123 [gr-qc]. . D Zhao, 10.1140/epjc/s10052-022-10266-4arXiv:2104.02483Eur. Phys. J. C. 82303gr-qcD. Zhao, Eur. Phys. J. C 82, 303 (2022), arXiv:2104.02483 [gr-qc]. . W Khyllep, A Paliathanasis, J Dutta, 10.1103/PhysRevD.103.103521arXiv:2103.08372Phys. Rev. D. 103103521gr-qcW. Khyllep, A. Paliathanasis, and J. Dutta, Phys. Rev. D 103, 103521 (2021), arXiv:2103.08372 [gr-qc]. . 
R Solanki, A De, P K Sahoo, 10.1016/j.dark.2022.100996arXiv:2203.03370Phys. Dark Univ. 36100996gr-qcR. Solanki, A. De, and P. K. Sahoo, Phys. Dark Univ. 36, 100996 (2022), arXiv:2203.03370 [gr-qc]. . L Järv, M Rünkla, M Saal, O Vilson, 10.1103/PhysRevD.97.124025arXiv:1802.00492Phys. Rev. D. 97124025gr-qcL. Järv, M. Rünkla, M. Saal, and O. Vilson, Phys. Rev. D 97, 124025 (2018), arXiv:1802.00492 [gr-qc]. . J Jiménez, L Heisenberg, T Koivisto, 10.1103/PhysRevD.98.044048arXiv:1710.03116Phys. Rev. D. 9844048grqcJ. Beltrán Jiménez, L. Heisenberg, and T. Koivisto, Phys. Rev. D 98, 044048 (2018), arXiv:1710.03116 [gr- qc]. . F Ambrosio, L Heisenberg, S Kuhn, 10.1088/1361-6382/ac3f99arXiv:2109.04209Class. Quant. Grav. 3925013grqcF. D'Ambrosio, L. Heisenberg, and S. Kuhn, Class. Quant. Grav. 39, 025013 (2022), arXiv:2109.04209 [gr- qc]. . M Hohmann, 10.1103/PhysRevD.104.124077arXiv:2109.01525Phys. Rev. D. 104124077gr-qcM. Hohmann, Phys. Rev. D 104, 124077 (2021), arXiv:2109.01525 [gr-qc]. . F Ambrosio, S D B Fell, L Heisenberg, S Kuhn, 10.1103/PhysRevD.105.024042arXiv:2109.03174Phys. Rev. D. 10524042gr-qcF. D'Ambrosio, S. D. B. Fell, L. Heisenberg, and S. Kuhn, Phys. Rev. D 105, 024042 (2022), arXiv:2109.03174 [gr-qc]. . M Visser, 10.1007/s10714-005-0134-8arXiv:gr-qc/0411131Gen. Rel. Grav. 371541M. Visser, Gen. Rel. Grav. 37, 1541 (2005), arXiv:gr- qc/0411131. . S Capozziello, R D&apos;agostino, O Luongo, 10.1093/mnras/sty422arXiv:1712.04380Mon. Not. Roy. Astron. Soc. 476astro-ph.COS. Capozziello, R. D'Agostino, and O. Luongo, Mon. Not. Roy. Astron. Soc. 476, 3924 (2018), arXiv:1712.04380 [astro-ph.CO]. . S Capozziello, R D&apos;agostino, O Luongo, 10.1093/mnras/staa871arXiv:2003.09341Mon. Not. Roy. Astron. Soc. 4942576astro-ph.COS. Capozziello, R. D'Agostino, and O. Luongo, Mon. Not. Roy. Astron. Soc. 494, 2576 (2020), arXiv:2003.09341 [astro-ph.CO]. . A Aviles, C Gruber, O Luongo, H Quevedo, 10.1103/PhysRevD.86.123516arXiv:1204.2007Phys. Rev. D. 86123516astroph.COA. 
Aviles, C. Gruber, O. Luongo, and H. Quevedo, Phys. Rev. D 86, 123516 (2012), arXiv:1204.2007 [astro- ph.CO]. . S Capozziello, R D&apos;agostino, O Luongo, 10.1142/S0218271819300167arXiv:1904.01427Int. J. Mod. Phys. D. 281930016grqcS. Capozziello, R. D'Agostino, and O. Luongo, Int. J. Mod. Phys. D 28, 1930016 (2019), arXiv:1904.01427 [gr- qc]. . S Mandal, D Wang, P K Sahoo, 10.1103/PhysRevD.102.124029arXiv:2011.00420Phys. Rev. D. 102124029gr-qcS. Mandal, D. Wang, and P. K. Sahoo, Phys. Rev. D 102, 124029 (2020), arXiv:2011.00420 [gr-qc]. . S Capozziello, R D&apos;agostino, O Luongo, 10.1007/s10714-018-2483-0arXiv:1806.06385Gen. Rel. Grav. 51gr-qcS. Capozziello, R. D'Agostino, and O. Luongo, Gen. Rel. Grav. 51, 2 (2019), arXiv:1806.06385 [gr-qc]. . S Capozziello, R D&apos;agostino, O Luongo, 10.1007/s10714-017-2304-xarXiv:1706.02962Gen. Rel. Grav. 49141gr-qcS. Capozziello, R. D'Agostino, and O. Luongo, Gen. Rel. Grav. 49, 141 (2017), arXiv:1706.02962 [gr-qc]. . S Capozziello, R D&apos;agostino, O Luongo, 10.1088/1475-7516/2018/05/008arXiv:1709.08407JCAP. 058gr-qcS. Capozziello, R. D'Agostino, and O. Luongo, JCAP 05, 008 (2018), arXiv:1709.08407 [gr-qc]. . E Belgacem, 10.1088/1475-7516/2019/07/024arXiv:1906.01593LISA Cosmology Working Group). 0724JCAP. astro-ph.COE. Belgacem et al. (LISA Cosmology Working Group), JCAP 07, 024 (2019), arXiv:1906.01593 [astro-ph.CO]. . N Aghanim, Planck10.1051/0004-6361/201833910arXiv:1807.06209Astron. Astrophys. 641Erratum: Astron.Astrophys. 652, C4 (2021). astro-ph.CON. Aghanim et al. (Planck), Astron. Astrophys. 641, A6 (2020), [Erratum: Astron.Astrophys. 652, C4 (2021)], arXiv:1807.06209 [astro-ph.CO]. . S Hild, 10.1088/0264-9381/28/9/094013arXiv:1012.0908Class. Quant. Grav. 2894013gr-qcS. Hild et al., Class. Quant. Grav. 28, 094013 (2011), arXiv:1012.0908 [gr-qc]. . C Cutler, 10.1103/PhysRevD.57.7089arXiv:gr-qc/9703068Phys. Rev. D. 577089C. Cutler, Phys. Rev. D 57, 7089 (1998), arXiv:gr- qc/9703068. . 
S Babak, J Gair, A Sesana, E Barausse, C F Sopuerta, C P L Berry, E Berti, P Amaro-Seoane, A Petiteau, A Klein, 10.1103/PhysRevD.95.103012arXiv:1703.09722Phys. Rev. D. 95103012gr-qcS. Babak, J. Gair, A. Sesana, E. Barausse, C. F. Sop- uerta, C. P. L. Berry, E. Berti, P. Amaro-Seoane, A. Pe- titeau, and A. Klein, Phys. Rev. D 95, 103012 (2017), arXiv:1703.09722 [gr-qc]. . P A Seoane, 10.1007/s10714-021-02889-xarXiv:2107.09665[astro-ph.IMGen. Rel. Grav. 54P. A. Seoane et al., Gen. Rel. Grav. 54, 3 (2022), arXiv:2107.09665 [astro-ph.IM]. . N Tamanini, C Caprini, E Barausse, A Sesana, A Klein, A Petiteau, 10.1088/1475-7516/2016/04/002arXiv:1601.07112JCAP. 042astro-ph.CON. Tamanini, C. Caprini, E. Barausse, A. Sesana, A. Klein, and A. Petiteau, JCAP 04, 002 (2016), arXiv:1601.07112 [astro-ph.CO]. . N Tamanini, 10.1088/1742-6596/840/1/012029arXiv:1612.02634astro-ph.COJ. Phys. Conf. Ser. 84012029N. Tamanini, J. Phys. Conf. Ser. 840, 012029 (2017), arXiv:1612.02634 [astro-ph.CO]. . T Robson, N J Cornish, C Liu, 10.1088/1361-6382/ab1101arXiv:1803.01944Class. Quant. Grav. 36105011astroph.HET. Robson, N. J. Cornish, and C. Liu, Class. Quant. Grav. 36, 105011 (2019), arXiv:1803.01944 [astro- ph.HE]. . W K Hastings, Biometrika. 5797W. K. Hastings, Biometrika 57, 97 (1970). . E Abdalla, 10.1016/j.jheap.2022.04.002arXiv:2203.06142[astro-ph.COJHEAp. 34E. Abdalla et al., JHEAp 34, 49 (2022), arXiv:2203.06142 [astro-ph.CO]. . R Agostino, O Luongo, 10.1103/PhysRevD.98.124013arXiv:1807.10167Phys. Rev. D. 98124013gr-qcR. D'Agostino and O. Luongo, Phys. Rev. D 98, 124013 (2018), arXiv:1807.10167 [gr-qc]. . F Bajardi, R , D&apos; Agostino, arXiv:2208.02677gr-qcF. Bajardi and R. D'Agostino, (2022), arXiv:2208.02677 [gr-qc]. Pan-STARRS1). D M Scolnic, 10.3847/1538-4357/aab9bbarXiv:1710.00845Astrophys. J. 859astro-ph.COD. M. Scolnic et al. (Pan-STARRS1), Astrophys. J. 859, 101 (2018), arXiv:1710.00845 [astro-ph.CO]. . J Guy, SNLS)10.1051/0004-6361:20066930arXiv:astro-ph/0701828Astron. 
Astrophys. 466J. Guy et al. (SNLS), Astron. Astrophys. 466, 11 (2007), arXiv:astro-ph/0701828. . A G Riess, 10.3847/1538-4357/aaa5a9arXiv:1710.00844astro-ph.COAstrophys. J. 853A. G. Riess et al., Astrophys. J. 853, 126 (2018), arXiv:1710.00844 [astro-ph.CO]. . R Jimenez, A Loeb, 10.1086/340549arXiv:astro-ph/0106145Astrophys. J. 573R. Jimenez and A. Loeb, Astrophys. J. 573, 37 (2002), arXiv:astro-ph/0106145. . S Capozziello, R D&apos;agostino, O Luongo, 10.1016/j.dark.2018.02.002arXiv:1712.04317Phys. Dark Univ. 20gr-qcS. Capozziello, R. D'Agostino, and O. Luongo, Phys. Dark Univ. 20, 1 (2018), arXiv:1712.04317 [gr-qc].
[ "https://github.com/eXtremeGravityInstitute/LISA_" ]
[ "Anderson localization from the replica formalism", "Anderson localization from the replica formalism" ]
[ "Alexander Altland \nInstitut für Theoretische Physik\nUniverstät zu Köln\n50937KölnGermany\n", "Alex Kamenev \nDepartment of Phhysics\nUniversity of Minnesota\n55455MinneapolisMNUSA\n", "Chushun Tian \nDepartment of Phhysics\nUniversity of Minnesota\n55455MinneapolisMNUSA\n\nKavli Institute for Theoretical Physics\nUniversity of California\n93106Santa BarbaraCAUSA\n" ]
[ "Institut für Theoretische Physik\nUniverstät zu Köln\n50937KölnGermany", "Department of Phhysics\nUniversity of Minnesota\n55455MinneapolisMNUSA", "Department of Phhysics\nUniversity of Minnesota\n55455MinneapolisMNUSA", "Kavli Institute for Theoretical Physics\nUniversity of California\n93106Santa BarbaraCAUSA" ]
[]
We study Anderson localization in quasi-one-dimensional disordered wires within the framework of the replica σ-model. Applying a semiclassical approach (geodesic action plus Gaussian fluctuations) recently introduced within the context of supersymmetry by Lamacraft, Simons and Zirnbauer [1], we compute the exact density of transmission matrix eigenvalues of superconducting wires (of symmetry class CI). For the unitary class of metallic systems (class A) we are able to obtain the density function, save for its large transmission tail.

PACS numbers: 72.15.Rn
Phys. Rev. Lett. 95, 206601 (2005). DOI: 10.1103/PhysRevLett.95.206601. arXiv: cond-mat/0505328 (https://export.arxiv.org/pdf/cond-mat/0505328v1.pdf).
12 May 2005

At present, there exist two theoretical approaches capable of describing strongly localized phases of disordered wires: supersymmetry (SUSY) [2] and the DMPK transfer matrix approach [3]. This represents a serious limitation in as much as both formalisms are ill-suited for generalization to the presence of Coulomb interactions (see, however, Ref. [4]).
Reciprocally, it has, so far, not been possible to describe strong localization phenomena by those theories that may be applied to the analysis of interaction effects: replica field theory [5] and the Keldysh approach [6]. It is the purpose of this letter to introduce a replica field theory approach capable of describing strongly localized phases. Conceptually, our work is based on a recent paper [1] by Lamacraft, Simons and Zirnbauer (LSZ) where saddle-point techniques have been applied to analyze the SUSY generating functionals of quasi one-dimensional disordered conductors. Specifically, it was shown that four out of ten symmetry classes of disordered metals are semiclassically exact [7] in that the stationary phase results coincide with those obtained by DMPK methods [8]. We here show that the phenomenon of semiclassical exactness pertains to the replica formalism and, in particular, 'survives' the analytical continuation inherent to that approach. Applying the technique to the non-semiclassically exact unitary symmetry class, we find that it still produces qualitatively correct results. To introduce the replica-generalization of the method we consider a disordered superconducting wire in the presence of spin-rotation and time reversal invariance (symmetry class CI in the classification of Ref. [9]). The (thermal) transport properties of this system may be conveniently characterized in terms of the average density of transmission matrix eigenvalues, $\rho(\phi)$. Within the fermion-replica formalism the latter may be expressed through the generating function
$$Z(\hat\theta) \equiv \prod_{a=1}^{R} \det\bigl(1 - \sin^2(\theta_a/2)\, t t^\dagger\bigr),$$
where $t t^\dagger$ is the transmission matrix with eigenvalues $T_j = \cosh^{-2}(\phi_j/2)$ and $\hat\theta \equiv \mathrm{diag}(\theta_1, \dots, \theta_R)$. Defining the function $F(\theta) \equiv \lim_{R\to 0} \frac{d}{d\theta_1}\big|_{\theta_a \to \theta} Z(\hat\theta)$, the transmission matrix eigenvalue density is obtained as [10]: $\rho(\phi) = \frac{1}{2\pi}\bigl(F(i\phi + \pi) - F(i\phi - \pi)\bigr)$.
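As a small side check of our own (not part of the original argument), the single-channel determinant factor entering $Z(\hat\theta)$ can be rewritten via the elementary identity $1 - \sin^2(\theta/2)\cosh^{-2}(\phi/2) = (\cos\theta + \cosh\phi)/(1 + \cosh\phi)$, whose zeros sit at $\theta = i\phi \pm \pi$, precisely the points at which $F$ is evaluated above:

```python
import numpy as np

# Check of the elementary single-channel identity (our own remark, not from
# the paper): 1 - sin^2(theta/2) / cosh^2(phi/2)
#           = (cos(theta) + cosh(phi)) / (1 + cosh(phi)),
# whose zeros sit at theta = i*phi +/- pi, the points where F is evaluated.
rng = np.random.default_rng(0)
for _ in range(100):
    theta = rng.uniform(-np.pi, np.pi) + 1j * rng.uniform(-2, 2)
    phi = rng.uniform(-3, 3)
    lhs = 1 - np.sin(theta / 2) ** 2 / np.cosh(phi / 2) ** 2
    rhs = (np.cos(theta) + np.cosh(phi)) / (1 + np.cosh(phi))
    assert np.isclose(lhs, rhs, atol=1e-12)

# the zero at theta = i*phi + pi:
phi = 1.3
assert abs(np.cos(1j * phi + np.pi) + np.cosh(phi)) < 1e-12
```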
The field theoretical representation of the generating function for class CI is given by
$$Z(\hat\theta) = \int_{g(0)}^{g(T)} Dg\, e^{-S[g]}, \qquad S[g] = \frac{1}{8} \int_0^T dt\, \mathrm{tr}\bigl(\partial g\, \partial g^{-1}\bigr), \quad (1)$$
where $g$ is a field of matrices $g(t) \in \mathrm{Sp}(2R)$, the functional integration extends over the Haar measure on the symplectic group, and $T = L/\xi$ is the length of the wire, $L$, in units of the localization length $\xi$. At the left and right end point of the wire the field is subject to boundary conditions [10,11] which in the case of class CI read as $g(0) = 1\!\!1$ and $g(T) = \exp(i\hat\theta \otimes \sigma_3)$. Here the Pauli matrix $\sigma_3$ acts in the space defining the symplectic condition $g^{-1} = \sigma_2 g^T \sigma_2$. Our strategy will be to subject the functional (1) to a straightforward stationary phase analysis [12]. Varying the action $S[g]$ w.r.t. $g$, one obtains the Euler-Lagrange equation: $\delta_g\big|_{g=\bar g} S[g] = 0 \Rightarrow \partial(\bar g^{-1} \partial \bar g) = 0$, which integrates to the condition $\bar g^{-1} \partial \bar g = \mathrm{const}$. The solutions to this latter equation are given by $\bar g = \exp(i \bar W t/T)$, with constant Lie-algebra elements $\bar W \in \mathrm{sp}(2R)$. Evaluating $\bar g$ at the system boundary $t = T$, we obtain the condition $\exp(i\bar W) = \exp(i\hat\theta \otimes \sigma_3)$. This is solved by $\bar W \equiv \hat\theta^{(\hat n)} \otimes \sigma_3$, where $\hat\theta^{(\hat n)} \equiv \hat\theta + 2\pi \hat n$ and $\hat n = \mathrm{diag}(n_1, \dots, n_R)$ is a vector of integer 'winding numbers'. The saddle point action is given by $S[\bar g^{(\hat n)}] = \frac{1}{4T} \sum_{a=1}^R \bigl(\theta_a^{(n_a)}\bigr)^2$, indicating that at length scales $T \gtrsim 1$, mean field configurations traversing multiply around the group manifold become energetically affordable. Physically, these configurations describe the massive (and perturbatively inaccessible) buildup of interfering superconductor diffusion modes. Their proliferation at large length scales forms the basis of the localization phenomenon. To obtain the contributions of individual saddle points, $\bar g^{(\hat n)}$, to the generating function, we need to integrate over quadratic fluctuations.
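The saddle-point structure above can be verified numerically in the simplest case. The following sketch (for $R = 1$, with illustrative values of $\theta$, $n$, $T$ chosen by us) checks that $\bar g(t) = \exp(i\bar W t/T)$ carries a constant 'current' $\bar g^{-1}\partial\bar g = i\bar W/T$, obeys the boundary conditions, and evaluates the saddle-point action:

```python
import numpy as np

# Sketch (R = 1, illustrative theta, n, T): the geodesic saddle point
# g(t) = exp(i W t / T), W = theta^{(n)} sigma_3, has constant
# g^{-1} dg/dt = i W / T (so d/dt (g^{-1} dg/dt) = 0) and obeys the
# boundary conditions g(0) = 1 and g(T) = exp(i theta sigma_3).
T, theta, n = 1.0, 0.7, 1
theta_n = theta + 2 * np.pi * n                 # theta^{(n)} = theta + 2 pi n
W = theta_n * np.diag([1.0, -1.0])              # W = theta^{(n)} sigma_3

def g(t):
    # matrix exponential of the diagonal matrix i W t / T
    return np.diag(np.exp(1j * np.diag(W) * t / T))

# finite-difference check that g^{-1} dg/dt = i W / T at several times
h = 1e-6
for t in (0.2, 0.5, 0.8):
    dg = (g(t + h) - g(t - h)) / (2 * h)
    assert np.allclose(np.linalg.inv(g(t)) @ dg, 1j * W / T, atol=1e-5)

# boundary conditions: g(0) = 1 and g(T) = exp(i theta sigma_3),
# since exp(2 pi i n) = 1
assert np.allclose(g(0.0), np.eye(2))
assert np.allclose(g(T), np.diag([np.exp(1j * theta), np.exp(-1j * theta)]))

# saddle-point action S[g^{(n)}] = (theta^{(n)})^2 / (4 T) for R = 1
S = theta_n ** 2 / (4 * T)
```

The winding number $n$ only enters through $\theta^{(n)}$, so all windings satisfy the same boundary conditions while the action grows with $n$, as stated in the text.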
We thus generalize to field configurations $g(t) = \exp(i W(t))\, \bar g^{(\hat n)}$, where the fields $W(t) \in \mathrm{sp}(2R)$ obey vanishing (Dirichlet) boundary conditions $W(0) = W(T) = 0$. Parameterizing these fields as $W = \sum_{\mu=0}^{3} W_\mu \otimes \sigma_\mu$, where $\sigma_0 = 1\!\!1_2$ and $W_\mu$ are $R \times R$ hermitian matrices subject to the Lie algebra constraints $W_0 = -W_0^T$ and $W_i = W_i^T$, $i = 1, 2, 3$, the quadratic expansion of the action reads as $S[g] = S[\bar g^{(\hat n)}] + S_I[W_0, W_3] + S_{II}[W_1, W_2] + O(W^3)$, where
$$S_I[W_0, W_3] = \frac{1}{4} \int_0^T dt\, \mathrm{tr}\Bigl[\partial W_0\, \partial W_0 + \partial W_3\, \partial W_3 - \frac{2i}{T}\bigl(W_0\, \partial W_3 + W_3\, \partial W_0\bigr)\hat\theta^{(\hat n)}\Bigr],$$
$$S_{II}[W_1, W_2] = \frac{1}{2} \int_0^T dt\, \mathrm{tr}\Bigl[\partial W_1\, \partial W_2 + \frac{i\,\epsilon_{ij3}}{T}\, W_i\, \partial W_j\, \hat\theta^{(\hat n)}\Bigr].$$
The integration over the matrices $W_\mu$ leads to fluctuation determinants, which may be calculated by the auxiliary identity $\det(-\partial_t^2 + 2zT^{-1}\partial_t) = \sinh(z)/z$, where $z \in \mathbb{C}$, and the differential operator acts in the space of functions obeying Dirichlet boundary conditions. As a result we obtain the stationary phase generating function
$$Z(\hat\theta) = \sum_{\{\hat n\}} \prod_{a<a'}^{R} \frac{\bigl(\theta_a^{(n)} - \theta_{a'}^{(n)}\bigr)/2}{\sin\bigl[(\theta_a^{(n)} - \theta_{a'}^{(n)})/2\bigr]} \prod_{a\le a'}^{R} \frac{\bigl(\theta_a^{(n)} + \theta_{a'}^{(n)}\bigr)/2}{\sin\bigl[(\theta_a^{(n)} + \theta_{a'}^{(n)})/2\bigr]}\, \exp\Bigl(-\frac{1}{4T} \sum_{a=1}^R \bigl(\theta_a^{(n)}\bigr)^2\Bigr), \quad (2)$$
where the first/second fluctuation factor stems from the integration over the field-doublets $(W_0, W_3)$/$(W_1, W_2)$. (In passing, we note that as an alternative to the brute force integration outlined above the result (2) can be obtained by group theoretical reasoning: according to general principles [13], the fluctuation integral around extremal (geodesic) configurations $\bar g^{(\hat n)}$ on a general semi-simple Lie group is given by $\prod_{\alpha>0} \alpha(\bar g^{(\hat n)})\, \sin^{-1}\bigl(\alpha(\bar g^{(\hat n)})\bigr)\, \exp(-S[\bar g^{(\hat n)}])$, where the product extends over the system of positive roots of the group, $\alpha(\bar g^{(\hat n)})$. Equation (2) above is but the $\mathrm{Sp}(2R)$-variant of this formula.) In the limit of coinciding boundary phases, $\theta_a \to \theta$, the denominators $\sin[(\theta_a^{(n)} - \theta_{a'}^{(n)})/2] \to 0$, i.e.
the contribution of configurations $\hat n$ containing non-vanishing winding number differences $n_a - n_{a'} \neq 0$ diverges. (At the same time, we do know that the integration over the full group manifold must generate a finite result. Indeed, it turns out that if we first sum over all winding number configurations $\hat n$ and only then take the limit of coinciding phases, all divergent factors disappear.) This divergence reflects the presence of a zero mode in the system: for uniform boundary phases, $\hat\theta \propto 1\!\!1_R$, transformations $\bar g^{(\hat n)} \to \exp(iV^0)\, \bar g^{(\hat n)} \exp(-iV^0)$ with constant block-diagonal $V^0 = V^0_0 \otimes \sigma_0 + V^0_3 \otimes \sigma_3$ conform with the boundary conditions but do not alter the action. As we shall see below, the presence of zero modes implies that only winding number configurations of the special form $(n, 0, \dots, 0)$ survive the replica limit, $R \to 0$. However, before elaborating on this point, let us evaluate the contribution $Z_n$ of the distinguished configurations to the generating function. Throughout we will denote the boundary angles by $\theta_a \equiv \theta + \eta_a$, understanding that the limit $\eta_a \to 0$ is to be taken at some stage. (Within this representation, the 'free energy' $F(\theta) = \partial_{\theta_1}\big|_{\theta_a \to \theta} Z(\hat\theta) = \partial_{\eta_1}\big|_{\eta_a \to 0} Z(\theta + \hat\eta)$.) The 'dangerous' product $\prod_{a<a'}(\dots)$ in Eq. (2) then reduces to $\sim (\pi n/\sin(\eta_1/2))^{R-1} \approx (2\pi n/\eta_1)^{R-1}$; all other contributions to $Z_n$ are finite. The appearance of a pole of $(R-1)$st order hints at the presence of $R-1$ complex zero modes (generated by the $R-1$ components of the matrix $V^0$ that do not commute with $\bar g^{(\hat n)}$). At this stage, we take the limit $R \to 0$. As a result, the divergent factor gets replaced by a 'pole of degree $(-1)$', i.e. the zero $\eta_1/(2\pi n)$. (It is worth noting that in SUSY a contribution similar to the singularity of degree $(-1)$ is obtained by integration over the non-compact bosonic degrees of freedom; the complementary single replica channel $a = 1$ corresponds to the fermionic sector.)
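The $\sim 2\pi n/\eta_1$ behaviour of the 'dangerous' factor is easy to confirm numerically; this toy check (for $R = 2$ and the winding configuration $(1, 0)$, with parameters of our choosing) evaluates the $a < a'$ factor of Eq. (2) directly:

```python
import numpy as np

# Toy check (R = 2, winding configuration (1, 0), our own illustration) that
# the 'dangerous' factor of Eq. (2) diverges as ~ 2*pi*n / eta_1 when the
# boundary phases coincide: with theta_1 = theta + eta_1 and theta_2 = theta,
# the a < a' factor is x / sin(x) with x = (theta_1^{(1)} - theta_2^{(0)}) / 2.
theta, n = 0.4, 1

def dangerous(eta1):
    x = (theta + eta1 + 2 * np.pi * n - theta) / 2   # = pi*n + eta1/2
    return x / np.sin(x)

for eta1 in (1e-2, 1e-3, 1e-4):
    # |x / sin(x)| approaches 2*pi*n / eta_1 as eta_1 -> 0
    ratio = abs(dangerous(eta1)) * eta1 / (2 * np.pi * n)
    assert abs(ratio - 1.0) < 1e-2
```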
Therefore the subsequent differentiation ($F(\theta) \sim \partial_{\eta_1}\big|_{\eta_a \to 0} Z$) must act on this linear factor $\eta_1$; all other occurrences of $\eta_a$ in $Z_n$ may be ignored. Evaluating the partition function in this manner, we obtain $Z_{n \neq 0} = \frac{\eta_1}{2\pi n}\, \frac{\theta + 2\pi n}{\theta + \pi n}\, \exp\bigl(-\pi n(\pi n + \theta)/T\bigr)$. We finally differentiate w.r.t. $\eta_1$ and arrive at the result $\rho(\phi) = \rho_0(\phi) + \sum_{n \neq 0} \rho_n(\phi)$, where the 'Drude plus weak localization' term $\rho_0 = (2T)^{-1} - \frac{1}{2}(\phi^2 + \pi^2)^{-1}$, while the non-perturbative contributions are given by:
$$\rho_n(\phi) = -\frac{e^{-\frac{\pi^2}{T} n(n+1)}}{2\pi^2 n}\, \mathrm{Re}\Bigl[\frac{\phi + i\pi(2n+1)}{\phi + i\pi(n+1)}\, e^{\frac{i\pi n \phi}{T}}\Bigr]. \quad (3)$$
This expression identically coincides with the SUSY result [1], and with the exact DMPK result [8]. To illustrate the 'crystallization' of the transmission matrix eigenvalues at the discrete values $\phi_j \approx 2jT$, the function $\rho(\phi)$ is plotted in Fig. 1a for a few values of $T$. Following LSZ, the heat conductance of the wire may be obtained by integrating the result above against the weight function $1/\cosh^2(\phi/2)$. Summing the result of this integration over winding numbers, one obtains the asymptotic result [1] $g \stackrel{T \gg 1}{\simeq} 4\, e^{-T}/\sqrt{\pi T}$. Our analysis so far focused on the specific set of winding number configurations, $(n, 0, \dots, 0)$. To understand why contributions of different structure vanish, a fact that greatly simplifies the formalism, consider the set $(0, \dots, n, \dots, 0)$. By symmetry, winding number configurations of this type will lead to an expression similar to $Z_n$ above, only that the leading pre-factor gets replaced: $\eta_1/(2\pi n) \to \eta_a/(2\pi n)$, where $a \in \{2, \dots, R\}$ marks the position of the non-vanishing winding number. Since, however, we still differentiate w.r.t. $\eta_1$, this contribution vanishes in the limit $\eta_a \to 0$. The argument above may be generalized to generic contributions, $(n_1, n_2, \dots, n_R) \neq (n, 0, \dots, 0)$. (By symmetry, one may order the winding numbers in ascending order $(0, \dots, 0, 1, \dots, 1, 2, \dots)$.
Assuming that there are $N_n$ winding numbers $n$ (where $\sum_n N_n = R$) and choosing the boundary angle in the sector $n$ to be $\theta + n\eta$, one verifies that for any fixed configuration, the $R \to 0$ result contains uncompensated powers of $\eta$ and, therefore, vanishes.) Before proceeding, it is worthwhile to compare the mean field analysis above to the more established field theory transfer matrix method [2]. To this end, let us interpret $Z(\hat\theta) = \langle g(T)|\exp(-T\hat H)|1\!\!1\rangle$ as the path integral describing the (imaginary time) quantum mechanical transition amplitude $|1\!\!1\rangle \stackrel{T}{\to} |g(T)\rangle$ of a particle on the group space $\mathrm{Sp}(2R)$. The Hamiltonian corresponding to the (purely 'kinetic') action of the path integral is given by $\hat H = -2\Delta$, where $\Delta$ is the Laplace operator of the group space $\mathrm{Sp}(2R)$. Our analysis above has been tantamount to a semiclassical or WKB analysis of the transition amplitude. Alternatively, and more rigorously, one may employ the spectral decomposition, $Z(\hat\theta) = \sum_\lambda \psi_\lambda^*(g)\, \psi_\lambda(1\!\!1)\, \exp(-T\epsilon_\lambda)$, where $\psi_\lambda$ are the eigenfunctions of the Laplace operator, $\epsilon_\lambda$ its discrete energy eigenvalues and $g \equiv g(T)$. For general Lie groups (and supergroups) formal expressions for these spectral decompositions are known [14]. Noting that for large systems, $L \gg \xi$, only eigenstates with minimal energy $\epsilon_\lambda$ effectively contribute to the sum, this knowledge has been used to compute the localization properties of disordered quantum wires within the SUSY formalism [2,11]. The problems with transferring this approach to the replica formalism lie with the analytical continuation from integer group dimension $R$ to $R \to 0$. In taking this limit, it is essential to keep track of high-lying contributions to the spectral sum. These terms grow rapidly more complex, which is why attempts to obtain a replica variant of the 'quantum approach' above have failed so far. Having discussed the method for a symmetry class that enjoys semiclassical exactness, we next outline what happens in cases where this feature is absent.
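The relation between the two representations just compared, the semiclassical winding-number sum and the spectral decomposition, can be illustrated on the simplest compact group, U(1), where both forms of the heat kernel are connected by Poisson summation. This toy check is our own illustration and does not involve the Sp(2R) manifold of the text:

```python
import numpy as np

# Toy illustration on U(1): the heat kernel of H = -d^2/dtheta^2 on the circle
# has two equivalent forms,
#   spectral sum:       (1/2pi) * sum_k exp(-k^2 T + i k theta)
#   winding-number sum: sum_n exp(-(theta + 2 pi n)^2 / (4T)) / sqrt(4 pi T),
# related by Poisson summation, mirroring the 'quantum' vs semiclassical
# pictures discussed in the text.
def heat_spectral(theta, T, kmax=200):
    k = np.arange(-kmax, kmax + 1)
    return np.sum(np.exp(-k**2 * T + 1j * k * theta)).real / (2 * np.pi)

def heat_winding(theta, T, nmax=200):
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(-(theta + 2 * np.pi * n)**2 / (4 * T))) / np.sqrt(4 * np.pi * T)

for T in (0.05, 1.0, 20.0):
    for theta in (0.0, 1.1, 3.0):
        assert abs(heat_spectral(theta, T) - heat_winding(theta, T)) < 1e-10
```

For short wires (small T) the winding sum converges with a single term, while for long wires (large T) the spectral sum does: the same trade-off that motivates keeping high-lying spectral terms in the replica limit.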
By way of example, consider a metallic disordered quantum wire in the absence of time-reversal invariance, the unitary symmetry class A. In this case, the fermionic replica generating function is given by [5,15]

Z(θ) = ∫_{Q(0)}^{Q(T)} DQ exp(−(1/8) ∫_0^T dt tr(∂_t Q)²),

where the matrix Q(t) ∈ U(2R)/U(R) × U(R), and the boundary configurations are given by Q(0) = σ_3 ⊗ 𝟙 and Q(T) = e^{−iσ_2⊗θ/2} σ_3 e^{iσ_2⊗θ/2}. Here, the two-component structure distinguishes between advanced and retarded indices. As before, the stationary-phase configurations Q(t) = e^{−iσ_2⊗θ^{(n)} t/(2T)} σ_3 e^{iσ_2⊗θ^{(n)} t/(2T)} of the functional integral do not mix different replica channels. Geometrically, they can be interpreted as trajectories (in general, with non-zero winding number n) on the meridian of the sphere U(2)/U(1) × U(1) (the single-replica manifold). Fluctuations may be conveniently parameterized by the generalization σ_3 → e^{iW(t)} σ_3, where W = W_1 ⊗ σ_1 + W_2 ⊗ σ_2 and W_{1,2} are hermitian R × R matrices. The subsequent calculations largely parallel those for class CI above. Expanding to second order in W_{1,2} and performing the Gaussian integration, we again observe that only winding-number configurations (n, 0, . . . , 0) survive the analytical continuation procedure R → 0. Differentiating w.r.t. θ_1 and then putting θ_a → θ, we obtain the result ρ(φ) = (2T)^{−1} − Σ_{n≠0} (−1)^n ρ_n(φ), where

ρ_n = [e^{−π² n(n+1)/T}/(2π² n)] Re{[√((φ + iπ)(φ + iπ(2n+1)))/(φ + iπ(n+1))] e^{iπnφ/T}}.

(The same result is obtained by a saddle-point analysis of the SUSY generating functional.) In Fig. 1b, the function ρ(φ) is plotted for several values of the parameter T. For small T the density is almost constant, reflecting the Dorokhov distribution of eigenvalues [3,10]. For large values of T the spectrum crystallizes at φ_j ≈ (1 + 2j)T. The lowest eigenvalue φ_0 does, indeed, correctly determine the localization length of the system.
Except for the evident failure of the method at small values φ ≪ φ_0 [16], the large-scale profile of the DoS is in good agreement with results obtained by transfer-matrix methods [11,17,18]. Summarizing, we have shown how the localization phenomenon in quasi-one-dimensional systems may be described by a semiclassical approach to fermionic-replica field theories. We were able to reproduce the exact transmission-matrix eigenvalue density for symmetry class CI, while for the unitary class we obtain qualitatively correct results (except for the tails of the eigenvalue spectrum). The comparative simplicity of the approach makes us believe that it may be successfully applied to problems that cannot be treated by other means. Evidently, the next direction of research will be the study of the impact of Coulomb interactions on the localization phenomenon. We are grateful to M. R. Zirnbauer for numerous valuable discussions. This work is supported by SFB/TR 12 of the Deutsche Forschungsgemeinschaft (A. A.), the A. P. Sloan foundation and the NSF grants No. DMR-0405212 (A. K.), DMR-0439026 and PHY-9907949 (C. T.).

FIG. 1: Top: Density of transmission eigenvalues of the superconductor class CI for T = .02 (dashed-dotted); 1 (dashed); 50 (full). Bottom: the same for the unitary class A. The negative density at small φ represents an artefact of the saddle-point approximation.

[1] A. Lamacraft, B. D. Simons, and M. R. Zirnbauer, Phys. Rev. B 70, 075412 (2004).
[2] K. B. Efetov, Supersymmetry in Disorder and Chaos (Cambridge University Press, UK, 1997).
[3] O. N. Dorokhov, Pis'ma Zh. Eksp. Teor. Fiz. 36, 259 (1982) [JETP Lett. 36, 318 (1982)]; P. A. Mello, P. Pereyra, and N. Kumar, Ann. Phys. (N.Y.) 181, 290 (1988).
[4] G. Schwiete and K. B. Efetov, Phys. Rev. B 71, 134203 (2005).
[5] F. J. Wegner, Z. Phys. B 35, 207 (1979); K. B. Efetov, A. I. Larkin, and D. E. Khmelnitskii, Sov. Phys. JETP 52, 568 (1980); A. M. Finkel'stein, Sov. Phys. JETP 57, 97 (1983); D. Belitz and T. R. Kirkpatrick, Rev. Mod. Phys. 66, 261 (1994).
[6] M. L. Horbach and G. Schön, Ann. Phys. 2, 51 (1993); A. Kamenev and A. Andreev, Phys. Rev. B 60, 2218 (1999); C. Chamon, A. W. W. Ludwig, and C. Nayak, Phys. Rev. B 60, 2239 (1999); M. V. Feigel'man, A. I. Larkin, and M. A. Skvortsov, Phys. Rev. B 61, 12361 (2000).
[7] It is a well-known mathematical fact that integrals over certain manifolds (manifolds possessing a symplectic structure) may be exactly evaluated by stationary phase methods. 'Zero-dimensional' applications of this principle include the Itzykson-Zuber formulae [19], and the mean-field approach to spectral correlations for certain symmetry classes. By a (not rigorously proven) conjecture, semiclassical exactness pertains to certain infinite-dimensional manifolds, the so-called loop groups [13]. From a mathematical point of view, the quantum fields describing disordered wires of the four symmetry classes in question form manifolds of this type.
[8] P. W. Brouwer, A. Furusaki, I. A. Gruzberg, and C. Mudry, Phys. Rev. Lett. 85, 1064 (2000).
[9] A. Altland and M. R. Zirnbauer, Phys. Rev. B 55, 1142 (1997).
[10] Yu. V. Nazarov, Phys. Rev. Lett. 73, 134 (1994).
[11] B. Rejaei, Phys. Rev. B 53, 13235 (1996).
[12] On dimensional grounds, the minimal action of a spatially non-uniform field configuration scales as S[ḡ]_min ∼ T^{−1}. To formally justify stationary phase approximation schemes we must require T^{−1} > 1.
[13] R. F. Picken, J. Phys. A 22, 2285 (1989).
[14] Referring for a more detailed discussion to [13], we note that for a group-valued path integral the spectral sum extends over all irreducible group representations, while the product of wave functions ψ*_λ(g)ψ_λ(𝟙) = χ_λ(g) ≡ tr_λ(g) is given by the character χ_λ of g (i.e. the trace of the matrix representing the group element g in the representation λ). The eigenvalue ε_λ = 2c_2(λ), where c_2(λ) is the eigenvalue of the quadratic Casimir operator (the unique overall commutative operator quadratic in the operators representing the Lie algebra in λ). The spectral sums can be obtained by solving the 'Schrödinger' equation of the problem, or by subjecting winding number sums such as (2) to a Poisson summation scheme.
[15] A. Kamenev and M. Mezard, J. Phys. A 32, (1999); Phys. Rev. B 60, 3944 (1999); I. V. Yurkevich and I. V. Lerner, Phys. Rev. B 60, 3955 (1999); E. Kanzieper, Phys. Rev. Lett. 89, 250201 (2002).
[16] On physical grounds, one would expect the DoS to be exponentially small at φ ≪ φ_0. To understand the failure of the semiclassical method, notice that for φ → 0 the mean-field configurations are geodesics starting and ending at the same point on the sphere. Unlike with generic values of the revolution angle θ, there is a continuous manifold of such trajectories, i.e. the functional integral contains a zero mode. The contribution of this mode to the functional integral is not accounted for correctly by a straightforward integration over quadratic fluctuations. (In fact, we do not yet know how to treat it correctly.)
[17] C. W. J. Beenakker and B. Rejaei, Phys. Rev. B 49, 7499 (1994).
[18] K. M. Frahm, Phys. Rev. Lett. 74, 4706 (1995).
[19] C. Itzykson and J. B. Zuber, J. Math. Phys. 21, 411 (1980).
STOCHASTIC DIFFERENCE EQUATIONS WITH THE ALLEE EFFECT

Elena Braverman (Dept. of Math. and Stats., University of Calgary, 2500 University Drive N.W., Calgary, AB T2N 1N4, Canada)
Alexandra Rodkina (Department of Mathematics, University of the West Indies, Mona Campus, Kingston, Jamaica)

AIMS' Journals, Volume X, Number 0X, XX 200X, pp. X-XX
doi: 10.3934/dcds.2016060; arXiv:1606.01928 (https://arxiv.org/pdf/1606.01928v1.pdf); 6 Jun 2016

Abstract. For the truncated stochastically perturbed equation x_{n+1} = max{f(x_n) + lχ_{n+1}, 0} with f(x) < x on (0, m), which corresponds to the Allee effect, we observe that for a very small perturbation amplitude l the eventual behavior is similar to the non-perturbed case: there is extinction for small initial values in (0, m − ε) and persistence for x_0 ∈ (m + δ, H] for some H satisfying H > f(H) > m. As the amplitude grows, an interval (m − ε, m + δ) of initial values arises and expands, such that x_n sustains in [m, H] with a certain probability and, with a positive probability, eventually gets into the interval (0, m − ε). Lower estimates for these probabilities are presented. If H is large enough, as the amplitude of perturbations grows, the Allee effect disappears: a solution persists for any positive initial value.

2010 Mathematics Subject Classification. Primary: 39A50, 37H10; Secondary: 93E10, 92D25.
1. Introduction. Difference equations can describe population dynamics models, and, if there is no compensation for low population size, i.e. the stock recruitment is lower than mortality, the species goes to extinction, unless the initial size is large enough. This phenomenon was introduced in [1], see also [6,20]. It is called the Allee effect after [1] and can be explained by many factors: problems with finding a mate, deficiency of group defense or/and social functioning for low population densities.
If the initial population size is small enough (is in the Allee zone) then the population size tends to zero as time tends to infinity. Even a small stochastic perturbation which does not tend to zero significantly changes the situation: due to random immigration, there are large enough values of the population size for some large times even in the Allee zone. Thus, instead of extinction, we explore eventual low-density behavior, as well as essential persistence and solution bounds. Results on permanence of solutions for stochastic difference equations, including boundedness and persistence, were recently reviewed in [21]. For recent results on asymptotic behavior of stochastic difference equations also see [2,3,4,5,10,11,13,14,17,18,19,22] and the whole issue of the Journal of Difference Equations and Applications including [21]. The influence of stochastic perturbations on population survival, chaos control and eventual cyclic behavior was investigated in [9,10,11]. It was shown that the chaotic behavior can be destroyed by either a positive deterministic [9] or a stochastic noise with a positive mean [10,11]; instead of chaos, there is an attractive two-cycle. Certainly, stochastic perturbations, applied formally, can lead to negative size values. To avoid this situation, we consider the truncated stochastic difference equation

x_{n+1} = max{f(x_n) + lχ_{n+1}, 0}, x_0 > 0, n ∈ N. (1)

Here f is a function with a possible Allee zone, for example,

x_{n+1} = [Ax_n²/(B + x_n)] e^{r(1−x_n)}, (2)

described in [12], and

x_{n+1} = Ax_n/(B + (x_n − T)²), (3)

considered in [15,16]; see [6] for a detailed outline of models of the Allee effect.
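The truncated equation (1) is straightforward to simulate. The sketch below uses a hypothetical bistable map f(x) = 2.2x²/(1 + x²), chosen for illustration only (it is not one of the paper's maps (2)-(3)); its two positive fixed points, roughly 0.642 and 1.558, play the roles of the Allee threshold m and the upper equilibrium, and χ_n is taken uniform on (−1, 1) in line with Assumption 1 below.

```python
import random

def f(x):
    # Hypothetical bistable map (illustration only, not the paper's (2)-(3)):
    # fixed points of f(x) = x are the roots of x^2 - 2.2x + 1 = 0,
    # i.e. m ~ 0.642 (Allee threshold) and ~1.558 (upper equilibrium).
    return 2.2 * x * x / (1.0 + x * x)

def simulate(x0, l, n_steps, rng):
    """Iterate equation (1): x_{n+1} = max(f(x_n) + l*chi_{n+1}, 0),
    with chi_n i.i.d. uniform on (-1, 1)."""
    x = x0
    for _ in range(n_steps):
        x = max(f(x) + l * rng.uniform(-1.0, 1.0), 0.0)
    return x

rng = random.Random(1)
print(simulate(0.3, 0.0, 100, rng))   # l = 0, x0 below threshold: decays to 0
print(simulate(1.0, 0.0, 100, rng))   # l = 0, x0 above threshold: settles near 1.558
print(simulate(0.3, 0.25, 500, rng))  # moderate noise: outcome is random
```

With l = 0 the deterministic dichotomy of the Allee effect is visible immediately; with moderate l, repeated runs from the same x_0 end up on either side of the threshold, which is the mixed behavior studied in Section 4.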
It is well known that, without a stochastic perturbation, if f(x) is a function such that 0 < f(x) < x for x ∈ (0, m) and f(x) > m for x > m, the eventual behavior of a solution depends on the initial condition: if 0 < x_0 < m, then the solution tends to zero (goes to extinction); if x_0 > m, then the solution satisfies x_n > m, i.e. persists. Sometimes high densities also lead to extinction, as in (2) and (3); then we can only claim that f(x) > m for x ∈ (m, H) and conclude persistence for x_0 ∈ (m, H). However, the situation changes for (1) with a stochastic perturbation: for example, even if f has an Allee zone, the eventual expectation of a solution exceeds a positive number depending on l and the distribution of χ. Nevertheless, this effect is due to immigration only, and we will call this type of behavior blurred extinction, or eventual low density. In the present paper, we use some ideas developed in [8] for models with a randomly switching perturbation. Significant interest in discrete maps is stimulated by the complicated types of behavior exhibited even by simple maps. In particular, for (2) with r large enough, whatever the positive initial value, the chaotic solution can take values in the interval (0, ε) for any small ε > 0. Then, in practical applications the dynamics is not in fact chaotic but leads to eventual extinction, as the positive density cannot be arbitrarily low. Nevertheless, if the range is separated from zero, for some maps there is unconditional survival (persistence), independently of a positive initial value. In this note, we are mostly interested in maps f with survival for certain initial values and an Allee zone: if x_0 is small enough, then the solution of (1) with l = 0 tends to zero, and there is an interval [a, H] ⊂ (0, ∞) which f maps into itself. The main results of the paper are the following: 1.
If in (1) the value of l is small enough, the dynamics is similar to the non-stochastic case: blurred extinction (low density) for small x_0 and persistence for x_0 in a certain interval. 2. If l > 0 is large enough then, under some additional assumptions, there is unconditional survival. 3. If the non-perturbed system has several attraction zones then, for any initial condition, the solution can become persistent with a large enough lower bound, whenever l is large enough. The paper is organized as follows. After describing all relevant assumptions and notations in Section 2, we state in Section 3 that, for perturbations small enough, there is the same Allee effect as in the deterministic case. The result that there may exist large enough perturbation amplitudes ensuring survival for any positive initial condition is also included in Section 3. Further, Section 4 deals with the case when, for certain initial conditions, both persistence and low-density behavior are possible with a positive probability, while for other initial conditions a.s. persistence or a.s. low-density behavior is guaranteed. For initial values leading to different types of dynamics, lower bounds for the probabilities of each type of dynamics are developed in Section 4. The case when the deterministic equation has more than 2 positive fixed points is considered in Section 5. The results are illustrated with numerical examples in Section 6, and Section 7 contains a short summary and discussion.

2. Preliminaries. Let (Ω, F, (F_n)_{n∈N}, P) be a complete filtered probability space, and let χ := (χ_n)_{n∈N} be a sequence of independent random variables with zero mean. The filtration (F_n)_{n∈N} is supposed to be naturally generated by the sequence (χ_n)_{n∈N}, namely F_n = σ{χ_1, . . . , χ_n}. In the paper, we assume that the stochastic perturbation χ in equation (1) satisfies the following assumption.

Assumption 1.
(χ_n)_{n∈N} is a sequence of independent and identically distributed continuous random variables with a density function φ(x) such that φ(x) > 0 for x ∈ (−1, 1) and φ(x) ≡ 0 for x ∉ [−1, 1].

We use the standard abbreviation "a.s." for the wordings "almost sure" or "almost surely" with respect to the fixed probability measure P throughout the text. A detailed discussion of stochastic concepts and notation may be found in, for example, Shiryaev [23]. Everywhere below, for each t ∈ [0, ∞), we denote by [t] the integer part of t. Before we proceed further, let us introduce assumptions on the function f in (1).

Assumption 2. f : (0, ∞) → (0, ∞) is continuous, f(0) = 0, and there exist positive numbers a and H, a < H, such that (i) f_H := max_{x∈[0,H]} f(x) < H; (ii) f(x) > f(a) > a for x ∈ (a, H].

So far we have not supposed that there is an Allee zone, where for small initial values a solution of the non-perturbed system tends to zero. This is included in the next condition.

Assumption 3. There is a point b_1 > 0 such that f(x) < x and f(x) ≤ f(b_1) for x ∈ (0, b_1).

3. Unconditional Persistence and Low-Density Behavior. In this section, we consider the case when the type of perturbation and the initial condition allow us to predict a.s. the eventual behavior of the solution. Lemma 3.1 indicates a small initial interval where the Allee effect is observed for small enough perturbations. Lemma 3.2 presents the range of initial conditions which guarantee permanence of solutions for l small enough. However, for large enough l and appropriate f, the Allee effect completely disappears under a stochastic perturbation, see Theorem 3.3.

Lemma 3.1. Let Assumptions 1 and 3 hold, and let (x_n) be a solution of equation (1) with

l ≤ b_1 − f(b_1) (4)

and x_0 ∈ [0, b_1]. Then x_n ∈ [0, b_1] for all n ∈ N.

Proof. For x_0 ∈ [0, b_1], we have f(x_0) ≤ f(b_1) and, a.s. on Ω, x_1 = f(x_0) + lχ_1 ≤ f(b_1) + l ≤ f(b_1) + b_1 − f(b_1) ≤ b_1. Similarly, the induction step implies x_n ∈ [0, b_1] for all n ∈ N, a.s.
Let us introduce the function

F(x) = f(x) − x, x ∈ [0, ∞). (5)

Lemma 3.2. Let Assumptions 1 and 2 hold, and let (x_n) be a solution of equation (1) with the noise amplitude l satisfying

l < min{H − f_H, F(a)} (7)

and with an arbitrary initial value x_0 ∈ (0, H). Then, a.s., for all n ∈ N, (i) x_n ≤ H; (ii) if in addition x_0 ∈ (a, H) then x_n ∈ (a, H).

Proof. If, for some ω ∈ Ω and n ∈ N, we have x_n(ω) ≤ H, then by Assumption 2, (i), and (7), x_{n+1}(ω) = f(x_n(ω)) + lχ_{n+1}(ω) ≤ f_H + l < H. If, for some ω ∈ Ω and n ∈ N, we also have x_n(ω) ∈ (a, H], then by Assumption 2, (ii), and (7), x_{n+1}(ω) = f(x_n(ω)) + lχ_{n+1}(ω) > f(a) − l = f(a) − a + a − l > l + a − l = a.

Theorem 3.3. Let Assumptions 1-3 hold, b be defined in (6), and (x_n) be a solution of equation (1) with l satisfying (7) and

l > b − f(b) = −F(b), (8)

and x_0 ∈ (0, H). Then, a.s., x_n eventually gets into the interval (a, H) and stays there.

Proof. By Lemma 3.2, it is sufficient to prove that x_n ∈ (a, H) for some n ∈ N, a.s. Let δ > 0 satisfy l > b − f(b) + δ (in particular, we can take δ = α(l − b + f(b)) for any α ∈ (0, 1)). We define

p_1 := P{ω ∈ Ω : χ(ω) ∈ ((b − f(b) + δ)/l, 1)}, K := [a/δ] + 1. (9)

By Lemma 3.2 we only have to consider the case x_0 ∈ (0, a]. Let us note that for any x_n ∈ (0, a] and χ_{n+1} ∈ ((b − f(b) + δ)/l, 1), we have x_{n+1} = f(x_n) + lχ_{n+1} ≥ f(x_n) − x_n + x_n + l(b − f(b) + δ)/l ≥ f(b) − b + x_n + b − f(b) + δ = x_n + δ. By Assumption 1, p_1 > 0; moreover, the probability

p_K := P{ω ∈ Ω : χ_i(ω) ∈ ((b − f(b) + δ)/l, 1), i = 1, . . . , K} = p_1^K > 0. (10)

Thus, the probability p_out := P{ω ∈ Ω : χ_j(ω) ∈ (−1, (b − f(b) + δ)/l] for some j ∈ {1, . . . , K}} = 1 − p_1^K ∈ (0, 1). If all χ_i, i = j + 1, j + 2, . . . , j + K, are in ((b − f(b) + δ)/l, 1), then x_{j+1} ≥ x_j + δ, x_{j+2} ≥ x_{j+1} + δ ≥ x_j + 2δ, . . . , x_{j+K} ≥ x_j + Kδ > a. By Lemma 3.2, it is sufficient to show that p_s = 0, where p_s := P{ω ∈ Ω : among any K successive j, there is χ_j(ω) ∈ (−1, (b − f(b) + δ)/l]}. Let us take some ε > 0 and prove that p_s < ε.
Among any K successive j, there is a χ_j in the above interval with probability p_out < 1. In particular, there is such a χ_j among j = 1, . . . , K with probability p_out, as well as among j = K + 1, . . . , 2K, and in any of the non-intersecting sets j = nK, nK + 1, . . . , (n + 1)K − 1, n = 0, . . . , m − 1. The probability that there is a χ_j in the above interval among any K successive χ_j with j = 1, . . . , mK − 1 is p_out^m, and p_s ≤ p_out^m. Since p_out^m < ε as soon as m > ln ε / ln(p_out), we conclude that p_s = 0, which completes the proof.

4. Dynamics Depending on Perturbations (the case l < b − f(b)). In this section we assume that

l < b − f(b) = −F(b), (11)

where b is defined in (6), and f corresponds to the system with an Allee effect. As we assume an upper bound for the perturbation, the dynamics is expected to be dependent on the initial condition: low density if the initial condition is small enough, and sustainable (persistent) for a large enough initial condition. We recall that a solution (x_n) is persistent if there exist n_0 ∈ N and a > 0 such that x_n > a for any n ≥ n_0. In the non-stochastic case, if the system exhibits the Allee effect, then for a small initial condition the solution tends to zero. However, in the case of both truncation and stochastic perturbations satisfying Assumption 1, the expectation of x_n exceeds a certain positive number. The density function φ(x) is positive, thus

α := ∫_0^1 x φ(x) dx > 0. (12)

Lemma 4.1. Suppose that Assumption 1 holds and f : (0, ∞) → (0, ∞) is a continuous function. Then the expectation of the solution (x_n) of (1) is not less than lα, with α defined in (12).

Proof. From (1), x_n ≥ max{lχ_n, 0}, thus the expectation of x_n is not less than ∫_{−1}^{1} l max{x, 0} φ(x) dx = l ∫_0^1 x φ(x) dx = lα, which concludes the proof.

4.1. A.s. persistence and a.s. low density areas. Suppose that Assumptions 2, 3 hold with b_1 ≥ b, where b is denoted in (6), and l satisfies (7).
Then we can introduce the positive numbers

u_l := sup{u < a : F(u) < l} (13)

and

v_l := inf{v > b : F(v) > −l}, (14)

where F is defined in (5).

Theorem 4.2. Suppose that Assumptions 1-3 hold with b_1 ≥ b, where b is denoted in (6), and l satisfies (7), (11). Let (x_n) be a solution to (1) with an arbitrary initial value x_0 ∈ [0, H], and let u_l be defined as in (13) and v_l as in (14). Then the following statements are valid.
(i) b < v_l < u_l < a.
(ii) F(u_l) = l, F(v_l) = −l, F(x) ≥ l for x ∈ (u_l, a), F(x) ≤ −l for x ∈ (b, v_l).
(iii) If x_0 ∈ (0, v_l), there exists n_1 ∈ N such that x_n ∈ [0, b] a.s. for n ≥ n_1.
(iv) If x_0 ∈ (u_l, H) then x persists a.s.; moreover, there exists n_2 ∈ N such that x_n ∈ [a, H] a.s. for n ≥ n_2.

Proof. Since b ∈ {u < a : F(u) < l} and a ∈ {v > b : v − f(v) < l}, both sets in (13) and (14) are non-empty and u_l ≤ a, v_l ≥ b. By continuity of f and Assumptions 2, 3 we have u_l < a, v_l > b, F(u_l) = l, F(v_l) = −l. So u_l ≠ v_l and v_l ∈ {u < a : F(u) < l}, which implies v_l < u_l and completes the proof of (i)-(ii).

(iii) Define ∆_l(y) := inf_{x∈[b,y]} {x − f(x) − l}. Note that ∆_l(b) = b − f(b) − l > 0, ∆_l(v_l) = 0, and the function ∆_l is non-increasing on [b, v_l]. Then, for each x_0 ∈ (b, v_l) and each x ∈ (b, x_0), we have ∆_l(x_0) ≤ ∆_l(x) and x − f(x) − l ≥ ∆_l(x). So, a.s., x_1 = f(x_0) + lχ_1 ≤ f(x_0) + l ≤ f(x_0) + x_0 − f(x_0) − ∆_l(x_0) = x_0 − ∆_l(x_0). If x_1 ≤ b we stop. If x_1 > b, we have, a.s., x_2 = f(x_1) + lχ_2 ≤ f(x_1) + l ≤ f(x_1) + x_1 − f(x_1) − ∆_l(x_1) ≤ x_0 − ∆_l(x_0) − ∆_l(x_1) ≤ x_0 − 2∆_l(x_0). Thus, after at most K steps, where K = [(v_l − b)/∆_l(x_0)] + 1, the solution reaches the interval [0, b].

(iv) Define ∆̃_l(y) := inf_{x∈[y,a]} {f(x) − x − l}. Note that ∆̃_l(u_l) = 0, ∆̃_l(a) = f(a) − a − l > 0, and the function ∆̃_l is non-decreasing on [u_l, a]. Then, for each x_0 ∈ (u_l, a) and each x ∈ (x_0, a), we have ∆̃_l(x_0) ≤ ∆̃_l(x) and f(x) − x − l ≥ ∆̃_l(x). So, a.s., x_1 = f(x_0) + lχ_1 ≥ f(x_0) − l ≥ f(x_0) + ∆̃_l(x_0) − f(x_0) + x_0 = x_0 + ∆̃_l(x_0).
If x_1 ≥ a we stop. If x_1 < a, we have, a.s., x_2 = f(x_1) + lχ_2 ≥ f(x_1) − l ≥ f(x_1) + ∆̃_l(x_1) − f(x_1) + x_1 ≥ x_0 + ∆̃_l(x_0) + ∆̃_l(x_1) ≥ x_0 + 2∆̃_l(x_0). Thus, after at most K steps, where K = [(a − u_l)/∆̃_l(x_0)] + 1, the solution reaches the interval [a, H].

As everywhere above, in this subsection we assume that Assumptions 1-3 and conditions (7), (11) hold. Based on this, we can define

β_l = inf{b < x < a : F(x) > l}, α_l = sup{b < x < a : F(x) < −l}. (15)

Note that, since F(a) > l, F(b) < −l, and F is continuous, both sets in the right-hand sides of the formulae in (15) are non-empty. Let u_l and v_l be defined as in (13) and (14), respectively. Note that v_l < β_l ≤ u_l, v_l ≤ α_l < u_l, and max_{x∈[b,β_l]} F(x) ≤ l, min_{x∈[α_l,a]} F(x) ≥ −l. The points a, b, u_l, v_l, α_l, β_l are illustrated in Figure 1.

Remark 3. It is possible that α_l > β_l, see Example 1 and Fig. 1, right. However, as F is continuous, F(b) < −l, F(β_l) = l, F(α_l) = −l, F(a) > l, the inequality α_l > β_l immediately implies that there are at least 3 fixed points of f on (b, a). In this case we are able to prove only "essential extinction" for x_0 ∈ (v_l, β_l) and persistence for x_0 ∈ (α_l, u_l) (see Lemma 4.3 below). However, if α_l < β_l, for each x_0 ∈ (α, β) ⊂ (α_l, β_l), a solution persists with a positive probability and also reaches the interval [0, b] with a positive probability. So solutions with the initial value on the non-empty interval (α_l, β_l) demonstrate mixed behavior (see Corollary 2 below).

Example 1. Consider (1) with f(x) = 3x/(3 + (x − 2)²) for 0 ≤ x ≤ 1; f(x) = x − sin(π(x − 1)) − 1/4 for 1 < x ≤ 5; f(x) = 8.55x/(8 + (x − 6)²) for x > 5. We can take a = 5.

Let x_0 ∈ (α_l, u_l]. Define

A = A(x_0) := min_{x∈[x_0,u_l]} F(x) > −l (16)

and

p_1 = p_1(x_0) := P{ω ∈ Ω : χ(ω) ≥ 1 − (l + A)/(2l)}, K_1 = K_1(x_0) := [2(u_l − x_0)/(l + A)] + 1. (17)

Let x_0 ∈ [v_l, β_l).
Define

B = B(x_0) := max_{x∈[v_l,x_0]} F(x) < l (18)

and

p_2 = p_2(x_0) := P{ω ∈ Ω : χ(ω) ≤ −1 + (l − B)/(2l)}, K_2 = K_2(x_0) := [2(x_0 − v_l)/(l − B)] + 1. (19)

Lemma 4.3. Let Assumptions 1-3 hold, and let l satisfy (7) and (11), where b is defined as in (6). Let (x_n) be a solution to (1) with x_0 ∈ [0, H], and let α_l, β_l be denoted by (15). Then the following statements are valid.
(i) If x_0 ∈ (α_l, H] then the solution x_n will eventually get into the interval [a, H] with the persistence probability P_p such that P_p ≥ p_1^{K_1}, where p_1 and K_1 are defined in (17).
(ii) If x_0 ∈ [0, β_l) then the solution x_n will eventually get into the interval [0, b] with the "low density" ("essential extinction") probability P_e satisfying P_e ≥ p_2^{K_2}, where p_2 and K_2 are defined in (19).

Proof. Let u_l and v_l be defined by (13), (14), respectively. By Theorem 4.2, it is enough to prove (i) for x_0 ∈ (α_l, u_l] and (ii) for x_0 ∈ [v_l, β_l).

(i) Let A, K_1 and p_1 be defined, respectively, as in (16) and (17). We set Ω_k := {ω ∈ Ω : χ_k(ω) ≥ 1 − (l + A)/(2l)}, k = 1, 2, . . . , K_1, and A := ∩_{k=1}^{K_1} Ω_k. Note that P[A] = p_1^{K_1}. We prove that

for each ω ∈ A there exists a number n ≤ K_1 such that x_n(ω) > u_l. (20)

By (16), F(x) ≥ A > −l for any x ∈ [x_0, u_l]. Since A ⊆ Ω_1 we have, on A, x_1 = f(x_0) + lχ_1 = x_0 + F(x_0) + lχ_1 ≥ x_0 + A + l(1 − (l + A)/(2l)) = x_0 + (l + A)/2. Similarly, for each k = 1, 2, . . . , K_1 − 1, if x_k ∈ [x_0, u_l], since A ⊆ Ω_k and F(x_k) ≥ A > −l, we have, on A, x_{k+1} ≥ x_k + (l + A)/2. The set A can be presented as A = A_{11} ∪ A_{12}, A_{11} ∩ A_{12} = ∅, where A_{11} := {ω ∈ A : x_1(ω) > u_l}, A_{12} := {ω ∈ A : x_1(ω) ∈ [x_0 + (l + A)/2, u_l]}. On A_{12}, x_2 ≥ x_1 + (l + A)/2 ≥ x_0 + 2(l + A)/2.
Presenting A_{12} in the same way as above, A_{12} = A_{21} ∪ A_{22}, A_{21} ∩ A_{22} = ∅, where A_{21} := {ω ∈ A_{12} : x_2(ω) > u_l}, A_{22} := {ω ∈ A_{12} : x_2(ω) ∈ [x_0 + 2(l + A)/2, u_l]}, and, in general, A_{k−1,2} = A_{k1} ∪ A_{k2}, A_{k1} ∩ A_{k2} = ∅, where A_{k1} := {ω ∈ A_{k−1,2} : x_k(ω) > u_l}, A_{k2} := {ω ∈ A_{k−1,2} : x_k(ω) ∈ [x_0 + k(l + A)/2, u_l]}. When P[A_{k,2}] = 0, we have, a.s., A = ∪_{i=1}^{k} A_{i1}, so (20) holds with n = i on A_{i1}, i = 1, 2, . . . , k. When P[A_{k,2}] > 0, we continue the process. However, by (17), x_0 + K_1(l + A)/2 > u_l, so A_{K_1,2} = ∅. Then A can be presented as ∪_{i=1}^{k} A_{i1}, where k does not exceed K_1. This proves (20), so the solution reaches the interval [u_l, H] after at most K_1 steps with a probability at least p_1^{K_1}. Application of Theorem 4.2, (iv), completes the proof of (i).

Part (ii) can be proved in a similar way. For B, K_2 and p_2 defined, respectively, as in (18) and (19), we set Γ_k := {ω ∈ Ω : χ_k(ω) < −1 + (l − B)/(2l)}, k = 1, 2, . . . , K_2, and B := ∩_{k=1}^{K_2} Γ_k, and notice that P[B] = p_2^{K_2}. By (18), F(x) ≤ B < l for any x ∈ [v_l, x_0]. Then, on B, if x_k ∈ [v_l, x_0], k = 1, 2, . . . , K_2 − 1, we get x_{k+1} ≤ x_k − (l − B)/2. Noting that x_0 − K_2(l − B)/2 < v_l, we show that for each ω ∈ B there exists a number n ≤ K_2 such that x_n(ω) < v_l. So the solution reaches the interval [0, v_l] after at most K_2 steps with probability at least p_2^{K_2}. Application of Theorem 4.2, (iii), completes the proof of (ii).

Remark 4. Under the assumptions of Lemma 4.3, (i) the persistence probability P_p and the "low density" probability P_e depend on x_0; (ii) the number K_1 indicates the number of steps necessary for a solution x_n with the initial value x_0 ∈ (α_l, u_l) to get into the interval (u_l, H]. Respectively, K_2 is the number of steps required for a solution with the initial value x_0 ∈ (v_l, β_l) to get into the interval (0, v_l).

Remark 5. The estimates of the probabilities P_p(x_0) and P_e(x_0) are far from being sharp. They can be improved if, on each step, we estimate anew the probability to move to the right. In order to prove (i), we choose A(α) := min_{x∈[α,u_l]} F(x) > −l, B(β) := max_{x∈[v_l,β]} F(x) < l, and

p_1(α) := P{ω ∈ Ω : χ(ω) ≥ 1 − (l + A(α))/(2l)}, K_1(α) := [2(u_l − α)/(l + A(α))] + 1,
p_2(β) := P{ω ∈ Ω : χ(ω) ≤ −1 + (l − B(β))/(2l)}, K_2(β) := [2(β − v_l)/(l − B(β))] + 1.

Taking any x_0 ∈ [α, u_l] and following the proof of Lemma 4.3, after at most K_1(α) steps we have, on ∩_{i=1}^{K_1(α)} {ω ∈ Ω_i : χ_i(ω) ≥ 1 − (l + A(α))/(2l)},

x_n ≥ x_0 + K_1(α)(l + A(α))/2 = α + ([2(u_l − α)/(l + A(α))] + 1)(l + A(α))/2 ≥ α + u_l − α = u_l.

So the persistence probability P_p satisfies the first of the two estimates

P_p ≥ p_1(α)^{K_1(α)}, P_e ≥ p_2(β)^{K_2(β)}. (21)

A similar estimation can be done for any x_0 ∈ [v_l, β], and the "low density" probability P_e satisfies the second estimate in (21). Case (ii) follows from case (i), since for any x_0 ∈ (α, β), the estimates of both probabilities P_p and P_e in (21) are valid.

The proof of the following Lemma is straightforward and thus will be omitted. 1. The inequality

|F(x)| < l, x ∈ (v_l, u_l), (22)

is equivalent to β_l = u_l and α_l = v_l. 2. In particular, condition (22) holds if

f(x_2) − f(x_1) > x_2 − x_1 for any v_l ≤ x_1 < x_2 ≤ u_l. (23)

Remark 6. Note that (i) u_l is a non-decreasing function of l, while v_l is a non-increasing function of l. So, for l_1 < l_2 < b we have (v_{l_1}, u_{l_1}) ⊆ (v_{l_2}, u_{l_2}). (ii) β_l is a non-decreasing function of l, while α_l is a non-increasing function of l. (iii) If condition (22) holds for some l = l_1, it can nevertheless fail for some l = l_2 < l_1 (see Example 2). (iv) If condition (23) holds for some l = l_1 then (23), and therefore (22), will be fulfilled for all l = l_2 < l_1. In the following example we demonstrate the case when (22) holds for some l = l_1 but does not hold for any smaller l.

Example 2. Consider (1) with f(x) = 16x/(15 + (x − 3)²) for 0 ≤ x ≤ 2; f(x) = x − sin(πx/2)/(4x) for 2 < x ≤ 12; f(x) = (x − 10)/(1 + (x − 13)²) + 11 for x > 12.

Theorem. Suppose that Assumptions 1-3 hold with b_1 ≥ b, where b is denoted in (6), l satisfies conditions (7) and (11), and condition (22) holds. Let u_l be defined as in (13) and v_l be defined as in (14), and let (x_n) be a solution of (1) with x_0 ∈ [0, H]. Then the following statements are valid. (i) If x_0 ∈ (0, v_l) then there exists n_1 ∈ N such that x_n ∈ [0, b] a.s. for n ≥ n_1. (ii) If x_0 ∈ (u_l, H) then there exists n_2 ∈ N such that x_n ∈ [a, H] a.s. for n ≥ n_2. Moreover, P_e(x_0) → 1 as x_0 ↓ v_l and P_p(x_0) → 1 as x_0 ↑ u_l.

Proof. Let us prove that P_p(x_0) → 1 as x_0 ↑ u_l. The other case can be treated similarly. By uniform continuity of F on the interval [0, H], for any ε ∈ (0, 2C) we can find δ_1 = δ_1(ε) such that |F(x) − F(y)| ≤ lε/(2C) for |x − y| < δ_1, for all x, y ∈ [0, H]. Let δ = δ(ε) ≤ min{δ_1(ε), lε/(2C)} and Ω_ε^{(1)} := {ω ∈ Ω : χ_1(ω) ≥ −1 + ε/C}. Note that since ε/C < 2, the set Ω_ε^{(1)} is non-empty and

P[Ω_ε^{(1)}] = ∫_{−1+ε/C}^{1} φ(s) ds = 1 − ∫_{−1}^{−1+ε/C} φ(s) ds ≥ 1 − ε.

Let 0 < u_l − x_0 < δ; then |l − F(x_0)| = |F(u_l) − F(x_0)| ≤ lε/(2C), or F(x_0) ≥ l − lε/(2C), and, on Ω_ε^{(1)}, we have x_1 ∈ (u_l, H), since

x_1 = x_0 + F(x_0) + lχ_1 ≥ x_0 + l − lε/(2C) + l(−1 + ε/C) = x_0 + lε/(2C) > u_l − δ + lε/(2C) ≥ u_l.

Define

λ_i := P{ω ∈ Ω : χ(ω) > 1 − ε_i} = ∫_{max{−1,1−ε_i}}^{1} φ(t) dt, (27)

μ_i := P{ω ∈ Ω : χ(ω) < −1 + δ_i} = ∫_{−1}^{min{−1+δ_i,1}} φ(t) dt. (28)

Theorem 4.7. Assume that Assumptions 1-3 hold, b, u_l and v_l are denoted in (6), (13) and (14), respectively, and l satisfies conditions (7) and (11). If the function F increases on [v_l, u_l] then a solution to (1) with the initial value x_0 ∈ [0, H] persists with a positive probability

P_p(x_0) ≥ ∏_{i=1}^{K_1} λ_i (29)

and eventually belongs to (0, b) with a positive probability

P_e(x_0) ≥ ∏_{i=1}^{K_2} μ_i, (30)

where K_1 and K_2 are introduced in (24), while λ_i and μ_i are denoted in (27) and (28), respectively.

Proof. Denote Ω_i := {ω ∈ Ω : χ_i(ω) > 1 − ε_i}; then P{Ω_i} = λ_i.
On Ω 1 , we have x 1 = x 0 + F (x 0 ) + lχ 1 ≥ x 0 + F (x 0 ) + l − l + F (x 0 ) 2 = x 0 + l + F (x 0 ) 2 , or x 1 − x 0 ≥ ε.(31) Further, assume that on ∩ i j=1 Ω j we have x i ≥ x 0 + iε. Then on ∩ i j=1 Ω j , either x i ≥ u l or x i < u l . In the former case, by Theorem 4.2, x persists and P {x K1 ≥ a} = P {x i ≥ u l } ≥ P ∩ i j=1 Ω j = i i=1 λ j ≥ K1 i=1 λ j . In the latter case, due to monotonicity of F , we have x i+1 = x i + F (x i ) + lχ i+1 >x i + F (x 0 + iε) + l − l + 2F (x 0 + iε) − F (x 0 ) 2 =x i + l + F (x 0 ) 2 = x i + ε > x 0 + (i + 1)ε. By induction, either x i ≥ u l for some i = 1, . . . , K 1 or x i ≥ x 0 + iε, for all i = 1, . . . , K 1 , and hence on ∩ K1 j=1 Ω j , x K1 ≥ u l . To conclude the estimate for P p (x 0 ), by Theorem 4.2, part (iv), for a given x 0 , we have P {x K1 ≥ a} = P {x K1 ≥ u l } ≥ P ∩ K1 j=1 Ω j = K1 i=1 λ i . The estimate for P e is justified similarly. Both estimates for probabilities P p (x 0 ) and P e (x 0 ) in Corollary 3 can be writen in a more explicit form in the case when the density φ is bounded below by the constant h > 0, function F is differentiable on [b, a] and its derivative is bounded from below. φ(x) ≥ h,(32) then the estimates (29) and (30) lead to the inequalities P p (x 0 ) ≥ h K1 K1 i=1 ε i , P e (x 0 ) ≥ h K2 K2 i=1 δ i . (ii) Let, in addition to (32), for some κ > 0 and all x, y ∈ [v l , u l ], |F (y) − F (x)| ≥ κ|x − y|.(33) Then estimates (29) and (30) imply P p (x 0 ) ≥ h K1 K1 i=1 ε 0 + κ(i − 1)ε l , P e (x 0 ) ≥ h K2 K2 i=1 δ 0 + κ(i − 1)δ l , and substitution of values from (24)-(26) implies P p (x 0 ) ≥ h K1 ε l K1 K1 i=1 (1 + κ(i − 1)) , P e (x 0 ) ≥ h K2 δ l K2 K2 i=1 (1 + κ(i − 1)) . (iii) If, in addition to conditions of (ii), χ are uniformly distributed, then h = 1/2 and estimates (29) and (30) take forms P p (x 0 ) ≥ ε 2l K1 K1 i=1 (1 + κ(i − 1)) , P e (x 0 ) ≥ δ 2l K2 K2 i=1 (1 + κ(i − 1)) . Proof. 
We only have to prove the estimates of ε i in (ii) ε i = l + 2F (x 0 + (i − 1)ε) − F (x 0 ) 2l = l + F (x 0 ) 2l + F (x 0 + (i − 1)ε) − F (x 0 ) l ≥ ε 0 + κ(i − 1)ε l , and note that ε 0 = ε l . The estimates are valid since x 0 + (i − 1)ε ≤ u l . (i) f i := max x∈(0,Hi) f (x) < H i , i = 1, . . . , k; (ii) f (x) > f (a i ) > a i , x ∈ (a i , H i ], i = 1, . . . , k.l < min{H i − f i , f (a i ) − a i }.(34)If x 0 ∈ (a i , H i ) then x n ∈ (a i , H i ). If in addition l > max x∈[0,a1] (−F (x))(35) then, for an arbitrary initial value x 0 ∈ (0, H 1 ), a.s., x n eventually gets into the interval (a 1 , H 1 ) and stays there. Proof. Let x 0 ∈ (a i , H i ), then by (34) x 1 = f (x 0 ) + lχ 1 (ω) < H i − l + l = H i and x 1 = f (x 0 ) + lχ 1 (ω) > a i + l − l = a i . Similarly, x n ∈ (a i , H i ) implies x n+1 ∈ (a i , H i ), the induction step concludes the proof of the first part. If, in addition, (35) holds and x 0 ∈ [0, a 1 ) then the result follows from Theorem 3.3, where we assume a 1 = a, H 1 = H. Then all the conditions of Lemma 3.3 are satisfied, and, a.s., x n eventually gets into the interval (a 1 , H 1 ) and stays there, which completes the proof. Example 3. Consider (1) with f (x) = x − sin x. There is the Allee effect on [0, π]. The function f (x) is monotone increasing, satisfies f (x) < x on (2πk, (2k + 1)πk), k = 0, 1, . . . and f (x) > x for x ∈ ((2k − 1)π, 2πk), k ∈ N. Each of the intervals (πk, π(k + 1)) is mapped onto itself. For example, we can choose a k ∈ (2k − 1)π, 2k − 3 4 π , H k ∈ 2k + 3 4 π, (2k + 1)π , k ∈ N. By Lemma 5.1, for appropriate l, once x 0 ∈ (a k , H k ), we have x 0 ∈ (a k , H k ), k ∈ N. If l = 0 (the deterministic case) and x 0 ∈ ((2k − 1)π, (2k + 1)π) then x n → 2πk as n → ∞. f (x) < a i+1 , i ∈ N. 13 6. Numerical Examples. The equations in Examples 5 and 6 satisfy Assumptions 1, 2 and 3. As model examples, we can consider (2) and (3). Example 5. 
Consider (1) with f (x) := 4x/(2 + (x − 3) 2 ), x > 0. (36) The fixed points of f in (36) are c = 3 − √ 2 ≈ 1.586 and d = 3 + √ 2 ≈ 4.414. The maximum f m ≈ 6.317 is attained at x m = √ 11 ≈ 3.317. Also, f (f m ) ≈ 1.943, and the value of d 1 = {x > d : f (x) = c} is d 1 = 11/(3 − √ 2) ≈ 6.937. For (1) with f as in (36), l = 0.2 and x 0 ∈ [0, v l ) = [0, 0.361) we have low density behavior (Fig. 3, left); for x 0 ∈ (u l , H] = (1.74, 6.5] we have persistence (Fig. 3, right). If x 0 ∈ (v l , u l ) ≈ (0.361, 1.74), then solutions can either sustain or eventually have low density (Fig. 3, middle). All numerical runs correspond to the case when χ has a uniform distribution on [−1, 1]. We observe that for l > −F (b), say, l = 0.04, we have eventual persistence even for the small x 0 = 0.01 (Fig. 6, left), while we observe the Allee effect for the smaller l = 0.01 < −F (b) and the same initial value (Fig. 6, right). This example illustrates the possibility of alleviating the Allee effect with large enough random noise. Fig. 6 (left) also illustrates the multi-step lifts needed to get into the persistence area. 7. Discussion. Complicated and chaotic behavior of even simple discrete systems leads to a high risk of extinction. However, frequently observed persistence suggests that there are some mechanisms supporting this type of dynamics. In the present paper, we proposed two mechanisms for sustaining a positive expectation in populations experiencing the Allee effect: 1. By Lemma 4.1, in the presence of a stochastic perturbation, there is a positive eventual expectation for any solution, independently of initial conditions. This can be treated as persistence thanks to some sustained levels of occasional immigration. However, the lower solution bound is still zero, and even expected solution averages are rather small and matched to this immigration probability distribution. 2. The second mechanism is more important for sustainability of populations.
It assumes that there is a substantial range of values where extinction, due to either the Allee effect or its combination with an overpopulation reaction, is impossible. For example, under contest competition [7], with the remaining population levels sufficient to sustain, even for initial values in the Allee zone, large enough stochastic perturbations lead to persistence. Specifically, the amplitude should exceed the maximal population loss in the Allee area, and at the same time should not endanger the original sustainability area. The result can be viewed as follows: if there is the Allee effect and sustainable dynamics for a large interval of values, introduction of a potentially large enough stochastic perturbation can lead to persistence, for any initial conditions. For smaller perturbation amplitudes, there are three types of initial values: those attracted to low dynamics a.s., those a.s. persistent, and those which can demonstrate each type of dynamics with a positive probability. As illustrated in Section 6, all three types of dynamics are possible. In this paper we consider only bounded stochastic perturbations. The assumption of boundedness, along with the properties of the function f, allows us to construct a "trap", the interval [a, H], which any solution eventually enters and never leaves. Assume for a moment that in equation (1), instead of bounded, we have normally distributed χ n . Applying the approach of the proof of Theorem 3.3 for bounded stochastic perturbations, we can show that for any initial value x 0 > 0, a solution x n eventually gets into the interval (a, H), a.s. However, if χ n can take any negative value with nonzero probability, applying the same method, we can show that there is a "sequence" of negative noises with absolute values exceeding H pushing the solution out of the interval (a, H), a.s. Thus, a.s., for any n 1 ∈ N, there is an n ≥ n 1 such that x n = 0. So the conclusions of Lemma 3.2, (ii), and Theorem 3.3 are no longer valid.
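These dynamics are easy to explore numerically. A minimal simulation sketch of model (1), assuming the Example 5 map f(x) = 4x/(2 + (x − 3)^2) and χ uniform on [−1, 1]; the clipping at zero, used to keep the state non-negative, is our assumption rather than part of the paper's formulation:

```python
import random

def f(x):
    # Example 5 map: f(x) = 4x / (2 + (x - 3)^2)
    return 4.0 * x / (2.0 + (x - 3.0) ** 2)

def simulate(x0, l, n_steps, rng):
    # Iterate x_{n+1} = f(x_n) + l * chi_{n+1} with chi uniform on [-1, 1],
    # clipped at zero so the population stays non-negative.
    x, traj = x0, [x0]
    for _ in range(n_steps):
        x = max(f(x) + l * rng.uniform(-1.0, 1.0), 0.0)
        traj.append(x)
    return traj
```

Since f is bounded by f m ≈ 6.317, every trajectory is confined to [0, f m + l] after the first step; runs started in (u l , H] ≈ (1.74, 6.5] typically stay well away from zero, as in Fig. 3 (right).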
Note that from the population model's point of view, the assumption that the noise is bounded is hardly a limitation, since in nature there are no unbounded noises. For a normal type of noise, considering its truncation can be a reasonable approach to the problem. 8. Acknowledgment. The research was partially supported by NSERC grants RGPIN/261351-2010 and RGPIN-2015-05976 and also by the AIM SQuaRE program. The authors are grateful to the anonymous reviewer whose valuable comments contributed to the present form of the paper. Lemma 3.1. Let Assumptions 1 and 3 hold, f (b 1 ) < b 1 . Let x n be a solution of equation (1) with … Remark 1. Assumption 3 holds for non-decreasing f such that f (x) < x for x small enough. In this case, once it is satisfied for a given b 1 > 0, this is also true for any b ∈ (0, b 1 ). For example, if f (x) < x on (0, b 2 ) and f (b 2 ) = b 2 , we can take any b 1 < b 2 in Assumption 3. Then, the continuous function F (x) = f (x) − x is negative on (0, b 2 ) and vanishes at the end of the interval, so it attains its minimum at a point inside the interval. Moreover, if Assumptions 2 and 3 hold, we have f (a) > a, and also there is a minimum of F (x) on [0, a] attained on (0, a) at a point b: b = min{β > 0 : F (β) = min x∈[0,a] F (x)}. Remark 2. Lemma 3.2 implies persistence of solutions with initial values x 0 ∈ (a, H). Theorem 3.3. Let Assumptions 1 and 2 hold, b be defined in … Corollary 1. Under the assumptions of Theorem 3.3, if in addition we assume f (x) < x − l for x > H, then, for any initial condition x 0 ∈ [0, ∞), all solutions eventually belong to the interval (a, H). … x n gets into the interval (0, b) and by Lemma 3.1 stays there a.s. (iv) Define ∆ l (y) := inf x∈[y,a] {f (x) − x − l}, and note that ∆ l (a) = f (a) − a − l > 0, ∆ l (v l ) = 0, and the function ∆ l : … Figure 1: An illustration of the points a, b, u l , v l , α l , β l in the two cases: (left) α l < β l and (right) α l > β l .
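The thresholds u l and v l pictured in Figure 1 solve F(x) = l and F(x) = −l, respectively, and can be located numerically by bisection. A sketch, assuming the Example 5 map f(x) = 4x/(2 + (x − 3)^2) and l = 0.2 as illustrative choices:

```python
import math

def f(x):
    # Example 5 map (illustrative choice): f(x) = 4x / (2 + (x - 3)^2)
    return 4.0 * x / (2.0 + (x - 3.0) ** 2)

def bisect(g, lo, hi, tol=1e-10):
    # Plain bisection; assumes g changes sign on [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

F = lambda x: f(x) - x
l = 0.2
# u_l solves F(u_l) = l on the increasing branch above the fixed point
# c = 3 - sqrt(2); v_l solves F(v_l) = -l between 0 and the minimizer b of F.
u_l = bisect(lambda x: F(x) - l, 3.0 - math.sqrt(2.0), math.sqrt(11.0))
v_l = bisect(lambda x: F(x) + l, 1e-9, 0.907)
```

This reproduces the values quoted in Example 5, u l ≈ 1.74 and v l ≈ 0.361.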
2, f (a) ≈ 5.4286, H = 7, f H < 6.8, f (H) = 6.65 > a, F (H) = −0.35. Here the minimum− 5 4 of F (x) is first attained at b = well. We consider l < min{F (a), H − f H , −F (b)}, so we can take l < min{0.2286, 0.2, 1.25}. Then, it is easy to see that α l ∈ (3.so β l < α l . There are exactly 4 fixed points of f (x) = x − sin(π(x − 1)) − 1/4 on[1,5] which are arcsin(0.25)/π + 1, 2 − arcsin(0.25)/π, arcsin(0.25)/π + 3, 4 − arcsin(0.25)/π, and a fixed point ≈ 5.106 on (5, 5.2). Lemma 4. 3 . 3Let Assumptions 1-3 hold and l satisfy conditions (A + l)/2 units (respectively, left (l − B)/2 units), see Theorem 4.7 and Corollary 4 below. Corollary 2. Let the conditions of Lemma 4.3 hold, and x n be a solution of (1) with the initial value x 0 ∈ [0, H]. (i) If α ∈ (α l , u l ), we can estimate the persistence probability P p (α) uniformly for all initial values x 0 ∈ [α, H]. Similarly, if β ∈ (v l , β l ) we can estimate the "low density" probability P e (β) uniformly for all initial values x 0 ∈ [0, β]. (ii) If α l < β l , for each x 0 ∈ (α l , β l ) a solution persists with a positive probability and also reaches the interval [0, b] with a positive probability. For (α, β) ⊂ (α l , β l ), we can find estimation of P p and P e valid for all x 0 ∈ (α, β). Proof. If α ∈ (α l , a) then min x∈[α,a] F (x) > −l, and if β ∈ (b, β l ) then max x∈[b,β] Lemma 4. 4 . 4Let the assumptions of Lemma 4.3 hold. Then b ≈ 0.945, F (b) ≈ −0.1584. The maximum of f (x) for x > 12 is attained at x ≈ 13.162 and equals f H ≈ 14.081. Take a = 12.3, H = 14.5, f (a) ≈ 12.5436, F (a) ≈ 0.2436, f (H) ≈ 12.3846, F (H) ≈ −2.1154, H − f H ≈ 0.419, then we can take any l < 0.24. On [2, 12] local maxima of F (x) = 1/12, 1/28, 1/44 are attained at x = 3, 7, 11, respectively, and local minima of F (x) = −1/20, −1/36 at x = 5, 9, respectively. Thus for l ∈ (1/12, 6/25), inequality (22) holds while for l ∈ (0, 1/12) it fails. Theorem 4.2, Lemma 4.3 and Corollary 2 imply the following result. Theorem 4 . 5 . 
Suppose that Assumptions 1-3 hold, b be denoted in … (ii) … H) then x persists a.s.; moreover, there exists n 2 ∈ N such that x n ≥ a a.s. for n ≥ n 2 . (iii) If x 0 ∈ (v l , u l ) then x persists with a positive probability and eventually belongs to (0, b) with a positive probability. Lemma 4.6. Let the assumptions of Theorem 4.2 hold, and the density φ be bounded on [−1, 1] by some C > 0: φ(x) ≤ C, x ∈ [−1, 1]. Corollary 4. (i) If for some h > 0 and all x ∈ [−1, 1] … 5. Multistability. So far we have considered only one bounded open subinterval (a, H) ⊂ (0, ∞), which f mapped into (a + l, H − l). However, there may be several non-intersecting subintervals with this property. Assumption 4. Assume that f : [0, ∞) → [0, ∞) is continuous, f (0) = 0, f (x) > 0 for x > 0 and there exist positive numbers a i and H i , a i < H i , i = 1, . . . , k and H i < a i+1 , i = 1, . . . , k − 1, such that … Lemma 5.1. Let Assumptions 1 and 4 hold, (x n ) be a solution of equation (1) with l satisfying, for some particular i ∈ {1, 2, . . . , k}, … Example 4. Consider (1) with the function f (x) = x − sin x + 0.5x sin x, which experiences the Allee effect and multistability. However, F (x) = (0.5x − 1) sin x is unbounded, and it is hardly possible to find disjoint intervals (a i , H i ) mapped into themselves such that min x∈[ai,Hi] f (x) > H i−1 , max x∈[ai,Hi] f (x) < a i+1 . Let us choose a = 1.8, H = 6.5, then f (a) ≈ 2.093, f (H) ≈ 1.825, and F (a) ≈ 0.293, F (H) ≈ −4.675. We consider l = 0.2 < 0.293; for an illustration of (36) see Fig. 2. Furthermore, b ≈ 0.907, and F (b) ≈ −0.3384. For any l < 0.293, there is a domain (0, v l ), starting with which we have low density behavior, and (u l , H), starting with which the solution eventually leads a.s. to (a, H). Let us take l = 0.2, then u l ≈ 1.74, v l ≈ 0.361. Figure 2: The graph of the function in (36); the fixed points are c ≈ 1.586 and d ≈ 4.414, the maximum ≈ 6.317 is attained at ≈ 3.317.
Figure 3 : 3Several runs of (1) with f as in (36) for x 0 ∈ [0, 0.36) (left), x 0 ∈ (0.361, 1.74) (middle) and x 0 ∈ (1.74, 6.5] (right) for l = 0.2.Let us illustrate the dependency of the probability of the solution to sustain on the initial point x 0 ∈ (u l , v l ).Fig. 4presents 10 random runs starting with x 0 = 1.4, 1.5, 1.6, 1.7(Fig. 4, from left to right).For comparison, let us present several simulations for smaller l = 0.05, seeFig. 5. Figure 4 :Figure 5 : 45Ten runs of (1) with f as in (36) for each of x 0 = 1.4 (left), x 0 = 1.5, 1.6 (middle) and x 0 Ten runs of (1) with f as in(36)for each of x 0 = 0.4 (left), x 0 = 1.58 (middle) and x 0 = 1.8 (right) for l = 0.05. points are c ≈ 0.0833 and d ≈ 1.2037, the maximum f m ≈ 1.3688 is attained at ≈ 0.8508. The minimum of F (x) on [0, c] is attained at b ≈ 0.0392 and equals F (b) ≈ −0.0186. Take a = 0.2, H = 1.8 > f m , f (a) ≈ 0.3602, F (a) ≈ 0.16, f (H) ≈ 0.6886 > f (a), −F (H) ≈ 1.111; we can choose l < 0.16. If l ∈ (−F (b), 0.16), or l ∈ (0.0186, 0.16), we have persistence for any initial condition. All numerical runs are for the case when χ is uniformly distributed on [−1, 1] Figure 6 : 6Ten runs of (1) with f as in (37) for x 0 = 0.01, l = 0.04 (left), and l = 0.01 (right). x n gets into the interval (a, H) and stays there, a.s., by Lemma 3.2.4.2.Mixed behavior. So far we have considered the areas starting with which the solution is guaranteed to sustain (and be in [a, H]) or to stay in the neighbourhood [0, b] of zero. Let us consider a more complicated case when a solution can either eventually persist or eventually belong to [0, b]. We single out intervals starting with which a solution can change domains of attraction, switch between persistence and low-density behavior. In particular, we obtain lower bounds for probabilities that eventually x n ∈ [a, H] and x n ∈ [0, b]. we consider again two cases:P [A 22 ] = 0 and P [A 22 ] > 0. 
If P [A 22 ] = 0, we have, a.s., A = A 11 ∪ A 21 , so (20) holds with n = 1 on A 11 and n = 2 on A 21 . If P [A 22 ] > 0, we continue the process. Analogously, if P [A k−1,2 ] > 0, for some k < K 1 , we set … This implies … which completes the proof. 4.3. F is increasing on (b, a). When F is increasing on (b, a), we can state the following corollary of Theorem 4.5 and Lemma 4.3, since in (16) and (18) … Corollary 3. Let, in addition to the assumptions of Theorem 4.5, the function F be increasing on [b, a]. Then, for each l ∈ (0, −F (b)), we have … and eventually belongs to (0, b) with a positive probability P e (x 0 ) ≥ p 2^{K2}, where … In the following theorem we improve the estimates of the persistence and low-density behavior probabilities P p (x 0 ) and P e (x 0 ) when x 0 ∈ (v l , u l ). The estimates are based on evaluating at each step the new probability to move right by (F (x 0 ) + l)/2 units (respectively, left by (l − F (x 0 ))/2 units). Let us introduce the following notation: ε 0 := (F (x 0 ) + l)/(2l) ∈ (0, 1), δ 0 := (l − F (x 0 ))/(2l) ∈ (0, 1), δ i := (l − 2F (x 0 + iε) + F (x 0 ))/(2l), i = 1, . . . , K 2 , λ i := P{ω ∈ Ω : χ(ω) > 1 − ε i }.
References.
[1] W. C. Allee, Animal Aggregations, a Study in General Sociology, University of Chicago Press, Chicago, 1931.
[2] J. A. D. Appleby, G. Berkolaiko and A. Rodkina, On local stability for a nonlinear difference equation with a non-hyperbolic equilibrium and fading stochastic perturbations, J. Difference Equ. Appl. 14 (2008), 923-951.
[3] J. A. D. Appleby, G. Berkolaiko and A. Rodkina, Non-exponential stability and decay rates in nonlinear stochastic difference equations with unbounded noise, Stochastics 81 (2009), 99-127.
[4] J. A. D. Appleby, X. Mao and A. Rodkina, On stochastic stabilization of difference equations, Dynamics of Continuous and Discrete System 15 (2006), 843-857.
[5] G. Berkolaiko and A. Rodkina, Almost sure convergence of solutions to nonhomogeneous stochastic difference equation, J. Difference Equ. Appl. 12 (2006), 535-553.
[6] D. S. Boukal and L. Berec, Single-species models of the Allee effect: extinction boundaries, sex ratios and mate encounters, J. Theor. Biol. 218 (2002), 375-394.
[7] F. Brauer and C. Castillo-Chavez, Mathematical Models in Population Biology and Epidemiology, Springer-Verlag, New York, 2001.
[8] E. Braverman, Random perturbations of difference equations with Allee effect: switch of stability properties, Proceedings of the Workshop Future Directions in Difference Equations, 51-60, Colecc. Congr. 69, Univ. Vigo, Serv. Publ., Vigo, 2011.
[9] E. Braverman and J. J. Haroutunian, Chaotic and stable perturbed maps: 2-cycles and spatial models, Chaos 20 (2010).
[10] E. Braverman and A. Rodkina, Stabilization of two-cycles of difference equations with stochastic perturbations, J. Difference Equ. Appl. 19 (2013), 1192-1212.
[11] E. Braverman and A. Rodkina, Difference equations of Ricker and logistic types under bounded stochastic perturbations with positive mean, Comput. Math. Appl. 66 (2013), 2281-2294.
[12] M. A. Burgman, S. Ferson and H. R. Akćakaya, Risk Assessment in Conservation Biology, Chapman & Hall, London, 1993.
[13] S. N. Cohen and R. J. Elliott, Backward stochastic difference equations and nearly time-consistent nonlinear expectations, SIAM J. Control Optim. 49 (2011), 125-139.
[14] N. Dokuchaev and A. Rodkina, Instability and stability of solutions of systems of nonlinear stochastic difference equations with diagonal noise, J. Difference Equ. Appl. 14 (2014), 744-764.
[15] F. C. Hoppensteadt, Mathematical Methods of Population Biology, Cambridge University Press, Cambridge, MA, 1982.
[16] J. Jacobs, Cooperation, optimal density and low density thresholds: yet another modification of the logistic model, Oecologia 64 (1984), 389-395.
[17] C. Kelly and A. Rodkina, Constrained stability and instability of polynomial difference equations with state-dependent noise, Discrete Contin. Dyn. Syst. Ser. B 11 (2009), 913-933.
[18] V. Kolmanovskii and L. Shaikhet, Some conditions for boundedness of solutions of difference Volterra equations, Appl. Math. Lett. 16 (2003), 857-862.
[19] A. Rodkina and M. Basin, On delay-dependent stability for vector nonlinear stochastic delay-difference equations with Volterra diffusion term, Syst. Control Lett. 56 (2007), 423-430.
[20] S. J. Schreiber, Allee effect, extinctions, and chaotic transients in simple population models, Theor. Popul. Biol. 64 (2003), 201-209.
[21] S. J. Schreiber, Persistence for stochastic difference equations: a mini-review, J. Difference Equ. Appl. 18 (2012), 1381-1403.
[22] L. Shaikhet, Lyapunov Functionals and Stability of Stochastic Difference Equations, Springer, London, 2011.
[23] A. N. Shiryaev, Probability (2nd edition), Springer, Berlin, 1996.
[]
[ "Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification", "Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification" ]
[ "Beini Xie ", "Heng Chang ", "Xin Wang ", "Tian Bian ", "Shiji Zhou ", "Daixin Wang ", "Zhiqiang Zhang ", "Fellow, IEEEWenwu Zhu " ]
[]
[]
Graph neural networks (GNNs) have achieved tremendous success in the task of graph classification and diverse downstream real-world applications. Despite their success, existing approaches are either limited to structure attacks or restricted to local information. This calls for a more general attack framework on graph classification, which faces significant challenges due to the complexity of generating local-node-level adversarial examples using the global-graph-level information. To address this "global-to-local" problem, we present a general framework CAMA to generate adversarial examples by manipulating graph structure and node features in a hierarchical style. Specifically, we make use of Graph Class Activation Mapping and its variant to produce node-level importance corresponding to the graph classification task. Then through a heuristic design of algorithms, we can perform both feature and structure attacks under unnoticeable perturbation budgets with the help of both node-level and subgraph-level importance. Experiments towards attacking four state-of-the-art graph classification models on six real-world benchmarks verify the flexibility and effectiveness of our framework.
10.48550/arxiv.2208.06651
[ "https://export.arxiv.org/pdf/2208.06651v1.pdf" ]
251,564,814
2208.06651
b4c4a0a057b81f65d94daafba0f196936830130d
Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification Beini Xie, Heng Chang, Xin Wang, Tian Bian, Shiji Zhou, Daixin Wang, Zhiqiang Zhang, Wenwu Zhu, Fellow, IEEE Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification Index Terms: Adversarial attack, deep graph learning, graph neural networks, graph classification. Graph neural networks (GNNs) have achieved tremendous success in the task of graph classification and diverse downstream real-world applications. Despite their success, existing approaches are either limited to structure attacks or restricted to local information. This calls for a more general attack framework on graph classification, which faces significant challenges due to the complexity of generating local-node-level adversarial examples using the global-graph-level information. To address this "global-to-local" problem, we present a general framework CAMA to generate adversarial examples by manipulating graph structure and node features in a hierarchical style. Specifically, we make use of Graph Class Activation Mapping and its variant to produce node-level importance corresponding to the graph classification task. Then through a heuristic design of algorithms, we can perform both feature and structure attacks under unnoticeable perturbation budgets with the help of both node-level and subgraph-level importance. Experiments towards attacking four state-of-the-art graph classification models on six real-world benchmarks verify the flexibility and effectiveness of our framework. INTRODUCTION Graph-structured data is ubiquitous for modeling relations and interactions at the level of node classification [1], edge prediction [2], and graph classification [4]. Among them, graph classification plays a vital role in a wide range of domains [3]. For instance, in social network analysis, fake news detection could be treated as a binary graph classification problem on Twitter's news propagation networks [6].
As powerful tools that illustrate the expressive capability of deep learning on graphs, the family of Graph Neural Networks (GNNs) has gained tremendous popularity over the past few years in graph classification and downstream real-world applications [7], [8], [9]. Despite the powerful ability of GNNs for graph representation learning, their vulnerability to potentially existing adversarial examples on graph-structured data has been revealed recently [10]. This lack of robustness of GNNs could be exploited by fraudsters and spammers and provoke dissent over their application in security-critical domains. For example, deliberately modified personal identity information may cause credit card fraud [11]. As with the utilization of graph-structured data, adversarial attacks on graphs can also be broadly categorized as node- and graph-level, following the type of task. Studies of node-level adversarial attacks are quite comprehensive, covering various perspectives [12], [13], [14], [15], [16]. However, in contrast to the remarkable and relatively mature framework for adversarial attacks on node-focused tasks, systematic research and a general attacking framework for adversarial attacks on graph classification are still lacking, regardless of their vast importance. Compared with perturbations for node-level classification, migrating these adversarial examples to graph-level tasks is a non-trivial problem, since they have different goals for optimization, from the local to the global scale. A desired attack framework for graph classification should be general enough to conduct both feature and structure attacks. Moreover, an effective attack on one graph classification model is expected to be successfully transferred to other graph classifiers (an illustration is given in Figure 1).
arXiv:2208.06651v1 [cs.SI] 13 Aug 2022

In a nutshell, research on graph classification adversarial attacks still faces three main challenges:
• Given that graph classification tasks depend on efficiently learning global graph representations from local node embeddings via pooling functions, it is complex to exploit information of global-graph-level classification to generate local-node-level adversarial examples;
• Most current approaches can only attack the graph structure. However, node features might contain more fruitful information. For example, personal identity information and loan history apparently matter more in credit models and fraud detection. Thus, for a more realistic and practical setting, we need a general attack framework that can manipulate both node features and graph structure;
• Existing attack methods for graph classification that use gradient information only consider the training of target models and cannot reflect information from the global graph structure, which might easily result in the generated adversarial edges being trapped around a single node, as we observe from experiments. Methods built on black-box optimization, such as reinforcement learning or Bayesian optimization, suffer from high time complexity.

To tackle these challenges, we propose a novel hierarchical framework, namely CAMA, to bridge the gap between local-node-level and global-graph-level information. We migrate the idea of Class Activation Mapping (CAM) [18] to conduct powerful adversarial attacks on graph classification tasks. This unified solution sheds light on the problem of quantifying the contribution of local node information to the global representation when attacking graph classification tasks. An example of CAMA is shown in Figure 2.
Summarizing the above, the main contributions of our work are outlined below:
• Framework: We propose a framework CAMA for adversarial attacks on graph classification by hierarchically decomposing the attack process into two steps. Our attack approach fills the gap in generating local perturbation examples from global graph classification. Given its simplicity and effectiveness, CAMA can serve as a strong benchmark for future works in this branch.
• Algorithm: We heuristically design novel algorithms to select target nodes in a graph by graph class activation mapping and its variant, and then generate adversarial examples at the level of both structure and features.
• Experiment: We show that our method can deteriorate graph classification performance by a significant margin on various benchmarks targeting multiple SOTA GNNs. Further, beyond white-box attacks, we also test the transferability of our attack method under the black-box setting for evasion attacks.

RELATED WORK

GNNs on graph classification. GNNs have proliferated in recent years for tasks like node classification, link prediction, graph classification, and graph generation. GNNs often stack multiple graph convolutions followed by a readout operation that aggregates node information into a graph-level representation when dealing with graph classification tasks. Various graph convolution layers and graph pooling operations have been proposed to better learn both node and graph representations [19], [22]. One of the most popular GNNs is Graph Convolutional Networks (GCN) [1], which is inspired by the first-order approximation of Chebyshev polynomials in ChebNet. It updates the node representation by taking the average representation of its one-hop neighbors. GCN has excellent results in semi-supervised node classification tasks. Graph Isomorphism Network (GIN) [23] uses sum aggregation and multi-layer perceptrons instead of one single activation function.
It has excellent discriminative power, equal to that of the WL test. Facing the finite nature of recurrent GNNs, Implicit Graph Neural Network (IGNN) [24] is able to capture long-range dependencies and performs well in both graph classification and node classification on heterogeneous networks. Its framework ensures well-posedness based on Perron-Frobenius theory. Beyond novel graph convolution operations, diverse pooling strategies affect graph tasks differently. Direct pooling methods like simple node pooling (node-wise mean-pooling, sum-pooling, and max-pooling) directly generate a graph-level representation based on node representations [25]. In contrast, hierarchical graph pooling exploits the hierarchical graph structure. DiffPool [26] proposes a differentiable hierarchical clustering algorithm to learn representations of the new coarsened graph by training a soft cluster assignment matrix in each layer. Based on the graph Fourier transform, EigenPooling [27] jointly uses node features and local structure. The graph pooling layer (gPool) [28] conducts down-sampling on graph data by selecting the top-k nodes from a calculated projection value. Inversely, the graph unpooling layer (gUnpool) does up-sampling to restore graphs to their original structure. Inspired by U-Net in computer vision, graph U-Nets (g-U-Nets) [28] are proposed using gPool and gUnpool operations. g-U-Nets can encode and decode high-level features for network embedding. In this paper, we use GCN, GIN, and IGNN as representatives of general graph classification neural networks and use g-U-Nets to represent hierarchical graph classification models.

Adversarial attacks on graph classification. GNNs have shown their vulnerability under adversarial attacks [29]. Most recent works aim to attack models on node classification tasks [20], [31], [14], [21]. Despite their fruitful progress, these methods can only perform attacks on node-level tasks.
For graph-level tasks, based on reinforcement learning, RL-S2V [13] flips edges by selecting two endpoints under the black-box setting. ReWatt [32] proposes to perform unnoticeable attacks via rewiring operations and utilizes a similar reinforcement learning strategy as RL-S2V. Grabnel [33] exploits Bayesian optimization to conduct adversarial attacks targeting graph classification models. Under the white-box setting, GradArgmax [13] exploits gradients of the classification loss over the adjacency matrix and flips edges with the largest absolute gradient. Projective ranking [34] generates adversarial examples by ranking potential edge perturbation masks through encoding node features and projecting selected edge masks. Nevertheless, the above methods cannot perturb node features. Further, [35] proposes an attacking strategy on hierarchical graph pooling neural networks. However, they overlook the importance of direct pooling, like simple node pooling. Thus, this approach loses its strength when the graph classification model is unknown.

Fig. 2: An example of CAMA for a two-class graph classification task. After getting the ranked CAM matrix, we select nodes and edges from top-ranked nodes and generate corresponding perturbed graphs. The node and edge selection process repeats for each column of the ranked CAM matrix until a successful attack.

A novel generic attack framework, GraphAttacker, was recently proposed by [36], which can attack multiple tasks. But time complexity is its main concern due to the process of training the GAN-based model. Considering all of these, adversarial attacks on graph classification have not been fully explored by previous studies. To mitigate this gap, our proposed general framework can flexibly perform structure attacks and feature attacks. Besides, aside from the white-box attack, we also analyze the transferability of our method under black-box attacks.

CAM on graphs.
Class Activation Mapping (CAM), first proposed in computer vision [37], localizes image-level classification into pixel-level image areas by using global average pooling (GAP) in convolutional neural networks. CAM has a strong discriminative localization ability in the explanation of image classification. For example, it can localize the toothbrush region in a picture classified as brushing teeth. Compared with its blossoming applications in computer vision, the utilization of CAM on graph-structured data (Graph CAM) is quite rare, having only been applied to explainability in GNNs [18], [38]. Given a graph classification task, Graph CAM can localize the most influential nodes for classification, which then helps us better understand GNNs. Grad Class Activation Mapping on graphs (Graph Grad CAM) [18] extends CAM to graphs by loosening architecture restrictions and using gradients of hidden layers as projection weights. In this work, we first integrate the localization ability of Graph CAM with the awareness of adversarial attacks on graph classification tasks, which also broadens the scope of research on Graph CAM.

PRELIMINARIES

Notations

Given a set of graphs $\mathcal{G} = \{G_i\}_{i=1}^{N}$, where $|\mathcal{G}| = N$, we consider graph classification on $\mathcal{G}$. Each graph $G_i = (A_i, X_i)$ has $n_i$ nodes, where $A_i \in \{0, 1\}^{n_i \times n_i}$ is the adjacency matrix and $X_i \in \mathbb{R}^{n_i \times D}$ is the node feature matrix with dimension $D$. Each $G_i$ is assigned a label $c_i \in \mathcal{C} = \{1, 2, \ldots, C\}$, where $C$ is the total number of classes.

Graph Classification

Graph classification aims to predict the labels of unlabeled graphs. With paired graphs and labels $\{G_i, c_i\}_{i=1,\ldots,N}$, its goal is to learn a mapping function $f : \mathcal{G} \rightarrow \mathcal{C}$. We simplify the graph classification model architecture and consider only one fully connected layer.
Given a graph $G_i = (A_i, X_i)$ with $n_i$ nodes, a standard procedure for graph classification with direct pooling can be formulated as:

$$h_i^{(0)} = X_i, \quad h_i^{(l)} = f_{\mathrm{conv}}(h_i^{(l-1)}; \Theta_l), \quad l = 1, 2, \ldots, L \quad (1)$$

$$h_i = \mathrm{pooling}(h_i^{(L)}), \quad z_i = W h_i + b, \quad (2)$$

where $h_i^{(l)} \in \mathbb{R}^{n_i \times D_l}$ denotes the hidden node embedding in the $l$-th graph convolution $f_{\mathrm{conv}}$, and $\Theta_l$ is the corresponding parameter matrix. $h_i \in \mathbb{R}^{D_L}$ is the graph embedding of $G_i$ after pooling of the final node embedding $h_i^{(L)} \in \mathbb{R}^{n_i \times D_L}$. $W \in \mathbb{R}^{C \times D_L}$ and $b \in \mathbb{R}^{C}$ are parameters in the output fully connected layer, and $L$ is the number of graph convolutions. The objective function for graph classification can be further formulated as:

$$\min_{\Theta} \mathcal{L}_{\Theta}(\mathcal{G}) = \sum_{i=1}^{N} l_{\Theta}(f_{\Theta}(G_i), c_i),$$

where $l(\cdot, \cdot)$ is a loss function such as the cross-entropy.

Adversarial Attacks towards GNNs

The problem of adversarial attacks on graph classification is to make the model misclassify graph labels, which is formulated as follows:

Problem 1. Given paired data of graphs and their labels $\{G_i, c_i\}_{i=1}^{N}$, the goal of an attacker is to minimize the attack objective function $\mathcal{L}_{\mathrm{atk}}$:

$$\operatorname*{argmin}_{\mathcal{G}'} \mathcal{L}_{\mathrm{atk}}(\mathcal{G}') = \sum_{i=1}^{N} l_{\mathrm{atk}}(f_{\Theta}(G_i'), c_i),$$

where $l_{\mathrm{atk}}$ is the attack loss function, and $G_i'$ denotes the perturbed version of $G_i$. We could define $l_{\mathrm{atk}} = -l$, where $l$ is set as the cross-entropy loss for graph classification. We can also define $l_{\mathrm{atk}}$ as another attack loss like the CW-loss [20]. In the real world, the attacker usually can only attack unnoticeably within a perturbation budget $\Delta$ for each graph $G_i$. Thus, the domain of modified graphs is constrained as:

$$\|A_i' - A_i\|_0 + \|X_i' - X_i\|_0 \le \Delta,$$

where $A_i'$ and $X_i'$ are the perturbed adjacency matrix and node feature matrix for graph $G_i$. In the following sections, we omit the subscript $i$ for graph $G_i$ for simplicity.
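As a concrete illustration of Eqs. (1)-(2), the following minimal NumPy sketch implements a direct-pooling graph classifier: stacked graph convolutions, a sum-pooling readout, and one fully connected output layer. The specific convolution $h \leftarrow \mathrm{ReLU}((A+I)\,h\,\Theta)$ and the random toy graph are our own assumptions for illustration; the paper's victim models (GCN, GIN, IGNN, g-U-Nets) use more elaborate, normalized convolutions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def graph_classifier_forward(A, X, convs, W, b):
    """Direct-pooling graph classifier in the spirit of Eqs. (1)-(2):
    L graph convolutions, a sum-pooling readout, one linear layer.
    Each conv is simplified to h <- ReLU(A_hat @ h @ Theta) with
    A_hat = A + I (self-loops, unnormalized -- an assumption here)."""
    n = A.shape[0]
    A_hat = A + np.eye(n)
    h = X
    for Theta in convs:                  # Eq. (1): stacked convolutions
        h = relu(A_hat @ h @ Theta)
    h_graph = h.sum(axis=0)              # Eq. (2): sum-pooling readout
    z = W @ h_graph + b                  # class logits
    return h, h_graph, z

# toy graph: 3 nodes on a path, 2 input features, 2 classes
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = rng.normal(size=(3, 2))
convs = [rng.normal(size=(2, 4)), rng.normal(size=(4, 4))]
W, b = rng.normal(size=(2, 4)), np.zeros(2)
h_L, h_graph, z = graph_classifier_forward(A, X, convs, W, b)
pred = int(np.argmax(z))
```

The final node embedding `h_L` plays the role of $h^{(L)}$ in the CAM construction below.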
Adversarial attacks have various taxonomies from the perspectives of perturbation type (feature attack and structure attack), attacker's knowledge (white-box attack and black-box attack), and the stage where attacks happen (evasion attack and poisoning attack). A desired general framework should be able to cover most situations mentioned above, which is also the aim of this work.

METHODOLOGY: A HIERARCHICAL FRAMEWORK

In order to quantify the contribution of nodes at the local level to graph classification tasks at the global level, and reversely to conduct effective adversarial attacks at the local level that destroy performance at the global level, we propose to decompose the whole attack procedure hierarchically into two steps: 1) node importance estimation, and 2) adversarial example generation. In this way, we are able to first transfer the focus from classification on graphs to the contribution of each node, and design perturbations locally afterward.

Node Importance Estimation

After finishing model training, we determine the contribution of nodes at the local level to graph classification in a way inspired by Graph CAM and its variant [18].

Graph CAM. As a useful method that provides explainability for graph classification, Graph CAM has been well studied. Since the weight matrix of the output fully connected layer can represent the importance of the features of each dimension for graph classification, Graph CAM builds a heat-map matrix by projecting the weight matrix back onto the node representations in the final graph convolution layer to indicate the importance of each node for graph classification. This heat-map matrix is calculated as:

$$L_{\mathrm{CAM}} = \mathrm{ReLU}(h^{(L)} W^{T}), \quad (3)$$

where $W \in \mathbb{R}^{C \times D_L}$ is the same weight matrix as in Eq. (2), and $h^{(L)} \in \mathbb{R}^{n \times D_L}$ denotes the node representation in the final graph convolution layer for one graph, as shown in Eq. (1). The $k$-th element in the $c$-th row of $W$ indicates the importance of feature $k$ for predicting label $c$.
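Eq. (3) is a single matrix product followed by a ReLU. A minimal sketch, with toy embeddings and classifier weights made up for illustration:

```python
import numpy as np

def graph_cam(h_L, W):
    """Graph CAM heat map, Eq. (3): project the output-layer weights
    back onto final-layer node embeddings.  Entry [i, c] scores the
    importance of node i for predicting class c."""
    return np.maximum(h_L @ W.T, 0.0)    # ReLU(h^(L) W^T), shape (n, C)

h_L = np.array([[1.0, 0.0],              # n = 3 nodes, D_L = 2
                [0.5, 2.0],
                [0.0, 1.0]])
W = np.array([[ 1.0, -1.0],              # C = 2 classes
              [-1.0,  1.0]])
L_cam = graph_cam(h_L, W)
# node 0 activates class 0 only; nodes 1 and 2 activate class 1 only
```

Here node 1 gets the largest class-1 score (1.5), so it would be ranked first in that class's column of the ranked CAM matrix.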
A variant of Graph CAM, Graph Grad CAM, uses gradients with respect to each hidden convolutional layer and each class. The calculated gradients $\alpha_l \in \mathbb{R}^{D_l \times C}$ replace the weights $W^{T}$ in CAM to construct the heat-map matrix for each layer. Finally, by averaging the heat-map matrices over all graph convolution layers, the heat-map matrix is calculated as:

$$\alpha_l = \frac{1}{n} \sum_{v} \frac{\partial z^{T}}{\partial h_v^{(l)}}, \qquad L_{\mathrm{Grad\text{-}CAM}} = \frac{1}{L} \sum_{l} \mathrm{ReLU}(h^{(l)} \alpha_l),$$

where $z \in \mathbb{R}^{C}$ is the prediction logits and $h_v^{(l)} \in \mathbb{R}^{D_l}$ is the hidden embedding of node $v$ in the $l$-th graph convolution layer. The $i$-th entry in the $c$-th column of $L_{\mathrm{CAM}}$ indicates the relative importance of node $i$ for classifying $G_i$ into class $c$.

Though having great explainability, directly using Graph CAM still has two limitations. Firstly, the number of fully connected layers is fixed to one due to the restriction on matrix multiplication in Graph CAM. Secondly, the hidden size must be kept the same for all hidden convolutional layers for Graph Grad CAM. As we will show in Experiments, these architecture restrictions do not deteriorate classification performance on clean graphs. Nor do they hinder the transferability of our proposed attack methods.

Ranked CAM Matrix. We calculate the ranked CAM matrix based on the CAM heat-map matrix. The whole process is summarized in Algorithm 1. After getting the CAM heat-map matrix, we first rank each column in descending order and get the corresponding node ranking matrix $U^{\mathrm{orig}}_{\mathrm{CAM}} \in \mathbb{R}^{n \times C}$ in line 1. This provides the class-specific view of node importance ranking. Then, we exploit $U^{\mathrm{orig}}_{\mathrm{CAM}}$ to calculate a global-level node ranking vector $u_{\mathrm{global}} \in \mathbb{R}^{n}$ in line 2. Specifically, we go through each row in $U^{\mathrm{orig}}_{\mathrm{CAM}}$ and use the highest ranking among all columns for each node, until all nodes are included in $u_{\mathrm{global}}$. Finally, we concatenate these two ranking sources of nodes to get the final ranked CAM matrix $U_{\mathrm{CAM}} \in \mathbb{R}^{n \times (C+1)}$.

Algorithm 1: Generating ranked CAM matrix.
Input: heat-map matrix $L_{\mathrm{CAM}}$.
Output: ranked CAM matrix $U_{\mathrm{CAM}}$.
1: $U^{\mathrm{orig}}_{\mathrm{CAM}} \leftarrow \mathrm{column\_rank}(L_{\mathrm{CAM}})$;
2: $u_{\mathrm{global}} \leftarrow \mathrm{global\_rank}(L_{\mathrm{CAM}})$;
3: $U_{\mathrm{CAM}} \leftarrow \mathrm{concatenate}([U^{\mathrm{orig}}_{\mathrm{CAM}}, u_{\mathrm{global}}])$;
4: return $U_{\mathrm{CAM}}$;

Because the CAM heat-map matrix can precisely demonstrate the importance of each node for graph classification tasks, after the ranking operation on the CAM heat-map, each column in $U_{\mathrm{CAM}}$ gives one view of the node importance ranking. We can identify the most influential nodes for the whole graph classification process through the different views of the ranked CAM matrix and generate adversarial examples accordingly. Since the adversarial attack depends on Graph CAM, we name our hierarchical framework CAM-based Attack (CAMA), and its variant CAMA-Grad when using Graph Grad CAM.

Adversarial Example Generation

With access to the ranked CAM matrix $U_{\mathrm{CAM}}$, we call each of its columns a ranked CAM vector, denoted $U_c$, $c = 1, \ldots, C+1$. How do we generate adversarial examples with a series of ranked CAM vectors? Here, we heuristically propose two attack algorithms: CAMA (for feature attack and structure attack) and CAMA-subgraph (for structure attack only). For CAMA, in the overall adversarial perturbation, we apply our algorithms repeatedly to each column $U_c$ of the ranked CAM matrix until a successful attack. For CAMA-subgraph, we only need the column of the predicted label in the ranked CAM matrix to select the candidate perturbations. Both algorithms have grad versions, CAMA-Grad and CAMA-subgraph-Grad; the difference between the algorithms and their grad versions lies only in how the CAM heat-map matrix is calculated.

Feature Attack

For feature perturbations, we set both global-level and local-level perturbation budgets. In the global-level budget, we assume only a small number of nodes of one particular graph are available; these nodes are called target nodes. In the local-level budget, we constrain the number of features to be adjusted.
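Algorithm 1 can be sketched in a few lines of NumPy. The sketch assumes `column_rank` sorts node indices by descending activation per class, and `global_rank` sweeps the per-class rankings row by row, keeping each node's first (highest) appearance, which is our reading of the description above:

```python
import numpy as np

def ranked_cam_matrix(L_cam):
    """Algorithm 1 (sketch): per-class node rankings plus one global
    ranking, concatenated into U_CAM of shape (n, C+1)."""
    n, C = L_cam.shape
    # line 1: per-class view, node indices sorted by descending activation
    U_orig = np.argsort(-L_cam, axis=0, kind="stable")      # (n, C)
    # line 2: global view, sweep rank rows and keep first appearances
    u_global, seen = [], set()
    for r in range(n):
        for c in range(C):
            v = int(U_orig[r, c])
            if v not in seen:
                seen.add(v)
                u_global.append(v)
    # line 3: concatenate the two ranking sources
    return np.hstack([U_orig, np.array(u_global).reshape(n, 1)])

L_cam = np.array([[1.0, 0.0],
                  [0.0, 1.5],
                  [0.0, 1.0]])
U = ranked_cam_matrix(L_cam)
```

Each column of the returned matrix is one ranked CAM vector $U_c$ used by the attack algorithms below.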
Given the limit $r$ on the number of modified nodes, target nodes are selected as the first $r$ nodes in the ranked CAM vector $U_c$. A small constant noise $\epsilon$ is added to each feature of the target nodes for perturbation, where $\epsilon$ relies on the attacker's knowledge of node features. Specifically, given the information of the training process, the number of adjusted features $K$, and the adjustment magnitude $\lambda$, the noise $\epsilon_j$ added to the $j$-th feature can be calculated following [21] as:

$$\epsilon_j = \begin{cases} \lambda \cdot \mathrm{sign}\left(\sum_{i=1}^{n} \frac{\partial l(f_\theta(G), c)}{\partial X_{ij}}\right), & \text{if } j \in \operatorname{argtop-}K \left\{ \sum_{i=1}^{n} \frac{\partial l(f_\theta(G), c)}{\partial X_{il}} \right\}_{l=1,2,\ldots,D}, \\ 0, & \text{otherwise.} \end{cases}$$

We replace the Carlini-Wagner loss in [21] with the cross-entropy loss. The overall number of perturbations is $rK \le \Delta$. We summarize the process of CAMA for generating feature perturbations in Algorithm 2.

Algorithm 2: CAMA for feature perturbations.
Input: graph $G = (A, X)$ with $n$ nodes; node number limit $r$; ranked nodes vector $U_c$; feature noise $\epsilon_j$, $j = 1, 2, \ldots, D$.
Output: modified feature matrix $X'$.
1: initialize the modified feature matrix $X' \leftarrow X$;
2: $C_{\mathrm{nodes}} \leftarrow U_c[:r]$;
3: for $u$ in $C_{\mathrm{nodes}}$ do
4:   $X'_j[u] \leftarrow X_j[u] + \epsilon_j$, $j = 1, 2, \ldots, D$
5: return $X'$;

Structure Attack

Structure attack is more comprehensive than feature attack, considering the complexity of connectivity in graphs. To this end, we specially design two structure attack algorithms, CAMA and CAMA-subgraph, with the help of the ranked CAM matrix. CAMA is an efficient algorithm that performs attacks by simply flipping edges among top-ranked vital nodes in the ranked CAM matrix. CAMA-subgraph then takes a step further and attacks by learning a subgraph mask to select edges for perturbation.

CAMA: structure attack by flipping edges among the most important nodes. To generate structure perturbations, we assume edges among nodes of higher activation importance are more influential in graph classification tasks and intuitively flip edges among them.
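A sketch of the noise construction and Algorithm 2. The gradient array is made up to stand in for $\partial l / \partial X$, and whether the top-$K$ selection uses the absolute value of the summed gradient is ambiguous in the text, so the sketch assumes it does:

```python
import numpy as np

def feature_noise(grad_X, K, lam):
    """Noise vector for the feature attack: pick the K feature columns
    with the largest |summed gradient| over nodes, set them to
    lam * sign(summed gradient), leave the rest at zero.
    (Ranking by absolute value is our assumption.)"""
    g = grad_X.sum(axis=0)                  # sum_i dl/dX_ij, shape (D,)
    topk = np.argsort(-np.abs(g))[:K]       # argtop-K feature indices
    eps = np.zeros_like(g)
    eps[topk] = lam * np.sign(g[topk])
    return eps

def cama_feature_attack(X, U_c, r, eps):
    """Algorithm 2 (sketch): add the noise vector to the features of
    the first r nodes in the ranked CAM vector U_c."""
    X_adv = X.copy()
    X_adv[U_c[:r]] += eps
    return X_adv

# toy example with a made-up gradient: features 0 and 1 dominate
grad_X = np.array([[0.5, -2.0, 0.1],
                   [0.5, -2.0, 0.1]])
eps = feature_noise(grad_X, K=2, lam=0.1)
X = np.zeros((4, 3))
X_adv = cama_feature_attack(X, U_c=np.array([3, 1, 0, 2]), r=2, eps=eps)
```

The overall budget check $rK \le \Delta$ would sit outside this sketch, in the caller.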
With the known ranked influence of nodes on graph classification, we can flip edges among nodes with higher ranking. Furthermore, we exploit node similarity to enhance the attack ability, aside from the information in the graph structure. The similarity score is calculated as follows. Given a learned node embedding $h^{\mathrm{emb}}$, the similarity $S$ between nodes $u$ and $v$ is calculated with the cosine distance:

$$S[u, v] = S[v, u] = \cos(h^{\mathrm{emb}}_u, h^{\mathrm{emb}}_v).$$

We constrain the operation of adding/deleting edges within this similarity constraint. Under the graph homophily assumption and with the calculated similarity matrix $S$, we choose to add edges between low-similarity node pairs and delete edges between high-similarity node pairs:

$$A'[u, v] - A[u, v] = 1, \quad \text{if } A[u, v] = 0 \text{ and } S[u, v] \le s_1;$$
$$A'[u, v] - A[u, v] = -1, \quad \text{if } A[u, v] = 1 \text{ and } S[u, v] \ge s_2.$$

Our attacking strategy is heuristic: increasing the ranking number each time, we iteratively find candidate pairs of nodes and flip edges between new target nodes and old ones within the perturbation budget and the similarity restriction. In each iteration, we increase the ranking number $i$ by one and add a new node $u_i$, ranked $i$-th in the vector $U_c$, into the target node set $C_{\mathrm{nodes}}$. We then flip edges between the new target node and the old ones within $\Delta$. The overall procedure for structure perturbations is summarized in Algorithm 3.

Algorithm 3: CAMA for structure perturbations (excerpt).
3: $u_i \leftarrow U_c[i]$;
4: for $v$ in $C_{\mathrm{nodes}}$ do
5:   if similarity_constraint$((u_i, v); S, s_1, s_2)$ then
6:     $A'[u_i, v] \leftarrow 1 - A[u_i, v]$;

CAMA-subgraph: structure attack with subgraph mask training. In order to further exploit the local information from a subgraph perspective, we propose an end-to-end adversarial structure attack model with a subgraph mask. For each graph $G$, we obtain a subgraph $G_{\mathrm{sub}}$ by keeping the $p\%$ top-ranked nodes $V_{\mathrm{sub}}$, $|V_{\mathrm{sub}}| = p\%|V|$, in the node rank vector of the predicted label $c$ (the $c$-th column $U_c$ in the ranked CAM matrix).
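The similarity gate above amounts to two checks, one for insertion and one for deletion. A sketch with made-up embeddings (the paper computes $h^{\mathrm{emb}}$ from the first hidden layer of the model):

```python
import numpy as np

def cosine_similarity_matrix(h_emb):
    """Pairwise cosine similarity between node embedding rows."""
    h = h_emb / np.linalg.norm(h_emb, axis=1, keepdims=True)
    return h @ h.T

def flip_allowed(A, S, u, v, s1, s2):
    """Similarity constraint from the text: adding an edge
    (A[u,v] == 0) requires S[u,v] <= s1; deleting an edge
    (A[u,v] == 1) requires S[u,v] >= s2."""
    if A[u, v] == 0:
        return S[u, v] <= s1
    return S[u, v] >= s2

# toy embeddings (made up): nodes 0, 1, 3 point one way, node 2 another
h_emb = np.array([[1.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 0.0]])
S = cosine_similarity_matrix(h_emb)
A = np.zeros((4, 4))
A[0, 1] = A[1, 0] = 1.0  # one existing edge between similar nodes
```

Under the homophily assumption, deleting the (0, 1) edge and adding a (0, 2) edge both pass the gate, while adding an edge between the similar, unconnected pair (0, 3) is blocked.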
Then, we limit the potential edge perturbations $M = V_{\mathrm{sub}} \times V_{\mathrm{sub}}$ within the subgraph. With the edge perturbation candidates as the subgraph mask $\{m_{uv} \mid u, v \in V_{\mathrm{sub}}, \sum_{uv} m_{uv} \le \Delta\}$, the adversarial examples are calculated as follows:

$$c_{uv} = 1 - 2 a_{uv} \quad (4)$$

$$a'_{uv} = \begin{cases} a_{uv} + c_{uv} \cdot \sigma(m_{uv}), & u \in V_{\mathrm{sub}},\ v \in V_{\mathrm{sub}}, \\ a_{uv}, & \text{otherwise,} \end{cases} \quad (5)$$

where $\sigma(\cdot)$ is the sigmoid function mapping mask values between zero and one. The larger the value of $m_{uv}$, the more important perturbing edge $a_{uv}$ is to the attack. Given a trained victim model $f_\Theta$, we minimize the attack loss $l_{\mathrm{atk}}$ for each graph, with the victim model's parameters unchanged, to learn the subgraph mask $m_{uv}$:

$$\min l_{\mathrm{atk}} = l_{\mathrm{cw}} + \lambda_{\mathrm{ent}} \cdot l_{\mathrm{ent}} + \lambda_{\mathrm{size}} \cdot l_{\mathrm{size}}, \quad (6)$$

where $l_{\mathrm{cw}}$ denotes the CW-loss, $l_{\mathrm{ent}}$ represents the mean entropy of the elements $m_{uv}$, and $l_{\mathrm{size}}$ is the penalization term for the total mask size. $l_{\mathrm{cw}}$ aims to achieve a successful attack [20], and $l_{\mathrm{ent}}$ encourages the masking value $\sigma(m_{uv})$ to be binary [39]. $l_{\mathrm{size}}$ restricts the mask's total size to be close to the number of perturbations. The hyper-parameters $\lambda_{\mathrm{ent}}$ and $\lambda_{\mathrm{size}}$ balance the influence of $l_{\mathrm{cw}}$, $l_{\mathrm{ent}}$, and $l_{\mathrm{size}}$ in the total loss function. Specifically, given the ground-truth label $c_{yt}$ of the graph, the detailed designs of $l_{\mathrm{cw}}$, $l_{\mathrm{ent}}$, and $l_{\mathrm{size}}$ are:

$$l_{\mathrm{cw}} = \max\big(z_{c_{yt}} - \max_{c \ne c_{yt}} z_c,\ 0\big), \quad (7)$$

$$l_{\mathrm{ent}} = -\frac{1}{|M|} \sum_{u, v \in V_{\mathrm{sub}}} \big( \sigma(m_{uv}) \log \sigma(m_{uv}) + (1 - \sigma(m_{uv})) \log(1 - \sigma(m_{uv})) \big), \quad (8)$$

$$l_{\mathrm{size}} = \max\Big( \Big| \sum_{u, v \in V_{\mathrm{sub}}} \sigma(m_{uv}) - \Delta \Big| - \eta,\ 0 \Big), \quad (9)$$

where the hyper-parameter $\eta$ is the confidence size controlling how many entries in $m_{uv}$ can be free of penalization.

Algorithm 4 shows the whole attacking process of the structure attack with subgraph mask training, denoted CAMA-subgraph. First, we select the top-ranked nodes in $U_c$ to form a subgraph and limit the edge perturbations within the subgraph (line 1). Secondly, for each training epoch, we minimize the attack loss $l_{\mathrm{atk}}$ to train the subgraph mask $M$ (line 4).
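Each of the three loss terms in Eqs. (7)-(9) takes only a few lines; the toy logits and mask values below are made up for illustration:

```python
import numpy as np

def sigmoid(m):
    return 1.0 / (1.0 + np.exp(-m))

def cw_loss(z, y_true):
    """Eq. (7): margin between the true-class logit and the best other
    logit; zero once the graph is already misclassified."""
    z_other = np.max(np.delete(z, y_true))
    return max(z[y_true] - z_other, 0.0)

def entropy_loss(m):
    """Eq. (8): mean binary entropy of the mask values, pushing
    sigma(m) towards 0 or 1."""
    p = sigmoid(m)
    return -np.mean(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def size_loss(m, budget, eta):
    """Eq. (9): penalize total mask mass further than eta away from
    the perturbation budget Delta."""
    return max(abs(sigmoid(m).sum() - budget) - eta, 0.0)
```

With the weights $\lambda_{\mathrm{ent}}$ and $\lambda_{\mathrm{size}}$, these combine as in Eq. (6); in the paper's setup the mask `m` would be the learnable parameter and the gradient would flow through $\sigma(m)$ into the relaxed adjacency of Eq. (5).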
Then, we select the top-ranked mask entries $M_\Delta$ within the perturbation budget $\Delta$ (line 6). In lines 7-9, we flip the edges of the node pairs selected in $M_\Delta$ to generate the adversarial example. Finally, we test the attack performance of the generated adversarial examples in lines 11-12.

Complexity Analysis

We analyze the complexity of the proposed framework using CAMA as an example. Given a target graph with $n$ nodes, the main complexity lies in the preparation of the inputs: the ranked CAM vector $U_c$, the feature noise $\epsilon$, and the similarity matrix $S$. We then analyze the complexity of Algorithms 2 and 3 accordingly; note that the constraints have no effect on the complexity, since they can be checked in constant time.

Feature attack (Algorithm 2). The complexity from line 3 to line 5 is $O(r)$. Thus, the total complexity of Algorithm 2, combining it with the preparation of $U_c$ and $\epsilon$, is $O(n \times \max(D_L, \log(n)))$.

Structure attack (Algorithm 3). The complexity from line 2 to line 11 is $O(\min(n^2, \Delta))$. Thus, combined with the complexity of the similarity matrix $S$, the total complexity of Algorithm 3 is $O(\min(n^2 D_L, \Delta)) = O(\Delta)$, since the modification budget $\Delta$ is controlled to restrict the attacker's access and is strictly smaller than $n^2$.

Through the complexity analysis above, we can see that CAMA enjoys computational efficiency, especially in comparison with the complexity of the target GNNs.

EXPERIMENTS

Experimental Setups

Datasets. We evaluate our attack strategies on four chemical graph classification benchmarks, MUTAG, PROTEINS, NCI1, and COX2 [40], and two social network datasets, IMDB-BINARY and IMDB-MULTI. Among the chemical graphs, node features consist of node attributes and node labels: in PROTEINS and COX2, we use both node labels and attributes, while in the others we only use one-hot node labels as node features. For the social networks, node features are initialized with the node degree. The dataset statistics can be found in Table 1.

Graph Classifiers. We use four state-of-the-art GNNs for graph classification: GCN, GIN, IGNN, and g-U-Nets.
For all configurations, only one fully connected layer is adopted and no dropout layer is used after graph pooling. The same global sum-pooling readout function is applied for all models. For GCN, we use 5 GCN convolutional layers. For GIN, we set $\epsilon = 0$ (also called GIN-0) and use 5 GIN convolution layers. For IGNN, we use 3 IGNN convolution layers and tune the hyper-parameter $\kappa \in \{0.7, 0.98\}$. g-U-Nets have a different architecture due to their hierarchical nature; here, we use the node representation of the last layer before the readout function to calculate the CAM heat-map matrix. We apply four gPool layers with 90%, 70%, 60%, and 50% node proportions respectively, and ignore the max-pooling layer in the readout function, since global max-pooling is poorer at localization compared to GAP [37]. We implement these GNNs with PyTorch Geometric (PyG).¹

Baselines. We compare our methods with five baselines as follows. Every baseline we compare against either released source code or made it available upon request.
• Random [13]: Random randomly selects nodes to perturb and edges to insert/delete.
• Degree [41]: Degree chooses nodes with top degrees and inserts/deletes edges among them.
• GradArgmax [13]: GradArgmax greedily selects perturbation edges by the gradients of each pair of nodes, and works only for structure attack. We choose GradArgmax as a representative white-box attack baseline.
• PGD [12]: PGD performs the projected gradient descent topology attack and is an effective white-box attack algorithm.
• ReWatt [32]: ReWatt conducts rewiring operations to perform structure attacks and uses reinforcement learning to find the optimal rewiring operations. We select ReWatt as the representative state-of-the-art black-box optimization baseline.

To enrich the feature attack baselines, we generate two variants, GradArgmax-fea and ReWatt-fea, to perform feature attacks.
In particular, we select the endpoints of the edges perturbed by GradArgmax as victim nodes, and then add noise to them. For ReWatt-fea, we change the action space to feature manipulation of the nodes selected by rewiring. A valid action contains perturbations on three nodes, that is, one center node, its one-hop neighbor, and its two-hop neighbor. Because an edge contains two end nodes and a rewiring operation is related to three nodes, we ensure that the number of nodes manipulated by GradArgmax and ReWatt is greater than or equal to that of CAMA to make a fair comparison.

Perturbation restrictions and hyper-parameters. For the feature attack, we set the feature adjustment magnitude $\lambda = 0.1$. We select 10% of the nodes in one graph to perturb, and 10% of the features are modified for each dataset. For the structure attack, we set the perturbation budget $\Delta = 10\%|E_i|$ for each graph $G_i$, where $|E_i|$ denotes the number of edges in graph $G_i$. For ReWatt, the number of rewiring operations is set to $0.5\Delta$ with at least one rewiring, which is the same setting as in [32]. Besides, in the similarity restriction, we use the first hidden layer to calculate node similarity, $h^{\mathrm{emb}} = h^{(1)}$, fix $s_2 = 0.95$, and tune $s_1 \in \{0.95, 0.9, 1\}$. For CAMA-subgraph, we set the total number of training epochs to 200, the subgraph proportion $p\% = 50\%$, $\lambda_{\mathrm{ent}} = 1$, $\lambda_{\mathrm{size}} = 10$, and the confidence size $\eta = 3$. We conduct untargeted attacks and evaluate on test graphs. Specifically, we perform 10-fold cross-validation in each classification process and report the average validation accuracy within the cross-validation. This configuration follows [23] on graph classification, owing to the unstable training on small datasets such as MUTAG.

1. https://github.com/rusty1s/pytorch_geometric

Adversarial Attack on Graph Classification

We first compare CAMA and CAMA-subgraph to multiple baselines under the white-box attack.
We train each graph classifier on clean graphs, generate perturbed graphs on the validation sets, and calculate prediction accuracy using the trained graph classifiers. Full results under the white-box setting are provided in Table 2 for the chemical datasets and in Table 3 for the social networks. In the feature attack, our proposed methods perform better by a high margin on all datasets and all graph classification models, which implies that our methods can select the most influential nodes for graph classification tasks. In the structure attack, CAMA and CAMA-subgraph outperform the other baselines in all situations. Meanwhile, the subgraph mask training algorithm (CAMA-subgraph) outperforms the simple heuristic edge-flipping method (CAMA) by a large margin. In practice, the choice between CAMA and CAMA-subgraph is a balance between attack efficiency and effectiveness. These results demonstrate the high attack effectiveness of CAMA. More interestingly, the grad version CAMA-Grad achieves excellent performance close to CAMA but does not guarantee better performance. We also observe that the attack results vary across datasets and graph classifiers. The accuracy decreases the least on the PROTEINS dataset when suffering attacks. Interestingly, graph classifiers tend to behave differently when attacked by structure and feature perturbations. For example, IGNN is more robust against structural perturbations but more vulnerable under feature attack.

Transferability of attack

In real-world applications, model parameters usually are not available. Thus, to evaluate CAMA under a more realistic and general situation and further explore the transferability of various attacking methods, we validate our attack strategies under the black-box attack setting on four datasets. Specifically, we use GCN as the surrogate model, generate adversarial examples by targeting GCN, and then evaluate the other GNNs on the perturbed graphs. The detailed results are provided in Table 4.
First, we can see that our approaches surpass the other baselines in most situations. The perturbations generated by CAMA and CAMA-subgraph consistently demonstrate strong transferability on the four graph classification datasets under the black-box attack setting. For CAMA-subgraph, we can also see a significant performance improvement over CAMA. The calculation of the ranked CAM

Ablation Studies

Sensitivity analysis for the subgraph proportion p in CAMA-subgraph. The choice of the subgraph proportion in CAMA-subgraph is crucial. A larger proportion means more perturbation candidates but also more noise, while a smaller proportion may face a deficiency of perturbation candidates. An efficient subgraph selection can help the attacker localize the essential subgraph nodes and edges. Figure 5 shows the attack performance of CAMA-subgraph with various subgraph proportions on MUTAG. We can see a clear drop tendency as the subgraph proportion shrinks from 100%, which indicates the effectiveness of locating the subgraph with the ranked CAM vector. For MUTAG, the best proportion is 60%, and the accuracy drops by 14.36% under this structure attack perturbation setting.

Sensitivity analysis for $\lambda_{\mathrm{size}}$ in CAMA-subgraph. gets larger from zero. We can notice that $l_{\mathrm{size}}$ does promote better attack performance compared with omitting it ($\lambda_{\mathrm{size}} = 0$). Worth mentioning, $\lambda_{\mathrm{ent}}$ is not as sensitive as $\lambda_{\mathrm{size}}$.

Sensitivity analysis for the hyper-parameters $s_1$ and $s_2$ in CAMA. We perform a sensitivity analysis over $s_1$ and $s_2$ in Table 6, setting GIN as the victim model on the MUTAG dataset. $s_1$ controls edge insertion and $s_2$ controls edge deletion. $s_1 = 1$, $s_2 = 0$ represents no restriction on edge insertion/deletion. We find that controlling edge insertion is more helpful for successful attacks than controlling edge deletion.

Perturbation budget for white-box attack. We analyze the changes in accuracy with respect to the perturbation budget in Figure 4.
Not surprisingly, the prediction accuracy decreases with a higher number of perturbations or larger values of the adjusted magnitude. Under all hyperparameter settings, CAMA and CAMA-Grad show remarkable advantages over all the other baselines. Meanwhile, the rightmost panel shows that the accuracy drops dramatically for CAMA and CAMA-Grad as the adjusted magnitude λ gets larger.

Visualization of selected nodes and adversarial edges

We visualize the important nodes selected by CAMA and CAMA-Grad, and compare the generated structural perturbations with the different baselines in Figure 3. Among all attacking strategies, the edges selected by CAMA and CAMA-Grad concentrate on nodes that are important for graph classification, while those of the other methods do not. For example, Degree generates its structure attack purely from node degree, which may yield perturbations irrelevant to the classification. Meanwhile, the adversarial edges produced by GradArgmax and ReWatt are trapped near a single node, which further reflects the deficiency of these two methods in extracting only local information.

Insights into the target nodes chosen by CAMA. We compare the top 5 nodes selected by CAMA and by Degree, and report the statistics in Table 7. The relatively small average degree and closeness centrality values differentiate CAMA from centrality-based methods. From the total variation and the number of edges, we find that the nodes chosen by CAMA have higher connectivity and smoothness (smaller total variation). Besides, we provide an example of the edge perturbations of the baselines in Figure 3.

Computational efficiency analysis

To complement our complexity analysis in Section 4.3, we demonstrate the computational efficiency of CAMA and CAMA-subgraph in Table 8 by reporting the running time averaged over 10 runs, in comparison with all baselines.
We find that CAMA finishes within 1.5 seconds, which is consistent with our complexity analysis and implies that the scalability of our approaches is not an issue. The time cost of CAMA-subgraph is comparable with that of ReWatt; both methods need end-to-end training, but CAMA-subgraph has better attack performance.

Poisoning black-box attack

We also evaluate our methods under the poisoning black-box attack. We select GIN as the victim model and retrain it on perturbed graphs generated from the surrogate GCN. Additionally, we compare CAMA with a more powerful attacker, the projected gradient descent topology attack (PGD) [12]. PGD was originally designed for node classification tasks; we extend its application domain to graph classification, using the cross-entropy loss and fixing the number of epochs to 10 in our experiments. Figure 6 shows the final attack results. Consistent with the evasion attack results above, the strong transferability of CAMA and CAMA-Grad still holds. However, methods that use purely gradient information, such as GradArgmax and PGD, may lose attacking performance when transferred to other models.

Fig. 6: Box plot for poisoning structural perturbations under black-box attack. We use GIN as the victim model and record 10-fold testing results on the NCI1 dataset. Lower is better.

CONCLUSION

In this paper, we establish a general attack framework for graph classification that covers comprehensive settings under both white-box and black-box attack, and performs both structure and feature attacks. We hierarchically decompose the novel problem of linking local node-level knowledge with the global graph classification task into two steps. In the first step, we estimate the importance of the nodes for graph classification by Class Activation Mapping and its variant.
Then, in the second step, we heuristically design two algorithms that use the node-ranking information to generate adversarial examples for both feature and structure attacks. Experiments show that the proposed attack strategies significantly outperform existing approaches on various graph classifiers under various settings. Our general framework can also serve as a simple yet novel baseline for future work on evaluating the robustness of graph classification.

Fig. 1: Adversarial attack on graph classification. Given a clean graph, we can manipulate node features and edges to generate a poisoned graph that fools the victim GNN.

Algorithm 3: CAMA for structure perturbations.
Input: Graph G = (A, X) with n nodes; modification budget Δ; similarity matrix S; similarity restriction parameters s1, s2; ranked node vector U_c.
Output: Modified adjacency matrix A'.
1: Initialize the remaining perturbation number n_perturbs ← Δ, the modified adjacency matrix A' ← A, the target node set C_nodes = U_c[0], and the current rank index i = 1.
2: while (i ≤ n) and (n_perturbs > 0) do
      …
      C_nodes ← [C_nodes, u_i];
11:   i ← i + 1;
12: return A';

Algorithm 4: CAMA-subgraph for structure attack.
Input: Graph G = (A, X) with n nodes; the ground-truth label c_gt of graph G; the ranked node vector of the predicted label U_c; subgraph proportion p%; victim model f_Θ; total number of training epochs T; perturbation budget Δ.
Output: Modified adjacency matrix A'.
1: Initialize the perturbation candidate subgraph V_sub = {u | u ∈ U_c[: n_sub]}, where n_sub = p%·|V|.
2: for t in 1, 2, ..., T do
3:   // Train the subgraph mask
4:   min_M l_atk = l_cw + λ_ent · l_ent + λ_size · l_size;
5:   // Generate the adversarial example
6:   select the top Δ perturbations M_Δ ⊂ M;
7:   for (u, v) ∈ {(u, v) | m_uv ∈ M_Δ} do
8:     a_uv ← 1 − a_uv;
9:   G' ← (A', X);
10:  // Test the adversarial example
11:  if arg max_c f_Θ(G') ≠ c_gt then
12:    break;
13: return A';

• The original node-ranking matrix U_orig^CAM (Algorithm 1): the complexity of line 1 is O(Cn log n) = O(n log n), since the number of classes C is always much smaller than the number of nodes. The complexity of lines 2 to 6 is O(Cn). Thus the total complexity of Algorithm 1 is O(n log n + Cn);
• Feature noise j, where j = 1, 2, ..., D_L: the complexity of computing all of them is O(nD_L + nK) = O(nD_L), since K is selected from D_L;
• Similarity matrix S: the complexity of computing the similarity matrix is O(n²D_L).

Fig. 3: An example of structure attack on the MUTAG dataset with an edge attack proportion of 20%. Green nodes are selected by CAMA, indicating their strong influence on graph classification. Added edges are shown as red lines and deleted edges as orange dashed lines.

TABLE 1: Dataset statistics.
Dataset       # of Graphs  # of Classes  Avg. # of Nodes  Avg. # of Edges
MUTAG         188          2             17.93            19.79
PROTEINS      1113         2             39.06            72.82
NCI1          4110         2             29.87            32.3
COX2          467          2             41.22            43.45
IMDB-BINARY   1000         2             19.77            96.53
IMDB-MULTI    1500         3             13.00            65.94

TABLE 2: Summary of the change in classification accuracy (in %) compared to the clean graph under white-box attack for chemical datasets. Lower is better.
Best performances are shown in bold markers.Dataset MUTAG PROTEINS NCI1 COX2 Models GCN GIN-0 IGNN g-U-Nets GCN GIN-0 IGNN g-U-Nets GCN GIN-0 IGNN g-U-Nets GCN GIN-0 IGNN g-U-Nets Clean 83.04 89.85 81.46 88.89 78.17 77.81 77.99 77.54 78.98 77.59 75.06 72.24 88.87 83.51 83.51 83.08 Feature attack Random -5.35 -5.29 -7.43 -7.02 -1.26 -0.63 -4.04 -0.90 -16.06 -19.03 -33.92 -55.57 -17.32 -3.42 -7.05 -11.54 Degree -4.82 -7.40 -7.43 -8.66 -1.61 -0.81 -4.58 -0.99 -17.27 -23.63 -37.40 -59.37 -22.67 -4.28 -8.54 -13.90 GradArgmax-fea -5.88 -7.43 -7.43 -5.91 -1.35 -0.90 -4.40 -0.81 -18.73 -22.48 -39.76 -55.26 -40.85 -7.72 -10.73 -12.41 ReWatt-fea -2.13 -3.16 -2.63 -2.75 -0.36 -0.72 -2.15 -0.45 -11.29 -14.60 -19.71 -31.80 -15.38 -3.63 -6.62 -4.48 CAMA -10.64 -9.53 -10.12 -11.78 -2.24 -1.44 -6.56 -2.25 -33.58 -36.08 -56.74 -69.61 -52.68 -9.69 -27.83 -27.64 CAMA-Grad -11.70 -9.53 -10.12 -11.73 -2.60 -1.53 -6.29 -2.25 -31.70 -35.57 -56.76 -69.90 -52.23 -15.47 -22.89 -24.40 Structure attack Random -4.82 -16.43 -5.26 -2.13 -0.99 -4.13 -1.53 -0.54 -9.49 -10.97 -6.37 -4.31 -6.43 -3.84 -2.14 -4.93 Degree 8.48 -16.43 -7.92 -3.27 -0.72 -6.91 -1.53 -0.09 -8.08 -15.13 -5.79 -4.31 -6.87 -9.83 -4.07 -5.56 GradArgmax -7.98 -43.33 -7.37 -2.13 -1.88 -7.63 -2.96 -1.08 -10.90 -12.31 -10.85 -7.45 -17.17 -16.24 -13.08 -11.99 ReWatt -4.24 -13.77 -4.74 -8.60 -0.72 -1.89 -0.72 -0.54 -7.64 -5.89 -4.72 -2.90 -2.58 -7.94 -1.29 -1.28 CAMA -11.08 -47.08 -11.64 -9.18 -3.23 -9.44 -2.88 -1.35 -20.68 -22.43 -15.74 -9.88 -22.47 -18.89 -13.93 -12.64 CAMA-Grad -11.64 -50.20 -12.72 -5.85 -2.78 -9.16 -3.24 -1.08 -23.50 -22.29 -16.69 -8.76 -24.86 -18.85 -13.28 -15.81 CAMA-subgraph -12.25 -59.09 -7.92 4.24 -7.39 -37.31 -7.57 -6.67 -37.15 -44.74 -41.77 -20.41 -28.84 -56.25 -30.53 -14.34 CAMA-subgraph-Grad -12.75 -56.99 -11.14 1.00 -6.94 -37.40 -7.93 -6.94 -38.20 -44.38 -40.34 -19.29 -27.77 -56.46 -24.98 -10.24 TABLE 3 : 3Summary of the change in classification accuracy (in %) compared to the clean graph under 
white-box attack for social networks. Lower is better. Best performances are shown in bold markers.Dataset IMDB-BINARY IMDB-MULTI Models GCN GIN g-U-Nets GCN GIN g-U-Nets Clean 73.67 74.22 73.89 50.00 50.59 48.30 Random -0.78 -7.55 -0.78 -0.96 -9.48 -0.45 Degree -1.67 -18.00 -2.78 -1.63 -14.37 -2.37 GradArgmax -4.34 -19.22 -3.33 -2.82 -14.52 -1.11 ReWatt -6.07 -3.22 -6.99 -6.53 -2.79 -3.03 PGD -2.57 -22.82 -1.79 -2.00 -29.12 -0.57 CAMA -2.11 -15.22 -2.33 -3.11 -11.48 -1.19 CAMA-Grad -2.78 -15.55 -1.56 -3.26 -13.33 -1.04 CAMA-subgraph -7.47 -21.72 -7.99 -7.00 -30.26 -3.63 CAMA-subgraph-Grad -7.07 -24.22 -7.89 -7.27 -29.79 -3.50 TABLE 4 : 4Summary of the change in classification accuracy (in %) compared to the clean graph under black-box attack. Lower is better.Dataset MUTAG PROTEINS NCI1 COX2 Models GIN-0 IGNN g-U-Nets GIN-0 IGNN g-U-Nets GIN-0 IGNN g-U-Nets GIN-0 IGNN g-U-Nets Clean 89.85 81.46 88.89 77.81 77.99 77.54 77.59 75.06 72.24 83.51 83.51 83.08 Feature attack Random -2.13 -4.24 -4.85 -0.54 -3.59 -0.45 -6.59 -9.32 -13.14 -2.56 -7.26 -9.19 Degree -2.66 -4.24 -6.52 -0.63 -4.13 -0.54 -9.46 -10.19 -14.89 -3.85 -7.69 -10.69 GradArgmax-fea -3.19 -3.68 -4.30 -0.45 -3.32 -0.45 -6.72 -13.41 -13.67 -7.50 -11.75 -9.19 ReWatt-fea -3.16 -2.63 -2.75 -0.72 -2.15 -0.45 -14.60 -19.71 -31.80 -3.63 -6.62 -4.48 CAMA -4.24 -6.93 -5.38 -0.90 -5.57 -1.26 -17.13 -23.72 -24.28 -12.88 -22.04 -22.26 CAMA-Grad -3.71 -8.01 -4.27 -0.99 -5.84 -1.17 -15.23 -22.65 -24.06 -12.44 -19.03 -20.35 Structure attack Random -16.43 -5.26 -2.13 -4.13 -1.53 -0.54 -10.97 -6.37 -4.31 -3.84 -2.14 -4.93 Degree -16.43 -7.92 -3.27 -6.91 -1.53 -0.09 -15.13 -5.79 -4.31 -9.83 -4.07 -5.56 GradArgmax -12.75 -9.53 -2.72 -5.48 -1.17 -0.90 -8.88 -6.67 -3.75 -9.82 -4.48 -5.12 ReWatt -13.77 -4.74 -8.60 -1.89 -0.72 -0.54 -5.89 -4.72 -2.90 -7.94 -1.29 -1.28 CAMA -25.44 -10.58 -8.16 -10.42 -2.43 -1.71 -19.03 -11.41 -8.30 -13.91 -10.48 -8.34 CAMA-Grad -27.54 -11.08 -9.21 -11.68 -2.87 -1.62 -19.63 -16.03 -9.51 -16.69 
-9.63 -9.84
CAMA-subgraph -54.42 -13.21 -11.23 -24.98 -6.02 -5.39 -37.08 -20.41 -16.30 -53.52 -21.00 -15.00
CAMA-subgraph-Grad -53.39 -13.77 -11.76 -24.16 -5.93 -5.39 -37.86 -21.78 -17.45 -53.31 -16.27 -14.58

Fig. 4: Attack results with different perturbation hyper-parameters. All experiments are conducted on the MUTAG dataset using GCN. Lower accuracy is better. Left: attack results with increasing perturbation proportion of edges. Middle: attack results with increasing perturbation proportion of nodes. Right: attack results with increasing adjusted magnitude values.

The calculation of the ranked CAM matrix and the selected subgraph is important. As a result, the attack performance of a black-box attack may exceed that of a white-box attack thanks to an efficient ranked CAM matrix from the GCN surrogate model. Moreover, the attack performance of ReWatt is unstable: it works on some datasets, such as NCI1, but fails on the others. Second, compared with the white-box attack, our approaches have a more significant advantage over baselines such as GradArgmax, which indicates a stronger attack ability when transferring to other GNNs. Besides, the results show that perturbations generated against a surrogate model with a typical architecture also generalize to hierarchical graph classifiers such as g-U-Nets.

Table 5 displays the change in test accuracy when the hyperparameter λ_size in Eq. (6) is varied in CAMA-subgraph. The test accuracy first decreases and then increases as λ_size grows from zero.

Fig. 5: Line plot of the attack performance of CAMA-subgraph for different subgraph proportions. We record 10-fold testing results on the MUTAG dataset using GCN as the graph classifier. Lower is better.

TABLE 5: Sensitivity analysis for the hyper-parameter λ_size.
Lower is better.
λ_size             0     1     5     10    15    20    25
Test accuracy (%)  73.48 68.16 69.21 70.79 71.35 71.90 72.43

TABLE 6: Sensitivity analysis for the hyper-parameters s1 and s2. Lower is better.
Hyper-parameter   clean  0     0.2   0.4   0.6   0.8   1
s1 (fix s2 = 0)   83.04  67.72 67.72 66.64 67.16 66.64 71.40
s2 (fix s1 = 1)   83.04  71.40 71.40 71.93 71.93 71.93 72.46

TABLE 7: Statistics for the nodes selected by CAMA and Degree.
Method  Avg. Degree  Avg. Closeness  Total Variation  No. Edges
Degree  2.8          0.25            12               1
CAMA    2.4          0.22            8                2

TABLE 8: Running time (s) comparison over all baseline methods on MUTAG using GCN. We report the running time averaged over 10 runs within 10-fold cross-validation.
Attack type  Random  Degree  GradArgmax  ReWatt   CAMA    CAMA-subgraph
Structure    0.3694  0.3820  0.3429      30.0120  0.7888  34.2288
Feature      0.6252  0.6110  0.7538      44.0460  0.6747  -

REFERENCES

T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," International Conference on Learning Representations, 2017.
M. Zhang and Y. Chen, "Link prediction based on graph neural networks," in Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018, pp. 5171-5181.
Z. Zhang, P. Cui, and W. Zhu, "Deep learning on graphs: A survey," IEEE Transactions on Knowledge and Data Engineering, 2020.
J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl, "Neural message passing for quantum chemistry," International Conference on Machine Learning, pp. 1263-1272, 2017.
F. Monti, F. Frasca, D. Eynard, D. Mannion, and M. M. Bronstein, "Fake news detection on social media using geometric deep learning," arXiv preprint arXiv:1902.06673, 2019.
L. G. Gómez, B. Chiem, and J.-C. Delvenne, "Dynamics based features for graph classification," arXiv preprint arXiv:1705.10817, 2017.
R. Kim, C. H. So, M. Jeong, S. Lee, J. Kim, and J. Kang, "HATS: A hierarchical graph attention network for stock movement prediction," arXiv preprint arXiv:1908.07999, 2019.
T. Magelinski, D. Beskow, and K. M. Carley, "Graph-hist: Graph classification from latent feature histograms with application to bot detection," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, 2020, pp. 5134-5141.
W. Jin, Y. Ma, X. Liu, X. Tang, S. Wang, and J. Tang, "Graph structure learning for robust graph neural networks," in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020, pp. 66-74.
S. Bhattacharyya, S. Jha, K. Tharakunnel, and J. C. Westland, "Data mining for credit card fraud: A comparative study," Decision Support Systems, vol. 50, no. 3, pp. 602-613, 2011.
K. Xu, H. Chen, S. Liu, P.-Y. Chen, T.-W. Weng, M. Hong, and X. Lin, "Topology attack and defense for graph neural networks: An optimization perspective," in International Joint Conference on Artificial Intelligence (IJCAI), 2019.
H. Dai, H. Li, T. Tian, X. Huang, L. Wang, J. Zhu, and L. Song, "Adversarial attack on graph structured data," in International Conference on Machine Learning. PMLR, 2018, pp. 1115-1124.
H. Chang, Y. Rong, T. Xu, W. Huang, H. Zhang, P. Cui, W. Zhu, and J. Huang, "A restricted black-box adversarial framework towards attacking graph embedding models," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, 2020, pp. 3389-3396.
H. Wu, C. Wang, Y. Tyshetskiy, A. Docherty, K. Lu, and L. Zhu, "Adversarial examples for graph data: Deep insights into attack and defense," in Proceedings of the 28th International Joint Conference on Artificial Intelligence. AAAI Press, 2019, pp. 4816-4823.
A. Bojchevski and S. Günnemann, "Adversarial attacks on node embeddings via graph poisoning," in Proceedings of the 36th International Conference on Machine Learning (ICML), ser. Proceedings of Machine Learning Research. PMLR, 2019.
P. E. Pope, S. Kolouri, M. Rostami, C. E. Martin, and H. Hoffmann, "Explainability methods for graph convolutional neural networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 10772-10781.
H. Chang, Y. Rong, T. Xu, W. Huang, S. Sojoudi, J. Huang, and W. Zhu, "Spectral graph attention network with fast eigen-approximation," arXiv preprint arXiv:2003.07450, 2020.
D. Zügner, A. Akbarnejad, and S. Günnemann, "Adversarial attacks on neural networks for graph data," in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp. 2847-2856.
J. Ma, S. Ding, and Q. Mei, "Towards more practical adversarial attacks on graph neural networks," Advances in Neural Information Processing Systems, vol. 33, 2020.
C. Guan, Z. Zhang, H. Li, H. Chang, Z. Zhang, Y. Qin, J. Jiang, X. Wang, and W. Zhu, "AutoGL: A library for automated graph learning," arXiv preprint arXiv:2104.04987, 2021.
K. Xu, W. Hu, J. Leskovec, and S. Jegelka, "How powerful are graph neural networks," in ICLR 2019: 7th International Conference on Learning Representations, 2019.
F. Gu, H. Chang, W. Zhu, S. Sojoudi, and L. El Ghaoui, "Implicit graph neural networks," in Advances in Neural Information Processing Systems, vol. 33, 2020, pp. 11984-11995.
J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun, "Graph neural networks: A review of methods and applications," AI Open, vol. 1, pp. 57-81, 2020.
Z. Ying, J. You, C. Morris, X. Ren, W. Hamilton, and J. Leskovec, "Hierarchical graph representation learning with differentiable pooling," Neural Information Processing Systems, pp. 4801-4811, 2018.
Y. Ma, S. Wang, C. C. Aggarwal, and J. Tang, "Graph convolutional networks with eigenpooling," in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 723-731.
H. Gao and S. Ji, "Graph U-Nets," in International Conference on Machine Learning, 2019, pp. 2083-2092.
W. Jin, Y. Li, H. Xu, Y. Wang, and J. Tang, "Adversarial attacks and defenses on graphs: A review and empirical study," arXiv preprint arXiv:2003.00653, 2020.
D. Zügner and S. Günnemann, "Adversarial attacks on graph neural networks via meta learning," in International Conference on Learning Representations (ICLR), 2019.
Y. Ma, S. Wang, T. Derr, L. Wu, and J. Tang, "Graph adversarial attack via rewiring," in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, ser. KDD '21, 2021, pp. 1161-1169.
X. Wan, H. Kenlay, B. Ru, A. Blaas, M. Osborne, and X. Dong, "Adversarial attacks on graph classifiers via bayesian optimisation," in Thirty-Fifth Conference on Neural Information Processing Systems, 2021.
H. Zhang, B. Wu, X. Yang, C. Zhou, S. Wang, X. Yuan, and S. Pan, "Projective ranking: A transferable evasion attack method on graph neural networks," in Proceedings of the 30th ACM International Conference on Information & Knowledge Management, ser. CIKM '21. Association for Computing Machinery, 2021, pp. 3617-3621.
H. Tang, G. Ma, Y. Chen, L. Guo, W. Wang, B. Zeng, and L. Zhan, "Adversarial attack on hierarchical graph pooling neural networks," arXiv preprint arXiv:2005.11560, 2020.
J. Chen, D. Zhang, Z. Ming, and K. Huang, "GraphAttacker: A general multi-task graph attack framework," 2021.
B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, "Learning deep features for discriminative localization," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2921-2929.
H. Yuan, H. Yu, S. Gui, and S. Ji, "Explainability in graph neural networks: A taxonomic survey," arXiv preprint arXiv:2012.15445, 2020.
R. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec, "GNNExplainer: Generating explanations for graph neural networks," 2019.
C. Morris, N. M. Kriege, F. Bause, K. Kersting, P. Mutzel, and M. Neumann, "TUDataset: A collection of benchmark datasets for learning with graphs," in ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020), 2020. [Online]. Available: www.graphlearning.io
H. Tong, B. A. Prakash, T. Eliassi-Rad, M. Faloutsos, and C. Faloutsos, "Gelling, and melting, large graphs by edge manipulation," in Proceedings of the 21st ACM International Conference on Information and Knowledge Management, 2012, pp. 245-254.
[ "Title: Differentiable rotamer sampling with molecular force fields", "Title: Differentiable rotamer sampling with molecular force fields", "Title: Differentiable rotamer sampling with molecular force fields", "Title: Differentiable rotamer sampling with molecular force fields" ]
[ "Congzhou M Sha \nDepartment of Engineering Science and Mechanics\nPenn State University\nUniversity ParkPAUSA\n\nDepartment of Pharmacology\nPenn State College of Medicine\nHersheyPAUSA\n", "Jian Wang \nDepartment of Pharmacology\nPenn State College of Medicine\nHersheyPAUSA\n", "Nikolay V Dokholyan \nDepartment of Engineering Science and Mechanics\nPenn State University\nUniversity ParkPAUSA\n\nDepartment of Pharmacology\nPenn State College of Medicine\nHersheyPAUSA\n\nDepartment of Biochemistry and Molecular Biology\nPenn State College of Medicine\nHersheyPAUSA\n\nDepartment of Chemistry\nPenn State University\nUniversity ParkPA USA\n\nDepartment of Biomedical Engineering\nPenn State University\nUniversity ParkPAUSA\n", "Congzhou M Sha \nDepartment of Engineering Science and Mechanics\nPenn State University\nUniversity ParkPAUSA\n\nDepartment of Pharmacology\nPenn State College of Medicine\nHersheyPAUSA\n", "Jian Wang \nDepartment of Pharmacology\nPenn State College of Medicine\nHersheyPAUSA\n", "Nikolay V Dokholyan \nDepartment of Engineering Science and Mechanics\nPenn State University\nUniversity ParkPAUSA\n\nDepartment of Pharmacology\nPenn State College of Medicine\nHersheyPAUSA\n\nDepartment of Biochemistry and Molecular Biology\nPenn State College of Medicine\nHersheyPAUSA\n\nDepartment of Chemistry\nPenn State University\nUniversity ParkPA USA\n\nDepartment of Biomedical Engineering\nPenn State University\nUniversity ParkPAUSA\n" ]
[ "Department of Engineering Science and Mechanics\nPenn State University\nUniversity ParkPAUSA", "Department of Pharmacology\nPenn State College of Medicine\nHersheyPAUSA", "Department of Pharmacology\nPenn State College of Medicine\nHersheyPAUSA", "Department of Engineering Science and Mechanics\nPenn State University\nUniversity ParkPAUSA", "Department of Pharmacology\nPenn State College of Medicine\nHersheyPAUSA", "Department of Biochemistry and Molecular Biology\nPenn State College of Medicine\nHersheyPAUSA", "Department of Chemistry\nPenn State University\nUniversity ParkPA USA", "Department of Biomedical Engineering\nPenn State University\nUniversity ParkPAUSA", "Department of Engineering Science and Mechanics\nPenn State University\nUniversity ParkPAUSA", "Department of Pharmacology\nPenn State College of Medicine\nHersheyPAUSA", "Department of Pharmacology\nPenn State College of Medicine\nHersheyPAUSA", "Department of Engineering Science and Mechanics\nPenn State University\nUniversity ParkPAUSA", "Department of Pharmacology\nPenn State College of Medicine\nHersheyPAUSA", "Department of Biochemistry and Molecular Biology\nPenn State College of Medicine\nHersheyPAUSA", "Department of Chemistry\nPenn State University\nUniversity ParkPA USA", "Department of Biomedical Engineering\nPenn State University\nUniversity ParkPAUSA" ]
[]
Abbreviations: molecular dynamics (MD), discrete molecular dynamics (DMD), Markov chain Monte Carlo (MCMC), Protein Data Bank (PDB), Kullback-Leibler divergence (KL divergence), root mean square distance (RMSD). Graphical Abstract. One Sentence Summary: We make theoretical and practical improvements to the Boltzmann generator framework, enabling rapid and accurate sampling of macromolecule rotameric states and replacing molecular dynamics with neural network training. Abstract: Molecular dynamics is the primary computational method by which modern structural biology explores macromolecule structure and function. Boltzmann generators have been proposed as an alternative to molecular dynamics, by replacing the integration of molecular systems over time with the training of generative neural networks. This neural network approach to MD samples rare events at a higher rate than traditional MD; however, critical gaps in the theory and computational feasibility of Boltzmann generators significantly reduce their usability. Here, we develop a mathematical foundation to overcome these barriers; we demonstrate that the Boltzmann generator approach is sufficiently rapid to replace traditional MD for complex macromolecules, such as proteins in specific applications, and we provide a comprehensive toolkit for the exploration of molecular energy landscapes with neural networks. Main Text: In 2019, foundational methods were proposed by Noé et al. to use generative neural networks for the sampling of microstates (14). The central idea is that instead of predicting a single trajectory as in MD, one may instead train a neural network to predict Boltzmann-distributed states. This approach seeks to address the issue of rare-event sampling by allowing the neural network to learn multiple energy minima simultaneously. While Noé et al. 
successfully demonstrated their method for simple physical systems and small proteins, there were critical theoretical and practical deficiencies limiting the application of their methods that we address in this work. The theoretical deficiencies in the original Boltzmann generator approach we address are various biases in angle generation due to (i) the use of a Gaussian prior, and (ii) the regularization of a discontinuous output. The practical deficiencies we address are (iii) tight coupling between energy and entropy estimation, necessitating hundreds of thousands of evaluations of an external molecular force field, (iv) potential numerical instabilities due to reliance on eigendecomposition, and (v) other inefficiencies in the generation of rotamers. Here, we demonstrate that decoupling the energy and entropy training losses and propagating forces directly from the molecular force field reduces the needed evaluations of the force field by a factor of a thousand; we achieve rare-event sampling with only 10^2-10^3 evaluations of the force field for chicken villin headpiece (PDB ID 1VII), a 35-residue protein domain. We demonstrate a simple method of gradient propagation for an arbitrary external force field, and we implement the AMBER 14 force field (2) in pure PyTorch (16), as is done in the TorchMD framework (17). We include the Generalized Born implicit solvent (18-22), which is not present in TorchMD (17). We suggest strategies to avoid numerical instabilities and intrinsic biases in the neural network, and we propose a code-efficient method of rotamer sampling that can handle arbitrary molecules while remaining end-to-end differentiable for neural network training. We also present a highly parallel and memory-efficient version of the rotamer sampling algorithm. 
The result of these improvements is a numerically robust and fast architecture for Boltzmann generators. Results: With our methods, we were able to train neural networks which produced rotameric states with near-native energies, requiring only hundreds to thousands of evaluations of the energy function, rather than the hundreds of thousands to millions in Boltzmann generators, and millions to billions for traditional MD. Our novel contributions are differentiable rotamer sampling, parallelized differentiable rotamer sampling (Fig. 1, Fig. 2), a thorough guide to adapting OpenMM force fields for PyTorch or TensorFlow use (particularly outlining the calculations necessary for the generalized Born implicit solvent, which is not a feature of the similar framework TorchMD (17)), and an ad hoc method of using arbitrary force fields for gradients without modification to existing force field code, and without code for custom gradients. We also propose the use of unbiased angle generation methods suitable for neural network training.
10.48550/arxiv.2302.11430
[ "https://export.arxiv.org/pdf/2302.11430v1.pdf" ]
257,078,901
2302.11430
ede032d42942ba24a39696e61cc65294e1142737
Title: Differentiable rotamer sampling with molecular force fields
Congzhou M Sha, Jian Wang, Nikolay V Dokholyan

Introduction
Statistical mechanics describes the behavior of large numbers of physically identical systems (1). Molecular dynamics (MD) is the computational application of statistical mechanics to molecular systems such as proteins, nucleic acids, and lipid membranes (2)(3)(4)(5). 
The fundamental postulate of statistical mechanics is that every energetically accessible microstate of the physical system is equally probable; a microstate is a partition of the total energy of the physical system to each coordinate of its Hamiltonian (1). When many identical copies of the physical system are present, such as in molecular systems at equilibrium, experimental observations reflect the overall probability distribution of microstates. The goal of MD is to computationally sample enough microstates of a system of molecules to approximate the distribution of microstates in a biological system at equilibrium, in which there may be on the order of Avogadro's number (N_A ∼ 10^23) of molecules. For MD, statistical equilibrium is defined as the NPT ensemble (6), in which the number of particles, pressure, and temperature are fixed, and the underlying microstate probability distribution is the Boltzmann distribution for the enthalpy. Traditional MD attempts to sample microstates by integrating Newton's second law according to empirically determined molecular force fields (2,3). The underlying major assumption is that the MD trajectory is ergodic (6), that is, given enough time steps the trajectory will visit all microstates with a frequency given by the Boltzmann distribution. However, there is no guarantee that a given MD trajectory will be ergodic. Transitioning between states which are separated by large energy barriers presents a significant challenge for MD simulations (7). Numerous approaches have been proposed to address this shortcoming of MD, such as Monte Carlo methods (8)(9)(10)(11), metadynamics (12), and umbrella sampling (13). Recently, Boltzmann generators have emerged as a promising candidate for rare-event sampling in MD (14,15).
Training a convergent model
We observed that due to occasional large force gradients, typically due to Lennard-Jones internuclear repulsions, certain training steps would cause rapid divergence of model parameters. 
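A typical guard against such divergence is norm-based gradient clipping, one of the regularizers listed below (a generic sketch in plain Python, not our exact PyTorch configuration):

```python
import math

def clip_by_norm(grad, max_norm):
    """Rescale a gradient vector so its Euclidean norm never exceeds max_norm,
    preventing a single large Lennard-Jones force spike from blowing up an update."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > max_norm:
        scale = max_norm / norm
        return [g * scale for g in grad]
    return grad

clipped = clip_by_norm([3.0, 4.0], 1.0)  # norm 5 -> rescaled to norm 1
small = clip_by_norm([0.1, 0.2], 1.0)    # already within bound -> unchanged
```

In frameworks such as PyTorch, the equivalent built-in utility applies the same rescaling over all parameter gradients at once.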
Even with regularization techniques including layer normalization, weight decay, force regularization, gradient clipping, and the use of the Adam training algorithm, it was still necessary to adjust the learning rate from the default λ = 10^-3 to λ = 10^-4 or 10^-5. We describe how we performed this tuning in the Methods.
Initializing Boltzmann generators at the native (or input) state
We tested the effect of pretraining the neural networks to output Δφ = 0, or in other words to produce the identity function on the structure. For these experiments (Fig. 3a-b), we trained the neural network for N_pre epochs with the loss

L_pre = Σ_i (Δφ_i)^2 + L_reg    ( 1 )

where L_reg includes fixed terms such as the angle modulus loss in Eq ( 50 ) in the Methods and weight decay regularization. We observed that initial pre-training was crucial to allow neural networks to converge. With no pre-training epochs, we were unable to train neural networks within 5,000 epochs, and all structures produced appeared highly unphysical, with numerically infinite energies (not shown). However, any number of pre-training epochs above 100 appeared suitable for initialization of the neural networks (Fig. 3a).
Sampling without entropy
When we trained neural networks solely on the energy and not the entropy (Fig. 3a-b),

L_energy(θ) = U(θ) + L_reg    ( 2 )

we observed that while the intrinsic noise in the Adam optimizer allowed for sampling of states which were not global energy minima (Fig. 3a), the resulting neural networks did not reproduce the results of traditional MD at non-zero temperature (Fig. 3b).
Temperature and entropy, and effect of training length on structure generation
We observed that by estimating the multivariate circular distribution entropy and training to maximize the entropy and minimize the energy, we were able to reproduce traditional MD protein backbone root mean square fluctuations (Fig. 3c-h), as well as prevent mode collapse (Fig. 4a-c) and sample Boltzmann-distributed states (Fig. 4d-i). 
For this set of experiments, we used the training loss

L_sample(θ; T_NN) = U(θ)/T_NN − S(θ) + L_reg,  if T_NN ≥ 1
L_sample(θ; T_NN) = U(θ) − T_NN ⋅ S(θ) + L_reg,  if T_NN < 1    ( 3 )

where T_NN is the temperature of the system. We chose this form of the loss to maintain the relative contributions and dynamic range of the energy function U and entropy S, while preventing exploding gradient contributions from dividing by a small T_NN or multiplying by a large T_NN. In the units of our implementation,

T_NN = R ⋅ T    ( 4 )

with the ideal gas constant R = 8.314 × 10^-3 kJ mol^-1 K^-1 and physical temperature T measured in Kelvins. A human body temperature of 310 K yields a numerical value of T_NN = 2.58. However, since our estimate of the entropy is only correct asymptotically, the actual numerical values for the temperature may differ in practice. Interestingly, it appears that reproducing the results from traditional MD is a matter of fine-tuning both the length of neural network training and temperature (Fig. 3c, Fig. 4d-i).
Benchmarking
We performed basic benchmarking of the rotamer sampler, the parallelized version of the rotamer sampler, and the force field, on the CPU only of an M1 Max MacBook Pro with 64 GB RAM, and on an NVIDIA Tesla T4 GPU with 16 GB RAM (Fig. 5). These benchmarks were performed on the combined forward and backward passes through the computational graph, to imitate real-world usage. We observed an advantage of up to 10x on the GPU in terms of total throughput of rotamer sampling (Fig. 5a-b). We also compared the original non-parallel dihedral sampler (Algorithm 3) to the parallel dihedral sampler, running on the NVIDIA GPU (Fig. 5c-d), which showed a performance advantage of 4x on the protein we used for these experiments. Finally, we demonstrated that the energy function also showed a performance benefit running on the NVIDIA GPU compared to CPU, with 10x greater performance (Fig. 5e-f). 
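The temperature mapping in Eq ( 4 ) is easy to check numerically (a sketch; the scalar energies and entropies passed to the piecewise loss below are placeholder values, not real force-field outputs):

```python
def t_nn(temp_k, r=8.314e-3):
    """Eq ( 4 ): neural-network temperature T_NN = R*T, with R in kJ mol^-1 K^-1."""
    return r * temp_k

def sample_loss(u, s, t, l_reg=0.0):
    """Piecewise training loss of Eq ( 3 ): divide U by T when T >= 1,
    otherwise scale the entropy S by T, keeping the two terms comparable."""
    if t >= 1.0:
        return u / t - s + l_reg
    return u - t * s + l_reg

temp = t_nn(310.0)  # body temperature, about 2.58 in implementation units
```

The branch at T_NN = 1 avoids dividing by a small T_NN (or multiplying by a large one), as described in the text.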
Discussion
By decoupling energy minimization from entropy maximization, we were able to perform Boltzmann sampling with three orders of magnitude fewer calls to the energy function than in the original Boltzmann generators. We were able to reproduce traditional MD results in the form of RMSF from the initial state. Additionally, we provide benchmarks for our algorithms which demonstrate automatic parallelization through PyTorch allowing for significant computational throughput on an NVIDIA GPU. Unlike in the original Boltzmann generator work, we were able to directly sample internal degrees of freedom (i.e. the dihedral angles), explicitly freezing out all other modes in the molecule. A side effect of our approach was that we did not need to manually remove modes such as overall molecular rotation/translation through eigendecomposition (14). Despite removing a large fraction of the degrees of freedom from the molecule, we were still able to reproduce the RMSF profile of the protein computed by traditional MD (Fig. 3c). The methods we present may be useful in examining, for example, protein allostery with far less computation than required by traditional MD. In terms of performance, it is not surprising that the GPU outperformed the CPU significantly, given a large enough batch size (Fig. 5). We also observed the expected plateau in performance increase due to the saturation of the CUDA driver with simultaneously executing kernels. We discuss the theoretical advantage of the parallelized rotamer sampler in the Methods. Our energy function was not completely optimized, since many of the pairwise computations were computed for both the upper and lower triangles of the distance matrix, resulting in a two-fold redundancy in certain calculations which are symmetric in the particle order. 
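The two-fold redundancy mentioned above comes from evaluating symmetric pair terms for both (i, j) and (j, i); iterating only the upper triangle halves the work (a plain-Python sketch with a made-up pair function, not our PyTorch implementation):

```python
def pairwise_sum_full(xs, pair_energy):
    """Evaluates every ordered pair: each symmetric term is computed twice,
    so the double-counted total is halved at the end."""
    return 0.5 * sum(pair_energy(xs[i], xs[j])
                     for i in range(len(xs)) for j in range(len(xs)) if i != j)

def pairwise_sum_upper(xs, pair_energy):
    """Evaluates each unordered pair exactly once (upper triangle only)."""
    return sum(pair_energy(xs[i], xs[j])
               for i in range(len(xs)) for j in range(i + 1, len(xs)))

coulomb_like = lambda a, b: 1.0 / abs(a - b)  # toy symmetric pair energy in 1D
xs = [0.0, 1.0, 3.0]
e_full = pairwise_sum_full(xs, coulomb_like)
e_half = pairwise_sum_upper(xs, coulomb_like)
```

Both give the same energy; the upper-triangle version simply skips the redundant mirrored evaluations.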
We encountered difficulty with the limited memory on the GPU (the NVIDIA Tesla T4 only has 16 GB of video RAM), particularly with the energy function, whereas the CPU had access to 64 GB of RAM at the cost of compute speed. We did not perform benchmarks on the M1 Max GPU because the PyTorch backend for Apple's Metal Performance Shaders is not complete. In the original Boltzmann generators proposed by Noé et al., angles were generated as a real number, and invertibility of the networks was ensured through a penalty on angles outside the range [−π, π) through a squared loss for angles generated outside that range (14). This method of angle generation makes it difficult for the network to explore angles near the extremes −π and π, and forces the neural network (a continuous model) to approximate a discontinuous transformation from the angle to the circle S^1. In this work, we used the differentiable atan2 function to generate angles, which does not suffer from these difficulties (Methods). Additionally, the form of the loss biases generation of angles to φ = 0, as we illustrate in a simplified Markov chain model of training (Methods). One major theoretical limitation of traditional MD which carries over to Boltzmann generators is difficulty in sampling disconnected local energy minima (i.e. metastable states). Fundamentally, the neural networks used in Boltzmann generators are differentiable models which generate molecular internal coordinates from latent variables, and are therefore continuous functions from ℝ^m to ℝ^n (23). Furthermore, the Boltzmann generators originally proposed by Noé et al. (14) and in this work generate internal coordinates from the sampling of a single multidimensional Gaussian distribution

z ∈ ℝ^m ~ N(μ = 0, Σ = I)    ( 5 )

centered at μ = 0 with unit standard deviation in each coordinate. In Noé et al., m and n were both set to three times the number of atoms in the protein (i.e. the 3D coordinates). 
This distribution is spherically symmetric; however, in high dimensions, the volume of the unit n-ball tends to 0 even for modest values of n, implying that the density of the multidimensional Gaussian distribution is highly concentrated near the origin. Meanwhile, the probability density of the molecule's Boltzmann distribution is highly concentrated in disjoint regions of ℝ^n, since the energy minima of a molecule are separated by high energy barriers (low Boltzmann probability). Since z tends to the origin in the latent space, we are asking the neural network to approximate a one-to-many relation, which is not a function, let alone a continuous one. Instead, it may be beneficial to sample from a sum of Gaussians

z ~ Σ_i w_i N(μ = μ_i, Σ = I)    ( 6 )

and to require that sampling from distinct regions μ_i ≠ μ_j results in internal coordinates x_i and x_j which are also disjoint, such as through a repulsive loss on those pairs of x_i, x_j. This method would be analogous to metadynamics sampling (12), in which previously generated molecular states are avoided, formulated in the distributional sense. This method is also analogous to k-means clustering, in which each centroid is responsible for representing a single cluster in the data. Alternatively, one could use an ensemble of neural networks, with each neural network responsible for generating Boltzmann-distributed states for a single energy minimum and its neighborhood of conformations. Another theoretical issue with practical considerations is the size of the neural network necessary to represent the molecule. In the original Boltzmann generator proposed by Noé et al., all 3N atomic coordinates are predicted, with principal component analysis to remove the 6 components corresponding to rotations and translations of the entire structure as well as to learn correlations among the degrees of freedom (14). Noé et al. 
also required that the entire neural network be invertible so that exact gradients for their KL divergence between Gaussian priors and Gaussian posteriors may be backpropagated. Thus, the input dimension of their neural networks has the same 3N dimensions. Even for small proteins like the chicken villin headpiece we studied, N = 596. The neural networks quickly grow in number of trainable parameters. If we restrict the number of hidden units in any layer to a value less than 3N, then the Jacobian determinant of the neural network transformation will immediately become 0. Therefore, the number of parameters in an invertible network is at least 9N^2 (3,196,944 for chicken villin headpiece, not including bias parameters). In practice, we require multiple hidden layers with just as many parameters to gain sufficient approximation power for the neural network. Though in principle the number of trainable parameters may be reduced by choosing fixed values for some of the parameters, such a choice requires further assumptions on the symmetries of the protein. In this work, we did not assume a Gaussian posterior distribution for the degrees of freedom, and we postulated that many of the degrees of freedom are unimportant to protein dynamics near the energy minima. Though we no longer have a closed-form expression for the entropy, we gain an enormous computational advantage in the neural network. Since we used a Gaussian input of dimension 32 (Methods), our networks reproduce a maximum of 32 normal modes in the output. For backbone dihedral sampling, we required only a prediction of the φ, ψ, ω angles for each of the 36 residues, leading to a final dimension of 105. Our networks for chicken villin headpiece had 196,692 trainable parameters. It appeared that 32 normal modes were enough to describe the near-native structures to high accuracy (Fig. 3c), and we simultaneously accommodated for non-Gaussian posterior distributions for the dihedral angles. 
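The dimension counts above can be reproduced in a few lines (a sketch; the subtraction of 3 chain-end dihedrals is our plausible accounting of the stated output dimension of 105):

```python
def invertible_min_params(n_atoms):
    """Lower bound on weights for an invertible network over all 3N coordinates:
    any hidden layer narrower than 3N forces a zero Jacobian determinant, so each
    layer needs at least a (3N x 3N) weight matrix, i.e. 9*N^2 parameters."""
    return (3 * n_atoms) ** 2

def backbone_output_dim(n_residues):
    """phi/psi/omega per residue, minus 3 dihedrals undefined at the chain ends
    (an assumed accounting that reproduces the dimension quoted in the text)."""
    return 3 * n_residues - 3

villin = invertible_min_params(596)  # chicken villin headpiece, N = 596 atoms
out_dim = backbone_output_dim(36)    # 36 residues of 1VII
```

This makes the scaling argument concrete: the invertible-network bound grows quadratically in atom count, while the dihedral-only output grows linearly in residue count.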
In comparison to the original approach proposed by Noé et al., the methods we propose can handle larger proteins in less memory and with fewer trainable parameters. The major caveat is that each protein may require tuning of neural network hyperparameters, especially the minimum dimension of normal modes to predict. In the case of classical molecular force fields, the high energy barriers tend to be a result of physical singularities in Lennard-Jones and Coulomb potentials at low distance due to internuclear forces, with Lennard-Jones repulsion (∝ 1/r^12) dominating any electrostatic force (∝ 1/r^2) at low distance. One might hope that the high internuclear forces are UV divergences in the classical theory which disappear upon quantization. However, even in the full quantum field theory of the strong nuclear force, while the repulsion we observe is finite, it is still far larger than any of the forces due to electromagnetism, and we must avoid generating states in which the nuclei are too close. In traditional MD and in Monte Carlo methods, umbrella sampling is used to regularize the singularities; we implemented umbrella sampling in this work by directly regularizing the Lennard-Jones forces for nuclear repulsion to a finite, constant repulsive force. In Noé et al., a logarithmic regularization is performed on the total energy. There is additional regularization of the force field when training the neural network, accomplished through gradient clipping (24), dropout (25), and weight penalties (16). Such methods of regularization are forced upon the user in the process of training stable neural networks. In this sense, Boltzmann generators as originally presented are umbrella-sampled. Despite their shortcomings, Boltzmann generators maintain at least one significant advantage over traditional MD with umbrella sampling. 
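Capping the Lennard-Jones repulsion to a finite, constant force at short range, as described above, might look like the following sketch (the 12-6 functional form is standard, but epsilon, sigma, and the cap value here are illustrative, not the paper's constants):

```python
def lj_force_regularized(r, epsilon=1.0, sigma=1.0, f_max=1e3):
    """Radial Lennard-Jones force F(r) = 24*eps*(2*(sigma/r)**12 - (sigma/r)**6)/r,
    with the short-range singularity clamped to a constant maximum repulsion f_max
    (positive = repulsive, negative = attractive)."""
    sr6 = (sigma / r) ** 6
    f = 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r
    return min(f, f_max)

near = lj_force_regularized(0.01)  # deep inside the core: clamped to f_max
far = lj_force_regularized(2.0)    # beyond the minimum: weakly attractive
```

Without the clamp, the 1/r^13 force term overwhelms every other gradient in training; with it, the force field behaves like an umbrella-sampled potential.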
Boltzmann generators implicitly retain some memory of the entire training trajectory, and therefore they may reuse knowledge of the structure between different energy states. For example, an ideal Boltzmann generator learns the correlations among the internal coordinates, correlations which may hold between distinct energy minima. As a result, we may directly estimate the entropy of the internal coordinates, i.e., the elusive conformational entropy. Training of quantum circuits has recently emerged as a promising technique in molecular biophysics (26)(27)(28)(29)(30). Example applications of quantum computers include generation of small molecule structures (27,30), molecular docking (26,28), and optimization of molecular energies (29). Current state-of-the-art quantum computers suffer from noise and low memory capacity. By representing the energy landscape of a molecule in terms of its rotameric degrees of freedom, the differentiable rotamer sampling algorithm significantly reduces the number of parameters necessary to represent the molecule, from the naïve 3 coordinates to a much smaller set. Since quantum computers can hold a superposition of states in memory, it may be advantageous to create and train a quantum Boltzmann generator for small molecules such as alanine dipeptide. However, numerous theoretical and practical challenges remain in quantum circuit training, such as the barren plateau problem and the noisiness of current quantum computers (31). Significant challenges remain in the practical use of Boltzmann generators. Even though we were able to reproduce RMSFs with high correlation (>0.7, Fig. 3c), the resulting structures still resemble the native state. We found that the energy landscape of the protein was a highly sensitive function of both the temperature and learning rate; finetuning of both appears to be required to produce useful results. 
However, the methods we present in this work make the fine-tuning process more easily accessible to researchers. Future work may also examine the effect of neural network architecture and other hyperparameters on angle generation, since we did not study that effect here. Finally, it may be possible to further accelerate certain computations with a field-programmable gate array, which can be configured on a per-protein basis to perform rotamer sampling and energy computation. In conclusion, we present a comprehensive toolkit of differentiable methods for molecular science. Our contributions include: ad hoc propagation of forces from an arbitrary force field for cases in which rewriting the force field is infeasible, differentiable and parallel rotamer sampling/protein fragment assembly, a guide to writing molecular force fields in a differentiable programming framework, decoupling of energy and entropic estimation, and mathematical results on 3D point cloud alignment and 3D rotation representation which can be applied to problems in molecular geometry. We additionally address potential sources of bias in molecular structure generation and outline the approach to remaining sources of bias which we did not implement. We demonstrate that our methods are efficiently implementable on CPU and GPU, and mathematically sound. We hope that other researchers will find these methods and the accompanying reference code useful in investigating molecular energy landscapes.
Fig. 1: Differentiable rotamer sampling for Boltzmann generators. Given an arbitrary macromolecule, we identify all the rotameric degrees of freedom (in the case of a protein, these degrees of freedom are the dihedral angles of backbones and side chains). We use a neural network to generate changes to the dihedral angles, and perform the rotations on the protein structure. 
Finally, we evaluate the energy and forces of the resulting structure with respect to a molecular force field, and backpropagate the gradients through the sampling process to the neural network. In this way, we can train the neural network to produce states with low energy, allowing for the study of potential stable conformations of the macromolecule. The protein (chicken villin headpiece, PDB ID 1VII) is split into fragments, whose rotamers can be sampled in parallel. (b) For each rotameric degree of freedom, we split the macromolecule into two connected components at the associated bond, using a depth-first search on the graph of bonds. We then rotate (red) the smaller connected component (blue) about the bond axis by the neural network output Δφ, using Rodrigues' rotation formula. (c) To combine fragments after dihedrals have been sampled for each fragment, we use an alignment algorithm (Kabsch or quaternionic) on specific atoms in the backbone. Through this method, we can therefore explore the energy landscape of the macromolecule solely as a function of its internal, rotameric degrees of freedom. Because each step of this method is differentiable, we can backpropagate gradients through the rotamer sampling process to provide derivatives for Δφ. (d) Intuition for exponential acceleration of power iteration. We used repeated matrix squaring to achieve high matrix powers, leading to exponential acceleration of the traditional power iteration algorithm for finding the largest magnitude eigenvalue and associated eigenvector. With higher powers of the matrix multiplying an arbitrary vector, the v_2 component (along the subdominant eigenvector) is gradually suppressed.
Materials and methods
Ad hoc propagation of forces
Since MD force fields are optimized and parallelized to calculate forces in addition to energies, we propose using the forces directly as gradients for neural network training. 
Assume that a neural network f produces atomic positions x from Gaussian random variables z and neural network parameters θ:

x = f(z; θ)    ( 7 )

We sought to optimize θ with respect to the energy function U:

L_MD(θ) = U(f(z; θ))    ( 8 )

In other words, we sought to calculate the gradients

∂L_MD/∂θ_i = Σ_j ∂U/∂x_j ⋅ ∂x_j/∂θ_i = Σ_j ∂U/∂x_j ⋅ ∂[f(z; θ)]_j/∂θ_i    ( 9 )

However, by definition, the forces are the gradients of the energy with respect to the positions:

F_j = −∂U/∂x_j    ( 10 )

Therefore, we do not need to backpropagate gradients through the external molecular force field to provide gradients with respect to θ. Instead of differentiating L_MD to train the network, we propose to use the loss

L_fwd = −Σ_j F_j ⋅ x_j = −Σ_j F_j ⋅ [f(z; θ)]_j    ( 11 )

Notice that F_j is no longer a function of x (and therefore not of θ or z either), since we are using it as a constant input from the molecular force field. Taking a derivative,

∂L_fwd/∂θ_i = −Σ_j F_j ⋅ ∂x_j/∂θ_i = Σ_j ∂U/∂x_j ⋅ ∂x_j/∂θ_i = ∂L_MD/∂θ_i    ( 12 )

Since the gradients for L_fwd are the same as for L_MD, there is no theoretical difference between training on L_fwd instead of on L_MD. There is no need for a statistical estimate (i.e. KL divergence) of the gradients with respect to the force field energy, as was done in the original work by Noé et al. (14). This method is shown in Algorithm 1: FFDiff.
Physical interpretation of L_fwd
We can show that L_fwd has a physical meaning beyond being a convenient loss function. First, we note that L_fwd is manifestly rotation-invariant. It is the sum of the inner products of vectors in Euclidean space; rotation of the coordinate system rotates both the position and force contravariantly, leaving the inner product unchanged. Second, L_fwd is translation-invariant. Say that we translate a system of particles by a vector a. Then in the new primed coordinates, L_fwd′ is

L_fwd′ = −Σ_j F_j′ ⋅ x_j′ = −Σ_j F_j ⋅ (x_j − a)    ( 13 )

assuming the intramolecular forces are unchanged (F′ = F) with translation (x′ = x − a). 
Splitting up the net forces $\mathbf{F}_i$ into all pairwise forces $\mathbf{F}_{ij}$ (the force exerted on particle $i$ by particle $j$), and then splitting up the sum, we may write

$L'_{\mathrm{force}} = -\sum_{j > i} \mathbf{F}_{ij} \cdot (\mathbf{x}_i - \mathbf{t}) - \sum_{j < i} \mathbf{F}_{ij} \cdot (\mathbf{x}_i - \mathbf{t}) - \sum_{j = i} \mathbf{F}_{ij} \cdot (\mathbf{x}_i - \mathbf{t})$  (14)

By Newton's third law (assuming the molecular force field is conservative), every force must be paired with an equal and opposite force, so $\mathbf{F}_{ij} = -\mathbf{F}_{ji}$; in particular $\mathbf{F}_{ii} = -\mathbf{F}_{ii} = 0$, and therefore the third sum (the $j = i$ terms) vanishes. The remaining sums run over the upper and lower triangles of $\mathbf{F}_{ij}$:

$L'_{\mathrm{force}} = -\sum_{j > i} \mathbf{F}_{ij} \cdot (\mathbf{x}_i - \mathbf{t}) - \sum_{j < i} \mathbf{F}_{ij} \cdot (\mathbf{x}_i - \mathbf{t})$  (15)

By linearity,

$L'_{\mathrm{force}} = -\sum_{j > i} \mathbf{F}_{ij} \cdot \mathbf{x}_i - \sum_{j < i} \mathbf{F}_{ij} \cdot \mathbf{x}_i + \mathbf{t} \cdot \sum_{j > i} \left( \mathbf{F}_{ij} + \mathbf{F}_{ji} \right)$  (16)

Finally, we apply Newton's third law again so that the third term cancels, and therefore $L'_{\mathrm{force}}$ has no dependence on $\mathbf{t}$: it is translation-invariant.

To interpret the physical meaning of $L_{\mathrm{force}}$, it suffices to examine a simple system. Say there is a pair of particles connected by a spring, with particle 1 at the origin ($\mathbf{x}_1 = [0, 0, 0]$) and particle 2 at $\mathbf{x}_2 = [1, 0, 0]$. Assume that the spring is compressed, so that the force pushes the particles away from each other: $\mathbf{F}_1 = [-F, 0, 0]$ and $\mathbf{F}_2 = [F, 0, 0]$. In this case, we can directly calculate

$L_{\mathrm{force}} = -\mathbf{F}_1 \cdot \mathbf{x}_1 - \mathbf{F}_2 \cdot \mathbf{x}_2 = -[-F, 0, 0] \cdot [0, 0, 0] - [F, 0, 0] \cdot [1, 0, 0] = -F$  (17)

Hence, if the spring is instead stretched beyond its equilibrium length, then $L_{\mathrm{force}}$ is positive. Since $L_{\mathrm{force}}$ is invariant under rotation and translation of the system, our choice of coordinate system was irrelevant. Generalizing to many pairs of particles and many springs, the interpretation of $L_{\mathrm{force}}$ is that it measures the total compressive energy of the system, with negative values indicating the system is compressed with respect to equilibrium and positive values indicating the system is stretched with respect to equilibrium.
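The spring example (Eq. 17) and the two invariances can be verified numerically. A minimal sketch, with $F = 2.5$ as an illustrative force scale:

```python
import numpy as np

# Numeric check of Eq. 17 and of the rotation/translation invariance of
# L_force for a two-particle compressed spring (illustrative numbers).

def l_force(x, f):
    return -np.sum(f * x)

F = 2.5
x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
f = np.array([[-F, 0.0, 0.0], [F, 0.0, 0.0]])  # compressed: forces push apart

base = l_force(x, f)  # Eq. 17 predicts -F

# translate the whole system: internal forces unchanged, L_force unchanged
# (works because the net force sums to zero, per Newton's third law)
t = np.array([3.0, -1.0, 0.5])
translated = l_force(x + t, f)

# rotate the whole system about z by 30 degrees: rotate x and f together
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
rotated = l_force(x @ R.T, f @ R.T)
```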
Differentiable force fields

Using OpenMM 7.7 (32) as our reference, we rewrote force-field terms from OpenMM in terms of pure PyTorch operations, allowing for automatic differentiation of the molecular energy without the memory overhead of repeatedly transferring positional data between PyTorch (16) and OpenMM. Our implementation creates a custom PyTorch function for a provided molecule which stores the computational graph necessary to reproduce its energy. After it is prepared, the PyTorch energy function requires the user only to supply the coordinates (in nanometers) of each atom, in the same order as presented in the input molecular file. From the user-supplied atomic positions $\mathbf{x}_i$, we calculate all pairwise displacements

$\mathbf{r}_{ij} = \mathbf{x}_i - \mathbf{x}_j$  (18)

and Euclidean distances

$r_{ij} = \lVert \mathbf{r}_{ij} \rVert = \lVert \mathbf{x}_i - \mathbf{x}_j \rVert = \left( \sum_c \left( (x_i)_c - (x_j)_c \right)^2 \right)^{1/2}$  (19)

For the all-atom AMBER 14 force field (2), we implemented: (1) harmonic bond lengths, (2) harmonic bond angles, (3) pairwise atomic Coulomb interactions, (4) pairwise atomic Lennard-Jones potentials, (5) periodic backbone dihedral (torsional) angle energies, and (6) generalized Born implicit solvent energies. First, we imported the molecular topology into OpenMM, which assigns parameter values for each atom or tuple of atoms. Second, we referenced the OpenMM documentation to implement each of the forces.

Harmonic bond lengths: the AMBER force field provides particle indices representing a covalent bond $b = (i, j)$, the equilibrium bond length $r_{0,b}$, and the bond strength $k_b$. For each bond, we calculate an energy term with these parameters and the pairwise atomic distances:

$E_{\mathrm{harmonic\ bonds}} = \sum_b \tfrac{1}{2} k_b \left( r_{ij} - r_{0,b} \right)^2$  (20)

In OpenMM, the parameters are contained in a HarmonicBondForce object.

Harmonic bond angles: for every triple of atoms which are covalently connected in a linear chain, the AMBER force field provides particle indices $a = (i, j, k)$ in which $j$ is the middle atom, an equilibrium bond angle $\theta_{0,a}$, and a bond-angle strength $k_a$. We calculate the angle $\theta_{ijk}$ formed between $\mathbf{r}_{ji}$ and $\mathbf{r}_{jk}$ (the order of the indices is important) through the dot-product identity

$\cos \theta_{ijk} = \dfrac{\mathbf{r}_{ji} \cdot \mathbf{r}_{jk}}{\lVert \mathbf{r}_{ji} \rVert \lVert \mathbf{r}_{jk} \rVert} = \dfrac{\mathbf{r}_{ji} \cdot \mathbf{r}_{jk}}{r_{ji}\, r_{jk}}$  (21)

The corresponding energy term is then

$E_{\mathrm{harmonic\ angles}} = \sum_a \tfrac{1}{2} k_a \left( \theta_{ijk} - \theta_{0,a} \right)^2$  (22)
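The pairwise geometry (Eqs. 18-19) and the harmonic bond energy (Eq. 20) can be sketched in NumPy as follows; our actual implementation uses the analogous PyTorch operations, and the parameter values below are illustrative rather than AMBER-assigned:

```python
import numpy as np

# Sketch of Eqs. 18-20: pairwise displacements/distances from an N x 3
# position matrix, and the harmonic bond energy. Illustrative parameters.

x = np.array([[0.0, 0.00, 0.0],
              [0.1, 0.00, 0.0],
              [0.1, 0.15, 0.0]])            # positions in nm

disp = x[:, None, :] - x[None, :, :]        # r_ij = x_i - x_j      (Eq. 18)
r = np.sqrt(np.sum(disp ** 2, axis=-1))     # pairwise distances    (Eq. 19)

bonds = np.array([[0, 1], [1, 2]])          # bonded atom index pairs
r0 = np.array([0.10, 0.12])                 # equilibrium lengths (nm)
k = np.array([250000.0, 250000.0])          # strengths (kJ mol^-1 nm^-2)

r_b = r[bonds[:, 0], bonds[:, 1]]           # distances along each bond
e_bonds = np.sum(0.5 * k * (r_b - r0) ** 2)  # Eq. 20
```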
In OpenMM, the parameters are contained in a HarmonicAngleForce object.

Pairwise atomic Coulomb and Lennard-Jones interactions: for every pair of atoms, there is an electrostatic interaction as a function of the interatomic distance $r_{ij}$, due to the estimated partial charges $q_i$ from AMBER:

$E_{\mathrm{Coulomb}} = \sum_{i < j} \dfrac{k_e\, q_i q_j}{\epsilon_{\mathrm{solute}}\, r_{ij}}$  (23)

For every pair of atoms, there is also a Lennard-Jones potential, which approximates long-range dipole-dipole attractions and short-range nuclear repulsions. AMBER provides an energy scale $\epsilon_{ij}$ and an interaction distance $\sigma_{ij}$ for each pair of particles:

$E_{LJ} = \sum_{i < j} 4 \epsilon_{ij} \left[ \left( \dfrac{\sigma_{ij}}{r_{ij}} \right)^{12} - \left( \dfrac{\sigma_{ij}}{r_{ij}} \right)^{6} \right]$  (24)

Certain Lennard-Jones and Coulomb interactions are often ignored or modified for atoms which are within a certain number of bonds of each other. These modifications can be thought of as setting elements of the matrices $q_{ij}$, $\epsilon_{ij}$, and $\sigma_{ij}$ to specific values, and they are included in the OpenMM implementation of AMBER.

During regularization of the force field, we adjusted the Lennard-Jones potential so that, for all particle pairs below a certain distance, the force is a constant repulsive one. Given a maximum force magnitude for internuclear repulsion $F_{\max}$, for each Lennard-Jones interaction we first computed the magnitude of the force for that interaction,

$\left| F_{LJ,ij}(r_{ij}) \right| = \left| \dfrac{\partial E_{LJ,ij}}{\partial r_{ij}} \right| = 4 \epsilon_{ij} \left| \dfrac{12\, \sigma_{ij}^{12}}{r_{ij}^{13}} - \dfrac{6\, \sigma_{ij}^{6}}{r_{ij}^{7}} \right|$  (25)

At $r_{\min,ij} = 2^{1/6} \sigma_{ij}$, the Lennard-Jones interaction achieves its minimum value, and thus $F_{LJ,ij}(r_{\min,ij}) = 0$. This is the only minimum of the Lennard-Jones potential, and the derivative $\partial E_{LJ,ij} / \partial r_{ij}$ is a monotonically increasing function of the distance on $(0, r_{\min,ij}]$, with an asymptote at $r_{ij} = 0$ which tends to negative infinity. Therefore, to find the value $r_{\mathrm{regulated},ij}$ at which the magnitude of the repulsive force is equal to $F_{\max}$, we used the bisection method for root finding (33). We did not use the Newton-Raphson method or other methods which require the second derivative of the Lennard-Jones potential, for reasons of numerical stability. For a given value of $0 \le F_{\max} < \infty$, we know that the solution of the equation $\left| F_{LJ,ij}(r_{ij}) \right| - F_{\max} = 0$ lies in the interval $(0, r_{\min,ij}]$.
Therefore, starting with the initial guess $r_{ij}^{(0)} = r_{\min,ij} / 2$, we may bisect the interval repeatedly to determine the approximate location of the zero, depending on whether $\left| F_{LJ,ij}(r_{ij}) \right| > F_{\max}$ or $\left| F_{LJ,ij}(r_{ij}) \right| < F_{\max}$. We performed this bisection $n_{\mathrm{bisect}} = 100$ times, which should ensure a negligibly small error for $r_{\mathrm{regulated},ij}$. Since $r_{ij}$ is measured on the nanometer scale, the bisection method should be accurate to roughly 1 part in $2^{n_{\mathrm{bisect}}}$, or about $10^{-30}$ nanometers. We defined the regularized energy as

$E_{LJ,\mathrm{reg}} = \sum_{i < j} \begin{cases} 4 \epsilon_{ij} \left[ \left( \sigma_{ij} / r_{ij} \right)^{12} - \left( \sigma_{ij} / r_{ij} \right)^{6} \right], & r_{ij} > r_{\mathrm{regulated},ij} \\ C_{0,ij} - F_{\max}\, r_{ij}, & \text{otherwise} \end{cases}$  (26)

where $C_{0,ij}$ was chosen so that $E_{LJ,\mathrm{reg}}$ is continuous at $r_{ij} = r_{\mathrm{regulated},ij}$. By inspection, the force exerted by the regularized potential at close internuclear distances is the constant repulsion $F_{\max}$, as desired. In OpenMM, the parameters for Coulomb and Lennard-Jones interactions, as well as any modifications, are contained in a NonbondedForce object.

Periodic backbone dihedral (torsional) angle energies: for every linear chain of four atoms $d = (i, j, k, l)$, AMBER provides a periodic force for the dihedral angle — defined as the angle that $\mathbf{r}_{ji}$ makes with respect to $\mathbf{r}_{kl}$ when viewed in the plane whose normal is $\mathbf{r}_{jk}$ — specified by a periodicity $n_d$, energy scale $k_d$, and phase shift $\phi_{0,d}$. The dihedral angle $\phi_d$ is calculated by first normalizing $\mathbf{r}_{jk}$:

$\hat{\mathbf{r}}_{jk} = \dfrac{\mathbf{r}_{jk}}{r_{jk}}$  (27)

and projecting $\mathbf{r}_{ji}$ and $\mathbf{r}_{kl}$ onto the plane of interest:

$\tilde{\mathbf{r}}_{ji} = \mathbf{r}_{ji} - \left( \mathbf{r}_{ji} \cdot \hat{\mathbf{r}}_{jk} \right) \hat{\mathbf{r}}_{jk}, \qquad \tilde{\mathbf{r}}_{kl} = \mathbf{r}_{kl} - \left( \mathbf{r}_{kl} \cdot \hat{\mathbf{r}}_{jk} \right) \hat{\mathbf{r}}_{jk}$  (28)

Then the dot-product identity states that

$\tilde{\mathbf{r}}_{ji} \cdot \tilde{\mathbf{r}}_{kl} = |\tilde{\mathbf{r}}_{ji}|\, |\tilde{\mathbf{r}}_{kl}| \cos \phi_d$  (29)

To avoid numerical instabilities and to recover the dihedral angle over the full range $[-\pi, \pi)$ (34), we also compute

$\left( \hat{\mathbf{r}}_{jk} \times \tilde{\mathbf{r}}_{ji} \right) \cdot \tilde{\mathbf{r}}_{kl} = |\tilde{\mathbf{r}}_{ji}|\, |\tilde{\mathbf{r}}_{kl}| \sin \phi_d$  (30)

which is a special case of the polar sine, and finally

$\phi_d = \mathrm{atan2}\!\left( \left( \hat{\mathbf{r}}_{jk} \times \tilde{\mathbf{r}}_{ji} \right) \cdot \tilde{\mathbf{r}}_{kl},\; \tilde{\mathbf{r}}_{ji} \cdot \tilde{\mathbf{r}}_{kl} \right)$  (31)

The torsional energy is then

$E_{\mathrm{torsion}} = \sum_d k_d \left[ 1 + \cos\!\left( n_d \phi_d - \phi_{0,d} \right) \right]$  (32)

In OpenMM, the parameters are contained in a PeriodicTorsionForce object.
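The dihedral computation (Eqs. 27-31) can be sketched as below; the sign convention follows the cross-product order shown in Eq. 30, and the coordinates are illustrative:

```python
import numpy as np

# Sketch of Eqs. 27-31: project the outer bond vectors onto the plane
# normal to the central bond and recover the signed angle with atan2.

def dihedral(xi, xj, xk, xl):
    r_ji, r_jk, r_kl = xi - xj, xk - xj, xl - xk
    b = r_jk / np.linalg.norm(r_jk)          # Eq. 27
    p1 = r_ji - np.dot(r_ji, b) * b          # Eq. 28
    p2 = r_kl - np.dot(r_kl, b) * b
    y = np.dot(np.cross(b, p1), p2)          # sin term (Eq. 30)
    x = np.dot(p1, p2)                       # cos term (Eq. 29)
    return np.arctan2(y, x)                  # Eq. 31

# four illustrative points with a 90-degree dihedral
phi = dihedral(np.array([1.0, 0.0, 0.0]),
               np.array([0.0, 0.0, 0.0]),
               np.array([0.0, 1.0, 0.0]),
               np.array([0.0, 1.0, 1.0]))
```

Using atan2 of the two projected dot products avoids the instability of arccos near 0 and pi and recovers the sign of the angle.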
Generalized Born implicit solvent: we used the generalized Born implicit solvent with improved parameters (22, 35), labeled GBn2 in OpenMM or igb=8 in AMBER. The functional form of this energy is complicated, so we present only the calculations necessary to reproduce it. Global parameters for the implicit solvent are $\mathrm{cutoff}_{\mathrm{neck}} = 0.68$, $\mathrm{scale}_{\mathrm{neck}} = 0.826836$, $\mathrm{offset} = 0.0195141$, $\epsilon_{\mathrm{solute}} = 1$, $\epsilon_{\mathrm{solvent}} = 78.5$, and integral corrections $28.3919551$ and $0.14$; per-particle and per-pair parameters are the Born radii $\mathrm{radius}_i$, the Born/van der Waals cutoff adjustments $\mathrm{or}_i$ and $\mathrm{sr}_i$, partial charges $q_i$, effective-radii scaling parameters $(\alpha_i, \beta_i, \gamma_i)$, and pairwise neck-integral parameters $d_{0,ij}$ and $m_{0,ij}$. Given pairwise particle distances $r_{ij}$, we calculate

$D_{ij} = r_{ij} - \mathrm{sr}_j$  (33)

$L_{ij} = \max(\mathrm{or}_i, D_{ij})$  (34)

$U_{ij} = r_{ij} + \mathrm{sr}_j$  (35)

$I_{\mathrm{vdw},ij} = \begin{cases} \dfrac{1}{2} \left[ \dfrac{1}{L_{ij}} - \dfrac{1}{U_{ij}} + \dfrac{1}{4} \left( r_{ij} - \dfrac{\mathrm{sr}_j^2}{r_{ij}} \right) \left( \dfrac{1}{U_{ij}^2} - \dfrac{1}{L_{ij}^2} \right) + \dfrac{1}{2 r_{ij}} \log \dfrac{L_{ij}}{U_{ij}} \right], & r_{ij} + \mathrm{sr}_j > \mathrm{or}_i \\ 0, & \text{otherwise} \end{cases}$  (36)

together with the neck integral

$I_{\mathrm{neck},ij} = \begin{cases} \dfrac{\mathrm{scale}_{\mathrm{neck}}\, m_{0,ij}}{1 + 100 \left( r_{ij} - d_{0,ij} \right)^2 + 300{,}000 \left( r_{ij} - d_{0,ij} \right)^6}, & r_{ij} < \mathrm{radius}_i + \mathrm{radius}_j + \mathrm{cutoff}_{\mathrm{neck}} \\ 0, & \text{otherwise} \end{cases}$  (37)

For each particle, note that $I_{ii} = 0$. We then compute

$I_i = \sum_j \left( I_{\mathrm{vdw},ij} + I_{\mathrm{neck},ij} \right)$  (39)

$\Psi_i = I_i\, \mathrm{or}_i$  (40)

$R_i = \left[ \dfrac{1}{\mathrm{or}_i} - \dfrac{\tanh\!\left( \alpha_i \Psi_i - \beta_i \Psi_i^2 + \gamma_i \Psi_i^3 \right)}{\mathrm{radius}_i} \right]^{-1}$  (41)

$\mathrm{or}_{\mathrm{offset},i} = R_i + \mathrm{offset}$  (42)

$f_{GB,ij} = \sqrt{ r_{ij}^2 + R_i R_j \exp\!\left( - \dfrac{r_{ij}^2}{4 R_i R_j} \right) }$  (43)

The implicit solvent energy then takes the standard generalized Born form,

$E_{GB} = -\dfrac{k_e}{2} \left( \dfrac{1}{\epsilon_{\mathrm{solute}}} - \dfrac{1}{\epsilon_{\mathrm{solvent}}} \right) \sum_{i,j} \dfrac{q_i q_j}{f_{GB,ij}}$

plus a surface-area correction built from the offset radii $\mathrm{or}_{\mathrm{offset},i}$ and the integral-correction constants above. In OpenMM, the parameters are contained in a CustomGBForce object. The implicit solvent modifies the Coulomb energies computed earlier. We computed all non-indexed sums over pairs of particles using the full matrix format, which duplicated some of the necessary computations. We verified that the energies and gradients from our PyTorch implementation matched those of OpenMM to within 5% (see Code).

Avoiding singularities in the energy function during backpropagation

Numerical singularities in gradients arose both from true singularities in the energy function and from artificial singularities due to the computational graph. True singularities in the gradient arose during both the forward and backward passes from square roots, logarithms, and negative powers.
In all cases in our energy function, we are guaranteed that the argument of the function is non-negative. Therefore, we avoided true singularities by adding a small positive offset to all affected computations. For computations which were performed but later discarded, such as those involving the diagonal of the distance matrix, we manually set specific terms to fixed constant values using PyTorch's masked_fill function, so that during backpropagation the gradients for those terms would be fixed to 0 instead of NaN.

Preparation of macromolecule graph metadata for rotameric sampling

Given an arbitrary macromolecule whose atoms are covalently bonded into a single connected structure, we sought to modify only the dihedral angles. First, we created an undirected graph (36) $G = (V, E)$ of the macromolecule, with atoms as the nodes and covalent bonds as the edges $E \subseteq V \times V$. Second, because many macromolecules contain cycles, each of which reduces the number of degrees of freedom by one, we performed a depth-first search (which generates a tree) to break each cycle at a single bond while retaining all other bonds as dihedral degrees of freedom: $T = (V, E_T)$. Third, we removed all leaf nodes and the edges terminating on leaves, since the bonds corresponding to such edges do not represent a dihedral angle: $G_{\mathrm{dih}} = (V_{\mathrm{dih}}, E_{\mathrm{dih}})$. For each remaining edge $e \in E_{\mathrm{dih}}$, we assigned an output of the neural network to control the dihedral angle for that bond. Finally, for each dihedral edge $e \in E_{\mathrm{dih}}$, we used depth-first search to calculate the two connected components that result from the removal of $e$. We recorded the smaller of the two connected components, $C(e) \subseteq V$, as the atoms which we would rotate about the dihedral axis represented by $e$. This method is shown in Algorithm 2: PrepRot.

Differentiable rotamer sampling

We started with the $N \times 3$ matrix of positions of all the atoms in the macromolecule. For each of the angles $\theta_d$ predicted by the neural network, we selected the corresponding edge $e_d = (a_d, b_d)$, where $a_d$ and $b_d$ represent the two atoms forming the covalent bond.
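The component-splitting step of PrepRot can be sketched with a plain graph search. The snippet below (pure Python, standing in for the graph library used in our pipeline) removes one bond edge from a toy acyclic bond graph and returns the smaller connected component, i.e. the atoms to rotate:

```python
from collections import deque

# Sketch of PrepRot's split step: remove one bond (edge) from the bond
# graph and return the smaller of the two resulting connected components.

def smaller_component(adj, edge):
    a, b = edge
    seen, queue = {a}, deque([a])
    while queue:  # BFS from one endpoint, never crossing the removed edge
        u = queue.popleft()
        for v in adj[u]:
            if (u, v) in ((a, b), (b, a)) or v in seen:
                continue
            seen.add(v)
            queue.append(v)
    other = set(adj) - seen
    return seen if len(seen) <= len(other) else other

# toy molecule: chain 0-1-2-3 with a branch 2-4
adj = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2], 4: [2]}
rotating_atoms = smaller_component(adj, (1, 2))
```

Rotating only the smaller component halves the work on average and leaves the larger part of the molecule fixed in space.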
Parallelization of differentiable rotamer sampling with improved memory usage

To take advantage of the parallel-computing potential of GPU and TPU backends, we followed an approach like that of AlQuraishi's parallelized natural extension reference frame (pNeRF) algorithm (39, 40); however, our method generalizes to all rotational degrees of freedom. We split the protein chain into $n_f$ fragments, splitting at the peptide bond between the carbonyl carbon of residue $i$ and the amide nitrogen of residue $i + 1$. We recorded the positions of the atoms in each segment, $\mathbf{x}^k_i$, where $k$ labels the fragment and $i$ labels the atom. To the N-terminal end of fragment $k$, we appended the positions of four atoms near the peptide bond (the C and O atoms from the C-terminal residue of fragment $k - 1$ and the N and C-alpha atoms from the N-terminal residue of fragment $k$); we selected the same four atoms at the C-terminal end of fragment $k$ (the C and O atoms from the C-terminal residue of fragment $k$ and the N and C-alpha atoms from the N-terminal residue of fragment $k + 1$). Next, we applied our original rotamer-sampling method to each fragment independently. We then aligned the C-terminal of fragment $k$ to the N-terminal of fragment $k + 1$, thus joining the fragments together. Finally, we applied dihedral rotations, using Rodrigues' formula as discussed previously, to the peptide bonds at which we initially split the protein chain, thus recovering the $n_f - 1$ degrees of freedom lost upon splitting the chain into $n_f$ pieces.

In addition to allowing parallelization, this approach reduced memory usage. In our original dihedral-sampling method, instead of transforming each position separately for each dihedral angle, we transformed all positions in the protein simultaneously and then masked those positions which were not part of the associated connected component $C(e_d)$.
Assuming that PyTorch's JIT compiler is not able to optimize the resulting computational graph, this approach can be memory-intensive and computationally inefficient, with roughly $n_{\mathrm{dih}} \times n_{\mathrm{atoms}}$ 3D matrix-vector multiplications and $2 \times n_{\mathrm{dih}} \times n_{\mathrm{atoms}}$ 3D vector additions. In backpropagation, each of the intermediate matrices must be retained, resulting in $n_{\mathrm{dih}} \times n_{\mathrm{atoms}} \times 3$ floating-point numbers being stored (a copy of the position matrix for each dihedral angle). In the parallel approach, ignoring the extra positions appended to each fragment, the number of dihedrals per fragment is roughly $n_{\mathrm{dih}} / n_f$, and the number of positions per fragment is similarly $n_{\mathrm{atoms}} / n_f$. The total number of floating-point operations is therefore $n_f \times \left( n_{\mathrm{dih}} / n_f \right) \times \left( n_{\mathrm{atoms}} / n_f \right) = n_{\mathrm{dih}}\, n_{\mathrm{atoms}} / n_f$, so we have reduced the raw number of operations by a factor of $n_f$. The analogous calculation for memory usage shows a reduction by a factor of $n_f$. Assuming PyTorch launches and runs all $n_f$ kernels simultaneously on a GPU, we can achieve a speed-up of $n_f^2$ over the original dihedral-sampler approach for large proteins, while using $1/n_f$ times as much memory. Our parallelized version of differentiable rotamer sampling is summarized in Algorithm 6: DiffRotParallel. For practical purposes, lines 1-21 of Algorithm 6 only need to be executed once, and the rotamer sampling from line 22 onward may be placed in a separate function.

Alignment of point clouds

We used two methods to align point clouds. The first was the well-known Kabsch algorithm (41), and the second was an alternative approach due to a quaternionic derivation by Coutsias et al. (42). We chose to employ the second method, since it reduces to a largest-magnitude eigenvalue/eigenvector problem.
Given two sets of positions $X$ and $Y$, represented by $N \times 3$ matrices $X_{ic}$ and $Y_{ic}$, with $i$ labeling the particle number and $c$ labeling the 3D component, the goal is to find the best-fit rotation matrix $R$ and translation which transform the positions of the particles in $X$ to the corresponding positions in $Y$. We first center the two sets of points, by averaging the 3D components over all particles:

$\bar{x}_c = \dfrac{1}{N} \sum_i X_{ic}, \quad \bar{y}_c = \dfrac{1}{N} \sum_i Y_{ic}, \quad \tilde{X}_{ic} = X_{ic} - \bar{x}_c, \quad \tilde{Y}_{ic} = Y_{ic} - \bar{y}_c$  (57)

We then calculate the covariance matrix

$H = \tilde{X}^{\mathsf{T}} \tilde{Y}$  (58)

In the Kabsch method, we calculate the singular value decomposition

$H = U \Sigma V^{\mathsf{T}}$  (59)

Since the Kabsch method may result in improper rotations, we correct for changes in basis orientation by calculating

$d = \mathrm{sign}\!\left( \det\!\left( V U^{\mathsf{T}} \right) \right)$  (60)

and the optimal rotation matrix is expressed as

$R = V \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & d \end{pmatrix} U^{\mathsf{T}}$  (61)

In the quaternionic approach, the quaternion associated with the optimal rotation in 3D is the eigenvector $[q_0, q_1, q_2, q_3]$ associated with the largest-magnitude eigenvalue of the traceless, symmetric matrix

$\begin{pmatrix} H_{11} + H_{22} + H_{33} & H_{23} - H_{32} & H_{31} - H_{13} & H_{12} - H_{21} \\ H_{23} - H_{32} & H_{11} - H_{22} - H_{33} & H_{12} + H_{21} & H_{13} + H_{31} \\ H_{31} - H_{13} & H_{12} + H_{21} & -H_{11} + H_{22} - H_{33} & H_{23} + H_{32} \\ H_{12} - H_{21} & H_{13} + H_{31} & H_{23} + H_{32} & -H_{11} - H_{22} + H_{33} \end{pmatrix}$  (62)

To convert the quaternion $q = (q_0, q_1, q_2, q_3)$ to a rotation matrix, we compute

$R = \begin{pmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2 (q_1 q_2 - q_0 q_3) & 2 (q_1 q_3 + q_0 q_2) \\ 2 (q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2 (q_2 q_3 - q_0 q_1) \\ 2 (q_1 q_3 - q_0 q_2) & 2 (q_2 q_3 + q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{pmatrix}$  (63)

In the quaternionic approach, the sign of the eigenvalue determines the orientation of the rotation. We assumed that the ideal proper rotation for our uses always has the largest-magnitude eigenvalue. The quaternion approach is summarized in Algorithm 4: QuaternionAlignment.
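The Kabsch branch (Eqs. 57-61) can be sketched and checked by recovering a known rotation from illustrative data:

```python
import numpy as np

# Sketch of Eqs. 57-61: center both point clouds, build the covariance
# matrix, and recover the proper rotation via SVD. Illustrative data.

def kabsch(X, Y):
    Xc = X - X.mean(axis=0)                      # centering (Eq. 57)
    Yc = Y - Y.mean(axis=0)
    H = Xc.T @ Yc                                # covariance (Eq. 58)
    U, S, Vt = np.linalg.svd(H)                  # Eq. 59
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # Eq. 60 (proper rotation)
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # Eq. 61

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
angle = 0.8
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Y = X @ R_true.T + np.array([1.0, -2.0, 0.5])    # rotate, then translate

R = kabsch(X, Y)
```

The determinant correction (Eq. 60) is what prevents the SVD solution from returning a reflection when the point clouds are nearly planar or mirrored.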
Differentiable largest eigenvalue and associated eigenvector of a square matrix

Since linear-algebra routines like torch.linalg.eigh for Hermitian matrices solve for all eigenvalues of the matrix, we used a modified version of the power-iteration algorithm to determine only the largest eigenvalue and its associated eigenvector. Additionally, this method has the advantage of being implemented in terms of pure PyTorch operations (matrix multiplication and floating-point division), which is convenient for PyTorch backends, such as Apple's Metal Performance Shaders, that do not yet have complete coverage of linear-algebra operations.

Symmetric matrices are diagonalizable, have real eigenvalues, and have an orthonormal eigenbasis. Given such a matrix $A$ — for example, the $4 \times 4$ matrix used in the quaternionic alignment — we may write

$A = Q \Lambda Q^{\mathsf{T}}$  (64)

where $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$ is a diagonal matrix of eigenvalues and $Q$ is an orthonormal matrix such that $Q Q^{\mathsf{T}} = I$ and the $i$-th column of $Q$ is a normalized eigenvector $\mathbf{q}_i$ corresponding to $\lambda_i$. Then for any matrix power $k$, we may write

$A^k = \left( Q \Lambda Q^{\mathsf{T}} \right) \left( Q \Lambda Q^{\mathsf{T}} \right) \cdots \left( Q \Lambda Q^{\mathsf{T}} \right) = Q \Lambda^k Q^{\mathsf{T}}$  (65)

Since $\Lambda$ is diagonal, its powers are also diagonal. Carrying out the matrix multiplications,

$A^k = \sum_i \lambda_i^k\, \mathbf{q}_i \mathbf{q}_i^{\mathsf{T}}$  (66)

When $k$ is large, the relative contribution of each eigenvalue and its eigenvectors to $A^k$ decreases in comparison to that of the maximum-magnitude eigenvalue/eigenvector pair. To achieve large powers of $A$, we may repeatedly square $A$, so that the relative contributions of the smaller eigenvalues decrease exponentially; with a large number of squarings $k$ (in practice we chose $k = 20$),

$A^{2^k} \approx \lambda_{\max}^{2^k}\, \mathbf{q}_{\max} \mathbf{q}_{\max}^{\mathsf{T}}$  (67)

In practice, each time we square $A$, we need to normalize by the largest-magnitude element (or any other matrix norm) to prevent the entries from drifting off to infinity or zero due to the factor $\lambda_{\max}^{2^k}$. We may directly read off $\mathbf{q}_{\max}$ from the outer product, because each row and each column of $\lambda_{\max} \mathbf{q}_{\max} \mathbf{q}_{\max}^{\mathsf{T}}$ is some multiple of $\mathbf{q}_{\max}$.
To recover the eigenvalue $\lambda_{\max}$, we simply apply the original matrix, using the eigenvector equation $A \mathbf{q}_{\max} = \lambda_{\max} \mathbf{q}_{\max}$, and compute

$\lambda_{\max} \approx \dfrac{1}{n} \sum_i \dfrac{\left( A \mathbf{q}_{\max} \right)_i}{\left( \mathbf{q}_{\max} \right)_i}$  (69)

The power-iteration-through-repeated-squaring method is summarized in Algorithm 5: PowerIterWithSquaring. Since we only required this method for a few $4 \times 4$ covariance matrices, it contributed a negligible amount of overhead to the overall algorithm. It is novel to this work, and it can be thought of as a modified version of the power-iteration algorithm (43, 44).

Estimation of entropy

For estimation of the distribution entropy of our Boltzmann generators, we used a method tailored for multivariate circular distributions (45), which attempts to mitigate correlations among the angles. The metric between two sets of angular samples $\boldsymbol{\theta}, \boldsymbol{\theta}'$ was defined via the arclength on the unit circle for each angle:

$D^2(\boldsymbol{\theta}, \boldsymbol{\theta}') = \sum_d \left( \pi - \left| \pi - \left| \theta_d - \theta'_d \right| \right| \right)^2$  (70)

Given a batch of samples $\Phi = [\boldsymbol{\theta}_1, \boldsymbol{\theta}_2, \ldots]$, we computed the full matrix of pairwise distances $[D(\boldsymbol{\theta}_a, \boldsymbol{\theta}_b)]$ within the batch, found the nearest neighbor $\boldsymbol{\theta}'$ of each sample according to the metric $D$, and estimated the entropy of each sample as (Eq. 17 in the original formulation (45), with first nearest neighbors corresponding to $k = 1$)

$\hat{H}(\boldsymbol{\theta}) = \log D(\boldsymbol{\theta}, \boldsymbol{\theta}')$  (71)

averaged over the entire set of samples $\boldsymbol{\theta} \in \Phi$. Since we were only using the entropy to provide approximate gradients to the Boltzmann generator, we ignored all constants which correct for bias in the numerical value of the entropy in the original formula.

Estimation of the temperature of Boltzmann-generated samples

We assumed that the energies of Boltzmann-generated samples follow an exponential distribution,

$p(E) \sim e^{-\beta E}$  (72)

To reduce the influence of outliers on our estimate, we used the median energy of the sampled states as an estimator of the inverse temperature $\beta$. It is known analytically (46) that

$E_{\mathrm{median}} = \dfrac{\log 2}{\beta}$  (73)

Additional useful identities

To aid in debugging of rotation matrices, we made use of the formulas for the rotation angle and axis of an arbitrary $3 \times 3$ rotation matrix $R$:

$\theta = \cos^{-1}\!\left( \dfrac{\mathrm{tr}\, R - 1}{2} \right)$  (74)

$\mathrm{axis} = \dfrac{1}{2 \sin \theta} \left[ R_{32} - R_{23},\; R_{13} - R_{31},\; R_{21} - R_{12} \right]^{\mathsf{T}}$  (75)

where $\mathrm{tr}\, R$ is the trace. These identities are direct consequences of Rodrigues' rotation formula.
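The repeated-squaring idea (Eqs. 64-69) can be sketched in NumPy as follows; the column-extraction step assumes the dominant eigenvector has a nonzero first component, which holds for the illustrative matrix used here:

```python
import numpy as np

# Sketch of the PowerIterWithSquaring idea (Eqs. 64-69): repeatedly square
# a symmetric matrix (normalizing each time) so the dominant eigenvector
# emerges exponentially fast, then recover the eigenvalue from A v = lam v.

def dominant_eig(A, n_squarings=20):
    B = A.copy()
    for _ in range(n_squarings):
        B = B @ B
        B = B / np.max(np.abs(B))     # normalize to avoid overflow/underflow
    v = B[:, 0]                       # any column is ~ a multiple of q_max
    v = v / np.linalg.norm(v)
    lam = np.mean((A @ v) / v)        # Eq. 69
    return lam, v

# illustrative symmetric matrix; largest eigenvalue is 3 + sqrt(3)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, v = dominant_eig(A)
```

Twenty squarings correspond to an effective power of $2^{20}$, so the ratio of sub-dominant to dominant contributions is suppressed to a vanishingly small value after only twenty matrix multiplies.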
Learning rate tuning

To ensure convergent training, we found that a learning rate of $\eta = 10^{-5}$ was necessary to prevent divergence during training due to large gradients. To arrive at this value of $\eta$, we backpropagated gradients from the force-field and entropic terms separately to the angles output by the neural network (i.e., backpropagation only through the rotameric sampling and entropy estimation portions of the computational graph). We monitored these gradients over the course of a training session and found that the angular gradients had magnitudes on the order of $10^3$. Therefore, parameters are tuned at a rate of approximately $10^3 \cdot \eta$ per epoch, ignoring the momentum contributions in Adam (47). With the choice $\eta = 10^{-5}$, we expect parameter tuning at a rate of roughly 0.01 per epoch; along with the choice of gradient clipping at 10.0, this enforces a large dynamic range of gradients, with magnitudes from under 0.01 up to 10.0.

Neural network architecture

We used simple feedforward neural networks as our Boltzmann generators, with 32 latent variables, 10 layers of 128 hidden units, dropout with a rate of 0.3 (25), residual connections (48) between every other hidden layer, and LeakyReLU (49, 50) activations with a coefficient of 0.3. The final output layer dimension equals the total number of dihedrals we wish to sample.

Traditional molecular dynamics

We performed traditional MD using the AMBER 14 force field (2) with generalized Born implicit solvent (18-22) in OpenMM (32). We used a Langevin thermostat at 310 K, a friction coefficient of 91 ps$^{-1}$, and a timestep of 1 fs. We simulated $10^6$ timesteps (a 1 ns trajectory) and recorded the positions of all atoms every 1,000 timesteps.

Order parameters

We calculated the root mean square deviation (RMSD, averaged over the entire structure) and the root mean square fluctuation (RMSF, averaged over a trajectory per alpha-carbon), both with respect to reference structure positions $\mathbf{x}_{i,0}$:

$\mathrm{RMSD}(\{\mathbf{x}_i\}) = \sqrt{ \dfrac{1}{N} \sum_i \lVert \mathbf{x}_i - \mathbf{x}_{i,0} \rVert^2 }$

$\mathrm{RMSF}_i = \sqrt{ \dfrac{1}{T} \sum_t \lVert \mathbf{x}_i(t) - \mathbf{x}_{i,0} \rVert^2 }$
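The two order parameters can be sketched directly from their definitions; the toy trajectory below is illustrative:

```python
import numpy as np

# Sketch of the order parameters: RMSD of one structure, and per-atom RMSF
# over a trajectory, both relative to reference positions x_{i,0}.

ref = np.zeros((4, 3))                            # reference positions
traj = np.stack([ref + np.array([0.1, 0.0, 0.0]),
                 ref + np.array([-0.1, 0.0, 0.0])])  # 2 frames, 4 atoms

def rmsd(x, ref):
    # root mean square deviation, averaged over all atoms of one frame
    return np.sqrt(np.mean(np.sum((x - ref) ** 2, axis=-1)))

def rmsf(traj, ref):
    # per-atom root mean square fluctuation, averaged over frames
    return np.sqrt(np.mean(np.sum((traj - ref) ** 2, axis=-1), axis=0))

r0 = rmsd(traj[0], ref)   # every atom displaced by 0.1 -> RMSD = 0.1
fl = rmsf(traj, ref)      # each atom fluctuates by 0.1 about the reference
```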
Implementation details

All algorithms were implemented in Python 3.10 with PyTorch 1.13 (16). We loaded molecular topology and geometry using OpenMM 7.7 (32). We used PDBFixer 1.8.2 (32) to fix errors in PDB files and to model hydrogens, and NetworkX 2.8.4 (51) for all graph algorithms. We used a batch size of 8 and the Adam optimizer (47), with learning rate $\eta = 10^{-5}$, momentum coefficients $\beta_1 = 0.9$ and $\beta_2 = 0.999$, and machine-tolerance offset $\epsilon = 10^{-8}$. We also used a loss weight of $\lambda_{\mathrm{weight}} = 10^{-2}$ for weight-decay regularization. For the molecular force field, we used AMBER 14 parameters (2) as provided by OpenMM (32), and the implicit generalized Born solvent (GBn2) (18, 20, 22, 35). Other packages used include NumPy 1.23.4 (52, 53) and Pandas 1.5.2 (54) for data organization, Matplotlib 3.6.2 (55) for plotting, PyMOL 2.5.0 (56) and Mol* (57) for macromolecule visualization, and SciPy 1.9.3 (58). Training of models was performed in float32 accuracy on the CPU only of an M1 Max MacBook Pro with 64 GB RAM. Benchmarking was also performed on a single NVIDIA Tesla T4 GPU with 16 GB RAM on a Linux system. For reproducibility, we set the random seed for PyTorch to 0.

To optimize the speed of dihedral-angle application, we used static data structures and applied PyTorch's just-in-time compilation to the dihedral sampling function as well as to the generative neural networks. This technique allows PyTorch to optimize numerical array operations through fusion of kernels and memory locality. As a result, we observed that the primary bottlenecks of our pipeline were the force-field computation and the transfer of positional data between PyTorch and OpenMM.

Tables

Table S1: Units of measurement and specific numerical values used to represent molecules and their properties in OpenMM and PyTorch. This table may be used to convert the numerical outputs of the computations to predicted measurements.
Fig. 3: Evaluating the consequences of neural network pre-training and entropy (chicken villin headpiece, PDB ID 1VII). We only sampled the protein backbone dihedrals. (a)-(b) Examining the effect of pre-training to produce the native structure, as well as the role of entropy in neural network training. (a) The minimum-energy structure produced from ~600 structures across 3 trained networks, not trained on the entropy. (b)-(c) Correlation coefficients between traditional-MD alpha-carbon root mean square fluctuations (RMSF) and neural-network RMSF: (b) neural networks trained without entropic estimation; (c) neural networks trained on the full loss described in Results. Entropy training is necessary to reproduce traditional RMSFs. (d) Distribution of all correlation coefficients from (b), with median 0.23 (dashed line). (e) Distribution of all correlation coefficients from (c), with median 0.63 (dashed line). We performed a Fisher r-to-z transform on the data for (d) and (e) and performed a t-test for unequal variances ($t = -39.78$, $p < 0.001$). Training with entropy (e) better reproduces relative RMSFs than without (d). (f)-(h) Traditional-MD RMSF versus neural-network RMSF for the four networks with the highest correlation: (f) RT = 0.62, 1200 epochs, r = 0.73; (g) RT = 0.62, 1000 epochs, r = 0.75; (h) RT = 9.06, 200 epochs, r = 0.75.

Fig. 4: Training characteristics across a range of temperatures. In these plots, we examine the effect of training on the energies of output structures; for rotamer sampling, we only sampled the protein backbone dihedrals. (a)-(c) For each value of the temperature, we trained three neural networks, first on a loss to reproduce the native structure for 1,000 epochs, and then on the Boltzmann loss of the full energy and entropy for up to 2,400 epochs. Colorbars indicate the numerical value for each square. (a) Each square represents the median energy of ~600 generated structures across all three networks. (b) Minimum energy of the structures.
(c) The median RMSD from the native structure. (d) and (e) Initial energy distributions at $RT = 0.62$ kJ/mol and 10 kJ/mol, respectively. (g) and (h) Final energy distributions at the same temperatures. (f) Energy distributions as a function of training epochs for one of the networks we trained at $RT = 0.62$ kJ/mol. (i) Energy distributions as a function of training epochs for one of the networks we trained at $RT = 10$ kJ/mol. Higher temperatures homogenize the sampled structures in terms of energy, as seen in (a)-(b). In (d)-(i), we observe that training of the neural networks equilibrated within a few hundred epochs.

Fig. 5: Benchmarking of the rotamer sampler and the energy function written in PyTorch. (a) and (b) Performance of the parallel dihedral sampler on chicken villin headpiece (PDB ID 1VII), split into four fragments. We compared CPU-only computation on an M1 Max MacBook Pro (blue) to an NVIDIA Tesla T4 GPU with 16 GB RAM (orange). We observed a plateau in the performance increase above a batch size of 4096, indicating saturation of the GPU with CUDA kernels. (c) and (d) Performance of the non-parallel (blue) and parallel (orange) versions of the dihedral sampler. (e) and (f) The M1 Max (blue) versus the NVIDIA GPU (orange) on the energy function. For batch sizes of 256 and higher, the 16 GB GPU ran out of memory.
We calculated the axis of rotation with components explicitly that values of outside the range [ 2/I , 2*, ] are discouraged ( 1 = Y = 1 1Z ) and that 2*, and 2/I are less likely than other values of which lie strictly within the range ( ). In practice, there may be other transitions from outside the range to within the range, which is controlled by the learning rate. For example, replacing 1" = 1# = Y& = Y% = 1 " and 1* = Y* = 0 for all other in Eq. ⋯ *, ] = Á *1 ( *1 ) *1 ( *" ) ⋯ *1 ( *, ) *" ( *1 ) *" ( *" ) ⋯ *" ( *, ) ⋮ ⋮ ⋱ ⋮ *, ( *1 ) *, ( *" ) ⋯ *, ( *, )  Thus with large , (in practice we chose = 20) we have; ≈ 2/I ; 2/I 2/I S Given a batch of samples Φ = [ , , ⋯ , ], we then computed the nearest neighbor for each sample in the batch Φ 7 = [ 7 , 7 , ⋯ , 7 ] according to the metric , and estimated the entropy of each sample as (Eq. 17 in the original manuscript(45), with first nearest neighbors corresponding to = 1) be used to convert the numerical outputs of the computations to predicted measurements.Physical quantity Physical units Numerical value during computation distance ( *3 ) nanometer (nm) 1 energy ( , *3 ) kilojoule / mole (kJ mol $1 ) 1 electric charge ( ) elementary charge ( = 1.609 × 10 $1[ C) 1 Avogadro's constant ( ! ) unitless 6.022 × 10 "# Ideal gas constant ( ) kilojoule / (mole Kelvin) (kJ mol $1 K $1 ) 8.314 × 10 $# Coulomb constant x 1 ab 5 y kJ nm mol $1 $" 138.9354576 Solute (protein) relative electric permittivity ( A>BJ.) ) unitless 1 Solvent (water) relative electric permittivity ( A>BK),. 
) unitless 78.5 Acknowledgements:Funding: We acknowledge support from the National Institutes of Health (NIH) 1R35 GM134864, 1RF1 AG071675, 1R01 AT012053, the National Science Foundation 2210963, and the Passan Foundation.( 48 )We then took the previously calculated connected component( 2 ), translated all the particles in that connected component by − 2 , applied the rotation matrix 2 , and translated the selected particles back by 2 :After performing this transformation for the dihedrals we wish to sample, we return our final position matrix as the output, *3 (4W1) .This method of differentiable rotamer sampling is shown in Algorithm 3: DiffRot.Bias-free, continuous representation of dihedral anglesWe used simple feedforward neural networks, with a continuous representation of angles for the output(23). To avoid biasing predicted angles, we did not predict angles directly. For each dihedral angle 2 , we used our neural network to predict two parameters, ( 2 , 2 ), and calculated 2 = atan2( 2 , 2 ). atan2 and its derivative are well-defined for all ( 2 , 2 ) in all four quadrants and produces an angle in the range [− , ). To regularize our neural networks and prevent ( 2 , 2 ) from drifting to the origin or infinity, we added a loss to our training cost, with weight I-:This training cost has the advantage of being rotationally symmetric so that there is no preferred angle. For comparison, the regularization losswill bias every angle 2 toward 1 "( 2/I + 2*, ) , since the network is penalized for exploring the space near 2/I and 2*,(14).To show mathematically that the neural network training is biased in the latter case, even for values of within the range [ 2/I , 2*, ], we model the training process as a Markov chain(38), and we also assume that can only take on a discrete set of values in the range [ 2/I , 2*, ] of size , or the values 2*, − or 2/I + . 
As a toy model, we assume that the training algorithm makes a transition → − or → + with equal probability while is within the range, and by the probability 1 transitions 2*, − → 2*, + 2/I + → 2/I −where represents the learning rate and < /2. This Markov chain is aperiodic and irreducible.At equilibrium, the Markov chain converges to a stationary or invariant probability distribution on the angles, ( * ) , which satisfies the global balance equations. The invariant distribution can be calculated as the largest left eigenvalue/eigenvector of the probability transition matrix , in which the ( , ) entry is the probability of transitioning from state to state .For concreteness, we illustrate within an example. For the states * = { 2*, − , 2*, , 2*, + , 2*, + 2 = 2/I − 2 , 2/I − , 2/I , 2/I + } F Reif, H L Scott, Fundamentals of Statistical and Thermal Physics. TokyoMcGraw Hill66F. Reif, H. L. Scott, Fundamentals of Statistical and Thermal Physics (McGraw Hill, Tokyo, 1998), vol. 66. Development and testing of a general Amber force field. J Wang, R M Wolf, J W Caldwell, P A Kollman, D A Case, J Comput Chem. 25J. Wang, R. M. Wolf, J. W. Caldwell, P. A. Kollman, D. A. Case, Development and testing of a general Amber force field. J Comput Chem. 25, 1157-1174 (2004). CHARMM general force field: A force field for drug-like molecules compatible with the CHARMM all-atom additive biological force fields. K Vanommeslaeghe, E Hatcher, C Acharya, S Kundu, S Zhong, J Shim, E Darian, O Guvench, P Lopes, I Vorobyov, A D Mackerell, J Comput Chem. 31K. Vanommeslaeghe, E. Hatcher, C. Acharya, S. Kundu, S. Zhong, J. Shim, E. Darian, O. Guvench, P. Lopes, I. Vorobyov, A. D. Mackerell, CHARMM general force field: A force field for drug-like molecules compatible with the CHARMM all-atom additive biological force fields. J Comput Chem. 31, 671-690 (2010). GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers. 
4. M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E. Lindahl, GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers. SoftwareX. 1-2, 19-25 (2015).
5. G. Bussi, "Hamiltonian replica exchange in GROMACS: A flexible implementation" in Molecular Physics (2014), vol. 112, pp. 379-384.
6. B. E. Husic, V. S. Pande, Markov State Models: From an Art to a Science. J Am Chem Soc. 140, 2386-2396 (2018).
7. J. Hénin, C. Chipot, Overcoming free energy barriers using unconstrained molecular dynamics simulations. J Chem Phys. 121, 2904 (2004).
8. E. A. Proctor, N. V. Dokholyan, Applications of Discrete Molecular Dynamics in biology and medicine. Curr Opin Struct Biol. 37, 9-13 (2016).
9. D. Shirvanyants, F. Ding, D. Tsao, S. Ramachandran, N. V. Dokholyan, Discrete molecular dynamics: An efficient and versatile simulation method for fine protein characterization. Journal of Physical Chemistry B. 116, 8375-8382 (2012).
10. F. Ding, S. Sharma, P. Chalasani, V. V. Demidov, N. E. Broude, N. V. Dokholyan, Ab initio RNA folding by discrete molecular dynamics: From structure prediction to folding mechanisms. RNA. 14, 1164-1173 (2008).
11. D. P. Landau, K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics (2014).
12. A. Barducci, M. Bonomi, M. Parrinello, Metadynamics. Wiley Interdiscip Rev Comput Mol Sci. 1, 826-843 (2011).
13. J. Kästner, Umbrella sampling. Wiley Interdiscip Rev Comput Mol Sci. 1 (2011), doi:10.1002/wcms.66.
14. F. Noé, S. Olsson, J. Köhler, H. Wu, Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning. Science. 365 (2019), doi:10.1126/science.aaw1147.
15. P. Duchon, P. Flajolet, G. Louchard, G. Schaeffer, Boltzmann Samplers for the Random Generation of Combinatorial Structures. Combinatorics, Probability and Computing. 13, 577-625 (2004).
16. A. Paszke, et al., "PyTorch: An imperative style, high-performance deep learning library" in Advances in Neural Information Processing Systems (2019), vol. 32.
17. S. Doerr, M. Majewski, A. Pérez, A. Krämer, C. Clementi, F. Noé, T. Giorgino, G. de Fabritiis, TorchMD: A Deep Learning Framework for Molecular Simulations. J Chem Theory Comput. 17, 2355-2363 (2021).
18. A. V. Onufriev, D. A. Case, Generalized Born Implicit Solvent Models for Biomolecules. Annu Rev Biophys. 48 (2019), doi:10.1146/annurev-biophys-052118-115325.
19. A. Onufriev, D. Bashford, D. A. Case, Exploring Protein Native States and Large-Scale Conformational Changes with a Modified Generalized Born Model. Proteins: Structure, Function and Genetics. 55 (2004), doi:10.1002/prot.20033.
20. J. Mongan, D. A. Case, J. A. McCammon, Constant pH molecular dynamics in generalized Born implicit solvent. J Comput Chem. 25 (2004), doi:10.1002/jcc.20139.
21. V. Tsui, D. A. Case, Theory and applications of the Generalized Born solvation model in macromolecular simulations. Biopolymers. 56 (2000), doi:10.1002/1097-0282(2000)56:4<275::AID-BIP10024>3.0.CO;2-E.
22. H. Nguyen, D. R. Roe, C. Simmerling, Improved generalized born solvent model parameters for protein simulations. J Chem Theory Comput. 9 (2013), doi:10.1021/ct3010485.
23. Y. Zhou, C. Barnes, J. Lu, J. Yang, H. Li, "On the continuity of rotation representations in neural networks" in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2019), pp. 5738-5746.
24. J. Zhang, T. He, S. Sra, A. Jadbabaie, Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity (2020).
25. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research. 15, 1929-1958 (2014).
26. K. Batra, K. M. Zorn, D. H. Foil, E. Minerali, V. O. Gawriljuk, T. R. Lane, S. Ekins, Quantum Machine Learning Algorithms for Drug Discovery Applications. J Chem Inf Model. 61, 2641-2647 (2021).
27. J. Li, M. Alam, C. M. Sha, J. Wang, N. V. Dokholyan, S. Ghosh, "Invited: Drug Discovery Approaches using Quantum Machine Learning" in 2021 58th ACM/IEEE Design Automation Conference (DAC) (IEEE, 2021), pp. 1356-1359.
28. L. Banchi, M. Fingerhuth, T. Babej, C. Ing, J. M. Arrazola, Molecular docking with Gaussian Boson Sampling. Sci Adv. 6 (2020), doi:10.1126/sciadv.aax1950.
29. A. Abu-Nada, Quantum computing simulation of the hydrogen molecular ground-state energies with limited resources. Open Physics. 19, 628-633 (2021).
30. J. Li, R. O. Topaloglu, S. Ghosh, Quantum Generative Models for Small Molecule Drug Discovery. IEEE Transactions on Quantum Engineering. 2, 1-8 (2021).
31. J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, H. Neven, Barren plateaus in quantum neural network training landscapes. Nat Commun. 9, 4812 (2018).
32. P. Eastman, et al., OpenMM 7: Rapid development of high performance algorithms for molecular dynamics. PLoS Comput Biol. 13 (2017), doi:10.1371/journal.pcbi.1005659.
33. A. Burden, R. L. Burden, J. Douglas Faires, Numerical Analysis, 10th ed. (2016), vol. 10.
34. Praxeolytic, Dihedral/Torsion Angle From Four Points in Cartesian Coordinates in Python. Stack Overflow (2015).
35. A. Bondi, van der Waals Volumes and Radii. J Phys Chem. 68, 441-451 (1964).
36. T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, Introduction to Algorithms (MIT Press, Cambridge, MA, ed. 3, 2009).
37. J. S. Dai, Euler-Rodrigues formula variations, quaternion conjugation and intrinsic connections. Mech Mach Theory. 92, 144-152 (2015).
38. P. Gagniuc, Markov Chains: From Theory to Implementation and Experimentation (Wiley, Hoboken, ed. 1, 2017).
39. M. AlQuraishi, Parallelized Natural Extension Reference Frame: Parallelized Conversion from Internal to Cartesian Coordinates. J Comput Chem. 40, 885-892 (2019).
40. M. AlQuraishi, End-to-End Differentiable Learning of Protein Structure. Cell Syst. 8, 292-301.e3 (2019).
41. W. Kabsch, A solution for the best rotation to relate two sets of vectors. Acta Crystallographica Section A. 32 (1976), doi:10.1107/S0567739476001873.
42. E. A. Coutsias, C. Seok, K. A. Dill, Using quaternions to calculate RMSD. J Comput Chem. 25 (2004), doi:10.1002/jcc.20110.
43. Bindel, Power Iteration. Lecture Notes (2009), pp. 1-4.
44. B. Shi, B. Petrovic, Implementation of the modified power iteration method to two-group Monte Carlo eigenvalue problems. Ann Nucl Energy. 38, 781-787 (2011).
45. N. Misra, H. Singh, V. Hnizdo, Nearest neighbor estimates of entropy for multivariate circular distributions. Entropy. 12 (2010), doi:10.3390/e12051125.
46. S. Ross, Introduction to Probability and Statistics for Engineers and Scientists (Elsevier, 2021).
47. D. P. Kingma, J. L. Ba, "Adam: A method for stochastic optimization" in 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings (2015).
48. K. He, X. Zhang, S. Ren, J. Sun, "Deep residual learning for image recognition" in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2016), pp. 770-778.
49. B. Xu, N. Wang, T. Chen, M. Li, Empirical Evaluation of Rectified Activations in Convolutional Network. ArXiv (2015) (available at http://arxiv.org/abs/1505.00853).
50. A. L. Maas, A. Y. Hannun, A. Y. Ng, "Rectifier nonlinearities improve neural network acoustic models" in ICML Workshop on Deep Learning for Audio, Speech and Language Processing (2013).
51. A. A. Hagberg, D. A. Schult, P. J. Swart, "Exploring network structure, dynamics, and function using NetworkX" in 7th Python in Science Conference (SciPy 2008) (2008).
52. C. R. Harris, et al., Array programming with NumPy. Nature. 585, 357-362 (2020).
53. S. van der Walt, S. C. Colbert, G. Varoquaux, The NumPy array: A structure for efficient numerical computation. Comput Sci Eng. 13, 22-30 (2011).
54. W. McKinney, "Data Structures for Statistical Computing in Python" (2010), pp. 56-61.
55. J. D. Hunter, Matplotlib: A 2D Graphics Environment. Comput Sci Eng. 9, 90-95 (2007).
56. Schrödinger, LLC, The PyMOL Molecular Graphics System, Version 2.0 (available at https://pymol.org/2/support.html).
57. D. Sehnal, S. Bittrich, M. Deshpande, R. Svobodová, K. Berka, V. Bazgier, S. Velankar, S. K. Burley, J. Koča, A. S. Rose, Mol* Viewer: Modern web app for 3D visualization and analysis of large biomolecular structures. Nucleic Acids Res. 49 (2021), doi:10.1093/nar/gkab314.
58. P. Virtanen, et al., SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat Methods. 17, 261-272 (2020).
[]
[ "A LOW-MASS BLACK HOLE IN THE NEARBY SEYFERT GALAXY UGC 06728", "A LOW-MASS BLACK HOLE IN THE NEARBY SEYFERT GALAXY UGC 06728" ]
[ "Misty C Bentz ", "Merida Batiste ", "James Seals ", "Karen Garcia ", "Rachel Kuzio De Naray ", "Wesley Peters ", "Matthew D Anderson ", "Jeremy Jones ", "Kathryn Lester ", "Camilo Machuca ", "J Robert Parks ", "Crystal L Pope ", "Mitchell Revalski ", "Caroline A Roberts ", "Dicy Saylor ", "R Andrew Sevrinsky ", "Clay Turner " ]
[]
[]
We present the results of a recent reverberation mapping campaign for UGC 06728, a nearby low-luminosity Seyfert 1 in a late-type galaxy. Nightly monitoring in the spring of 2015 allowed us to determine an Hβ time delay of τ = 1.4 ± 0.8 days. Combined with the width of the variable Hβ line profile, we determine a black hole mass of M BH = (7.1 ± 4.0) × 10 5 M ⊙ . We also constrain the bulge stellar velocity dispersion from higher-resolution long slit spectroscopy along the galaxy minor axis and find σ ⋆ = 51.6 ± 4.9 km s −1 . The measurements presented here are in good agreement with both the R BLR − L relationship and the M BH − σ ⋆ relationship for AGNs. Combined with a previously published spin measurement, our mass determination for UGC 06728 makes it the lowest-mass black hole that has been fully characterized, and thus an important object to help anchor the low-mass end of black hole evolutionary models.
10.3847/0004-637x/831/1/2
[ "https://arxiv.org/pdf/1608.03893v1.pdf" ]
119,273,662
1608.03893
1ce50c97fbb300b1e28c55e6f7ec9676f334e736
A LOW-MASS BLACK HOLE IN THE NEARBY SEYFERT GALAXY UGC 06728

Misty C. Bentz, Merida Batiste, James Seals, Karen Garcia, Rachel Kuzio de Naray, Wesley Peters, Matthew D. Anderson, Jeremy Jones, Kathryn Lester, Camilo Machuca, J. Robert Parks, Crystal L. Pope, Mitchell Revalski, Caroline A. Roberts, Dicy Saylor, R. Andrew Sevrinsky, Clay Turner

12 Aug 2016; draft version May 17, 2018 (Received; Accepted). Preprint typeset using LaTeX style emulateapj v. 5/2/11.

Subject headings: galaxies: active - galaxies: nuclei - galaxies: Seyfert

We present the results of a recent reverberation mapping campaign for UGC 06728, a nearby low-luminosity Seyfert 1 in a late-type galaxy. Nightly monitoring in the spring of 2015 allowed us to determine an Hβ time delay of τ = 1.4 ± 0.8 days. Combined with the width of the variable Hβ line profile, we determine a black hole mass of M_BH = (7.1 ± 4.0) × 10^5 M_⊙. We also constrain the bulge stellar velocity dispersion from higher-resolution long slit spectroscopy along the galaxy minor axis and find σ⋆ = 51.6 ± 4.9 km s^−1. The measurements presented here are in good agreement with both the R_BLR − L relationship and the M_BH − σ⋆ relationship for AGNs. Combined with a previously published spin measurement, our mass determination for UGC 06728 makes it the lowest-mass black hole that has been fully characterized, and thus an important object to help anchor the low-mass end of black hole evolutionary models.

INTRODUCTION

Supermassive black holes are now believed to inhabit the nuclei of all massive galaxies. Furthermore, the active galactic nucleus, or AGN, phase is generally understood to be a short-term event in the life of a typical black hole, triggered either by a merger event or secular processes in the host galaxy (cf. the review of Heckman & Best 2014 and references therein).
Tight scaling relationships between the observed properties of black holes and their host galaxies point to a symbiotic relationship between the two (e.g., Magorrian et al. 1998;Ferrarese & Merritt 2000;Gebhardt et al. 2000;Gültekin et al. 2009;Kormendy & Ho 2013;van den Bosch 2016), in which the growth of structure and the evolution of galaxies across cosmic time is fundamentally linked to supermassive black holes. Understanding this link requires an understanding of black hole demographics, not just in the local universe, but also at higher redshift where we can witness the growth of structure occurring. Black holes, as opposed to galaxies, are incredibly simple objects that can be fully characterized with only two fundamental measurements: mass and spin. In the Milky Way, years of astrometric monitoring of stars in the central ∼ 0.01 parsec have led to an extremely precise determination of the mass of our own supermassive black hole (Ghez et al. 2000;Genzel et al. 2000;Ghez et al. 2008). Unfortunately, all other galaxies are too distant for this same technique to be employed, and different techniques must be used to understand the masses of a population of central black holes. For galaxies out to ∼ 100 Mpc, spatially-resolved observations of the bulk motions of stars or nuclear gas disks can be combined with dynamical modeling to constrain the central black hole mass (cf. the reviews of Ferrarese & Ford 2005;Kormendy & Ho 2013). Reverberation mapping (Blandford & McKee 1982;Peterson 1993), on the other hand, takes advantage of AGN flux variability to constrain black hole masses through timeresolved, rather than spatially-resolved, observations, thus obviating any distance limitations. 
Furthermore, the most widely-used technique to constrain supermassive black hole spins requires high X-ray luminosities that are only found in AGNs (e.g., Reynolds 2014 and references therein), so the study of active black holes is an important key to unraveling the growth and evolution of cosmic structure. Unfortunately, bright AGNs are relatively rare in the local universe, leading to a disconnect in our current understanding of nearby black holes compared to those observed at larger look-back times. In particular, we are lacking direct comparisons of black hole mass constraints through multiple independent techniques in the same galaxies. There are a handful of published comparisons of reverberation masses and gas dynamical masses (e.g., Hicks & Malkan 2008), including the low-mass Seyfert NGC 4395 (Peterson et al. 2005; den Brok et al. 2015). The agreement is generally quite good, although the number of galaxies studied is small. Stellar dynamics, on the other hand, is a good check against reverberation masses because it relies on modeling a non-collisional system, unlike gas dynamics where the AGN may be expected to inject energy on resolvable spatial scales. However, only two such comparisons currently exist for black hole masses from reverberation mapping and stellar dynamical modeling: NGC 4151 (Bentz et al. 2006a; Onken et al. 2014) and NGC 3227 (Denney et al. 2009a; Davies et al. 2006). While the techniques give roughly consistent masses for these two examples, there are caveats and limitations to both reverberation mapping and dynamical modeling, and a larger comparison sample is needed to fully assess the consistency of the local and the cosmological black hole mass scales. We have therefore undertaken a program to identify and monitor local AGNs where it might be possible to obtain both a reverberation and a stellar dynamical mass constraint.
Both techniques are time- and resource-intensive, and there are very few broad-lined AGNs within z ≲ 0.01, where the spatial resolution provided by 8−10-m class telescopes would be likely to resolve the black hole's gravitational influence on the nuclear stellar dynamics, but we hope to increase the sample of mass comparisons by a factor of a few. We currently have stellar dynamical modeling underway for two other local AGNs, and we describe here the reverberation results for an additional local AGN in our sample, UGC 06728.

2. OBSERVATIONS

UGC 06728 is a low-luminosity Seyfert 1 located at α = 11:45:16.0, δ = +79:40:53, z = 0.00652 in a late-type galaxy that is highly inclined to our line of sight. It was monitored nightly over the course of two months in the spring of 2015. Optical spectroscopy and photometry were obtained at Apache Point Observatory in New Mexico, with additional supporting photometry obtained at Hard Labor Creek Observatory in Georgia. We describe the details below.

Spectroscopy

Spectrophotometric monitoring of UGC 06728 was carried out at Apache Point Observatory (APO) with the 3.5-m telescope from 2015 April 15 − May 30 (UT dates here and throughout). Our monitoring program was scheduled for the first hour of almost every night during this time period, coincident with evening twilight. We employed the Dual Imaging Spectrograph (DIS), which uses a dichroic to split the incoming beam into a red arm and a blue arm, with the low-resolution (B400/R300) gratings centered at 4398 Å and 7493 Å. The B400 and R300 gratings, when used together, cover the entire optical bandpass between the atmospheric cutoff and 1 µm, with nominal dispersions of 1.8 Å/pix and 2.3 Å/pix, respectively. Spectra were obtained through a 5″ slit rotated to a position angle of 0° (oriented north-south) and centered on the AGN. On each visit, a single spectrum with an exposure time of 600 s was acquired at a typical airmass of 1.5.
Observations of the spectrophotometric standard star Feige 34 were also acquired with each visit. All spectra were reduced with IRAF following standard procedures. An extraction width of 12 pixels was adopted, corresponding to an angular width of 5″ and 4.8″ for the blue and red cameras, respectively. The desire to minimize sampling gaps and maximize temporal coverage means that ground-based reverberation campaigns must rely on spectroscopy obtained under nonphotometric conditions. While a spectrophotometric standard star can help correct the overall shape of the spectrum for atmospheric effects, as well as those from the telescope and instrument optics, an additional technique is required to achieve absolute flux calibrations of all the spectra. Fortuitously, the narrow emission lines do not vary on short timescales of weeks to months, so they can serve as convenient "internal" flux calibration sources. We utilize the van Groningen & Wanders (1992) spectral scaling method, which accounts for small differences in wavelength calibration, flux calibration, and resolution (from variations in the seeing). The method compares each spectrum to a reference spectrum built from the best spectra (identified by the user) and minimizes the differences within a specified wavelength range. The method has been shown to result in relative spectrophotometry that is accurate to ∼2% (Peterson et al. 1998a). In the red-side spectra, however, [O I] λλ6300, 6363 is extremely weak and difficult to detect above the continuum. With no suitable narrow lines available, we were unable to accurately intercalibrate the red-side spectra and we do not consider them further. Figure 1 displays the final mean and root mean square (rms) of all the calibrated spectra acquired throughout the campaign. The rms spectrum displays the variable spectral components, of which Hβ, He II λ4686, and Hγ are apparent, as is the AGN continuum.
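The full van Groningen & Wanders method also solves for small wavelength shifts and a resolution-matching convolution; as a simplified sketch of just the flux-scaling idea, the least-squares multiplicative factor that matches a spectrum to the reference over a constant narrow-line window has a closed form. All names and the synthetic spectra below are illustrative, not the published implementation:

```python
import numpy as np

def flux_scale_factor(flux, ref_flux, mask):
    """Least-squares multiplicative factor s minimizing
    sum((s * flux - ref_flux)**2) over the pixels in `mask`
    (e.g. a window around a narrow line whose flux is constant)."""
    f, r = flux[mask], ref_flux[mask]
    return np.dot(f, r) / np.dot(f, f)

wave = np.linspace(4900.0, 5100.0, 201)
# Reference spectrum: flat continuum plus a narrow Gaussian line
ref = 10.0 + 50.0 * np.exp(-0.5 * ((wave - 5007.0) / 3.0) ** 2)
obs = 0.8 * ref                         # same spectrum observed through clouds
window = np.abs(wave - 5007.0) < 15.0   # narrow-line calibration window
s = flux_scale_factor(obs, ref, window)  # s = 1.25, i.e. 1/0.8
rescaled = s * obs                       # now matches the reference
```

Because the narrow-line flux is intrinsically constant, the recovered factor corrects each nonphotometric spectrum onto a common absolute flux scale.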
Photometry

Broad-band g and r images were obtained at APO with the imaging mode of the DIS spectrograph each night directly after acquiring the spectra. The dual-arm nature of the spectrograph allowed both images to be obtained simultaneously. The typical exposure time was 30 s, and a single image in each filter was obtained per visit. Images were reduced in IRAF following standard procedures. The DIS imaging mode provides a relatively small field of view (∼4′ × 7′), but there were a handful of convenient bright stars in all of the images (see Figure 2). We carried out aperture photometry employing circular apertures with radii of 3.″78 in g and 3.″6 in r, and sky annuli of 6.″3−7.″56 and 6.″0−7.″2, respectively. Calibrated g− and r−band magnitudes for three field stars were adopted from APASS (the AAVSO Photometric All Sky Survey; Henden & Munari 2014) and set the photometric zeropoints. Photometric monitoring was also carried out with the 24-inch telescope at Hard Labor Creek Observatory in Georgia. These images contain a number of field stars, allowing us to derive a V−band light curve for UGC 06728 by employing image subtraction techniques. We first registered all the images to a common alignment with the Sexterp package (Siverd et al. 2012). We then carried out the image subtraction analysis with the ISIS package (Alard & Lupton 1998; Alard 2000). ISIS builds a reference frame from the best images (specified by the user) and then uses a spatially-varying kernel to convolve the reference frame to match each individual image in the dataset. Subtraction of the two results in a residual image in which all constant components have disappeared and only variable flux remains. In the case of UGC 06728, the host galaxy and the average AGN brightness are subtracted from all the residual images, leaving behind only the brightness of the AGN relative to its mean level.
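ISIS handles the kernel matching; once residual images exist, the remaining measurement is a simple aperture sum of the (possibly negative) residual counts at the AGN position. A schematic NumPy version on synthetic data, with illustrative names and not the ISIS implementation:

```python
import numpy as np

def residual_aperture_flux(residual, x0, y0, radius):
    """Sum residual counts in a circular aperture; the result can be
    negative when the AGN is fainter than its mean brightness level."""
    yy, xx = np.indices(residual.shape)
    in_ap = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
    return residual[in_ap].sum()

rng = np.random.default_rng(0)
residual = rng.normal(0.0, 0.01, size=(64, 64))  # noise-only background
residual[30:34, 30:34] -= 2.0    # AGN fainter than its mean level tonight
flux = residual_aperture_flux(residual, 31.5, 31.5, 6.0)
# flux is close to -32 (16 pixels x -2) plus a small noise contribution
```

Repeating this on every residual frame yields the relative (mean-subtracted) light curve, which can then be shifted and scaled onto an absolute flux scale.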
Aperture photometry is then employed to measure this variable flux, which may be positive or negative, at the location of the target of interest in each residual image, providing a V−band residual light curve.

3. LIGHT CURVE ANALYSIS

Light curves for the broad emission lines Hβ, He II λ4686, and Hγ were derived directly from the scaled spectra. We fit a local, linear continuum below each emission line and then integrated the flux above this continuum to determine the total emission-line flux. This includes the contribution from the narrow component of each emission line, which is simply a constant flux offset. We also determined a continuum light curve from the spectra at 5100 × (1 + z) Å, which has the merit of being completely uncontaminated by emission lines. The strong continuum and emission-line variability over the course of the campaign allows us to determine these light curves directly from the spectra without carrying out any spectral modeling or decomposition, which has the potential to introduce artificial features into light curves. In Figure 3, we show the spectroscopic continuum light curve relative to the V−band residual light curve and the g and r photometric light curves (tabulated in Table 1). The V−band residual light curve does not contain significant emission from any broad emission lines, so we combined it with the continuum light curve determined from our spectra to improve the time sampling, especially in the first half of the campaign. We selected pairs of points from the two light curves that were contemporaneous within 0.5 days and fit for the best multiplicative and additive factors to bring the V−band residual fluxes into agreement with the measured continuum flux densities. These best-fit factors account for the differences in host-galaxy background light, average AGN flux level, and bandpass. The V−band light curve was scaled according to the best-fit parameters and merged with the continuum light curve.
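The intercalibration fit described above, a straight least-squares solve for one multiplicative and one additive constant over the contemporaneous pairs, can be sketched as (function names hypothetical):

```python
import numpy as np

def contemporaneous_pairs(t_ref, f_ref, t_other, f_other, max_dt=0.5):
    """Pairs of fluxes taken within max_dt days of each other."""
    pairs = []
    for t, f in zip(t_other, f_other):
        i = np.argmin(np.abs(t_ref - t))
        if abs(t_ref[i] - t) <= max_dt:
            pairs.append((f_ref[i], f))
    return np.array(pairs)

def merge_factors(f_ref, f_other):
    """Best-fit (a, b) minimizing sum (a*f_other + b - f_ref)^2."""
    A = np.vstack([f_other, np.ones_like(f_other)]).T
    (a, b), *_ = np.linalg.lstsq(A, f_ref, rcond=None)
    return a, b
```

Here a and b absorb the bandpass, host-galaxy, and mean-flux differences between the two light curves; the rescaled curve a*f_other + b is then merged with the reference.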
We then examined the g−band light curve from the APO photometry and found that there was no significant time delay relative to the merged continuum+V light curve, so we merged it as well by again finding the multiplicative and additive scale factors necessary to bring it into agreement with contemporaneous points in the continuum+V light curve. Our final merged continuum light curve was binned to 0.5 day sampling to improve the accuracy. The overall shape of the r−band light curve agrees with the other photometric light curves and the continuum light curve, but the variability level is somewhat damped by additional host-galaxy flux and there is possibly a slight delay in the light curve, so we did not merge the r−band with the other light curves. A detectable delay in r is not unexpected, given that the filter bandpass is centered on Hα. While g is centered on Hβ, the overall contribution of Hβ to the total filter bandpass is much smaller than for Hα and r. In particular, Hβ contributes only 2% of the g−band flux, with the variable component of Hβ accounting for only 10% of the total Hβ contribution, or 0.2% of the total g−band flux. On the other hand, Hα contributes 15% of the total r−band flux. Figure 4 displays the final merged and binned continuum light curve and the broad emission-line light curves (tabulated in Table 2). The variability statistics for each of the light curves are tabulated in Table 3. Column (1) lists the spectral feature and column (2) gives the number of measurements in the light curve. Columns (3) and (4) list the average and median time separation between measurements, respectively. Column (5) gives the mean flux and standard deviation of the light curve, and column (6) lists the mean fractional error (based on the comparison of observations that are closely spaced in time).
Column (7) lists the excess variance, computed as

F_var = √(σ² − δ²) / F,   (1)

where σ² is the variance of the fluxes, δ² is their mean-square uncertainty, and F is the mean flux. Column (8) is the ratio of the maximum to the minimum flux in the light curve, R_max. We employed the interpolated cross-correlation function (ICCF) methodology (Gaskell & Sparke 1986; Gaskell & Peterson 1987) with the modifications of White & Peterson (1994) to search for time delays of the emission lines relative to the continuum. The ICCF method calculates the cross-correlation function (CCF) twice, by interpolating first one light curve and then the other, and averages the two results together to determine the final CCF. The CCF can be characterized by its maximum value (r_max), the time delay at which the maximum occurs (τ_peak), and the centroid (τ_cent) of the points around the peak above some value (typically 0.8 r_max). CCFs for each light curve relative to the continuum are displayed in Figure 4 (right panels). For the continuum light curve, this is the autocorrelation function. To quantify the uncertainties on the time delay measurements, τ_cent and τ_peak, we employ the Monte Carlo "flux randomization/random subset sampling" method of Peterson et al. (1998b, 2004). This method is able to account for the measurement uncertainties as well as the effect of including or excluding any particular data point. The "random subset sampling" is implemented such that, from the N available data points within a light curve, N points are selected without regard to whether a point has been previously chosen. For a point that is sampled 1 ≤ n ≤ N times, the uncertainty on that point is scaled by a factor of n^1/2. The typical fraction of points that is not selected in any specific realization is ∼ 1/e. The "flux randomization" component takes the newly sampled light curve and modifies the flux values by a Gaussian deviation of the flux uncertainty.
These modified light curves are then cross-correlated with the ICCF method described above, and the whole process is repeated many times (N = 1000). From the large set of realizations, we build distributions of τ_cent and τ_peak. The median of each distribution is taken to be the measurement value, and the uncertainties are set such that they mark the upper 15.87% and lower 15.87% of the realizations (corresponding to ±1σ for a Gaussian distribution). The red histograms in Figure 4 depict the cross-correlation centroid distribution for each emission line. To further check that combining the various photometric and spectroscopic light curves has not affected our measured time delays, we also determined the time delay of Hβ relative to each of the individual continuum, V−band, and g−band light curves. Each of these light curves is slightly undersampled relative to the combined continuum light curve, but the CCFs and recovered Hβ time delays agree within the measurement uncertainties. We also investigated the time delays with the JAVELIN package (Zu et al. 2011). JAVELIN fits the continuum variations with a damped random walk model. It then assumes a top hat model for the reprocessing function, and determines the best-fit shifting and smoothing parameters for the emission-line light curves by maximizing the likelihood of the model. Uncertainties on each of the model parameters are assessed through a Bayesian Markov Chain Monte Carlo method. We denote time delays from JAVELIN as τ_jav. Given the extremely short time delays, we were unable to fit a single model while including all the emission lines simultaneously, so we instead modeled each emission line separately relative to the continuum (see Figure 5). Time delay measurements are listed in Table 4. While each of the measurements is an observed time delay, the rest-frame time delays, corrected for a factor of 1 + z, are formally the same within the uncertainties.
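The chain described in this section, the excess variance of Equation (1), the two-way interpolated CCF with its peak and 0.8 r_max centroid, and the FR/RSS error Monte Carlo, can be sketched end to end. Names are hypothetical, the uncertainty of a point selected n times is scaled as σ/n^{1/2} (the usual reading of the prescription), and the centroid here uses all lags above the threshold rather than only the contiguous run around the peak:

```python
import numpy as np

rng = np.random.default_rng(42)

def excess_variance(flux, err):
    """Equation (1): F_var = sqrt(sigma^2 - delta^2) / mean flux."""
    return np.sqrt(np.var(flux) - np.mean(err ** 2)) / np.mean(flux)

def iccf(t_c, f_c, t_l, f_l, lags):
    """Two-way interpolated CCF: interpolate one curve, then the other,
    and average; returns (r, tau_peak, tau_cent)."""
    def one_way(ta, fa, tb, fb, lag_arr):
        return np.array([np.corrcoef(np.interp(tb - lag, ta, fa), fb)[0, 1]
                         for lag in lag_arr])
    r = 0.5 * (one_way(t_c, f_c, t_l, f_l, lags)      # interpolate continuum
               + one_way(t_l, f_l, t_c, f_c, -lags))  # interpolate line
    sel = r >= 0.8 * r.max()                          # simplified centroid cut
    return r, lags[np.argmax(r)], np.sum(lags[sel] * r[sel]) / np.sum(r[sel])

def fr_rss(t, f, e):
    """One flux-randomization / random-subset-sampling realization.
    Assumes t is sorted in time."""
    idx = rng.integers(0, t.size, t.size)           # N draws with replacement
    uniq, n = np.unique(idx, return_counts=True)    # sorted surviving points
    err = e[uniq] / np.sqrt(n)                      # n selections -> e/sqrt(n)
    return t[uniq], f[uniq] + rng.normal(0.0, err), err

def tau_cent_distribution(t_c, f_c, e_c, t_l, f_l, e_l, lags, n_mc=200):
    """Median and 15.87/84.13-percentile errors of the CCF centroid."""
    taus = np.empty(n_mc)
    for k in range(n_mc):
        tc, fc, _ = fr_rss(t_c, f_c, e_c)
        tl, fl, _ = fr_rss(t_l, f_l, e_l)
        taus[k] = iccf(tc, fc, tl, fl, lags)[2]
    lo, med, hi = np.percentile(taus, [15.87, 50.0, 84.13])
    return med, med - lo, hi - med
```

On a pair of light curves that are shifted copies of each other, the recovered peak and centroid fall at the input shift.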
4. LINE WIDTH MEASUREMENTS

The widths of the broad emission lines in AGN spectra are interpreted as the line-of-sight velocities of the bulk motion of the gas. The narrow emission lines, however, are known to emit from gas that is not participating in the same bulk motion. Therefore, good practice is to isolate the broad emission from the narrow emission when quantifying the line width. In the spectrum of UGC 06728, however, it is not clear what part of the Hβ line is narrow emission (cf. Figure 1). Furthermore, the narrow lines contribute almost no signal to the rms spectrum, demonstrating that our internal spectral scaling method has minimized their apparent variability from changing observing conditions throughout the monitoring campaign. As it is the variable part of the emission line (the rms profile) that we are most interested in, we do not attempt any narrow line subtraction for this object. We measured the widths of the broad Hβ, He II λ4686, and Hγ emission lines in both the mean and the rms spectra and we report two different line width characterizations: the full width at half the maximum flux (FWHM) and the second moment of the line profile (σ_line). Line widths were measured directly from the spectra, with each line profile defined as the flux above a local linear continuum. Uncertainties in the emission line widths were determined using a Monte Carlo random subset sampling method. From a set of N spectra, a subset of N spectra were selected without regard to whether they had been previously chosen. The mean and rms of the subset were created, from which the FWHM and σ_line of an emission line were determined and recorded. Distributions of line width measurements were built up over 1000 realizations. We take the mean and the standard deviation of each distribution as the measurement and its uncertainty, respectively. Following Peterson et al. (2004), we corrected the emission-line widths for the dispersion of the spectrograph.
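Measuring the line flux, FWHM, and σ_line from a profile above a local linear continuum, as described above and in Section 3, can be sketched as follows (function names and window values are illustrative, not the campaign's exact choices):

```python
import numpy as np

def continuum_subtract(wave, flux, blue_win, red_win):
    """Subtract a straight line through the median flux in two
    line-free windows flanking the emission line."""
    def anchor(win):
        m = (wave >= win[0]) & (wave <= win[1])
        return wave[m].mean(), np.median(flux[m])
    x1, y1 = anchor(blue_win)
    x2, y2 = anchor(red_win)
    return flux - (y1 + (y2 - y1) / (x2 - x1) * (wave - x1))

def line_flux(wave, sub_flux, win):
    """Integrated flux above the continuum (the light-curve quantity)."""
    m = (wave >= win[0]) & (wave <= win[1])
    x, y = wave[m], sub_flux[m]
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))  # trapezoid rule

def line_widths(wave, sub_flux):
    """FWHM and second moment (sigma_line) of the subtracted profile.
    Assumes the array covers a single line with negligible noise."""
    above = np.where(sub_flux >= sub_flux.max() / 2.0)[0]
    fwhm = wave[above[-1]] - wave[above[0]]
    lam0 = np.sum(wave * sub_flux) / np.sum(sub_flux)   # first moment
    sigma = np.sqrt(np.sum((wave - lam0) ** 2 * sub_flux) / np.sum(sub_flux))
    return fwhm, sigma
```

For a Gaussian profile the two characterizations obey FWHM ≈ 2.355 σ_line, a useful sanity check on the routine.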
The observed emission line width, ∆λ_obs, can be described as

∆λ_obs² ≈ ∆λ_true² + ∆λ_disp²,   (2)

where ∆λ_true is the intrinsic line width and ∆λ_disp is the broadening induced by the spectrograph. The employment of a wide spectrograph slit for reverberation campaigns means that the spectrograph dispersion cannot be determined from night sky emission lines or from arc lamp lines, because the unresolved AGN point source, even under poor seeing conditions, will not fill the spectrograph slit. Given the relative obscurity of this particular AGN, we were unable to estimate ∆λ_true, and therefore constrain ∆λ_disp, from high-quality, high-resolution observations of the narrow emission lines in the literature. However, we have previously monitored other AGNs with this same instrumental setup, and so we adopt the value of ∆λ_disp = 14.1 Å that we determined for NGC 5273 from a spring 2014 monitoring campaign (Bentz et al. 2014). Our final resolution-corrected line width measurements are listed in Table 5.

5. BLACK HOLE MASS

All of the time delays measured for UGC 06728 are very short, which is to be expected given the low luminosity of the AGN. The time delays determined for Hβ are the only ones that are not formally consistent with zero within the measurement uncertainties, so Hβ is the only emission line we will consider for the determination of the black hole mass. However, Hβ is also the emission line for which we have the largest number of reverberation results (cf. Bentz & Katz 2015 for a recent summary), so it is also the most reliable emission line for determining M_BH. The black hole mass is generally determined from reverberation-mapping measurements as

M_BH = f c τ V² / G,   (3)

where τ is the time delay for a specific emission line relative to variations in the continuum, and V is the line-of-sight velocity width of the emission line, with c and G being the speed of light and the gravitational constant, respectively.
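Equation (2) inverts to the usual quadrature correction; a minimal sketch (the 14.1 Å default is the value adopted above from the NGC 5273 setup, and the function names are hypothetical):

```python
import numpy as np

C_KMS = 2.998e5  # speed of light, km/s

def resolution_correct(width_obs_A, width_disp_A=14.1):
    """Equation (2): remove the instrumental broadening in quadrature."""
    return np.sqrt(width_obs_A ** 2 - width_disp_A ** 2)

def width_to_kms(width_A, lam_center_A):
    """Convert a wavelength width (Angstroms) to a velocity width."""
    return C_KMS * width_A / lam_center_A
```

Because the correction enters in quadrature, it matters most for the narrowest lines: a 50 Å observed width loses only ∼4% of its value, while a 20 Å width would lose ∼30%.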
The emission-line time delay is interpreted as a measure of the responsivity-weighted average radius of the broad-line region for that specific emission feature (e.g., Hβ). The scaling factor f accounts for the detailed geometry and kinematics of the broad-line region gas, which is unresolvable. In practice, the population-average multiplicative factor ⟨f⟩, which is found to bring the M_BH − σ⋆ relationship for AGNs with reverberation masses into agreement with the M_BH − σ⋆ relationship for nearby galaxies with dynamical black hole masses (e.g., Gültekin et al. 2009; McConnell & Ma 2013; Kormendy & Ho 2013), is used as a proxy for f. In this way, the population-average factor provides an overall scale for reverberation masses that should be unbiased, but the mass of any particular AGN is expected to be uncertain by a factor of 2−3 because of object-to-object variations. The value of f has varied in the literature from 5.5 (Onken et al. 2004) to 2.8 (Graham et al. 2011), depending on which objects are included and the specifics of the measurements. We adopt the value determined by Grier et al. (2013) of f = 4.3 ± 1.1. Combining the time lag (τ_cent) and line width (σ_line) measurements for Hβ and scaling by f, we determine M_BH = (7.1 ± 4.0) × 10^5 M_⊙.

6. DISCUSSION

The extremely rapid response of the broad emission lines to variations in the continuum flux in UGC 06728 means that our daily sampling was not fine enough to resolve time delays for all the broad optical recombination lines.

FIG. 6.— Line width versus time delay as measured from the broad optical recombination lines in the spectrum of UGC 06728. The dotted line shows the expected relationship of R ∝ V^−2 and is scaled to match the measurements for Hβ. Even though the time delays are quite short, and unresolved in the case of Hγ and He II, the measurements are in relatively good agreement with the expected relationship.
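Equation (3) with the adopted f = 4.3 can be evaluated directly. The sketch below uses τ = 1.4 days and an illustrative σ_line of 780 km s^−1 (the measured Hβ rms width is in Table 5, which is not reproduced in this text), a combination that returns a mass near the quoted 7.1 × 10^5 M_⊙:

```python
# physical constants in cgs units
C_CGS = 2.998e10   # speed of light, cm/s
G_CGS = 6.674e-8   # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33   # solar mass, g
DAY_S = 86400.0    # seconds per day

def black_hole_mass(tau_days, v_kms, f=4.3):
    """Equation (3): M_BH = f * c * tau * V^2 / G, in solar masses."""
    return f * C_CGS * tau_days * DAY_S * (v_kms * 1.0e5) ** 2 / (G_CGS * M_SUN)
```

The mass is linear in both f and τ, so the dominant fractional uncertainties (here ∼25% on f and ∼60% on τ) propagate directly into the quoted error.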
The time delay of Hβ is the only one that is not formally consistent with zero delay, and it is only marginally resolved at that. However, while we were not able to resolve the time delays for Hγ and He II, we can examine them in light of the expected virial relationship for BLR gas that is under the gravitational dominance of the black hole. In particular, we would expect that R ∝ V^−2. This relationship has been shown to be a good description of observations when reverberation results from multiple emission lines have been recovered (e.g., Peterson et al. 2004; Kollatschny 2003; Bentz et al. 2010). Figure 6 shows the measurements for the optical recombination lines in UGC 06728, with the expected relationship scaled to match the measurements for Hβ. There is generally good agreement with the expected relationship within the measurement uncertainties, such that we would not expect to resolve the responses of these emission lines with our current sampling. A monitoring campaign with finer temporal resolution (∆t = 0.25 − 0.5 days) would be needed to further improve upon these constraints.

6.1. Consistency with the R_BLR − L Relationship

Furthermore, we can examine the location of UGC 06728 on the AGN R_BLR − L relationship to further assess the Hβ time delay measurement. For very nearby galaxies like UGC 06728, however, one complication is the large fraction of host-galaxy starlight that contributes to the continuum emission at rest-frame 5100 Å through the large spectroscopic slit (∼ 5″) employed in a reverberation mapping campaign. The usual method to correct for this contamination is to carry out two-dimensional surface brightness modeling of a high-resolution image of the galaxy (usually from the Hubble Space Telescope to maximize the image quality), thereby isolating the host-galaxy starlight components from the unresolved AGN point source.
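The R_BLR − L comparison can be made quantitative with the relation itself; the K and alpha below are approximate best-fit values from Bentz et al. (2013), so treat the exact numbers as indicative rather than authoritative:

```python
def predicted_hbeta_lag(log_lamL5100, K=1.527, alpha=0.533):
    """log(R_BLR / 1 lt-day) = K + alpha * log(lambda*L_5100 / 1e44 erg/s).
    Returns the expected rest-frame Hbeta lag in light-days."""
    return 10.0 ** (K + alpha * (log_lamL5100 - 44.0))
```

For a starlight-corrected luminosity of log λL_λ ≈ 41.8, the relation predicts an Hβ lag of roughly 2 days, consistent with the measured 1.4 ± 0.8 days within the uncertainties.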
Using the modeling results to create an "AGN-free" image allows the starlight contribution to be directly constrained (Bentz et al. 2006b, 2009, 2013). Unfortunately, there are no HST images of UGC 06728. The highest resolution optical images available are the APO DIS g−band images discussed above, with a pixel scale of 0.″42/pixel. While hardly comparable to the quality afforded by HST, the DIS images do allow us to place some rough constraints on the starlight contribution to the flux density at 5100 × (1 + z) Å. We aligned and stacked several of the g−band images to increase the signal-to-noise in the combined image. Using the two-dimensional surface brightness fitting program GALFIT (Peng et al. 2002), we created a model of the point spread function (PSF) of the stacked image by fitting multiple Gaussian components to the profile of a field star in a restricted portion of the image. We then employed this model PSF while fitting the full frame, including a background sky gradient, a PSF for the AGN and the nearby star, an exponential profile for the disk of the galaxy, and a Sérsic profile for the bulge. The bulge profile, in particular, is very compact with a half-light radius of 1.7 pix (0.″7), and likely degenerate with the AGN PSF, so we caution that our estimate of the starlight contribution is probably more like a lower limit. Figure 7 displays a 2.′5 × 2.′5 region of the stacked g−band image, our best-fit model from GALFIT, and the residuals after subtracting the model from the image. As described earlier, calibrated g−band photometry for three field stars from APASS (the AAVSO Photometric All Sky Survey; Henden & Munari 2014) was used to set the overall flux scale of the image. We also account for a slight flux scaling factor, due to the difference in effective wavelength of the g filter compared to 5100 × (1 + z) Å, using Synphot and a template galaxy bulge spectrum (Kinney et al. 1996).
Our estimate of the host-galaxy contribution to the spectroscopic flux density is f_gal = (1.09 ± 0.22) × 10^−15 ergs s^−1 cm^−2 Å^−1. Removing this contribution results in an AGN-only continuum flux density of f_AGN = (1.12 ± 0.23) × 10^−15 ergs s^−1 cm^−2 Å^−1. Assuming a luminosity distance of D_L = 27 Mpc and correcting for Galactic absorption along the line of sight (Schlafly & Finkbeiner 2011), we derive log λL_λ = 41.83 ± 0.24 ergs s^−1. Figure 8 displays the R_BLR − L relationship for nearby AGNs based on reverberation mapping of Hβ (Bentz et al. 2013). The filled circle shows the location of UGC 06728 with the Hβ time delay we have derived here and the luminosity after correction for the estimated starlight contribution. The agreement between UGC 06728 and its expected location based on its estimated luminosity is extremely good considering the barely-resolved nature of the time delay and the caveats in the luminosity determination. Furthermore, we can expect that the agreement is actually somewhat better than depicted, given the likelihood that the starlight correction to the luminosity is underestimated as described above. Taking our galaxy decomposition at face value, we can estimate the bulge-to-total ratio as B/T ≈ 0.2, which suggests that the Hubble type of the galaxy is ∼Sb (Kent 1985). We also estimate the color of the galaxy as g − r ≈ 0.9, which suggests M/L_g ≈ 6 (Zibetti et al. 2009). The total stellar mass of the galaxy is M⋆ ≈ 7.5 × 10^9 M_⊙, which also agrees with the host galaxy being Sb−Sc in type.

6.2. Consistency with the M_BH − σ⋆ Relationship

To further explore the reverberation results for UGC 06728 within the context of the larger reverberation sample, we obtained supplemental observations on 2016 May 13 with the DIS Spectrograph on the APO 3.5-m telescope with the intent of constraining the bulge stellar velocity dispersion. The high resolution B1200 and R1200 gratings were employed, providing nominal dispersions of 0.62 Å/pix and 0.58 Å/pix and wavelength coverages of 1240 Å and 1160 Å, respectively. The blue grating was centered at 4900 Å to target the Mgb stellar absorption signature, and the red grating was centered at 8500 Å for the Ca II triplet absorption. The 0.″9 slit was rotated to a position angle of 150° east of north, approximately along the minor axis of the galaxy.
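The conversion from AGN flux density to λL_λ at the assumed D_L = 27 Mpc can be sketched as follows (extinction and redshift corrections are omitted, so the sketch lands slightly below the fully corrected value quoted above):

```python
import numpy as np

MPC_CM = 3.086e24  # centimeters per megaparsec

def log_lamL(f_lambda, lam_A=5100.0, d_mpc=27.0):
    """log10 of lambda*L_lambda in erg/s for a flux density in
    erg/s/cm^2/A; no Galactic-extinction correction is applied."""
    d_cm = d_mpc * MPC_CM
    return np.log10(4.0 * np.pi * d_cm ** 2 * f_lambda * lam_A)
```

With the starlight-subtracted f_AGN of 1.12 × 10^−15 ergs s^−1 cm^−2 Å^−1 the sketch gives log λL_λ ≈ 41.7, about 0.1 dex below the extinction-corrected 41.83.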
Given the high inclination of the galaxy, we specifically avoided the major axis of the galaxy to mitigate the effects of rotational broadening from the disk within the one-dimensional extracted spectra. Two 1200 s exposures were obtained through patchy clouds and with marginal seeing at an airmass of 1.6. Spectra of the standard star, Feige 34, were also obtained to assist with the flux calibration, as well as spectra of HD 125560 (spectral type K3III) and HD 117876 (spectral type G8III) to provide velocity templates with the same wavelength coverage and dispersion as the galaxy. All spectra were reduced with IRAF following standard procedures. An extraction width of 40 pixels (corresponding to 16″ on the blue camera and 16.8″ on the red camera) was adopted to maximize the galaxy signal in the resultant spectra. Following flux calibration of the spectra, we employed the pPXF (Penalized Pixel Fitting) method of Cappellari & Emsellem (2004) to extract the stellar kinematics. The Mgb absorption signature was not detected in the galaxy spectra, but the Ca II triplet features were detected, so we focused on fitting the red spectra only. During the fitting process, we restricted the wavelength region to 8525−8850 Å and determined the best-fit parameters (velocity, velocity dispersion, h3, and h4) using first one velocity template star and then the other. The best fits to the spectrum of UGC 06728 are displayed in Figure 9: HD 125560 (red line) provided a best-fit velocity dispersion of 56.5 km s^−1, and HD 117876 (blue line) provided a best fit of 46.7 km s^−1. We take the average of these as the bulge stellar velocity dispersion, σ⋆ = 51.6 ± 4.9 km s^−1. With this constraint on the bulge stellar velocity dispersion in UGC 06728, we can explore its location on the AGN M_BH − σ⋆ relationship. Figure 10 displays the AGN M_BH − σ⋆ relationship from Grier et al. (2013) (open points and line), with the location of UGC 06728 shown by the filled circle.

FIG. 9.— Spectrum of UGC 06728 in the wavelength region around the Ca II triplet absorption lines. The red and blue lines show the best-fit models to the stellar absorption lines based on HD 125560 and HD 117876, respectively. We take the average of the solutions provided by the two template stars as our measurement of the bulge stellar velocity dispersion in UGC 06728.

The scatter at the low-mass end of the M_BH − σ⋆ relationship for AGNs with reverberation masses seems to be much smaller than that found for megamaser host galaxies (Greene et al. 2010). Läsker et al. (2016) also found the megamaser host galaxies to have a high scatter relative to the M_BH − L_bulge and M_BH − M_bulge relationships. Each sample of direct black hole masses, whether dynamical, reverberation, or masering, has its own set of biases and assumptions that are independent of the other techniques, so further exploration into this apparent disagreement is likely to shed light on the reliability of black hole mass measurements as they are currently applied. Furthermore, we can estimate the black hole sphere of influence (r_h) in the nucleus of UGC 06728. Generally defined as

r_h = G M_BH / σ⋆²,   (4)

r_h is often employed as a convenient metric for determining the probability of success for constraining M_BH from spatially resolved stellar dynamics. Gültekin et al. (2009) argue that a strict reliance on resolving r_h is not necessary, however, for useful constraints on black hole masses. Combining our measurements of M_BH and σ⋆ and again assuming a luminosity distance of D_L = 27 Mpc, we estimate r_h = 0.01″ for UGC 06728. While this angular size is smaller than the achievable spatial resolution of integral field spectrographs on the largest ground-based telescopes today, it is interesting to note that it is not much smaller than r_h for NGC 3227. Davies et al.
(2006) were able to constrain the black hole mass of NGC 3227 through stellar dynamical modeling, even though the reverberation mass and bulge stellar velocity dispersion predict r_h = 0.018″. Given the very limited number of AGNs where it will be possible to carry out a direct comparison of reverberation-based and stellar dynamical-based black hole mass measurements with current and near-future technology, UGC 06728 could potentially be a worthwhile target for dynamical modeling. Walton et al. (2013) analyzed Suzaku observations of UGC 06728 and determined that it was a "bare" AGN, with minimal intrinsic absorption. Fitting the X-ray spectrum with a relativistic reflection model, and assuming an accretion disk inclination of i = 45°, they determined a dimensionless spin parameter of a > 0.7, indicating the black hole is spinning rapidly. Combined with our mass constraint of M_BH = (7.1 ± 4.0) × 10^5 M_⊙, UGC 06728 is one of a small number of massive black holes that are completely characterized. A few other low-mass black holes have both mass and spin constraints, and they appear to agree with the properties derived for UGC 06728. MCG-06-30-15 is only slightly more massive with M_BH = (1.6 ± 0.4) × 10^6 M_⊙ (Bentz et al. 2016) and is spinning near maximally (a > 0.9; Brenneman & Reynolds 2006; Chiang & Fabian 2011; Marinucci et al. 2014). NGC 4051 is another example, with M_BH = (1.3 ± 0.4) × 10^6 M_⊙ (Denney et al. 2009b) and a > 0.99 (Patrick et al. 2012).

6.3. Mass and Spin Implications

Black hole evolutionary models have only recently begun to treat black hole spin in addition to mass. Depending on the model, it is not clear if the properties of the black hole in UGC 06728 are expected or surprising. For example, the model of Volonteri et al. (2013) predicts that black holes with M_BH ≈ 10^6 M_⊙ in gas-rich galaxies at z < 0.5 (including AGNs) should have slowly rotating black holes with dimensionless spin parameters of a < 0.4.
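The sphere-of-influence estimate of Equation (4), expressed as an angle at the assumed distance, reproduces the quoted ∼0.01″ (function name hypothetical):

```python
G_PC = 4.301e-3          # G in pc (km/s)^2 / M_sun
ARCSEC_PER_RAD = 206265.0

def sphere_of_influence_arcsec(m_bh_msun, sigma_kms, d_mpc):
    """Equation (4): r_h = G M_BH / sigma^2, as an angle at distance d."""
    r_h_pc = G_PC * m_bh_msun / sigma_kms ** 2
    return r_h_pc / (d_mpc * 1.0e6) * ARCSEC_PER_RAD
```

With M_BH = 7.1 × 10^5 M_⊙, σ⋆ = 51.6 km s^−1, and D = 27 Mpc, the physical radius is ∼1.1 pc and the angular size ∼0.009″.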
This model is based on many observational constraints, including the M_BH − σ⋆ relationship, with which we have shown UGC 06728 to be in agreement. One caveat to the evolutionary model of Volonteri et al. (2013) is that it does not account for black hole feeding through disk instabilities, which could be a reason for the apparent discrepancy here. Disk instability accretion events would likely be correlated and serve to spin up a black hole. The evolutionary models of Sesana et al. (2014) attempt to include this effect by linking the gas dynamics of the extended galaxy to the central black hole. Their models predict that local black holes with M_BH ≈ 10^6 M_⊙ should tend to be spinning near maximally, and that accreting black holes in spiral galaxies should also tend to have near-maximal spins. Interpretation of black hole spin measurements is still somewhat debated as well. Bonson & Gallo (2016) argue that black hole spins tend to be overestimated in many cases, although they state this is likely not the case for the most maximally spinning black holes (a > 0.8). Furthermore, there is a very strong selection bias inherent in the sample of AGNs with spin measurements. Rapidly spinning black holes have significant boosts to their X-ray flux through increased radiative efficiency, and the current sample of AGNs with spin constraints is based on observations of the brightest X-ray sources, so the current sample will strongly favor rapidly spinning black holes (Brenneman et al. 2011; Vasudevan et al. 2016). In any case, UGC 06728 is an important addition to the sample. As the least massive central black hole that has been fully described, it will help to anchor future studies, both observational and theoretical, of central black hole demographics.

7. SUMMARY

We present an Hβ time delay and a reverberation-based black hole mass for the nearby, low-luminosity Seyfert UGC 06728.
With τ = 1.4 ± 0.8 days and M_BH = (7.1 ± 4.0) × 10^5 M_⊙, UGC 06728 is at the low end of observed properties within the reverberation mapping sample. The time delay and estimated AGN luminosity agree with the R_BLR − L relationship for other reverberation-mapped AGNs, and a measurement of σ⋆ = 51.6 ± 4.9 km s^−1 from long-slit spectroscopy shows that the black hole mass agrees with the AGN M_BH − σ⋆ relationship. With M_BH < 10^6 M_⊙, UGC 06728 is currently the lowest-mass central black hole that is fully described by both direct mass and spin constraints.

FIG. 1.— Mean (top) and root mean square (bottom) of all the blue-side spectra obtained from APO during the monitoring campaign.

FIG. 2.— Example r−band image acquired with the imaging mode of the DIS spectrograph at APO. The field stars used to set the magnitude zeropoint are marked with circles. The scale of the image is 3.′9 × 6.′7 and is oriented with north up and east to the right.

FIG. 3.— Spectroscopic continuum and photometric light curves (left panels) and the cross-correlation of each light curve relative to the spectroscopic continuum light curve (right panels). No apparent time delays are detected, except perhaps in the r band, and the light curve features are quite similar. 5100 Å, g−band, and r−band flux densities are in units of 10^−15 ergs s^−1 cm^−2 Å^−1. V−band residual flux is in units of 10000 counts. Emission line fluxes are in units of 10^−15 ergs s^−1 cm^−2.
FIG. 4.— Merged continuum light curve and emission-line light curves (left panels). The right panels display the cross-correlation of each light curve relative to the continuum, and the red histograms (arbitrarily scaled) display the cross-correlation centroid distributions.

FIG. 7.— Stacked g−band image of a 2.′5 × 2.′5 region centered on UGC 06728 (left) with the white rectangle showing the geometry of the ground-based spectroscopic monitoring aperture. The best-fit model determined from GALFIT is displayed in the middle panel, and the right panel shows the residuals after subtraction of the model from the image. All images are oriented with north up and east to the right.

FIG. 8.— The Hβ time delay for UGC 06728 and estimated AGN luminosity (filled point) compared to the radius−luminosity relationship for other reverberation-mapped AGNs (Bentz et al. 2013).

FIG. 10.— UGC 06728 (filled point) and the AGN M_BH − σ⋆ relationship from Grier et al. (2013).

TABLE 1
PHOTOMETRIC LIGHT CURVES

APO imaging:
HJD (days)   g (AB mag)       r (AB mag)
7127.6059    15.546 ± 0.006   14.823 ± 0.005
7130.6023    15.544 ± 0.008   14.840 ± 0.006
7131.6043    15.510 ± 0.008   14.820 ± 0.006
7134.6484    15.676 ± 0.006   14.941 ± 0.021
7134.6493    15.677 ± 0.005   14.989 ± 0.020
7135.6058    15.580 ± 0.009   14.869 ± 0.006
7141.6148    15.638 ± 0.007   14.916 ± 0.006
7142.6108    15.671 ± 0.010   14.931 ± 0.007
7143.6100    15.630 ± 0.011   14.919 ± 0.007
7144.6099    15.605 ± 0.011   14.903 ± 0.007
7149.6224    15.612 ± 0.006   14.882 ± 0.005
7150.6151    15.608 ± 0.009   14.898 ± 0.006
7151.6161    15.561 ± 0.008   14.866 ± 0.006
7152.6159    15.556 ± 0.008   14.867 ± 0.006
7153.6223    15.507 ± 0.006   14.820 ± 0.005
7156.6189    15.460 ± 0.010   14.795 ± 0.006
7159.6214    15.479 ± 0.007   14.794 ± 0.005
7160.6215    15.461 ± 0.007   14.793 ± 0.005
7162.6218    15.491 ± 0.007   14.798 ± 0.006
7165.6511    15.540 ± 0.008   14.862 ± 0.007
7166.6239    15.494 ± 0.008   14.819 ± 0.005
7168.6555    15.448 ± 0.005   14.780 ± 0.005
7169.6224    15.400 ± 0.014   14.785 ± 0.007
7170.6218    15.495 ± 0.011   14.807 ± 0.006
7171.6249    15.444 ± 0.009   14.784 ± 0.006

HLCO imaging:
HJD (days)   V (resid. cts/10000)
7134.7311    1.216 ± 0.024
7135.8689    0.133 ± 0.025
7136.7193    0.309 ± 0.024
7142.6382    1.202 ± 0.060
7144.6886    0.205 ± 0.042
7145.6625    1.093 ± 0.034
7146.8040    0.865 ± 0.038
7147.6872    0.180 ± 0.045
7148.5865    0.789 ± 0.021
7150.6653    0.746 ± 0.024
7151.7214    0.351 ± 0.026
7152.7171    0.036 ± 0.020
7153.6520    −0.319 ± 0.022
7156.7405    −0.204 ± 0.042
7158.6132    −0.410 ± 0.025
7159.6255    −0.170 ± 0.026
7160.6317    −0.346 ± 0.009
7162.6928    −0.118 ± 0.021
7164.6410    0.514 ± 0.025
7165.6504    0.280 ± 0.025
7166.7008    0.109 ± 0.024
7167.7025    0.226 ± 0.021
7172.6709    −0.661 ± 0.037
7173.6752    −0.801 ± 0.030

TABLE 2
SPECTROSCOPIC LIGHT CURVES
(continuum flux densities in 10^−15 erg s^−1 cm^−2 Å^−1; line fluxes in 10^−15 erg s^−1 cm^−2)

HJD (days)      5100 × (1 + z) Å   Hβ               Hγ               He II
7127.60131629   2.146 ± 0.013      76.605 ± 0.039   45.467 ± 0.091   12.614 ± 0.092
7130.59772753   2.235 ± 0.018      65.745 ± 0.064   31.085 ± 0.126    7.226 ± 0.158
7131.59975985   2.151 ± 0.016      66.059 ± 0.053   34.986 ± 0.098   11.465 ± 0.127
7134.64244313   1.953 ± 0.007      59.745 ± 0.009   29.567 ± 0.011    8.501 ± 0.014
7135.60113042   2.317 ± 0.023      60.694 ± 0.101   30.367 ± 0.206    5.223 ± 0.263
7141.6102368    2.019 ± 0.011      55.835 ± 0.025   27.859 ± 0.045    3.583 ± 0.056
7142.60623451   1.792 ± 0.045      54.629 ± 0.464   31.498 ± 1.061    7.873 ± 1.207
7143.6053809    2.227 ± 0.023      55.439 ± 0.101   31.572 ± 0.221    2.160 ± 0.262
7144.60529841   2.253 ± 0.033      58.688 ± 0.236   26.036 ± 0.502    ···
7149.61781551   1.852 ± 0.021      56.921 ± 0.089   31.359 ± 0.145    4.714 ± 0.176
7150.61051989   1.941 ± 0.031      53.080 ± 0.185   25.462 ± 0.354    8.538 ± 0.455
7151.61149262   2.101 ± 0.016      60.457 ± 0.054   36.374 ± 0.109    8.983 ± 0.131
7152.61133458   2.051 ± 0.016      57.825 ± 0.050   34.132 ± 0.107   12.753 ± 0.123
7153.61769047   2.256 ± 0.011      63.765 ± 0.025   32.758 ± 0.042   19.423 ± 0.055
7156.61430815   2.268 ± 0.021      63.760 ± 0.086   38.456 ± 0.175    1.949 ± 0.212
7159.61682615   2.317 ± 0.013      67.404 ± 0.033   34.757 ± 0.066   14.462 ± 0.080
7160.61690956   2.445 ± 0.019      73.758 ± 0.067   31.379 ± 0.128   19.432 ± 0.164
7162.61714264   2.216 ± 0.013      72.343 ± 0.031   37.067 ± 0.058   15.755 ± 0.076
7163.62052311   2.317 ± 0.051      65.958 ± 0.505   32.747 ± 1.106   11.655 ± 1.309
7165.63733328   2.098 ± 0.010      64.085 ± 0.019   32.687 ± 0.026    9.924 ± 0.032
7166.61939046   2.203 ± 0.013      70.381 ± 0.032   35.662 ± 0.059   10.006 ± 0.076
7168.65095798   2.415 ± 0.008      66.380 ± 0.012   35.326 ± 0.015   15.805 ± 0.019
7169.61788092   2.515 ± 0.026      67.836 ± 0.125   35.631 ± 0.273   14.877 ± 0.332
7170.61722538   2.434 ± 0.037      76.008 ± 0.228   36.909 ± 0.465   19.810 ± 0.575
7171.62035657   2.325 ± 0.014      66.832 ± 0.036   42.273 ± 0.072   13.417 ± 0.088
7172.62633782   2.627 ± 0.014      72.460 ± 0.035   39.271 ± 0.062   22.804 ± 0.080

TABLE 3
LIGHT-CURVE STATISTICS

Time Series   N    ⟨T⟩ (days)   T_median (days)   ⟨F⟩ ± σ_F       Mean frac. err.   F_var    R_max
5100 Å        26   1.8 ± 1.4    1.0               2.21 ± 0.20     0.009             0.090    1.466 ± 0.038
V             24   1.7 ± 1.3    1.1               −0.22 ± 0.56    0.050             −2.56    −0.659 ± 0.028
g             25   ···
1.8 ± 1.4 1.0 2.83 ± 0.21 0.008 0.072 1.290 ± 0.017 r 25 1.8 ± 1.4 1.0 3.19 ± 0.17 0.007 0.052 1.213 ± 0.023 Hβ 26 1.8 ± 1.4 1.0 64.3 ± 6.7 0.002 0.105 1.443 ± 0.005 Hγ 26 1.8 ± 1.4 1.0 33.9 ± 4.6 0.007 0.135 1.786 ± 0.025 He II 25 1.9 ± 1.5 1.0 11.3 ± 5.7 0.033 0.500 11.7 ± 1.3 a TABLE 4 TIME 4FIG. 5.-Continuum and Hβ light curves with interpolated JAVELIN light curves drawn from the distribution of acceptable models.LAGS Feature τcent τ peak τ jav (days) (days) (days) Hβ 1.4 +0.7 −0.8 1.1 +0.6 −0.6 1.3 +0.2 −0.7 Hγ 0.0 +1.0 −1.3 −0.7 +2.5 −0.7 −1.5 +0.1 −0.7 He II −0.2 +0.9 −1.1 −0.7 +1.8 −0.7 −1.4 +0.2 −0.1 TABLE 5 LINE 5WIDTHS Mean RMS Feature FWHM σ line FWHM σ line (km s −1 ) (km s −1 ) (km s −1 ) (km s −1 ) Hβ 1144.5 ± 58.3 758.3 ± 19.4 1309.7 ± 182.2 783.7 ± 92.3 Hγ 2333.6 ± 80.3 821.8 ± 21.8 2492.3 ± 1704.7 919.9 ± 70.4 He II 2626.2 ± 593.7 1124.7 ± 127.7 4016.7 ± 912.9 1605.6 ± 157.8 Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30303, USA; [email protected] IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation. We thank the referee for thoughtful comments that improved the presentation of this paper. MCB gratefully acknowledges support from the NSF through CAREER grant AST-1253702. This research is based on observations obtained with the Apache Point Observatory 3.5-meter telescope, which is owned and operated by the Astrophysical Research Consortium. We heartily thank the staff at APO for all their help with this program. This research has made use of the AAVSO Photometric All-Sky Survey (APASS), funded by the Robert Martin Ayers Sciences Fund. 
This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration, and the SIMBAD database, operated at CDS, Strasbourg, France.
[]
[ "Stable chimeras of non-locally coupled Kuramoto-Sakaguchi oscillators in a finite array" ]
[ "Seungjae Lee \nDepartment of Physics\nChonbuk National University\n54896JeonjuKorea\n", "Young Sul Cho \nDepartment of Physics\nChonbuk National University\n54896JeonjuKorea\n\nResearch Institute of Physics and Chemistry\nChonbuk National University\n54896JeonjuKorea\n" ]
[ "Department of Physics\nChonbuk National University\n54896JeonjuKorea", "Department of Physics\nChonbuk National University\n54896JeonjuKorea", "Research Institute of Physics and Chemistry\nChonbuk National University\n54896JeonjuKorea" ]
[]
We consider chimera states of coupled identical phase oscillators where some oscillators are phase synchronized while others are desynchronized. It is known that chimera states of non-locally coupled Kuramoto-Sakaguchi oscillators in arrays of finite size are chaotic transients when the phase lag parameter α ∈ (0, π/2); after a transient time, all the oscillators are phase synchronized, with the transient time increasing exponentially with the number of oscillators. In this work, we consider a small array of six non-locally coupled oscillators with the phase lag parameter α ∈ (π/2, π) in which the complete phase synchronization of the oscillators is unstable. Under these circumstances, we observe a chimera state spontaneously formed by the partition of oscillators into two independently synchronizable clusters of both stable and unstable synchronous states. We provide numerical evidence supporting that the instantaneous frequencies of the oscillators of the chimera state are periodic functions of time with a common period, and as a result, the chimera state is stable but not long-lived transient. We also measure the basin stability of the chimera state and show that it can be observed for random initial conditions when α is slightly larger than π/2. A chimera state is the partition of coupled indistinguishable oscillators into two subsets with distinct behaviors (coherent and incoherent). It has been shown that a stable chimera of non-locally coupled Kuramoto-Sakaguchi oscillators in arrays with the phase lag parameter α ∈ (0, π/2) exists in the thermodynamic limit. However, the chimera state becomes unstable as the number of oscillators becomes finite, and as a result, it collapses to complete phase synchronization after a certain transient time. In this paper, we numerically show that a stable finite-sized chimera state exists if complete phase synchronization is avoided by taking α ∈ (π/2, π).
10.1007/s40042-021-00068-4
[ "https://arxiv.org/pdf/1911.12492v1.pdf" ]
208,513,072
1911.12492
733cb6f428ac5479fdcd5c1bb6f9824d01d532a9
Stable chimeras of non-locally coupled Kuramoto-Sakaguchi oscillators in a finite array

Seungjae Lee, Department of Physics, Chonbuk National University, Jeonju 54896, Korea
Young Sul Cho, Department of Physics and Research Institute of Physics and Chemistry, Chonbuk National University, Jeonju 54896, Korea

(Dated: 2 December 2019)

We consider chimera states of coupled identical phase oscillators where some oscillators are phase synchronized while others are desynchronized. It is known that chimera states of non-locally coupled Kuramoto-Sakaguchi oscillators in arrays of finite size are chaotic transients when the phase lag parameter α ∈ (0, π/2); after a transient time, all the oscillators are phase synchronized, with the transient time increasing exponentially with the number of oscillators. In this work, we consider a small array of six non-locally coupled oscillators with the phase lag parameter α ∈ (π/2, π), in which the complete phase synchronization of the oscillators is unstable. Under these circumstances, we observe a chimera state spontaneously formed by the partition of the oscillators into two independently synchronizable clusters of both stable and unstable synchronous states. We provide numerical evidence supporting that the instantaneous frequencies of the oscillators of the chimera state are periodic functions of time with a common period, and as a result the chimera state is stable rather than a long-lived transient. We also measure the basin stability of the chimera state and show that it can be observed for random initial conditions when α is slightly larger than π/2.
A chimera state is the partition of coupled indistinguishable oscillators into two subsets with distinct behaviors (coherent and incoherent). It has been shown that a stable chimera of non-locally coupled Kuramoto-Sakaguchi oscillators in arrays with the phase lag parameter α ∈ (0, π/2) exists in the thermodynamic limit. However, the chimera state becomes unstable when the number of oscillators is finite, and as a result it collapses to complete phase synchronization after a certain transient time. In this paper, we numerically show that a stable finite-sized chimera state exists if complete phase synchronization is avoided by taking α ∈ (π/2, π).

I. INTRODUCTION

The chimera state, a phenomenon where coupled identical oscillators are partitioned into coherent and incoherent subsets [1,2], has been widely studied both theoretically [3-21] and experimentally [22-33] using various definitions of coherence and incoherence [34]. The first observation of a chimera state was in arrays of non-locally coupled Ginzburg-Landau oscillators [3]. In that state, the oscillators in an array are partitioned into two domains: one composed of phase-locked (coherent) oscillators, and one composed of drifting (incoherent) oscillators. To understand the phenomenon analytically, non-locally coupled Kuramoto-Sakaguchi oscillators [35] in arrays with the phase lag parameter α ∈ (0, π/2) have been employed, with which it has been shown that a stable chimera state exists in the limit of an infinite number of oscillators N → ∞ [4,5,19]. However, it was later reported that the chimera state becomes a chaotic transient for finite N [7,8,17], because the complete phase synchronization of all the oscillators is stable in the range 0 < α < π/2, such that the chimera state collapses to complete phase synchronization after a transient time.

a) Electronic mail: [email protected]
Here, the transient time increases exponentially with N [8,20,27], which is consistent with the analytical result that the chimera state is stable in the limit N → ∞ [4,5,19].

In this paper, we consider an array of six non-locally coupled Kuramoto-Sakaguchi oscillators with the phase lag parameter α ∈ (π/2, π), where complete phase synchronization is unstable and thus avoided. With this setup, we numerically observe a chimera state in which two oscillators are phase synchronized (coherent) while the other four oscillators are desynchronized (incoherent). Here, the phase synchronization of the two oscillators is guaranteed because they receive the same input from the other four oscillators by permutation symmetry [16,36-38]. Moreover, we show numerically that all oscillators behave periodically with a common period, and as a result the four incoherent oscillators maintain their desynchronization, so the chimera state is stable rather than a long-lived transient. We note that chimera states with α ∈ (0, π/2) would collapse rapidly for such a small number of oscillators (N = 6) [8,20,27]. There have been several approaches to finding stable chimeras with finite N by changing the oscillators and coupling structures [9-11,13-16,29]. Our approach in this paper claims that the avoidance of complete phase synchronization is the key to observing a stable chimera state composed of a finite number of oscillators [16,18,21,30].

The rest of this paper is organized as follows. In Sec. II, we describe a dynamical system in which we observe a chimera state and identify the underlying mechanism of its formation. In Sec. III, we present numerical evidence supporting that the instantaneous frequencies of the oscillators of the chimera state are periodic functions of time with a common period, and as a result the chimera state is stable but not a long-lived transient. In Sec. IV, we measure the basin stability [39] of the chimera state and other possible states in the system, and in Sec. V we discuss the chimera state from the perspective of frequency synchronization and show that it is a weak chimera state [9-12].

II. OBSERVATION OF A CHIMERA STATE

A. Kuramoto-Sakaguchi oscillators in a given network

We consider the Kuramoto-Sakaguchi model of phase oscillators [35]. In this model, the time derivative of the phase of each oscillator in a network is given by

\dot{\phi}_i(t) = \omega_i + K \sum_{j=1}^{N} A_{ij} \sin(\phi_j(t) - \phi_i(t) + \alpha)    (1)

for global coupling strength K > 0 and phase lag parameter α ∈ (0, π), where φ_i ∈ [0, 2π) (i = 1, ..., N) is the phase of the i-th oscillator and A_{ij} is an entry of the N × N adjacency matrix A of the network. We let all oscillators be identical, so that they have the same natural frequency ω_i = ω for all i. If we use a rotating reference frame φ_i → φ_i + ωt for all i and the time scaling t → t/K, Eq. (1) takes the form

\dot{\phi}_i(t) = \sum_{j=1}^{N} A_{ij} \sin(\phi_j(t) - \phi_i(t) + \alpha).    (2)

To observe a chimera state in a finite array of non-locally coupled identical oscillators, we use a network of N = 6, as depicted in Fig. 1(a), where each oscillator is coupled with its neighbors within distance two on the ring. In this paper, we use Eq. (2) with the A_{ij} of this network to find the chimera state.

B. Partition of network oscillators into two independently synchronizable clusters

The six oscillators in Fig. 1(a) are partitioned into two clusters C_1 = {1, 4} and C_2 = {2, 3, 5, 6}. We denote the synchronous phase of the first cluster by s_1 and that of the second cluster by s_2. Then, the time derivatives of s_1 and s_2 are respectively given by

\dot{s}_1(t) = \sin(\phi_2(t) - s_1(t) + \alpha) + \sin(\phi_3(t) - s_1(t) + \alpha) + \sin(\phi_5(t) - s_1(t) + \alpha) + \sin(\phi_6(t) - s_1(t) + \alpha),
\dot{s}_2(t) = 2\sin(\alpha) + \sin(\phi_1(t) - s_2(t) + \alpha) + \sin(\phi_4(t) - s_2(t) + \alpha).    (3)

Therefore, the synchronous phase of each cluster evolves following Eq. (3), meaning that each cluster can be synchronous irrespective of the oscillator phases of the other cluster.

C. Observation of a chimera state where only one cluster is synchronized

A chimera state of synchronized C_1 and desynchronized C_2 is discovered using the following procedure. (i) We avoid the complete phase synchronization of all six oscillators by using α ∈ (π/2, π), in which complete phase synchronization is unstable. (ii) In this range of α, we integrate the quotient network dynamics of Eq. (2) for the two synchronous clusters C_1, C_2 and show that the synchronous state of C_1 is stable whereas that of C_2 is unstable along the trajectory of the two synchronous clusters. (iii) Finally, we observe the chimera state in this range of α by integrating the governing equation, Eq. (2), numerically for random initial phases.

We consider the quotient network dynamics of Eq. (2) for the two synchronous clusters C_1, C_2 given by

\dot{s}_1(t) = 4\sin(s_2(t) - s_1(t) + \alpha),
\dot{s}_2(t) = 2\sin(s_1(t) - s_2(t) + \alpha) + 2\sin(\alpha),    (4)

where s_1, s_2 are the phases of the synchronous clusters C_1, C_2, respectively (i.e., s_1 = φ_1 = φ_4 and s_2 = φ_2 = φ_3 = φ_5 = φ_6). A variational equation of Eq. (4) along the trajectory of complete phase synchronization s(t) = s_1(t) = s_2(t) is given by

\dot{\eta}(t) = -6\cos(\alpha)\,\eta(t)    (5)

for s_1(t) = s(t) - 2η(t) and s_2(t) = s(t) + η(t). We find that η(t) diverges for π/2 < α < π, such that complete phase synchronization is unstable and therefore avoided.

[FIG. 2, panels (a-e):] (a-c) Numerical data to support that (a) φ̇_1 (solid line) and φ̇_4 (dotted line), (b) φ̇_2, and (c) φ̇_5 are periodic functions with period T. Comparison between the left and right panels in each row shows that the same pattern of φ̇_i(t) (i = 1, 2, 4, 5) during period T appears after 10^4 cycles. (d,e) Numerical data to support that (d) φ̇_3 and (e) φ̇_6 are periodic functions with period 2T. Comparison between the left and right panels in each row shows that the same pattern of φ̇_i(t) (i = 3, 6) during period 2T appears after 5 × 10^3 cycles.
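The setup of Secs. II.A and II.B can be checked directly in code. Below is a minimal Python sketch (ours, not from the paper) that builds the distance-two ring adjacency matrix for N = 6, integrates Eq. (2) with a fourth-order Runge-Kutta step, and verifies that the cluster C_1 = {1, 4} can stay synchronized regardless of the other phases: oscillators 1 and 4 share the neighbor set {2, 3, 5, 6}, so the subspace φ_1 = φ_4 is dynamically invariant. The step size dt = 0.01, α = 1.58, and the initial phases are illustrative choices.

```python
import math

N, alpha = 6, 1.58  # six oscillators, phase lag slightly above pi/2

# Adjacency matrix of the ring with coupling range 2 (Fig. 1(a)):
# oscillator i is linked to i +/- 1 and i +/- 2 (mod N).
A = [[1 if min((i - j) % N, (j - i) % N) in (1, 2) else 0
      for j in range(N)] for i in range(N)]

def f(phi):
    """Right-hand side of Eq. (2)."""
    return [sum(A[i][j] * math.sin(phi[j] - phi[i] + alpha)
                for j in range(N)) for i in range(N)]

def rk4_step(phi, dt):
    k1 = f(phi)
    k2 = f([p + 0.5 * dt * k for p, k in zip(phi, k1)])
    k3 = f([p + 0.5 * dt * k for p, k in zip(phi, k2)])
    k4 = f([p + dt * k for p, k in zip(phi, k3)])
    return [p + dt * (a + 2 * b + 2 * c + d) / 6
            for p, a, b, c, d in zip(phi, k1, k2, k3, k4)]

# Oscillators 1 and 4 (indices 0 and 3) receive identical input whenever
# phi_1 = phi_4, so a trajectory starting on that subspace never leaves it.
phi = [0.3, 5.1, 2.7, 0.3, 4.2, 1.9]  # phi_1 = phi_4, the rest arbitrary
for _ in range(5000):                  # integrate up to t = 50
    phi = rk4_step(phi, 0.01)
print(abs(phi[0] - phi[3]))            # stays zero up to round-off
```

The same symmetry argument underlies Eq. (3): the sums driving oscillators 1 and 4 contain exactly the same four terms.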
[FIG. 2, panels (f,g):] (f) (φ̇_1, φ̇_3 − φ̇_6) and (g) (φ̇_1, φ̇_2 − φ̇_5) for the φ̇_i in the left panels of (a-e). (φ̇_1, φ̇_3 − φ̇_6) moves around a fixed path two times with period 2T, whereas (φ̇_1, φ̇_2 − φ̇_5) moves around a fixed path four times with period T. Arrows indicate the direction of motion. These results support that the least common multiple of the periods of all φ̇_i is indeed 2T.

Accordingly, the phases of the two synchronous clusters remain distinct (s_1(t) ≠ s_2(t)) in the range π/2 < α < π.

Along the trajectory (s_1(t), s_2(t)) of Eq. (4) for π/2 < α < π, we show that the synchronous state of C_1 is stable whereas that of C_2 is unstable. For the deviation of each phase δφ_i = φ_i − s_m for i ∈ C_m (m = 1, 2), we consider perturbations transverse to the synchronization manifold of each cluster. Specifically, we consider the perturbation η^(1)_κ (κ = 2) for C_1 and the perturbations η^(2)_κ (κ = 2, 3, 4) for C_2, given by η^(1)_2 = (δφ_1 − δφ_4)/√2, η^(2)_2 = (−δφ_2 + δφ_3 − δφ_5 + δφ_6)/2, η^(2)_3 = (δφ_2 − δφ_5)/√2, and η^(2)_4 = (δφ_3 − δφ_6)/√2, which evolve as

\dot{\eta}^{(1)}_2(t) = -4\cos(s_2(t) - s_1(t) + \alpha)\,\eta^{(1)}_2(t),
\dot{\eta}^{(2)}_2(t) = -2\left[\cos(s_1(t) - s_2(t) + \alpha) + 2\cos(\alpha)\right]\eta^{(2)}_2(t),
\dot{\eta}^{(2)}_3(t) = -2\left[\cos(s_1(t) - s_2(t) + \alpha) + \cos(\alpha)\right]\eta^{(2)}_3(t),
\dot{\eta}^{(2)}_4(t) = -2\left[\cos(s_1(t) - s_2(t) + \alpha) + \cos(\alpha)\right]\eta^{(2)}_4(t).    (6)
We check that this analytic form of Λ (m) κ (α) agrees well with the numerical result, as shown in Fig. 1(b). For π/2 < α < π where complete phase synchronization (i.e. φ i (t) = s(t) for ∀i) is avoided, we find that Λ (1) 2 < 0, Λ (2) 2 > 0, and Λ (2) 3 = Λ(2) 4 > 0 as in Fig. 1(b) such that the synchronous state of C 1 is stable while that of C 2 is unstable along the trajectory s 1 (t) = s 2 (t) of Eq. (4). Therefore, we expect that the chimera state can be observed in the range π/2 < α < π using random initial conditions of φ i for which only the oscillators in C 1 would be synchronized spontaneously. Via numerical integration of Eq. (2), we indeed observe that the chimera state persists even after t = 10 9 for a random initial condition with α ∈ (π/2, π), as shown in the right panel of Fig. 1(a). III. NUMERICAL EVIDENCE FORφ i (t) AS PERIODIC FUNCTIONS WITH A COMMON PERIOD To show that the chimera state in the right panel of Fig. 1(a) is stable but not long-lived transient, we present numerical evidence to support the periodic behavior of the state. Specifically, we obtain numerically thatφ i (t + T ) =φ i (t) (i = 1, 2, 4, 5) andφ i (t + 2T ) =φ i (t) (i = 3, 6) for t ≥ 0 with constant T ≈ 2.02, as shown in Fig. 2. We note that the least common multiple of the periods of allφ i is 2T . This periodic behavior might be understood analytically by finding the integral of motion for this state 10 . As previously mentioned, in the chimera state, two oscillators C 1 = {1, 4} are phase synchronized while the other four oscillators C 2 = {2, 3, 5, 6} are desynchronized. A necessary condition for the phase synchronization of two oscillators {i, j} over a (finite) interval of t isφ i (t) =φ j (t) over the interval of t. Based on the numerical results in Fig. 2, no pair of oscillators {i, j} for 1 ≤ i = j ≤ 6 satisfiesφ i (t) =φ j (t) over the common period 2T except the pair C 1 = {1, 4}, which repeats every 2T . 
Therefore, the chimera state where only the pair C 1 = {1, 4} is phase synchronized would persist permanently. To investigate the linear stability of the trajectory (φ 1 (t), ..., φ 6 (t)) of the chimera state, we numerically integrate Eq. (2) to obtain a perturbed trajectory for t ≥ 0 for a given random initial perturbation of phases φ i (0) → φ i (0) + δ φ i (0), and then compare the two trajectories for t ≥ 0. For the random initial perturbation of phases δ φ i (0) used in Fig. 3, there is a finite time shift δt between time seriesφ i (t) of the two trajectories after an initial transient. Therefore, the difference between φ i (t) of the two trajectories should be finite as t → ∞ such that the trajectory of the chimera state is unstable or neutrally stable. We checked numerically that the largest nontrivial Lyapunov exponent along the trajectory of the chimera state has a small positive value close to zero, 0.00006 ± 0.00002, which supports that the trajectory of the chimera state would be neutrally stable 40 . However, as shown in Fig. 3, we find that time shift δt is the same regardless of i, which means that everyφ i of the perturbed trajectory behaves the same as that of the trajectory of the chimera state for time shift t → t − δt. We also observe a constant time shift of allφ i with varying δt depending on the initial perturbation of phases δ φ i (0). This time translation invariance ofφ i (t) for arbitrary δt would explain why the chimera state is observable, even though the trajectory of the state is neutrally stable. Based on the numerical results, the chimera state that we observe is not chaotic, in contrast to the finite chimera state with α ∈ (0, π/2) that is chaotic before collapse to complete phase synchronization 7,8,17,28 . Recently, several stable chaotic chimera states of finite size have been suggested using different types of oscillators 11,13,16 . 
Along these lines, we may find stable chaotic chimera states of finite size by avoiding the complete phase synchronization of non-locally coupled Kuramoto-Sakaguchi oscillators in arrays for larger N 11,41 . IV. BASIN STABILITY OF THE CHIMERA STATE For α ∈ (π/2, π), we measure the fraction of random initial conditions (φ 1 (0), ..., φ 6 (0)) ∈ [0, 2π) 6 that arrives at the chimera state following Eq. (2). To be specific, we integrate Eq. (2) up to t = 10 4 for each initial condition, and regard the final state as the chimera state if it satisfies the following two conditions: φ 1 = φ 4 and φ i = φ j for any pairs {i, j} ∈ {1, 2, 3, 5, 6} (as well as two other equivalent conditions given by the rotational symmetry of the network), and allφ i are periodic functions of t. For the latter, we regard eacḣ φ i as a periodic function if the standard deviation of the distances between two consecutive peak points of the function during 9 × 10 3 ≤ t ≤ 10 4 is less than the step-size of t used to integrate Eq. (2) numerically. Here, we take t = 10 4 for the upper limit of integration to measure basin stability after discarding the initial transients, because the chimera state in Fig. 1(a) appeared for a time interval of integration shorter than 10 3 beginning with a random condition. We observe the chimera state with a finite probability for α < 1.64, whereas no chimera state can be observed outside of this range as shown in Fig. 4(c). This might be because the basin stability of the chimera state is exceedingly small or zero outside this range. In the entire range of π/2 < α < π, we observe two other states as plotted in Fig. 4(a) and (b). The trajectory of the state in Fig. 4(a) is (φ 1 = φ 4 = ωt + C 1 , φ 2 = φ 5 = ωt + C 1 + π, φ 3 = C 2 , φ 6 = C 2 + π), and that of the state in Fig. 4(b) is given by (φ 1 = φ 4 = ωt + C 3 , φ 2 = φ 5 = ωt + 2π/3 + C 3 , φ 3 = φ 6 = ωt + 4π/3 +C 3 ) for arbitrary constants C 1 ,C 2 ,C 3 . Here, ω = −2sin(α) is derived for both states. 
We obtain the basin stability for these two states (considering other sets of trajectories given by the rotational and reflectional symmetry of the network) as shown in Fig. 4(c), using the same upper limit of integration t = 10 4 . We note that these two states are distinct from the chimera state in the sense that they respectively include two and three synchronous clusters, in contrast to the chimera state having only one synchronous cluster.

V. DISCUSSION

In this paper, we have discussed phase synchronization φ i = φ j of oscillators i ≠ j. From the perspective of phase synchronization, we observed a chimera state in the network depicted in Fig. 1(a), where six oscillators are partitioned into a synchronous cluster C 1 = {1, 4} and an asynchronous cluster C 2 = {2, 3, 5, 6}. Previously, a study 9 considered frequency synchronization Ω i = Ω j of oscillators i ≠ j, where the frequency of each oscillator i is given by Ω i = lim t→∞ (1/t) ∫ 0 t φ̇ i (t′) dt′. From the perspective of frequency synchronization, the authors introduced the so-called weak chimera state for oscillators i, j, k, in which Ω i = Ω j and Ω i ≠ Ω k . In the invariant subspace of the three-oscillator quotient system (φ 1 = φ 4 , φ 2 = φ 6 , φ 3 = φ 5 ) of Eq. (2) with the same network, they reported a weak chimera state where Ω 2 = Ω 1 and Ω 2 ≠ Ω 3 . Such existence of weak chimera states in three-oscillator quotient systems has recently been understood analytically 10 . In the present work, we numerically measure Ω i = (1/t) ∫ 0 t φ̇ i (t′) dt′ of the chimera state in Fig. 1(a) as Ω i = −1.61081 ± 0.00001 (i = 1, 2, 4, 5) and Ω i = −0.05504 ± 0.00002 (i = 3, 6) by integrating φ̇ i up to t = 10 5 . Based on the obtained values of Ω i , we assume that the oscillators in the chimera state might be partitioned into two clusters, {1, 2, 4, 5} and {3, 6}, where the oscillators in each cluster have the same value of Ω i .
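The frequency-based partition used here — time-averaging φ̇ i and grouping oscillators with equal Ω i — can be written compactly. This is a sketch: the trapezoidal rule and the clustering tolerance are illustrative choices, not taken from the paper.

```python
def mean_frequencies(phidot_series, dt):
    """Omega_i = (1/T) * integral of phidot_i dt, via the trapezoidal rule.
    phidot_series is a list of samples; each sample is a list of phi-dot_i."""
    n = len(phidot_series[0])
    T = dt * (len(phidot_series) - 1)
    omegas = []
    for i in range(n):
        vals = [row[i] for row in phidot_series]
        integral = dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
        omegas.append(integral / T)
    return omegas

def frequency_clusters(omegas, tol=1e-3):
    """Group oscillator indices whose mean frequencies agree within tol."""
    clusters = []
    for i, w in enumerate(omegas):
        for c in clusters:
            if abs(omegas[c[0]] - w) < tol:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

With the measured values above, this reproduces the partition {1, 2, 4, 5} / {3, 6} (0-indexed in the code).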
Consequently, the chimera state would be a weak chimera state satisfying Ω 1 ≠ Ω 3 and Ω 1 = Ω 2 in the invariant subspace of this five-oscillator quotient system (φ 1 = φ 4 , φ 2 , φ 3 , φ 5 , φ 6 ). We may understand the existence of the chimera state analytically by extending the analysis in previous works 9,10 to the invariant subspace of this five-oscillator quotient system. Finally, we note that the persistence of the synchronous state of the one subset, irrespective of the asynchronous phases of the other subset in the chimera state, is related to the invariance of the adjacency matrix (symmetry) under permutations within the synchronous subset 16,36-38 . The number of permutations conserving an adjacency matrix usually increases drastically with network size 42 ; therefore, we expect that the formation of synchronous subsets in diverse chimera states in large networks can be understood from the perspective of symmetry under permutations within each subset.

FIG. 1. (a) Left: Schematic diagram of the network used in this paper. Right: φ 1 ( ), φ 4 (•), φ 2 (•), φ 3 ( ), φ 5 ( ), and φ 6 ( ) of the chimera state observed in the network. To observe this state, we integrate Eq. (2) with α = 1.58 for a random initial condition (φ 1 , ..., φ 6 ) ∈ [0, 2π) 6 at t = −10 3 to set t to zero after an initial transient. (Note that the chimera state is observed for t ≥ 0 in Figs. 2 and 3.) (b) Thick lines indicate numerically estimated transverse Lyapunov exponents Λ (m) κ for each cluster C m . To obtain these lines, we integrate Eq. (6) with Eq. (4) up to t = 10 5 for each α. To discard the initial transient, we numerically integrate Eq. (4) over −10 5 ≤ t ≤ 0 for randomly taken s m (−10 5 ) ∈ [0, 2π) (m = 1, 2) to obtain s 1 (0) and s 2 (0) for each α. Dotted lines indicate the functional form of Λ (m) κ (α) discussed in the main text.

FIG. 2. Periodicity in the time series of φ̇ i (t) of the chimera state in Fig. 1(a).
FIG. 3. Constant time shift δt of all φ̇ i (t) of the chimera state in Fig. 1(a) against initial phase perturbation. (a) The same φ̇ 1 , φ̇ 4 of the chimera state as in Fig. 2(a), and φ̇ 1 (thick dotted line), φ̇ 4 (thin solid line) of the trajectory perturbed at t = 0. (b-e) The same φ̇ i (i = 2, 3, 5, 6) of the chimera state as in Fig. 2(b-e), and φ̇ i (dotted line) of the trajectory perturbed at t = 0. (a-e) The initial phase perturbation of each oscillator is given by random numbers δφ i (0) ∈ [−1, 1]. In the right panels, each φ̇ i of the perturbed trajectory is shifted forward by δt ≈ 0.89 constantly for all i compared to those of the chimera state. (f,g) (f) (φ̇ 1 , φ̇ 3 − φ̇ 6 ) and (g) (φ̇ 1 , φ̇ 2 − φ̇ 5 ) of the chimera state (solid line) and the perturbed trajectory (dotted line) for φ̇ i in the right panels of (a-e). On each plane, both trajectories move around the same path, which supports a constant time shift δt of all φ̇ i .

FIG. 4. Basin stability of the chimera state and other states. φ 1 ( ), φ 2 (•), φ 3 ( ), φ 4 (•), φ 5 ( ), and φ 6 ( ) of (a) a state composed of two synchronous clusters ({1, 4}, {2, 5}) and an asynchronous cluster ({3, 6}), and (b) a state composed of three synchronous clusters ({1, 4}, {2, 5}, {3, 6}). (c) Basin stability of the chimera state (•) and the two states in (a) ( ) and (b) ( ) versus α. For each value of α, we use 10 4 random initial conditions. The vertical dashed line at 1.635 indicates where the chimera state is no longer observed in the range of α to the right of the line. For each value of α, only the symbols of the states with nonzero basin stability are marked. For α = 1.575, we observe states other than the chimera state and the two states in (a) and (b) ( ).

ACKNOWLEDGMENTS

This work was supported by National Research Foundation of Korea (NRF) Grant No. 2017R1C1B1004292.

REFERENCES

1. M. J. Panaggio and D. M. Abrams, "Chimera states: coexistence of coherence and incoherence in networks of coupled oscillators," Nonlinearity 28, R67 (2015).
2. O. E. Omel'chenko, "The mathematics behind chimera states," Nonlinearity 31, R121 (2018).
3. Y. Kuramoto and D. Battogtokh, "Coexistence of Coherence and Incoherence in Nonlocally Coupled Phase Oscillators," Nonlin. Phenom. Compl. Syst. 5, 380-385 (2002).
4. D. M. Abrams and S. H. Strogatz, "Chimera States for Coupled Oscillators," Phys. Rev. Lett. 93, 174102 (2004).
5. D. M. Abrams and S. H. Strogatz, "Chimera states in a ring of nonlocally coupled oscillators," Int. J. Bifurcation Chaos Appl. Sci. Eng. 16(1), 21 (2006).
6. D. M. Abrams, R. Mirollo, S. H. Strogatz, and D. A. Wiley, "Solvable Model for Chimera States of Coupled Oscillators," Phys. Rev. Lett. 101, 084103 (2008).
7. O. E. Omel'chenko, M. Wolfrum, and Y. L. Maistrenko, "Chimera states as chaotic spatiotemporal patterns," Phys. Rev. E 81, 065201(R) (2010).
8. M. Wolfrum and O. E. Omel'chenko, "Chimera states are chaotic transients," Phys. Rev. E 84, 015201(R) (2011).
9. P. Ashwin and O. Burylko, "Weak chimeras in minimal networks of coupled phase oscillators," Chaos 25, 013106 (2015).
10. M. Thoubaan and P. Ashwin, "Existence and stability of chimera states in a minimal system of phase oscillators," Chaos 28, 103121 (2018).
11. C. Bick and P. Ashwin, "Chaotic weak chimeras and their persistence in coupled populations of phase oscillators," Nonlinearity 29, 1468 (2016).
12. Y. Maistrenko, S. Brezetsky, P. Jaros, R. Levchenko, and T. Kapitaniak, "Smallest chimera states," Phys. Rev. E 95, 010203(R) (2017).
13. Y. Suda and K. Okuda, "Persistent chimera states in nonlocally coupled phase oscillators," Phys. Rev. E 92, 060901(R) (2015).
14. M. J. Panaggio, D. M. Abrams, P. Ashwin, and C. R. Laing, "Chimera states in networks of phase oscillators: The case of two small populations," Phys. Rev. E 93, 012218 (2016).
15. N. I. Semenova, G. I. Strelkova, V. S. Anishchenko, and A. Zakharova, "Temporal intermittency and the lifetime of chimera states in ensembles of nonlocally coupled chaotic oscillators," Chaos 27, 061102 (2017).
16. Y. S. Cho, T. Nishikawa, and A. E. Motter, "Stable Chimeras and Independently Synchronizable Clusters," Phys. Rev. Lett. 119, 084101 (2017).
17. M. Wolfrum, O. E. Omel'chenko, S. Yanchuk, and Y. L. Maistrenko, "Spectral properties of chimera states," Chaos 21, 013112 (2011).
18. G. Bordyugov, A. Pikovsky, and M. Rosenblum, "Self-emerging and turbulent chimeras in oscillator chains," Phys. Rev. E 82, 035205(R) (2010).
19. O. E. Omel'chenko, "Coherence-incoherence patterns in a ring of nonlocally coupled phase oscillators," Nonlinearity 26, 2469 (2013).
20. A. Mihara and R. O. Medrano-T, "Stability in the Kuramoto-Sakaguchi model for finite networks of identical oscillators," Nonlinear Dyn. 98, 539 (2019).
21. I. Omelchenko, Y. Maistrenko, P. Hövel, and E. Schöll, "Loss of Coherence in Dynamical Networks: Spatial Chaos and Chimera States," Phys. Rev. Lett. 106, 234102 (2011).
22. A. M. Hagerstrom, T. E. Murphy, R. Roy, P. Hövel, I. Omelchenko, and E. Schöll, "Experimental observation of chimeras in coupled-map lattices," Nat. Phys. 8, 658 (2012).
23. M. R. Tinsley, S. Nkomo, and K. Showalter, "Chimera and phase-cluster states in populations of coupled chemical oscillators," Nat. Phys. 8, 662 (2012).
24. E. A. Martens, S. Thutupalli, A. Fourrière, and O. Hallatschek, "Chimera states in mechanical oscillator networks," Proc. Natl. Acad. Sci. U.S.A. 110, 10563 (2013).
25. C. Meena, K. Murali, and S. Sinha, "Chimera States in Star Networks," Int. J. Bifurcation Chaos 26, 1630023 (2016).
26. J. D. Hart, K. Bansal, T. E. Murphy, and R. Roy, "Experimental observation of chimera and cluster states in a minimal globally coupled network," Chaos 26, 094801 (2016).
27. D. P. Rosin, D. Rontani, N. D. Haynes, E. Schöll, and D. J. Gauthier, "Transient scaling and resurgence of chimera states in networks of Boolean phase oscillators," Phys. Rev. E 90, 030902(R) (2014).
28. A. Banerjee and D. Sikder, "Transient chaos generates small chimeras," Phys. Rev. E 98, 032220 (2018).
29. F. Böhm, A. Zakharova, E. Schöll, and K. Lüdge, "Amplitude-phase coupling drives chimera states in globally coupled laser networks," Phys. Rev. E 91, 040901(R) (2015).
30. A. Röhm, F. Böhm, and K. Lüdge, "Small chimera states without multistability in a globally delay-coupled network of four lasers," Phys. Rev. E 94, 042204 (2016).
31. L. Larger, B. Penkovsky, and Y. Maistrenko, "Laser chimeras as a paradigm for multistable patterns in complex systems," Nat. Commun. 6, 7752 (2015).
32. J. Wojewoda, K. Czolczynski, Y. Maistrenko, and T. Kapitaniak, "The smallest chimera state for coupled pendula," Sci. Rep. 6, 34329 (2016).
33. M. H. Matheny et al., "Exotic states in a simple network of nanoelectromechanical oscillators," Science 363, eaav7932 (2019).
34. F. P. Kemeth, S. W. Haugland, L. Schmidt, I. G. Kevrekidis, and K. Krischer, "A classification scheme for chimera states," Chaos 26, 094815 (2016).
35. H. Sakaguchi and Y. Kuramoto, "A Soluble Active Rotator Model Showing Phase Transitions via Mutual Entrainment," Prog. Theor. Phys. 76, 576 (1986).
36. V. Nicosia, M. Valencia, M. Chavez, A. Díaz-Guilera, and V. Latora, "Remote Synchronization Reveals Network Symmetries and Functional Modules," Phys. Rev. Lett. 110, 174102 (2013).
37. L. M. Pecora, F. Sorrentino, A. M. Hagerstrom, T. E. Murphy, and R. Roy, "Cluster synchronization and isolated desynchronization in complex networks with symmetries," Nat. Commun. 5, 4079 (2014).
38. F. Sorrentino, L. M. Pecora, A. M. Hagerstrom, T. E. Murphy, and R. Roy, "Complete characterization of the stability of cluster synchronization in complex dynamical networks," Sci. Adv. 2, e1501737 (2016).
39. P. J. Menck, J. Heitzig, N. Marwan, and J. Kurths, "How basin stability complements the linear-stability paradigm," Nat. Phys. 9, 89 (2013).
40. K. Höhlein, F. P. Kemeth, and K. Krischer, "Lyapunov spectra and collective modes of chimera states in globally coupled Stuart-Landau oscillators," Phys. Rev. E 100, 022217 (2019).
41. C. Bick, M. Timme, D. Paulikat, D. Rathlev, and P. Ashwin, "Chaos in Symmetric Phase Oscillator Networks," Phys. Rev. Lett. 107, 244101 (2011).
42. B. D. MacArthur, R. J. Sánchez-García, and J. W. Anderson, "Symmetry in complex networks," Discrete Appl. Math. 156, 3525 (2008).
DOI: 10.1140/epjp/s13360-021-02323-w
arXiv:2112.13019v1 [physics.ins-det] (https://arxiv.org/pdf/2112.13019v1.pdf)
Tracking and Vertex detectors at FCC-ee

Nicola Bacchetta (INFN-Padova, Padova, Italy), Paula Collins (CERN, EP Department, Geneva, Switzerland), and Petra Riedler (CERN, EP Department, Geneva, Switzerland)

EPJ manuscript No. (will be inserted by the editor). Received: December 28, 2021 / Revised version: December 28, 2021

Abstract. The combined vertexing and tracking performance of the innermost part of the FCC-ee experiments must deliver outstanding precision for measurement of the track momentum, together with an impact parameter resolution exceeding by at least a factor five that typically achieved at LHC experiments. Furthermore, precision measurements require stability and fiducial accuracy at a level which is unprecedented in collider experiments. For the innermost vertex layers these goals translate into a target hit resolution of approximately 3 µm together with a material budget of around 0.2% of a radiation length per layer. Typically this performance might be provided by silicon-based tracking, together with a careful choice of a low-mass cooling technology and a stable, low-mass mechanical structure capable of providing measurements with a low enough systematic error to match the tremendous statistics expected, particularly for the run around the Z resonance. At FCC-ee, the magnetic field will be limited to approximately 2 T, in order to contain the vertical emittance at the Z pole, and a tracking volume up to relatively large radius is needed. The technological solution could be silicon or gaseous based tracking, in both cases with the focus on optimising the material budget, and particle identification capability would be an advantage. Depending on the global design, an additional silicon tracking layer could be added at the outer radius of the tracker to provide a final precise point contributing to the momentum or possibly time-of-flight measurement. Current developments in monolithic and hybrid silicon technology, as well as advanced gaseous tracking developments, provide an encouraging road map towards the FCC-ee detector. The current state of the art and potential extensions are discussed, and a generic call for technology which could have a significant impact on the performance of an FCC-ee tracking and vertexing detector is outlined.

1 Introduction: tracking requirements for FCC-ee

The tracking volume which makes up the innermost part of any FCC-ee detector must be capable of delivering outstanding performance across the full acceptance, down to approximately 120 mrad, and the full momentum range, typically with full efficiencies down to 300 MeV/c and 98% or better for muons down to 100 MeV/c transverse momentum. An overview of proposed detector layouts and performance requirements can be found in the FCC-ee Conceptual Design Report (CDR) [1] and in this issue [2]. A driving factor in the design is that the magnetic field will be limited to approximately 2 T, in order to contain the vertical emittance at the Z pole, and a tracking volume up to relatively large radius will therefore be needed. It must be engineered in a way which results in minimum material in front of the external detectors and a stable structure which is capable of providing measurements with a low enough systematic error to match the tremendous statistics expected, particularly for the Z pole running.

The role of the tracking system will be decisive for the FCC-ee physics goals.
Examples of physics channels which place particular demands on the vertexing and tracking are listed here:

- The clean environment in which the Higgs will be produced at FCC-ee, with 1M ZH events expected at 240 GeV [3], gives a unique opportunity to explore all decay modes, hence the importance of excellent b, c and τ tagging, placing high demands on the quality of the impact parameter resolution and secondary vertex resolution.
- Another unique feature of the clean Higgs production via "Higgsstrahlung" is the possibility of reconstructing the Higgs recoil mass against the Z boson [3]. In order to fully benefit from this, an excellent momentum resolution is required from the main tracker.
- Accessing τ properties at the so-called "TeraZ", referring to the expected data sample of 5 × 10 12 collected Z bosons, will be another major opportunity at FCC-ee [4]. Measurements which require excellent vertexing include the lifetime, mass, leptonic branching fraction, and lepton flavour violating decays such as τ → µµµ or τ → µγ. The impact of improved lifetime measurements is illustrated in Fig. 1. In addition, b-hadron decay modes with τ leptons are a crucial factor in the elucidation of flavour physics models. Examples of b-hadron decay modes which access lepton universality tests are B → τ τ and B → τ ν. B → K *0 τ + τ − is an example of a particularly interesting electroweak-penguin mode, especially in the light of current anomalies [5]. The impact of the vertex detector performance for this channel is illustrated in Fig. 2.
- The dramatic improvement in precision on the measurement of electroweak observables which can be expected at FCC-ee relies on precision tracking and flavour tagging down to low-angle acceptance. This is particularly true of the b-quark electroweak measurements A FB 0,b and R b , where statistical improvements by factors of 800 and 2000 are expected [6].
- The tracking system may also have an important role to play in particle identification, through dE/dx methods in the tracker, or with the addition of a tracking timing layer for time-of-flight measurements at the outside of the tracker (see also Section 2.2.1).

Fig. 1. Examples of measurements where the performance of the tracking system has a particular impact. The left plot shows the missing mass distribution in HZ and Z → l + l − events [7], illustrating the difference in resolution between an ILD-like detector (red line) and a CMS-like detector (blue line), where a main cause of the improvement is the superior tracking resolution of the ILD-like reconstruction. The right plot shows the potential access to sensitivity to the τ → eνν lepton universality test [4], with the improvement in the τ branching fraction and lifetime measurements which can be expected at FCC-ee. The tracking system contributes significantly to this expected improvement, in particular with the space point precision, low radius, and low material expected in the vertex detector.

In order to address these challenges, the tracking system should satisfy the following requirements:

- Acceptance: The FCC-ee will operate with low-emittance beams colliding with a crossing angle of 30 mrad. These parameters define the machine-detector interface, which covers a region of 100 mrad around the detector axis. They also place a limit on the detector solenoid strength, which must be limited to 2 T [9] to avoid unwanted beam emittance blow-up. This has the consequence that the tracking volume must be larger than perhaps desirable, and the calorimeter may have to move outside a thin solenoid [10]. The vertexing and tracking performance must be maintained across this full acceptance to exploit the full physics potential.
In addition, the angular acceptance boundaries must be defined with great accuracy, of the order of 5-10 µrad, for the high-precision cross-section measurements.
- Occupancy and Readout: The detector readout and the front-end pile-up in the pixels must be able to cope with sustained physics rates of up to 100 kHz and backgrounds driven by synchrotron radiation and incoherent pair production. At 365 GeV operation, when the beams are separated by 994 ns, the occupancies in the barrel vertex detector, illustrated in Fig. 3, can reach 0.04 hits per mm 2 per bunch crossing at the innermost layer. Taking into account an expected pixel pitch of approximately 25 µm, a cluster multiplicity of 5 and a safety factor of 3 gives an occupancy of the vertex detector still below the level of 10 −3 . Operating at the Z, the backgrounds are lower; however, the bunch separation of 20 ns combined with an expected detector time integration window of around 1 µs yields similar occupancies [11]. Unlike detector designs for the ILC [12], the operation cannot be in a power-pulsed mode.
- Impact Parameter Resolution: The target impact parameter resolution for individual tracks is 5 ⊕ 10/(p T sin 3/2 θ) µm, where p T is the track transverse momentum in GeV/c. Both the asymptotic and multiple-scattering terms in this formula are crucial for FCC-ee physics. The transverse momenta of, for instance, muons from Z decays rely on the asymptotic term, whereas tracks from, for instance, the K * for the channel illustrated in Fig. 2 have typical transverse momenta of 3-4 GeV. This resolution results in typical primary and secondary vertex resolutions (both transverse and longitudinal) of 3 and 7 µm [13]. The system must be designed to ensure that the radial dimension can be calibrated to a relative precision of a few ppm, to ensure the same relative precision on e.g. the τ lepton lifetime and other weakly decaying particles.
- Momentum Resolution: Excellent momentum resolution is required, with a target of ∆(1/p T ) ∼ 2 × 10 −5 ⊕ 1 × 10 −3 /(p T sin θ), driven by the requirements for precise recoil mass reconstruction and measurements of the Higgs mass, cross sections and branching ratios. The possibility of constraining the possible point-to-point centre-of-mass energy uncertainties using the final-state momentum distribution requires a high stability of the momentum scale [14]; this will require a precise and continuous monitoring of both the tracker alignment and of the magnetic field.
- Angular Resolution: The typical muon angular resolution of the FCC-ee and other e + e − detectors, of the order of 0.1 mrad, is sufficient to have an impact of smaller than 1 MeV on the centre-of-mass energy determination, and can be measured with di-muon events with a more-than-adequate precision over the whole acceptance [14].
- Timing measurements: Timing may be exploited in the tracking system to support PID, measurement of long-lived particles, and to aid pattern recognition. The vertex detector may be able to use the timing information to distinguish between early and late collisions from the beam bunches, exploiting the crossing angle. This would allow a check of the beam-beam systematic uncertainty and a check on √s. The possibility may exist to run a chromatisation scheme at the Higgs [15] to scan the Higgs resonance within a single run. For these kinds of physics goals the target track timing measurement would be of the order of 6 ps.

Fig. 2. B → K *0 τ + τ − candidates. The leftmost plot shows the expected signal for a vertex detector with an ILD-like performance. The middle and right plots show the improvement that can be gained by artificially improving the vertex resolution of the vertex detector by a factor 2 (middle) and 4 (right) [8].
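The quoted requirement figures can be cross-checked with a few lines of arithmetic. This is a sketch: the quadrature ("⊕") form of the resolution targets and the sin 3/2 θ multiple-scattering power are the conventional parameterizations assumed here, and the occupancy estimate simply multiplies the stated hit rate, pixel area, cluster multiplicity and safety factor.

```python
import math

def quad(a, b):
    # "⊕": addition in quadrature
    return math.hypot(a, b)

def sigma_d0_um(pt_gev, theta, a=5.0, b=10.0):
    # target impact parameter resolution: a ⊕ b/(pT sin^(3/2) θ), in µm;
    # the 3/2 power is the usual multiple-scattering parameterization (assumed)
    return quad(a, b / (pt_gev * math.sin(theta) ** 1.5))

def sigma_inv_pt(pt_gev, theta, A=2e-5, B=1e-3):
    # target momentum resolution: Δ(1/pT) ≈ A ⊕ B/(pT sin θ), in GeV^-1
    return quad(A, B / (pt_gev * math.sin(theta)))

def pixel_occupancy(rate_mm2_bx, pitch_um, cluster_mult=5, safety=3):
    # hits/mm^2/BX × pixel area × cluster multiplicity × safety factor
    return rate_mm2_bx * (pitch_um * 1e-3) ** 2 * cluster_mult * safety
```

With the numbers in the text, pixel_occupancy(0.04, 25) gives 3.75 × 10 −4 , below the quoted 10 −3 level, and sigma_inv_pt(45, π/2) ≈ 3 × 10 −5 GeV −1 , consistent with the better-than-7 × 10 −5 GeV −1 performance quoted for CLD with 45 GeV muons.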
In order to achieve these goals the target material budget for the innermost layer of the vertex detector must be 0.2% X₀ or better, with a target of less than 1% for the whole vertex detector, and the individual hit resolution must be of the order of 3 µm. The tracking power budget, which typically targets 40 mW/cm² or below, and the cooling mechanism must be optimised to support the best possible transparency. In addition, the tremendous statistics expected at FCC-ee will place unprecedented requirements on the stability, alignment and calibration procedures for such a detector. To give an example, the τ lifetime measurement, which will allow a precise test of τ-µ lepton universality, will be based on 10¹² τ pairs and will reach an expected statistical precision of 0.001 fs, corresponding to a few tens of nanometres on the flight distance. This will set stringent requirements on the offline alignment and the overall radial scale of the vertex detector.

Existing Detector Concepts

Tracking as implemented in the CLD concept

The CLD detector concept [16] is an adaptation of a CLIC-style detector design [17] to the FCC-ee environment, especially concerning the reduced value of the magnetic field, which forces the tracking region to be extended from 1.5 m to 2.1 m. Another important change is the introduction of a CO₂ evaporative cooling system, as the pulsed-power concept cannot be applied to FCC-ee. It is based on a full-silicon concept with double layers of sensors on a common supporting carbon-fibre structure for full coverage. It comprises 3 double layers of silicon pixel sensors and 3 double disks for the vertex detector, 3 double layers of short-strip sensors and 7 forward double disks for the innermost tracking volume, and a further 3 double layers and 4 double disks for the outer tracking volume.
The vertex detector features sensors with 25 × 25 µm² pixels, an effective thickness of 50 µm, an estimated total radiation length including cooling of 0.3% per single layer, and a total active area of about 0.35 m². For the tracking region the proposed sensor is 200 µm thick with 50 µm × (1 mm to 10 mm) long strips, with the innermost double disk pixelated similarly to the vertex detector. The estimated radiation length is 1% per layer for the region of the sensors, coolant and mechanical structure, and a further 2.5% for the main cooling distribution pipes, main mechanical supports and cables. The total active area is about 196 m². The total estimated tracker material budget is shown in Fig. 4. As an example of the overall performance, Fig. 5 demonstrates a momentum resolution better than 7 × 10⁻⁵ GeV⁻¹ for 45 GeV muons at normal incidence, corresponding to the required accuracy for the expected Z width measurement, and an impact-parameter resolution for isolated muon tracks at various momenta that lies well below the high-momentum limit of 5 µm at all polar angles.

Tracking as implemented in the IDEA concept

The current design of the IDEA detector concept [18] features a silicon-based vertex detector [19] surrounded by a large drift chamber. The vertex detector currently considered is based on monolithic active pixel sensor technology relying on fully depleted high-resistivity substrates together with on-pixel sparsification and a data-driven, time-stamped readout scheme. The target performance would be a resolution of a few microns with a total material of 0.15%-0.3% X₀ per layer and a power dissipation around 20 mW/cm² in order to avoid the need for active cooling. A central aspect of the IDEA detector concept is a very light central drift chamber, which should achieve lower mass than the equivalent silicon-based tracking and provide better momentum resolution over the range of interest.
A novel feature of this detector is that adding timing information to the wires creates the possibility to count the individual ionising events of the traversing track in addition to the dE/dx information. This is called the cluster counting method and provides PID over most of the momentum range. The IDEA full-stereo, high-resolution, ultra-light drift chamber is inspired by the MEG II drift chamber concept [20] and its predecessors, such as the Mu2e I-tracker [21]. The wires are laid out in a way which enmeshes the positive and negative stereo-angle orientations, giving a high ratio of field to sense wires and a high density of wires creating a more uniform equipotential surface. There are almost 400k wires in total, requiring a non-standard wiring procedure and a feed-through-less wiring system. The wire-support endplates also serve to contain the gas, allowing a reduction of material to approximately 10⁻³ X₀ for the inner cylinder and a few times 10⁻² X₀ for the endplates. The wiring technique is illustrated in Fig. 6, which shows how the wire PC-board layers in green are built up on the frame, separated by precisely machined PEEK spacers. Cluster counting exploits the fact that in He-based gas mixtures the signals from each ionisation event are spread in time over a few ns. With the help of fast read-out electronics they can be efficiently identified and counted, giving a particle identification method with a better resolution than the integrated dE/dx method.

IDEA - Silicon wrapper

The IDEA silicon wrapper offers a precise 3D track position measurement at the particles' exit point from the central drift chamber. It opens the possibility to provide an absolute reference for the calibration of the polar angle measurement, and hence to define precisely the angular acceptance.
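A minimal sketch of why the cluster counting described above outperforms an integrated dE/dx measurement: the number of primary ionisation clusters along a track is Poisson distributed, so a pure count has a relative resolution of 1/√N. The cluster density and track length below are illustrative assumptions, not IDEA design values:

```python
import math

# Illustrative inputs: He-based mixtures yield of order ten primary
# ionisation clusters per cm; assume ~1 m of measured track length.
clusters_per_cm = 12.0      # assumption, typical for He-based gas mixtures
track_length_cm = 100.0     # assumption, ~1 m of track in the chamber

n_clusters = clusters_per_cm * track_length_cm      # Poisson mean
rel_resolution = 1.0 / math.sqrt(n_clusters)        # ~3% for these inputs
```

The key point is the scaling: counting avoids the large Landau fluctuations of the deposited charge, so the resolution improves with the square root of the cluster count rather than being limited by a truncated mean.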
The wrapper encapsulates the tracker in both the barrel and forward regions. In addition, if the silicon provides timing information it can complement the PID range missing from IDEA by providing a time-of-flight detector. For instance, to cover the loss of pion/kaon discrimination around 1 GeV, a timing resolution of 0.5 ns at 2 m, just outside the drift chamber, would be sufficient to recover the performance. An improved time resolution could strengthen the PID up to 5 GeV. Such a timing measurement will also be valuable for the reconstruction of secondary vertices in the search for, or discovery of, massive long-lived particles.

Future Tracking Technologies

In the rest of this document we focus on silicon-based tracking devices and possible technological solutions which can be appropriate for an FCC-ee implementation. The IDEA concept described above is an example of a state-of-the-art gaseous tracking solution; a full discussion of gaseous alternatives lies outside the scope of this article.

Silicon based tracking devices

Silicon strips and pixels are currently the workhorses of the vertexing and tracking programmes of the LHC experiments, and the current R&D directions show excellent potential for addressing the needs of the vertex detector, optionally the tracker, and any external or timing layers of the tracking region of an FCC-ee detector. Traditionally we distinguish two major categories of pixelated silicon tracking systems, hybrid and monolithic detectors. In a hybrid system the sensor and front-end chip are optimised separately and a fine-pitch bump-bonding technology is used to connect the sensor with the readout chip. For monolithic devices the charge generation is integrated directly into the ASIC, saving the cost and complexity of the bump-bonding step and allowing extremely thin sensors produced in a commercial process, suitable for the vertex region of an FCC detector.
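The time-of-flight figures quoted for the silicon wrapper above can be checked with a short calculation. The particle masses are PDG values; the 2 m flight path and 1 GeV/c momentum follow the example in the text:

```python
import math

C_M_PER_NS = 0.299792458      # speed of light in m/ns
M_PI, M_K = 0.13957, 0.49368  # charged pion and kaon masses, GeV/c^2

def tof_ns(p_gev, mass_gev, length_m):
    """Flight time of a particle of given momentum and mass over length_m."""
    energy = math.hypot(p_gev, mass_gev)   # E = sqrt(p^2 + m^2)
    beta = p_gev / energy
    return length_m / (beta * C_M_PER_NS)

# pi/K arrival-time difference at p = 1 GeV/c over a 2 m flight path:
dt = tof_ns(1.0, M_K, 2.0) - tof_ns(1.0, M_PI, 2.0)   # ~0.7 ns
```

The separation of roughly 0.7 ns makes it plausible that a 0.5 ns timing resolution at 2 m recovers useful pion/kaon discrimination around 1 GeV, as stated in the text; at higher momenta the difference shrinks rapidly, motivating the improved time resolution mentioned for PID up to 5 GeV.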
However, new technologies such as TSVs, microbumps and wafer stacking, as well as alternative interconnection technologies, blur the distinction between purely hybrid and monolithic approaches and will offer excellent potential for optimised devices in the future. Figure 8 presents an overview of several variants of hybrid and monolithic pixel developments.

Monolithic CMOS MAPS

The recent developments of low-mass and low-power CMOS MAPS (Monolithic Active Pixel Sensors), combined with the possibility of large-area coverage, make this a very interesting technology option for FCC-ee. In recent years, commercial CMOS processes with quadruple-well technologies that allow for full CMOS circuitry inside the pixel cell, as well as high-resistivity substrate wafers that enable depleting part or all of the sensing volume, have become available and are being explored by a large community. The current state of the art in terms of installed detectors is represented by the upgrade of the ALICE ITS during LS2, which is the largest CMOS MAPS tracking detector ever built (≈ 10 m²). This tracker has been installed in the ALICE experiment and is currently being commissioned. The ALPIDE sensors are built in a commercial 180 nm imaging process and feature 27 × 29 µm² pixels. The innermost three layers of this 7-layer tracker are constructed with a material budget of 0.35% X₀ using 50 µm thin MAPS connected to an aluminium-based flex cable and mounted on a low-material-budget mechanical support and cooling structure. The ALPIDE sensors use a high-resistivity epitaxial layer as the sensing volume on top of a low-resistivity p-type substrate. By applying a moderate reverse bias (≤ 6 V), the sensing volume around the collection electrode (≈ 3 µm diameter) can be partially depleted. This is fully compatible with operation in the ALICE radiation environment, with total ionizing doses of ≈ 3 Mrad and NIEL fluences of ≈ 2 × 10¹³ 1 MeV n_eq/cm².
During Pb-Pb collisions with an interaction rate of 50 kHz all events will be read out, while during p-p collisions the readout rate will be 400 kHz. The ALPIDE chip is a monolithic pixel chip with a small collection electrode, thus optimising the analogue power consumption. Together with a sparsified asynchronous readout without distribution of the clock to the matrix, power densities of about 300 mW/cm² for the innermost layers can be achieved, meeting the ALICE readout requirements. HVCMOS sensors use commercial processes that embed NMOS and PMOS transistors in a single deep n-well that acts as a charge collection electrode [22]. This allows the substrate to be biased with a high negative voltage and the zone around the n-well to be depleted. This technology has been chosen for the Mu3e experiment [23], which has very stringent constraints on the material budget and requires the sensors to be thinned to 50 µm. The 2 × 2 cm² sensors are mounted on a low-mass flexible printed circuit carrying the services. The pixel detector is operated inside a dry helium atmosphere and cooled by helium gas flow to further reduce multiple scattering [23]. The next generation of monolithic sensors is moving in the direction of Depleted MAPS (DMAPS), driven by requirements for radiation hardness and faster readout as well as fast timing information. These sensors are built from high-resistivity substrates and aim to deplete the sensing volume by applying a few tens to several hundred volts of reverse bias. By fully depleting the sensing volume, the charges generated by a passing particle are collected by drift rather than being dominated by diffusion, which would lead to longer charge collection times and to potential charge trapping and loss in high-radiation environments. Furthermore, fast charge collection by drift also improves the signal timing information.
Several different design approaches are being studied for DMAPS, which can be coarsely divided into so-called "small electrode" and "large electrode" designs; see also Fig. 9.

Fig. 9. Schematic cross sections of a pixel cell for large (left) and small (right) electrode designs [24].

The small-electrode approach, with typical collection-electrode diameters of a few microns, seems especially promising, as it presents a much lower capacitance (a few fF as opposed to values greater than 100 fF), allows the implementation of smaller pixels in the same technology node, delivers faster signals, and offers the potential of better signal-to-noise and lower analogue power. Combined with a sparsified asynchronous readout scheme, this allows a reduction of the power density of the chip, although this depends on the hit rate. Challenges of this approach include the routing of the signals to the chip periphery and will depend on the chosen technology and its feature size. Some of these concepts have already been incorporated in ASICs such as the MALTA/Monopix chip [25], implemented in a modified TowerJazz 180 nm process and optimised for radiation hardness and speed [26], and were also included in the CLICTD sensor chip [27], optimised for extended efficiency in ultra-thin silicon trackers. The MALTA chip is now being explored using high-resistivity Cz silicon to enable a high depletion depth [28], potentially using the full wafer thickness to increase the number of charges created by a passing particle while ensuring full depletion and fast charge collection by drift. These recent developments prove that the area of sensor engineering, traditionally associated with hybrid pixel detectors, applies also to the monolithic domain. It opens up possibilities to engineer substrates and designs with improved charge-collection properties and timing information.
Moving to a smaller feature size should allow a smaller pitch as well as more room for routing and more functionality to be included in the pixel. Presently, several technology nodes with smaller feature sizes [29], [30] are being explored. Adding further information on the charge deposited in the pixel cells, and thus moving beyond a purely binary readout, will help to improve the hit resolution, while keeping in mind the available power budget. ALICE is already looking beyond the ITS towards an ultra-light inner barrel detector which could be installed in LS3. This would consist of a new beam pipe with an inner radius of 16 mm and sensing layers based on ultra-thin, wafer-scale sensors which can be curved directly to the required shape and operated with air cooling. The application of such a self-supported, curved circuit to FCC-ee would allow a decrease in average radius and would eliminate overlaps, acceptance loss and systematic effects due to varying radius or material from supports and services. It is made possible by the industrial availability of stitching, which allows multi-reticle-size ladders of up to 30 cm to be constructed in a 65 nm process, together with the ability to thin the sensors to 20-40 µm, giving ultra-thin and light detectors together with mechanical flexibility. ALICE aims to exploit this process for a curved inner detector beyond LS3, and subsequently to construct an all-silicon tracker for beyond-LS4 operation consisting of 100 m² of double-sided and curved layers. First test structures using stitching have been submitted for fabrication. While imaging sensors already use stitching to achieve large wafer-scale sensors [31], [32], the application in HEP environments is only now emerging and under study. The challenges of such a design include the connections between the chips at the module level as well as off-module. The first connections to a curved sensor have been achieved successfully using aluminium wedge wire bonding, as shown in Fig. 11.
The curved sensor, with a bending radius of 1.8 cm, is mounted on a special jig for wire bonding; even at this radius the surface presented to the bonding machine for a given pad is essentially flat and the bonding can be completed successfully. To connect the sensor over its full width of 3 cm, the jig has to be rotated so as to always present a flat bonding surface to the bonding head. The study of alternative interconnection technologies beyond the traditionally used aluminium wedge wire bonding focuses on building large-area modules with reduced and optimised interconnection schemes [29]. This includes data and power transfer from chip to chip, as implemented in the MALTA chip, as well as the study of off-module connections. ACF (Anisotropic Conductive Film) is a promising candidate here to achieve a low-mass, simplified connection between chip pads and a flexible printed circuit. Figure 12 shows ACF deposited on two MALTA chips before the flip-chip attachment of a silicon bridge that connects the data and power pads of one chip with those of the neighbouring chip [34]. The data accumulated in one chip will be transferred via the bridge contact to the neighbouring chip, merged with the data of the second chip and read out. This concept is presently being studied for extension to four MALTA chips, where the data from all chips are read out only via the last chip in the chain. This technique can also be used for serial powering, thus reducing the material budget for services. Due to the pad side length of 88 µm, the connections between chips are presently carried out using either the silicon bridge or aluminium wedge wire bonding [35]. Further studies, such as the use of RDLs (ReDistribution Layers), will allow the chip pad connectivity to be scaled to the typical pad size of flexible printed circuits, so that assembly techniques can be simplified for larger module numbers and thus larger surfaces; see the schematic sketch in Fig. 13.
1, Paula Collins 2, Petra Riedler 2: Tracking and Vertex detectors at FCC-ee

Fig. 11. The sketch illustrates the conceptual advantage of moving to large, curved sensors; the material contribution is lower and more uniform and the average radius decreases. The bottom photographs [33] show how this has been implemented for the ALPIDE R&D: on the left, the mechanical integration and mounting of thinned silicon wafers of the size of stitched sensors; on the right, a wire-bonded curved ALPIDE sensor mounted on a rotating chuck.

DEPFET, FPCCD technologies

The DEPFET and FPCCD technologies are examples of sensors with a split charge amplification and readout scheme. Each pixel is a p-channel FET on a fully depleted bulk; the electrons are stored in the internal gate, and the accumulated charge is removed after readout by a clear contact. After processing, deep areas may be removed from the wafer backside via anisotropic deep etching: the sensors are supported by the monolithically integrated silicon frame and the thickness of the sensitive area is almost a free parameter. This has allowed the DEPFET pixel detector at Belle II to be constructed with a record 0.2% X₀ per layer. The next generation of DEPFET detectors will further optimise the shape of the drain implants to improve the signal, reduce the power consumption and improve the radiation hardness [38], and, in a similar way to the MAPS developments, curved sensors might be employed. FPCCDs offer the possibility to achieve very small pixels, ≈ 5 µm square, with a sensitive layer that is fully depleted. The signal charge is transferred over a longer distance, up to a few cm, which can lead to radiation-induced charge transfer inefficiencies. While radiation-damage-induced defects cause charge loss by trapping, several techniques to compensate this effect are under study. So-called fat-zero charge injection methods, which fill the radiation-induced lattice defects, are being investigated. Furthermore, notch-CCD designs that optimise the charge transfer channel width from one pixel to another, as well as operation at low temperature, are employed to reduce the radiation-induced charge transfer inefficiencies [39].

Fig. 13. Schematic sketch of a lightweight and large-area module based on CMOS sensors [37].

3D Integration, SoI

Industrial developments are strongly focused on heterogeneous integration technologies which will allow further reductions in cost and power over the coming years. Several of these technologies, such as 3D stacking, are also being studied for future HEP applications [40,41]. The developments in this area are dependent on the availability of processes for R&D activities, but are a very promising path for future developments which will increase the functionality and performance of silicon trackers. The development of SOI (Silicon On Insulator) detectors presents a two-tiered separation of the readout electronics and the sensing layer [42]. An SOI wafer with a high-resistivity sensor part is connected to a thin CMOS chip using oxide bonding. Figure 14 shows a schematic SOI structure indicating the different tiers and the signal generation by a passing particle. This approach has been further developed using buried wells, double SOI and pinned depleted diodes to address signal cross-talk and radiation tolerance [43].

Solutions based on hybrid pixel sensors

Hybrid pixel solutions, which make up the majority of currently installed systems, offer the possibility to optimise the sensor characteristics separately while benefiting from standard commercial processes for a high degree of ASIC functionality. They are typically able to cope with very high rates and high-radiation environments. Challenges include reducing the material for the sensor, ASIC and interconnection region, and the costs and technological challenges of fine-pitch bump bonding.
The hybrid pixel solution is of particular interest for FCC-ee due to the possibility of adding sensor and ASIC timing capability, which could be exploited in external layers or in the wrapper layer to support PID capability. A candidate sensor technology for fast time-stamp applications is that of LGADs (Low-Gain Avalanche Diodes [44]), which incorporate a thin multiplication layer supplying a small gain that is sufficient to give a reliable timing signal. Such a technology has been shown to be capable of delivering a resolution of 20 ps or better, depending on the signal size and sensor thickness [45]. The concept of a timing layer using LGADs is being exploited for both the ATLAS and CMS upgrade timing layers in the endcap regions, which aim for a system-level timing resolution of 30 ps with 10-15 m² of silicon, for installation in 2023-2025 [46,47,48,49]. The LGAD technology is being improved to allow a fine-pitch pixel readout, which in traditional LGAD designs is impeded by the no-gain region between the pixels. This can be addressed with solutions such as using trenches for pad isolation (the so-called TI-LGAD [50]), moving the gain region to the other side of the sensor (the inverted LGAD, or iLGAD [51]), or implementing a resistive readout solution, the so-called AC-LGAD or RSD, which can achieve excellent position resolution even with large pixels, freeing up space in the ASIC to allow timing functionality to be added [52]. An alternative approach could be monolithic BiCMOS, exploiting the properties of SiGe transistors, where development is now beginning and where the timing performance could also be enhanced with the use of internal gain. A further possibility for timing detectors which do not incorporate internal gain is the use of silicon sensors with a three-dimensional (3D) architecture [53], where the n and p electrodes penetrate fully or partially through the silicon substrate.
This is a rapidly evolving technology which was first successfully deployed at the LHC for the ATLAS IBL [54] and is currently being evaluated for the Phase II LHC upgrades [55]. Due to the inherently short drift distance, independent of the sensor thickness, 3D sensors can be very fast and have good radiation tolerance, low operating voltages and low power dissipation. One of the challenges for the timing resolution comes from the variations in the weighting field with the hit position, and from the fill factor, which may be significant for small cell sizes and certain track inclinations. One way to tackle the weighting-field variations is with a trench design, as developed by the TIMESPOT collaboration, for which intrinsic detector resolutions of the order of 10 ps have been demonstrated in test beam, as shown in Fig. 15 (right). An implementation of timing sensors in a wrapper layer for FCC-ee, with relaxed requirements on pitch, may be able to benefit from the enhanced time resolution of traditional column 3D sensors in a configuration with multiple cells connected together [56]. The development of any sensor with timing capability must go hand in hand with the accompanying IC technology. Applications at FCC-ee can expect to benefit from the ongoing R&D into ASIC developments for HEP applications, including the LHC upgrades, as well as imaging applications. Recent developments include the Timepix4 ASIC [57], a full-size four-side-tileable chip with high-rate imaging capabilities which has been successfully developed in 65 nm CMOS technology and provides a time-stamp binning resolution of 195 ps (RMS 56 ps). The TIMESPOT demonstrator ASIC [58] is being developed in 28 nm CMOS technology and aims for the ultimate time resolution using a CSA inverter input stage.
The LHCb experiment plans a Phase II upgrade [59], due for installation after LS4, which will require a high-granularity pixel vertex detector capable of hit timestamps, operating at high speed in a high-radiation environment. Such developments provide a promising path towards an eventual hybrid pixel layer implementation at FCC-ee.

Mechanical Integration and Interconnection Technologies

Mechanics and Cooling

The cooling and mechanical design of an FCC-ee tracker poses many challenges. In order to benefit from progress towards very thin sensors, the supports and services must also reduce their mass, and the detector must benefit from an integrated design. The very high luminosities and e⁺e⁻ cross section at the Z peak imply relative experimental systematic uncertainties at the 10⁻⁵ level, required to match the statistical accuracies. Hence mechanical stability is also a crucial consideration, whether deviations come from internal effects such as vibrations or thermal effects, or from external stimuli such as cavern floor movements, earthquakes or stress from other subsystems. The global design of the FCC-ee tracker will need to take advantage of lightweight next-generation solutions for the mechanical supports and cooling. In the ideal case the active cooling circuits are completely removed and the heat is extracted by a forced airflow. In this case attention must be paid to the vibrations which can be induced in thin silicon ladders. The design must be backed by simulations and realistic mechanical measurements [61]. An example of such a study is shown in Fig. 16. This approach is currently under investigation for the ALICE ITS3 [62,63], where the stitched, curved sensors are supported by ultra-light carbon-foam supports which also act as radiators, and the whole device is contained within an external carbon exoskeleton. The engineering model is currently being tested in a wind tunnel for thermal and mechanical stability.
It may be that air cooling cannot be implemented, if for instance there are regions with denser power consumption, or if the size and complexity of the detector does not allow large quantities of air to be introduced under well-controlled temperature conditions. If a small increase in material budget is acceptable, microchannel cooling [64] represents one very promising area of development. Such a solution has already been implemented for the LHCb pixel detector upgrade, where the coolant is evaporative CO₂ circulating in tiny microchannels embedded within a cooling plate consisting of a silicon wafer. Such a system has the advantages of a low and uniform mass distribution, a high thermal efficiency, a good CTE match between substrate and sensor, and a flexible geometry allowing the coolant to be brought precisely below the regions where it is needed. A low material contribution is possible, for instance in the case of the GTK microchannel cooling plates of the NA62 experiment [65], which are thinned in the acceptance to a final contribution of 0.13% X₀. Recent technological advances in the silicon-etched microchannel solution allow the cooling channels to be integrated directly in the sensor substrate. This has been demonstrated, for instance, with the buried-channel process developed by FBK, shown on a single MALTA die to be compatible with CMOS processing [66] while preserving full functionality. Similarly, the integration of micro-channel cooling into DEPFET detectors is also under consideration [67]. For future microchannel applications a variety of additive manufacturing processes are being considered. These allow in principle a large choice of materials, such as metals, ceramics or acrylics, and have the advantage of great flexibility in the geometric forms, as well as being able to address the issue of the interconnections, where for the etched-silicon solution the connections tend to be more bulky and fragile.
A similar approach is taken by the concept of a microvascular network embedded in a carbon cold plate [68], which could be adapted to silicon-detector cooling supports. An excellent example of the integration of supports, routing and electronics in one thin all-silicon ladder detector is given by the DEPFETs of Belle II, as illustrated in Fig. 17. Whichever technological solution is chosen for a future FCC-ee detector, the way in which the sensors are integrated into modules will be critical due to the need to maintain low material throughout the tracking region. The FCC-ee vertex detector will rely on enabling technologies for module packaging, including thinning and dicing techniques, fine-pitch bump bonding, and the design of multichip modules incorporating concepts such as serial powering.

Conclusions

The tracking system of a future FCC-ee detector will require innovative technological solutions for the sensors, mechanics and readout in order to address the needs for high precision and low mass. New solutions have to be found for alignment and field stability, so that the tracker systematics match the new level of statistical accuracy. The chosen solutions must take advantage of current trends in cutting-edge technology. In particular, advances in low-mass, low-power and fine-pixel monolithic sensors may be applicable in the innermost layers, sensors with timing capability may be used in outer layers or as an outer wrapper, and the main tracking body may employ, for example, an all-silicon solution or a low-mass drift chamber. The detector will act as the interface to the machine, and the design of and constraints on the beam pipe must be closely integrated with the evolution of the detector layout.

Fig. 2. The importance of the secondary vertex resolution in driving the signal-to-background quality is demonstrated in this study of the topologically reconstructed invariant mass of B⁰d → K*⁰τ⁺τ⁻ candidates. The leftmost plot shows the expected signal for a vertex detector with an ILD-like performance; the middle and right plots show the improvement that can be gained by artificially improving the resolution of the vertex detector by a factor 2 (middle) and 4 (right) [8].

Fig. 3.
Expected occupancy in the barrel and forward regions of the vertex detector, driven by incoherent pair creation [11].

Fig. 4. Material budget distributions for the CLD detector concept.

Fig. 5. Momentum resolution and impact parameter resolution achieved for the CLD detector concept.

Fig. 6. Wiring concept for the IDEA tracker. From left to right: field/sense wire arrangement, mechanical mounting technique, photographs of achieved wiring for the MEG II drift chamber.

Fig. 7. Particle ID concept for the IDEA central tracker. On the left, an illustration of the individual ionisation events and their distinct times of arrival as measured by the readout electronics. On the right, the expected particle separation performance (solid line), compared to a dE/dx approach (dashed line), for three different particle separation hypotheses.

Fig. 8. Overview of R&D developments for hybrid and monolithic pixel detectors.

Fig. 10. Proposed exploitation of large-area 65 nm MAPS devices, including truly curved stitched sensors for the inner region, for the ALICE LS3 and LS4 upgrades.

Fig. 12. Two MALTA chips with ACF [36] (left) and schematic view of ACF-connected components [34] (right).

Fig. 14. Schematic view of an SOI structure [43].

Fig. 15. Left: comparison between simulation and a range of measurements for LGAD sensors [44]; time resolutions down to 25 ps are typically achievable with 50 µm square pixels for detector thicknesses of 45-55 µm. Right: recent time resolution results from trench-optimised 3D sensors; subtracting the electronics contribution, a time resolution of 10 ps is estimated for the 55 µm square pixels at 140 V bias [60].

Fig. 16. Left: study of the relative importance of different types of cooling for the stability of silicon ladders [69]. Right: example of the technique to integrate cooling channels directly in the substrate of an active sensor, with a photograph of a fully functional demonstrator MALTA chip and the Fe-55 source scan showing that the sensor is fully functional.

Fig. 17.
Fig. 17. DEPFET ladder for Belle II, illustrating integration of supports, routing and electronics within one all-silicon layer.

Petra Riedler: Tracking and Vertex detectors at FCC-ee

References

[1] FCC collaboration, A. Abada et al., FCC Physics Opportunities: Future Circular Collider Conceptual Design Report Volume 1, Eur. Phys. J. C 79 (2019) 474, doi:10.1140/epjc/s10052-019-6904-3.
[2] A. Blondel and P. Janot, FCC-ee overview: new opportunities create new challenges, in A future Higgs and Electroweak factory (FCC): Challenges towards discovery, EPJ+ special issue, Focus on FCC-ee, arXiv:2106.13885.
[3] J. de Blas et al., Higgs Boson studies at future particle colliders, JHEP 01 (2020) 139, doi:10.1007/JHEP01(2020)139.
[4] M. Dam, Tau-lepton Physics at the FCC-ee circular e+e− Collider, SciPost Phys. Proc. 1 (2019) 41, doi:10.21468/SciPostPhysProc.1.041.
[5] LHCb collaboration, R. Aaij et al., Test of lepton universality in beauty-quark decays, accepted by Nature Physics, arXiv:2103.11769.
[6] A. Blondel and P. Janot, "FCC-ee overview: new opportunities create new challenges", A future Higgs and Electroweak factory (FCC): Challenges towards discovery, EPJ+ special issue, Focus on FCC-ee.
[7] O. Cerri, M. de Gruttola, M. Pierini, A. Podo and G. Rolandi, Study the effect of beam energy spread and detector resolution on the search for Higgs boson decays to invisible particles at a future e+e− circular collider, Eur. Phys. J. C 77 (2017) 116, arXiv:1605.00100, doi:10.1140/epjc/s10052-017-4680-5.
[8] S. Monteil, "FCC Design Study", talk presented at the GDR-InF Workshop, February 02, 2018, https://indico.desy.de/event/28202/contributions/98367/.
[9] M. Boscolo, O. Blanco-Garcia, N. Bacchetta, E. Belli, M. Benedikt, H. Burkhardt et al., Machine detector interface for the e+e− future circular collider.
[10] V. Ilardi, H. F. P. Silva, T. Kulenkampff, A. Dudarev, P. B. de Sousa, M. Mentink et al., Ultra-Thin Solenoid and Cryostat Development for Novel Detector Magnets, IEEE Trans. Appl. Supercond. 31 (2021) 4500205, doi:10.1109/TASC.2021.3057840.
[11] A. Abada, M. Abbrescia, S. S. AbdusSalam, I. Abdyukhanov, J. Abelleira Fernandez, A. Abramov et al., FCC-ee: The lepton collider, The European Physical Journal Special Topics 228 (2019) 261-623, doi:10.1140/epjst/e2019-900045-4.
[12] H. Abramowicz et al., The International Linear Collider Technical Design Report - Volume 4: Detectors, arXiv:1306.6329.
[13] S. Monteil and G. Wilkinson, Heavy-quark opportunities and challenges at FCC-ee, Eur. Phys. J. Plus 136 (2021) 837, arXiv:2106.01259, doi:10.1140/epjp/s13360-021-01814-0.
[14] A. Blondel, P. Janot, J. Wenninger et al., Polarization and Centre-of-mass Energy Calibration at FCC-ee, arXiv:1909.12245.
[15] V. I. Telnov, Monochromatization of e+e− colliders with a large crossing angle, 2020.
[16] N. Bacchetta et al., CLD - A Detector Concept for the FCC-ee, arXiv:1911.12230.
[17] L. Linssen, A. Miyamoto, M. Stanitzki, H. Weerts et al., Physics and Detectors at CLIC: CLIC Conceptual Design Report, arXiv:1202.5940.
[18] RD-FA collaboration, M. Antonello, IDEA: A detector concept for future leptonic colliders, Nuovo Cim. C 43 (2020) 27, doi:10.1393/ncc/i2020-20027-2.
[19] L. Pancheri, J. Olave, S. Panati, A. Rivetti, F. Cossio, M. Rolo et al., A 110 nm CMOS process for fully-depleted pixel sensors, Journal of Instrumentation 14 (2019) C06016, doi:10.1088/1748-0221/14/06/c06016.
[20] A. M. Baldini, E. Baracchini, C. Bemporad, F. Berg, M. Biasotti, G. Boca et al., The design of the MEG II experiment, The European Physical Journal C 78 (2018), doi:10.1140/epjc/s10052-018-5845-6.
[21] G. Pezzullo, The Mu2e Tracker, in 39th International Conference on High Energy Physics, November 2018.
[22] I. Peric et al., High-voltage pixel detectors in commercial CMOS technologies for ATLAS, CLIC and Mu3e experiments, Nuclear Instruments and Methods in Physics Research A 731 (2013) 131-136, doi:10.1016/j.nima.2013.05.006.
[23] K. Arndt, H. Augustin, P. Baesso, N. Berger, F. Berg, C. Betancourt et al., Technical design of the phase I Mu3e experiment, Nuclear Instruments and Methods in Physics Research A 1014 (2021) 165679, doi:10.1016/j.nima.2021.165679.
[24] T. Kugathasan, Review on depleted CMOS, in Proceedings of The 27th International Workshop on Vertex Detectors - PoS(VERTEX2018), vol. 348, p. 042, 2019, doi:10.22323/1.348.0042.
[25] W. Snoeys, G. Aglieri Rinella, H. Hillemanns, T. Kugathasan, M. Mager, L. Musa, P. Riedler, F. Reidt, J. Van Hoorne, A. Fenigstein and T. Leitner, A process modification for CMOS monolithic active pixel sensors for enhanced depletion, timing performance and radiation tolerance, Nuclear Instruments and Methods in Physics Research A 871 (2017) 90-96, doi:10.1016/j.nima.2017.07.046.
[26] H. Pernegger et al., Radiation hard monolithic CMOS sensors with small electrodes for HL-LHC, Nuclear Instruments and Methods in Physics Research A 986 (2021), doi:10.1016/j.nima.2020.164381.
[27] I. Kremastiotis, R. Ballabriga, K. Dort, N. Egidos and M. Munker, CLICTD: A monolithic HR-CMOS sensor chip for the CLIC silicon tracker, 2020.
[28] H. Pernegger et al., MALTA-Cz: A radiation hard full-size monolithic CMOS sensor with small electrodes on high-resistivity Czochralski substrate, publication in preparation (2021).
[29] "Strategic R&D Programme on Technologies for Future Experiments - Annual Report", to be published, 2021.
[30] ECFA Detector R&D Roadmap Symposium of Task Force 7 Electronics and On-detector Processing, 2021.
[31] G. McMullan and A. R. Faruqi, Direct electron detectors, Methods in Enzymology 579 (2016) 1-17, doi:10.1016/bs.mie.2016.05.056.
[32] R. Turchetta, N. Guerrini and I. Sedgwick, Large Area CMOS Image Sensors, Journal of Instrumentation 6 (2011) C01099.
[33] M. Mager, "ALICE ITS upgrade for LS3", talk presented at the 13th Terascale Detector Workshop, 2021, https://indico.desy.de/event/24227/.
[34] M. Vincente Baroso et al., "Pixel detector hybridization and integration with Anisotropic Conductive Films", talk presented at the 16th Trento workshop on advanced silicon radiation detectors, 2021, https://indico.cern.ch/event/983068/contributions/4223158/.
[35] P. Riedler et al., Studies for low mass, large area monolithic silicon pixel detector modules using the MALTA CMOS pixel chip, Nuclear Instruments and Methods in Physics Research A (2021), doi:10.1016/j.nima.2020.164895.
[36] "CERN EP Detector Technologies - Annual Report", to be published, 2021.
[37] P. Riedler, "CMOS trackers - present and future developments", talk presented at the ECFA R&D Roadmap Symposium of Task Force 3 - Solid State Detectors, 2021, https://indico.cern.ch/event/999816/.
[38] M. Boronat, DEPFET pixel detector for future e−e+ experiments, Nuclear and Particle Physics Proceedings 273-275 (2016) 982-987, doi:10.1016/j.nuclphysbps.2015.09.154.
[39] S. Murai, A. Ishikawa, T. Sanuki, A. Miyamoto, Y. Sugimoto, H. Sato et al., Radiation tolerance of FPCCD vertex detector for the ILC, in International Workshop on Future Linear Colliders, March 2017, arXiv:1703.05603.
[40] P. Allport, "ECFA Detector R&D Roadmap Status", talk presented at the EPS-HEP Conference, July 30, 2021, https://indico.cern.ch/event/686737/contributions/2818061/attachments/1594064/2523898/FCC 20180202 GDRInf monteil.pdf.
[41] H. Pristauz, A. Mayr and S. Behler, Disruptive developments for advanced die attach to tackle the challenges of heterogeneous integration, May 2018.
[42] T. Tsuboyama et al., R&D status of SOI-based pixel detector with 3D stacking readout, Nucl. Instrum. Meth. A 924 (2019) 422-425, doi:10.1016/j.nima.2018.08.089.
[43] Soipix collaboration, Y. Arai, SOI Monolithic pixel detector technology, PoS Vertex2016 (2017) 029, doi:10.22323/1.287.0029.
[44] H. F. Sadrozinski, A. Seiden and N. Cartiglia, 4D tracking with ultra-fast silicon detectors, Rep. Prog. Phys. 81(2) (2017), doi:10.1088/1361-6633/aa94d3.
[45] G. Kramberger, V. Cindro, D. Flores, S. Hidalgo, B. Hiti, M. Manna et al., Timing performance of small cell 3D silicon detectors, Nuclear Instruments and Methods in Physics Research A 934 (2019) 26-32, doi:10.1016/j.nima.2019.04.088.
[46] CMS collaboration, J. Butler, D. Contardo, M. Klute, J. Mans and L. Silvestris, on behalf of the CMS Collaboration, CMS Phase II Upgrade Scope Document, tech. rep., CERN, Geneva, Sep. 2015.
[47] ATLAS collaboration, K. Einsweiler and L. Pontecorvo, ATLAS Phase-II Upgrade Scoping Document, tech. rep., CERN, Geneva, Sep. 2015.
[48] S. M. Mazza, A High-Granularity Timing Detector (HGTD) for the Phase-II upgrade of the ATLAS detector, JINST 14 (2019) C10028, doi:10.1088/1748-0221/14/10/C10028.
[49] CMS collaboration, M. Lazarovits, Performance of CMS Endcap Precision Timing Sensors, in Workshop of QCD and Forward Physics at the LHC, the future Electron Ion Collider and Cosmic Ray Physics, Lawrence, University of Kansas Libraries, September 2020.
[50] G. Paternoster, G. Borghi, M. Boscardin, N. Cartiglia, M. Ferrero, F. Ficorella et al., Trench-isolated low gain avalanche diodes (TI-LGADs), IEEE Electron Device Letters 41 (2020) 884-887, doi:10.1109/LED.2020.2991351.
[51] N. Cartiglia et al., Beam test results of a 16 ps timing system based on ultra-fast silicon detectors, Nucl. Instrum. Meth. A 850 (2017) 83-88, arXiv:1608.08681, doi:10.1016/j.nima.2017.01.021.
[52] M. Tornago et al., Resistive AC-Coupled Silicon Detectors: principles of operation and first results from a combined analysis of beam test and laser data, Nucl. Instrum. Meth. A 1003 (2021) 165319, arXiv:2007.09528, doi:10.1016/j.nima.2021.165319.
[53] S. Parker, C. Kenney and J. Segal, 3D - a proposed new architecture for solid-state radiation detectors, Nuclear Instruments and Methods in Physics Research A 395 (1997) 328-343, doi:10.1016/S0168-9002(97)00694-3.
[54] C. Da Via et al., 3D silicon sensors: Design, large area production and quality assurance for the ATLAS IBL pixel detector upgrade, Nucl. Instrum. Meth. A 694 (2012) 321-330, doi:10.1016/j.nima.2012.07.058.
[55] T. Szumlak, Silicon detectors for the LHC Phase-II upgrade and beyond - RD50 status report, Nuclear Instruments and Methods in Physics Research A 958 (2020) 162187, doi:10.1016/j.nima.2019.05.028.
[56] G. Kramberger et al., Timing performance of small cell 3D silicon detectors, Nucl. Instrum. Meth. A 934 (2019) 26-32, arXiv:1901.02538, doi:10.1016/j.nima.2019.04.088.
[57] X. Llopart, J. Alozy, R. Ballabriga, M. Campbell, R. Casanova, V. Gromov, E. H. M. Heijne, T. Poikela, E. Santin, V. Sriskaran, L. Tlustos and A. Vitkovskiy, Timepix4, a large area pixel detector readout chip which can be tiled on 4 sides providing sub-200 ps timestamp binning, submitted to Journal of Instrumentation (2021).
[58] TimeSpOT collaboration, L. Piccolo et al., The first ASIC prototype of a 28 nm time-space front-end electronics for real-time tracking, PoS TWEPP2019 (2020) 022, doi:10.22323/1.370.0022.
[59] LHCb collaboration, R. Aaij et al., Expression of Interest for a Phase-II LHCb Upgrade: Opportunities in flavour physics, and beyond, in the HL-LHC era, tech. rep., CERN, Geneva, Feb. 2017.
[60] L. Anderlini, M. Aresti, A. Bizzeti, M. Boscardin, A. Cardini, G.-F. D. Betta et al., Intrinsic time resolution of 3D-trench silicon pixels for charged particle detection, Journal of Instrumentation 15 (2020) P09029, doi:10.1088/1748-0221/15/09/p09029.
[61] G. Viehhauser, Thermal management and mechanical structures for silicon detector systems, Journal of Instrumentation 10 (2015) P09001, doi:10.1088/1748-0221/10/09/p09001.
[62] ALICE collaboration, B. Abelev et al., Technical Design Report for the Upgrade of the ALICE Inner Tracking System, J. Phys. G 41 (2014) 087002, doi:10.1088/0954-3899/41/8/087002.
[63] M. Mager, Upgrade of the ALICE ITS in LS3, Proceedings of Science Vertex2019 (2019).
[64] O. Francisco, J. Buytaert, P. Collins, R. Dumps, M. John, A. Mapelli et al., Evaporative CO2 microchannel cooling for the LHCb VELO pixel upgrade, Journal of Instrumentation 10 (2015) C05014, doi:10.1088/1748-0221/10/05/C05014.
[65] G. Romagnoli, D. A. Feito, B. Brunel, A. Catinaccio, J. Degrange, A. Mapelli et al., Silicon micro-fluidic cooling for NA62 GTK pixel detectors, Microelectron. Eng. 145 (2015) 133-137, doi:10.1016/j.mee.2015.04.006.
[66] A. Mapelli, Microfabricated silicon substrates for pixel detectors assembly and thermal management a.k.a. silicon microchannel cooling plates, Nuclear Instruments and Methods in Physics Research A 958 (2020), doi:10.1016/j.nima.2019.04.096.
[67] M. Vos, Micro-channel cooling in high energy physics, PoS Vertex2016 (2017) 037, doi:10.22323/1.287.0037.
[68] S. Pety, M. Tan, A. Najafi, P. Barnett, P. Geubelle and S. White, Carbon fiber composites with 2D microvascular networks for battery cooling, International Journal of Heat and Mass Transfer 115 (2017) 513-522, doi:10.1016/j.ijheatmasstransfer.2017.07.047.
[69] L. Andricek, M. Boronat, J. Fuster, I. Garcia, P. Gomis, C. Marinas et al., Integrated cooling channels in position-sensitive silicon detectors, Journal of Instrumentation 11 (2016) P06018, doi:10.1088/1748-0221/11/06/p06018.
Title: Unconventional anisotropic superexchange in α′-NaV2O5

Authors: M. V. Eremin (1,2), D. V. Zakharov (1,2), R. M. Eremina (1,3), J. Deisenhofer (1), H.-A. Krug von Nidda (1), G. Obermeier (4), S. Horn (4), A. Loidl (1)

Affiliations:
(1) EP V, Center for Electronic Correlations and Magnetism, University of Augsburg, 86135 Augsburg, Germany
(2) Kazan State University, 420008 Kazan, Russia
(3) E. K. Zavoisky Physical Technical Institute, 420029 Kazan, Russia
(4) EP II, Institut für Physik, Universität Augsburg, D-86135 Augsburg, Germany
Abstract: The strong line broadening observed in electron spin resonance on NaV2O5 is found to originate from an unusual type of the symmetric anisotropic exchange interaction with simultaneous spin-orbit coupling on both sites. The microscopically derived anisotropic exchange constant is almost two orders of magnitude larger than the one obtained from conventional estimations. Based on this result we systematically evaluate the anisotropy of the ESR linewidth in terms of the symmetric anisotropic exchange only, and we find microscopic evidence for precursor effects of the charge ordering already below 150 K.

PACS: 75.30.Et
DOI: 10.1103/PhysRevLett.96.027209
arXiv: cond-mat/0512587
PDF: https://export.arxiv.org/pdf/cond-mat/0512587v1.pdf
Unconventional anisotropic superexchange in α′-NaV2O5
22 Dec 2005 (Dated: March 23, 2022)

The isotropic exchange constants in one-dimensional antiferromagnets are obtained by measurements of the magnetic susceptibility or by inelastic neutron scattering.
The anisotropic exchange contributions, however, are only accessible by means of electron spin resonance (ESR), because the spin-spin relaxation measured by the ESR linewidth is driven primarily by the corresponding effective local fields. Conventional theoretical estimations of anisotropic exchange parameters yield values by far too small to describe the experimental results. A prominent example for this problem is the spin-ladder α′-NaV2O5. This compound was initially identified as a spin-Peierls system [1], which triggered enormous efforts to investigate the nature of this transition. It was found that the system actually undergoes a charge-order (CO) transition at T_CO ≈ 34 K [2] from a uniform oxidation state of V4.5+ ions at high temperature [3] into a state with "zig-zag" type charge distribution [4], accompanied by spin-gap formation. Moreover, various experimental studies revealed an anomalous behavior at about 200 K, far above T_CO, which has been attributed to the existence of charge fluctuations in the system [5, 6, 7]. Previously, it was proposed that the spin relaxation in NaV2O5 is strongly affected by these charge fluctuations [8]. ESR directly probes the spin of interest and, hence, is extremely sensitive to such dynamic processes of the electronic structure. The underlying mechanism of the spin relaxation, however, still remained a matter of heavy debate [9, 10]. In this work, we identify the anisotropic exchange (AE) interaction as the dominant source of line broadening and we calculate the AE parameters on the basis of a microscopic charge-distribution picture (* Corresponding author: [email protected]). With these parameters we are able to describe the angular dependence of the ESR linewidth ∆H at temperatures above T_CO.
The resulting temperature dependence of the exchange parameters is a clear fingerprint of the increasing charge fluctuations on approaching T_CO. All details concerning the preparation and characterization of the crystals and the experimental ESR set-up have been published previously [11]. The observed ESR signal in α-NaV2O5 consists of a Lorentzian line with a g-value g ≈ 2, characteristic of a spin-only system with quenched orbital moments [11]. The linewidth increases monotonically from a value of 10 Oe at T_CO up to several hundred Oe above room temperature, with the linewidth ∆H_c for the magnetic field applied along the crystallographic c axis being about twice as large as ∆H_a and ∆H_b (along the a and b axes). At first, let us consider possible origins for the line broadening ∆H in NaV2O5. Single-ion (S = 1/2), hyperfine and spin-lattice relaxation were shown to be less important in NaV2O5 [7, 12, 13]. The anisotropic Zeeman effect is not relevant, because of nearly equivalent g tensors for all vanadium sites. Therefore, only three sources remain to account for the broadening of the ESR spectra in NaV2O5: the dipole-dipole (DD), the symmetric anisotropic-exchange (AE) and the antisymmetric Dzyaloshinsky-Moriya exchange (DM) interactions. These contributions have already been estimated and discussed in Ref. 12. The assumption that the DM interaction is the main perturbation was based on the conventional relation for the DM vector |d| ≈ (∆g/g)|J| (where g and ∆g are the g factor and its anisotropy, respectively) [14]. The ESR data could be modeled by this approach, but it was necessary to assume the presence of strong charge disproportions even at the highest temperatures to allow for an appropriate direction of the DM vector [8]. Later, Choukroun et al.
[9] questioned the dominance of the DM interaction, showing that the contribution of the DM interaction to the ESR linewidth in quantum-spin chains cannot be larger than that of the AE. But the AE itself as taken from conventional estimations is by far too small to account for the large linewidth observed in NaV2O5. Therefore, such conventional estimations have to be taken with care. A recent field-theoretical treatment of quasi-one-dimensional S = 1/2 antiferromagnetic chains came to similar conclusions [10], because the DM interaction was found to produce a divergence in the temperature dependence of the linewidth ∆H_DM ∼ T^(−2) for T ≪ J/k_B. This is in contrast to the monotonic increase of ∆H with increasing temperature in NaV2O5 [8]. Such a behavior, however, is in agreement with the theoretical expectation for a dominant AE [10]. Experimental investigations of related compounds corroborate this expectation, too [15, 16, 17, 18]. In this respect LiCuVO4 received a key role [18], because the DM interaction can be completely ruled out by its crystal symmetry.
FIG. 1: Possible paths for AE between two sites A (with ground state |η⟩, excited state |ζ⟩) and A′ (|ξ⟩ and |ϕ⟩, respectively). Solid arrows correspond to the effective hopping integrals, dashed arrows indicate the matrix elements of the spin-orbit coupling. The numeration corresponds to the sequence of the matrix elements in the fourth-order perturbation expansion (see Eq. 1).
The linewidth is dominated by the AE, because the ring-exchange geometry in the Cu-O2 chains strongly enhances the AE as compared to the conventional estimation |J_AE| ≈ (∆g/g)²|J| [14]. In the following we will provide detailed microscopic estimations of this term in NaV2O5 and show that the angular and temperature dependencies of ∆H can be completely described in terms of this relaxation mechanism only. Starting with the microscopic analysis of the AE paths, the Hamiltonian for AE between two neighbouring sites A and A′ can be written as
H_AE = S^α_A · D^{AA′}_{αβ} · S^β_{A′},
where {α, β} = {x, y, z}. Taking into account all possible virtual processes (displayed schematically in Fig. 1) between site A and site A′, we derive the following expression for D^{AA′}_{αβ} in fourth order of perturbation theory:
D^{AA′}_{αβ}(ηξ) = (λ_A λ_{A′} / 2∆_{AA′}) { (⟨η|l_α|ζ⟩/∆_{ζη}) t_{ζξ} t_{ξζ′} (⟨ζ′|l_β|η⟩/∆_{ζ′η}) + t_{ηϕ} (⟨ϕ|l_α|ξ⟩/∆_{ϕξ}) (⟨ξ|l_β|ϕ′⟩/∆_{ϕ′ξ}) t_{ϕ′ξ} + (⟨η|l_α|ζ⟩/∆_{ζη}) t_{ζξ} (⟨ξ|l_β|ϕ⟩/∆_{ϕξ}) t_{ϕη} + t_{ηϕ} (⟨ϕ|l_α|ξ⟩/∆_{ϕξ}) t_{ξζ} (⟨ζ|l_β|η⟩/∆_{ζη}) + t_{ηξ} (⟨ξ|l_α|ϕ⟩/∆_{ϕξ}) t_{ϕζ} (⟨ζ|l_β|η⟩/∆_{ζη}) + (⟨η|l_α|ζ⟩/∆_{ζη}) t_{ζϕ} (⟨ϕ|l_β|ξ⟩/∆_{ϕξ}) t_{ξη} },   (1)
where, e.g., t_{ξζ} is the effective hopping integral between the states |ξ⟩ and |ζ⟩ via intermediate oxygens, and ⟨ξ|l_α|ζ⟩ denotes the matrix element of the spin-orbit (SO) coupling H_SO = λ l_α s_α. Here we assume that the charge-transfer energy ∆_{AA′} from site A to site A′ is large compared to the crystal-field splittings ∆_cf ≡ ∆_{ζη}, ∆_{ζ′η}. The first two terms correspond to conventional AE processes [19, 20], while the others (Fig. 1(c-f)) and the general expression (1) are presented, to the best of our knowledge, for the first time. For example, in case (f) the electron at site A is excited via SO coupling from the ground state |η⟩ into the state |ζ⟩, then it is transferred to the empty state |ϕ⟩ at site A′ and interacts via SO coupling with the electron in the corresponding ground state |ξ⟩. Finally, one of the electrons hops from state |ξ⟩ to the initial state |η⟩. Focusing now on NaV2O5, we recall that the electron is distributed between two V-ions on the same rung.
Correspondingly, its ground state wave function |η⟩ is given as a superposition c_1|d_xy⟩ − c_2|d_xy⟩ of the two vanadium d-orbitals. Analogously, the ground state of the electron on the adjacent rung is given by |ξ⟩ = c′_1|d_xy⟩ − c′_2|d_xy⟩. We illustrate the corresponding d-orbitals together with the relevant bridging oxygen p-orbitals (π-bonding with hopping integral t_{ξη} = t_π) in Fig. 2(a) for the high-temperature limit, where all coefficients c_1, c_2, c′_1, c′_2 become equal to 1/√2. Note that, due to the orthogonality of the wave functions, processes (a)-(d) (Fig. 1) do not contribute to AE within one ladder in NaV2O5. Therefore, we will now concentrate on processes (e) and (f) and discuss the relevant excited states |ζ⟩ and |ϕ⟩ involved. Considering the possible excitations of the electrons via SO coupling we find that the largest contribution is obtained by the matrix element ⟨d_xy|l_z|d_x²−y²⟩ = 2i. Hence, the relevant excited states are the combinations c_1|d_x²−y²⟩ − c_2|d_x²−y²⟩ and c′_1|d_x²−y²⟩ − c′_2|d_x²−y²⟩ for the A and A′ rungs, respectively. The charge-distribution picture for the excited states (σ-bonding via oxygen p-orbitals with hopping integral t_{ζϕ} = t′_σ) is shown in Fig. 2(b). Thus, using expression (1) one can derive D_zz as
D_zz = (8 λ² t_π t′_σ / ∆²_cf ∆_{AA′}) [c*_1 c′_1 + c*_2 c′_2]².   (2)
To estimate D_zz we use the free-ion value λ = 31 meV [21], the splitting between the d_xy and the d_x²−y² states ∆_cf ≃ 0.36 eV [22, 23], and the charge-transfer energy ∆_{AA′} = 3 eV [24]. The hopping integral t′_σ cannot easily be calculated; however, one can assume t′_σ ≈ t_π = 0.17 eV [3] as a lower bound for t′_σ, and we obtain D_zz ≈ 0.6 meV in the high-temperature limit where the electron is equally distributed on each rung. This yields a characteristic linewidth ∆H ∼ 300 Oe, in very good agreement with the experimental linewidth.
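The order-of-magnitude estimate above is easy to verify numerically; a minimal sketch using only the values quoted in the text, with the occupation factor set to its high-temperature limit:

```python
# Numerical check of the D_zz estimate from Eq. (2):
# D_zz = 8*lambda^2*t_pi*t'_sigma / (Delta_cf^2 * Delta_AA') * [c1* c1' + c2* c2']^2
# All energies in eV; the values are those quoted in the text.
lam = 0.031          # spin-orbit coupling lambda = 31 meV [21]
t_pi = 0.17          # hopping integral t_pi [3]
t_sigma = 0.17       # assumed lower bound t'_sigma ~ t_pi
delta_cf = 0.36      # crystal-field splitting d_xy -> d_x2-y2 [22, 23]
delta_aa = 3.0       # charge-transfer energy Delta_AA' [24]

c = 0.5 ** 0.5       # high-temperature limit: all coefficients 1/sqrt(2)
occupation = (c * c + c * c) ** 2            # -> 1 for uniform V4.5+ rungs

d_zz = 8 * lam**2 * t_pi * t_sigma / (delta_cf**2 * delta_aa) * occupation
print(round(d_zz * 1e3, 2), "meV")           # ~0.57 meV, i.e. the ~0.6 meV quoted
```

The result confirms that the quoted parameter set reproduces D_zz ≈ 0.6 meV without any further assumptions.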
Note that our estimate is about two orders of magnitude larger than previous results [12], corroborating the importance of microscopic considerations for the estimation of AE parameters. Taking now into account possible inter-ladder exchange paths as shown in Fig. 2(c), we obtain contributions of comparable strength for the inter-ladder AE. These paths involve a 90°-exchange geometry, which has been discussed in detail in Refs. 18, 25, 26. Considering all possible inter-ladder exchange paths in the appropriate local coordinate systems [17, 27], we find a maximal component of the effective AE tensor along the crystallographic a axis. Having identified and estimated the source of the ESR line broadening in NaV2O5, we will now apply this model to the experimental data. The analysis of the angular dependencies in terms of second moments has been described previously [17, 18]. The experimental angular dependencies [28] of ∆H are shown in Fig. 3 together with the fit curves. As a result we derive the ratio of the two essential fit parameters D_intra and D_inter for the AE parameters within and between the ladders, respectively. Fig. 4 shows the temperature dependence of the ratio D_inter/D_intra together with the linewidth ratios ∆H_a/∆H_c, ∆H_b/∆H_c (note that only the ratio of the exchange parameters can be determined from the ESR linewidth at temperatures T < J/k_B, as discussed in Ref. 17). It can be clearly seen that at high temperatures (T > 150 K) the dominant contribution to the line broadening is given by the intra-ladder AE. On decreasing temperature, below 150 K, the ratio strongly increases and the inter-ladder contribution becomes dominant. This can be understood taking into account the strong dependence of the AE parameters on the coefficients c_1, c′_1, c_2, c′_2 which describe the electronic occupation of the vanadium sites. That means, e.g., for D_intra = D_zz (Eq. 2) the coefficient [c*_1 c′_1 + c*_2 c′_2] is equal to 1 for the case V4.5+ − V4.5+, and vanishes for the "zig-zag" charge order (V5+ − V4+) realized below T_CO [4]. The observed increase of the D_inter/D_intra ratio already far above T_CO indicates that precursors of the developing CO set in at about 150 K, weakening considerably the intra-ladder AE. Further evidence for the onset of charge disproportions above T_CO has been reported from the strong frequency dependence of the ESR linewidth between 34-100 K [29] and from the anomalous features observed in optical spectroscopy measurements [5, 6, 30]. Note that the uniform susceptibility shows significant deviations from the Bonner-Fisher law already at T < 200 K, too [7]. The coupling of these charge fluctuations to the lattice has been revealed by a softening of the elastic constants below 100 K detected by ultrasound experiments [31, 32] and by the shift of the phonon energy found in light-scattering measurements around 80 K [33]. Moreover, we would like to point out that similar observations in CuGeO3 have been explained in terms of lattice fluctuations existing already far above the spin-Peierls transition [17]. We believe that the proposed spin-relaxation mechanism and the microscopic estimations do not only apply to the case of NaV2O5, but may allow one to describe the spin dynamics in many transition-metal compounds. In summary, we have identified the symmetric anisotropic superexchange to be the source of the immense ESR line broadening in NaV2O5. In this microscopic picture the dominant process consists of the simultaneous virtual hopping of electrons between the ground states and excited states of vanadium ions on neighboring rungs of the ladder, involving the spin-orbit coupling on both rungs. This novel unconventional exchange process has not been considered in the discussion of ESR line broadening before.
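The role of the occupation factor [c*_1 c′_1 + c*_2 c′_2]² in Eq. (2) can be made explicit; a small sketch in which the zig-zag state is modeled by the idealized assignment c_1 = 1, c_2 = 0 on one rung and c′_1 = 0, c′_2 = 1 on the neighbouring one (an assumption chosen purely for illustration of the limiting cases named in the text):

```python
def occupation_factor(c1, c2, c1p, c2p):
    """[c1* c1' + c2* c2']^2 entering D_zz (Eq. 2); real coefficients assumed."""
    return (c1 * c1p + c2 * c2p) ** 2

s = 0.5 ** 0.5                                    # 1/sqrt(2)
uniform = occupation_factor(s, s, s, s)           # uniform V4.5+ - V4.5+ rungs
zigzag = occupation_factor(1.0, 0.0, 0.0, 1.0)    # idealized "zig-zag" V5+ - V4+
print(uniform, zigzag)  # ~1 and 0: charge order switches the intra-ladder AE off
```

This makes the mechanism quantitative: the intra-ladder AE is maximal for the uniform charge distribution and vanishes in the fully charge-ordered limit, as stated in the text.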
The corresponding AE parameter is found to be of the order of 1% of the isotropic exchange constant, resulting in a high-temperature limit of the ESR linewidth of approximately 10² Oe. On the basis of this microscopic analysis we have shown that the ESR data can be entirely described in terms of the symmetric anisotropic exchange only. The temperature dependence of the linewidth and derived exchange parameters evidences the presence of charge fluctuations in NaV2O5 up to 150 K on a microscopic level. This work was supported by the German BMBF under Contract No. VDI/EKM 13N6917, by the DFG within SFB 484 (Augsburg), by the RFBR (Grant No. 03-02-17430) and RBHE (REC-007). One of us (D. V. Z.) was supported by DAAD.
FIG. 2: Schematic pathway of AE between V ions in NaV2O5. Big spheres denote V ions, small spheres O ions. Frame (a): intra-ladder exchange between the ground state d_xy orbitals. Frame (b): intra-ladder exchange between the excited d_x²−y² states. Frame (c): possible inter-ladder exchange paths.
FIG. 3: Angular dependence of the ESR linewidth at different temperatures. Fit curves (lines) are described in the text. Upper frame: normalized to the linewidth for the magnetic field applied along the c axis. Lower frame: illustration of the contributions of intra- and inter-ladder AE to the linewidth far above (dashed line) and near T_CO (solid line).
FIG. 4: Right ordinate: temperature dependence of the linewidth ratio for the magnetic field applied along the a or b axis, normalized to ∆H_c. Left ordinate: temperature dependence of the ratio of the inter- to intra-ladder AE constants obtained from fitting the angular dependencies of ∆H.
References:
[1] M. Isobe and Y. Ueda, J. Phys. Soc. Jpn. 65, 1178 (1996).
[2] T. Ohama et al., Phys. Rev. B 59, 3299 (1999).
[3] H. Smolinski et al., Phys. Rev. Lett. 80, 5164 (1998).
[4] H. Seo and H. Fukuyama, J. Phys. Soc. Jpn. 67, 2602 (1998).
[5] A. Damascelli et al., Phys. Rev. B 61, 2535 (2000).
[6] S. Nishimoto and Y. Ohta, J. Phys. Soc. Jpn. 67, 3679 (1998); J. Phys. Soc. Jpn. 67, 4010 (1998).
[7] J. Hemberger et al., Europhys. Lett. 42, 661 (1998).
[8] M. Lohmann et al., Phys. Rev. Lett. 85, 1742 (2000).
[9] J. Choukroun, J.-L. Richard, and A. Stepanov, Phys. Rev. Lett. 87, 127207 (2001).
[10] M. Oshikawa and I. Affleck, Phys. Rev. Lett. 82, 5136 (1999); Phys. Rev. B 65, 134410 (2002).
[11] M. Lohmann et al., Solid State Comm. 104, 649 (1997).
[12] I. Yamada et al., J. Phys. Soc. Jpn. 67, 4269 (1998).
[13] A. A. Zvyagin, Phys. Rev. B 63, 172409 (2001).
[14] T. Moriya, Phys. Rev. 120, 91 (1960).
[15] S. D. Demishev et al., Europhys. Lett. 63, 446 (2003).
[16] S. A. Zvyagin et al., Phys. Rev. Lett. 95, 017207 (2005).
[17] R. M. Eremina et al., Phys. Rev. B 68, 014417 (2003).
[18] H.-A. Krug von Nidda et al., Phys. Rev. B 65, 134445 (2002).
[19] B. Bleaney and K. D. Bowers, Proc. Roy. Soc. A 214, 451 (1952).
[20] K. Yosida, Theory of Magnetism, Springer, Berlin (1996).
[21] A. Abragam and B. Bleaney, Electron Paramagnetic Resonance of Transition Ions, Clarendon, Oxford (1970).
[22] T. Ohama et al., J. Phys. Soc. Jpn. 66, 3008 (1997).
[23] V. V. Mazurenko et al., Phys. Rev. B 66, 081104 (2002).
[24] S. A. Golubchik et al., J. Phys. Soc. Jpn. 66, 4042 (1997).
[25] V. Yu. Yushankhai and R. Hayn, Europhys. Lett. 47, 116 (1999).
[26] S. Tornow, O. Entin-Wohlman, and A. Aharony, Phys. Rev. B 60, 10206 (1999).
[27] A. Bencini and D. Gatteschi, EPR of Exchange Coupled Systems, Springer, Berlin (1991).
[28] The detailed reinvestigation of a series of samples has shown that in the best crystals (which have the smallest linewidth) the additional modulation of the angular dependence of ∆H within the (bc)-plane reported in Ref. 8 is absent.
[29] H. Nojiri et al., J. Phys. Soc. Jpn. 69, 2291 (2000).
[30] A. I. Smirnov et al., Phys. Rev. B 59, 14546 (1999).
[31] H. Schenk et al., Phys. Rev. B 60, 9194 (1999).
[32] T. Goto and B. Luethi, Adv. in Phys. 52, 67 (2003).
[33] M. Fischer et al., Phys. Rev. B 60, 7284 (1999).
[]
[ "Theory of Transmission of Light by Sub-wavelength Cylindrical Holes in Metallic Films", "Theory of Transmission of Light by Sub-wavelength Cylindrical Holes in Metallic Films" ]
[ "N García \nLaboratorio de Física de Sistemas Pequeños y Nanotecnología\nConsejo Superior de Investigaciones Científicas\n144, 28006Serrano, MadridSpain\n", "Ming Bai \nLaboratorio de Física de Sistemas Pequeños y Nanotecnología\nConsejo Superior de Investigaciones Científicas\n144, 28006Serrano, MadridSpain\n" ]
[ "Laboratorio de Física de Sistemas Pequeños y Nanotecnología\nConsejo Superior de Investigaciones Científicas\n144, 28006Serrano, MadridSpain", "Laboratorio de Física de Sistemas Pequeños y Nanotecnología\nConsejo Superior de Investigaciones Científicas\n144, 28006Serrano, MadridSpain" ]
[]
This paper presents theory and finite-difference time-domain (FDTD) calculations for a single and arrays of sub-wavelength cylindrical holes in metallic films presenting large transmission. These calculations are in excellent agreement with experimental measurements. This effect has to be understood in terms of the properties exhibited by the dielectric constant of metals, which cannot be treated as ideal metals for the purpose of transmission and diffraction of light. We discuss the cases of the well-differentiated metals silver and tungsten. It is found that the effect of surface plasmons or other surface wave excitations due to a periodic set of holes or other roughness at the surface is marginal. The effect can enhance but also can depress the transmission of the arrays, as shown by theory and experiments. The peak structure observed in experiments is a consequence of the interference of the wavefronts transmitted by each hole and is determined by the surface array period, independently of the material. Without large transmission through a single hole there is no large transmission through the array. We find that in the case of Ag, which at the discussed frequencies is a metal, there are cylindrical plasmons at the wall of the hole, as reported by Economou et al. 30 years ago, that enhance the transmission. But it turns out, as will be explained, that for the case of W, which behaves as a dielectric, there is also a large transmission when compared with that of an ideal metal waveguide. To deal with this problem one has to use the measured dielectric function of the metals. We discuss thoroughly all these cases and compare with the data. PACS codes: 42.25.Bs, 42.79.Gn
10.1364/oe.14.010028
[ "https://export.arxiv.org/pdf/physics/0608237v1.pdf" ]
26,753,728
physics/0608237
87bb0a1910e20bd937729892a50e5678198d5192
Theory of Transmission of Light by Sub-wavelength Cylindrical Holes in Metallic Films N García (Laboratorio de Física de Sistemas Pequeños y Nanotecnología, Consejo Superior de Investigaciones Científicas, Serrano 144, 28006 Madrid, Spain) Ming Bai (Laboratorio de Física de Sistemas Pequeños y Nanotecnología, Consejo Superior de Investigaciones Científicas, Serrano 144, 28006 Madrid, Spain) This paper presents theory and finite-difference time-domain (FDTD) calculations for a single and arrays of sub-wavelength cylindrical holes in metallic films presenting large transmission. These calculations are in excellent agreement with experimental measurements. This effect has to be understood in terms of the properties exhibited by the dielectric constant of metals, which cannot be treated as ideal metals for the purpose of transmission and diffraction of light. We discuss the cases of the well-differentiated metals silver and tungsten. It is found that the effect of surface plasmons or other surface wave excitations due to a periodic set of holes or other roughness at the surface is marginal. The effect can enhance but also can depress the transmission of the arrays, as shown by theory and experiments. The peak structure observed in experiments is a consequence of the interference of the wavefronts transmitted by each hole and is determined by the surface array period, independently of the material. Without large transmission through a single hole there is no large transmission through the array. We find that in the case of Ag, which at the discussed frequencies is a metal, there are cylindrical plasmons at the wall of the hole, as reported by Economou et al. 30 years ago, that enhance the transmission. But it turns out, as will be explained, that for the case of W, which behaves as a dielectric, there is also a large transmission when compared with that of an ideal metal waveguide.
To deal with this problem one has to use the measured dielectric function of the metals. We discuss thoroughly all these cases and compare with the data.
PACS codes: 42.25.Bs, 42.79.Gn
I. INTRODUCTION
Experiments were reported [1] showing that the transmission of light through subwavelength holes drilled periodically in a metallic film of Ag was large, 1000s of times larger, as compared with the transmission of one single hole of the same size in the same material. Recent experiments [2], by part of the same team of ref. 1 (Lezec and Thio), appear to contradict the earlier experiments [1]. The explanation of the experiments [1] was based on the existence of surface plasmon polaritons (SPP) that are excited in the case of a set of periodic holes. After the initial work, a whole series of papers has appeared insisting on the same point (for a review see ref. 2). The most recent one, to our knowledge, appeared on the same matter [3]. The recent paper by Lezec and Thio [2], reviewing the field and containing many new data, strongly criticised the whole series of papers on the matter of the extraordinary enhancement of the transmission in a periodic array of holes in metallic films. This paper [2] claims to dismantle the SPP interpretation of the experiments and rejects the entire picture for periodic arrays of holes. It also showed experimentally that the transmission enhancement by the periodic array with respect to that of a single hole is at most a factor of 7, not a factor of 1000s, and that there can be a depression of the relative transmission as well. In fact, the title of this paper is: "Diffracted evanescent wave model for enhanced and suppressed optical transmission through subwavelength hole arrays". Maybe there is a flaw in the physical argument of ref. 1 and subsequent papers. Their claims are based on comparing the transmission by each hole of the array with the transmission of a single hole reported in an earlier paper by Bethe [4].
This paper [4] was a theoretical study showing that the transmission of a subwavelength hole drilled in a perfect metal screen, an ideal conductor (dielectric constant being negative infinity, ε → −∞), behaves as (r/λ)^4, where λ is the wavelength of the radiation and r is the hole radius. However, ref. 4 makes an approximation that is only valid for holes much smaller than λ, so that the field is practically constant over the hole. This does not hold for the experiments reported [1-3] because, for the frequencies considered, the hole radius should be smaller than 25 nm, but the holes used in the experiments are much larger. Another point is that an ideal metal bears little resemblance, regarding optical propagation, to plasmonic metals such as Ag or to a dielectric such as W at the experimental frequencies. This has been discussed in part in a recent work [5]. Therefore the Bethe paper [4] has no bearing on the problem at hand. In fact the main result in ref. 2, in our opinion, is that the single-hole transmission is very large when compared with that reported in ref. 4 or with that resulting from a theory for wave propagation in long ideal-metal waveguides. Therefore it seems reasonable to try to discuss and understand these experiments using the experimental frequency-dependent dielectric constants of the metals Ag and W. For these two materials we make extensive comparisons with the data. The aim of this paper is to study the transmission of a hole of sub-wavelength size in a flat metallic film with thickness of the order of the hole diameter, as those used experimentally [1-3]. When simplistic estimations are considered, the transmission can be of the order of several thousand times larger than that for the same hole in an ideal metal.
For Ag, the understanding of this phenomenon rests on two contributions: i) surface plasmons excited at the cylinder walls defining the hole, of the same nature as those described by Pfeiffer, Economou and Ngai [6] and by Martinos and Economou [7] for metallic cylinders, and recently discussed in the context of the problem at hand [5]; ii) the penetration of the field into the metal. For the dielectric metal W there may be a question whether cylindrical plasmons exist, but the penetration of the field into the metal remains. The signatures of waves at the cylinder surface are well manifested in the frequency dependence of the intensity curves for single holes. As will be seen, these effects are governed in a subtle way by the values of the dielectric function at each frequency.
II. CALCULATIONS
The structures we calculate in this work are a single circular hole or hole arrays in a flat metallic film with a given diameter d and thickness t. Figs. 1a and 1b present a view of the single-hole and hole-array structures with period P. The plane wave impinges perpendicular to the plane of the holes. A plane pulse wave with a broad band is set as the incident wave. We record both the incident and the transmitted wave through the structure. The frequency response of the structure is calculated by dividing the spectrum of the transmitted wave by the spectrum of the incident wave. The calculations are performed using the 3D-FDTD method. The wave spectra before and after the structures are obtained by Fourier transforming the recorded wave signals in the time domain. In the FDTD method, different materials such as the metals Ag and W are modelled as frequency-dispersive media. The corresponding frequency-dependent permittivities are retrieved from the experimental data of Johnson and Christy [8] and from Physics Data [9].
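The spectrum-ratio step of the post-processing described here (dividing the spectrum of the transmitted pulse by that of the incident pulse) can be sketched on its own; the following is a minimal stand-alone illustration with made-up signal shapes and names, not the authors' FDTD code:

```python
import cmath
import math

def spectrum(signal, dt, freq):
    """Naive single-frequency DFT of a sampled time signal."""
    return sum(s * cmath.exp(-2j * math.pi * freq * k * dt)
               for k, s in enumerate(signal)) * dt

def transmission(incident, transmitted, dt, freq):
    """Frequency response |T(f)|^2: transmitted over incident spectral power."""
    return abs(spectrum(transmitted, dt, freq) / spectrum(incident, dt, freq)) ** 2

# Toy check: a broadband Gaussian pulse and an attenuated copy of it.
dt = 1e-17                                  # 0.01 fs sampling step (illustrative)
pulse = [math.exp(-(((k * dt) - 1e-15) / 3e-16) ** 2) for k in range(2000)]
out = [0.3 * p for p in pulse]              # 30% amplitude -> 9% power transmission
T = transmission(pulse, out, dt, 5e14)      # evaluate inside the pulse band
print(T)                                    # close to 0.09
```

In a real FDTD run the two recorded time signals replace the toy lists, and an FFT over the whole band replaces the single-frequency DFT.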
An ideal metal is treated by excluding the electric field components from any part of the metal; i.e., the field is forced to be zero at the metal surface, which is equivalent to setting the permittivity infinitely negative. Perfectly matched layer absorbing boundary conditions [10] are applied in the FDTD calculations of single-hole transmission, and periodic boundary conditions are applied for hole-array transmission. The grids and time steps are taken fine enough to obtain convergent solutions.

II.A. The Ag Case for Single Holes

i) Calculations

Ag is a paradigmatic case for studying surface plasmons, and there is a large literature on surface plasmon polaritons (SPPs) on gratings [11] in the visible region. The reason is that at these wavelengths the imaginary part of the permittivity (ε2) is small, so the SPP is well defined [11,12]. Fig. 2a shows the real (ε1) and imaginary (ε2) parts of the permittivity. It displays a clean Drude plasmonic behaviour with a bulk plasmon wavelength λp = 325 nm (ωp ~ 3.8 eV). It has also been shown that on a periodic surface with a certain small single Fourier component, the field enhancement due to the SPP can be very large (≈100 times the incident field) [12,13]. However, when the Fourier component increases, or when many Fourier components exist (for example for a step-like surface profile), the enhancement is reduced drastically because of the broadening of the SPP linewidth. We mention these points to stress that the existence of SPPs on a surface does not by itself imply large enhancements; many other requisites are needed. In particular, the structure studied in Fig. 1b has many Fourier components, so the enhancement due to SPPs cannot be large. To model Ag in the calculations, a Drude dispersion relation (Equ. 1) is used to reproduce the frequency-dependent permittivity of Ag:
ε(ω) = εf (1 − ωp²/(ω² − iδω))   (1)

which is characterized by the fitted permittivity εf in the visible range, the bulk plasma frequency ωp and the damping constant δ. These parameters are chosen to fit the experimental data of Johnson and Christy [8] for Ag; the chosen values are εf = 6.8, ωp ~ 3.8 eV and δ ~ −0.02 eV. In the frequency region of interest here, the whole visible range, these values give a satisfactorily good fit to the experimental data for both the real and the imaginary parts of the permittivity, as shown in Fig. 2a. The transmission coefficient is presented in Fig. 2b for a hole of diameter d = 270 nm in a metal film of thickness t = 340 nm with the permittivity of Equ. 1. The incidence is normal to the surface of the metal film. We also present results for a hole in an ideal metal film with the same diameter and with thicknesses t = 340 nm and 750 nm. The results are quite illuminating and tell us what is going on in the single-hole transmission. It is clear that the theory of transport in ideal-metal waveguides, which gives a first cutoff wavelength λc = 2πd/3.68 = 1.705d [14], only applies for the large thickness t = 750 nm (see Fig. 2b). For comparison, the waveguide theory for an ideal metal of much larger thickness is also plotted. For values of t as small as 340 nm, as used in the experiments, a hole in the ideal metal already gives a considerable transmission in the long-wavelength tail: even at λ = 700 nm the coefficient is 0.10. These results show that estimating transmission enhancements by comparison with long ideal-metal waveguides is unphysical and unrealistic. The striking result is that when we calculate the transmission of the Ag film using the experimental permittivity of Ag, we find a much higher transmissivity at longer wavelengths: the cutoff moves from 460 nm (the ideal-waveguide cutoff for d = 270 nm) to ~630 nm.
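Reading Equ. 1 as ε(ω) = εf (1 − ωp²/(ω² − iδω)), which with the quoted parameters reproduces the zero crossing of ε1 near λp ≈ 325 nm, the fitted permittivity and the ideal-waveguide cutoff can be checked with a short script. This is our own illustration; the parameter values are the ones quoted above:

```python
import math

EPS_F, OMEGA_P, DELTA = 6.8, 3.8, -0.02   # fitted Drude parameters [eV]
HC = 1239.842                             # h*c in eV*nm

def eps_drude(lam_nm):
    """Drude permittivity of Ag, Equ. 1, at vacuum wavelength lam_nm."""
    w = HC / lam_nm                       # photon energy [eV]
    return EPS_F * (1.0 - OMEGA_P**2 / (w * w - 1j * DELTA * w))

print(eps_drude(500.0))   # real part negative in the visible (plasmonic)
print(eps_drude(326.0))   # real part ~0 near the bulk plasmon wavelength

# First cutoff of a circular ideal-metal waveguide, lambda_c = 2*pi*d/3.68 [14]:
d = 270.0
lam_c = 2.0 * math.pi * d / 3.68
print(f"lambda_c = {lam_c:.0f} nm")       # ~460 nm for d = 270 nm
```

The negative ε1 throughout the visible, together with the small positive ε2, is what supports the surface-plasmon behaviour of Ag discussed in the text.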
Notice also the results presented in Fig. 5a at 700 nm.

ii) Cylindrical surface waves

The surface plasmons excited in the cylindrical holes propagate in the same way as those that have been studied on the surface of a metallic cylinder [6,7]. The surface plasmons are located along the circumference of the cylinder with wavelength λθn = 2πr/n, i.e. the circumference length divided by the branch index n, and they also propagate along the cylinder axis z with wavevector kz. The possibility of exciting long-wavelength modes is governed by the cylindricality α = 2πr/λp, where λp is the bulk plasmon wavelength (λp = 325 nm for Ag [8]). This theory is derived for the Drude dispersion relation with δ = 0 in Equ. 1. In our case we do not have a cylinder of infinite length, as discussed in ref. 5. The cutoff of the surface plasmons is determined by the dispersion relation of the n = 1 mode. The dispersion relation was studied within a planar approximation in ref. 6: the cylindrical surface is treated as a semi-infinite plane, with the curvature of the cylinder accounted for through a periodic boundary condition. The resulting surface dispersion relation can be written as [6]:

ωsp²/ωp² = (1/2) [1 + 2Q² (1 − (1 + 1/(4Q⁴))^(1/2))],  Q = K/kp,  K = [k² + (nkp/α)²]^(1/2)   (2)

where kp = 2π/λp, and the cylindricality α = dkp/2 plays an important role in the dispersion relation. Fig. 3a shows this dispersion for cylindrical holes in an Ag film with λp = 325 nm and diameter d = 270 nm, the same as in Fig. 2b. However, we note that the above dispersion relation is based on the semi-infinite-plane approximation of the cylindrical surface for a Drude metal with δ = 0, so the peak structures are not expected to match the experiments completely. In fact the n indices in our cylindrical hole structure may depart considerably from those given by Equ. 2 and plotted in Fig. 3a to illustrate the problem; moreover, our holes have a finite, small thickness, while the theory of ref. 6 is for infinite cylinders.
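Setting the photon line ω = ck equal to the dispersion of Equ. 2 (with δ = 0, k the axial wavevector in units of kp, and a = n/α the azimuthal one) gives, after a little algebra of our own, (k/kp)² = a(√(a²+1) − a) at the crossing. The sketch below evaluates the corresponding wavelengths for d = 270 nm:

```python
import math

LAM_P = 325.0                       # Ag bulk plasmon wavelength [nm]
d = 270.0                           # hole diameter [nm]
alpha = math.pi * d / LAM_P         # cylindricality, alpha = d*k_p/2

def crossing_wavelength(n):
    """Wavelength where the n-th cylindrical SPP branch of Equ. 2 crosses
    the photon line omega = c*k (planar approximation, delta = 0)."""
    a = n / alpha                   # azimuthal wavevector n/r in units of k_p
    x2 = a * (math.sqrt(a * a + 1.0) - a)   # (k/k_p)^2 at the crossing
    return LAM_P / math.sqrt(x2)

for n in range(1, 6):
    print(f"n = {n}: lambda = {crossing_wavelength(n):.0f} nm")
```

The n = 1 crossing falls near 630 nm, and the higher branches accumulate towards λp√2 ≈ 460 nm, consistent with the 470-630 nm window quoted in the text.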
The more important point is that this result explains the extra transmission above the waveguide cutoff. The cylindrical wave is excited at the entrance of the hole and carries the energy through to the other side, which is not allowed in an ideal waveguide. This assumes that the thickness of the film is not too large, because the wave has a finite propagation decay length: when the thickness exceeds the decay length, the transmission is controlled by the waveguide modes, without cylindrical surface waves. We have performed calculations for Ag with d = 270 nm and thicknesses t = 340 nm, 525 nm and 735 nm to probe the propagation decay length, as shown in Fig. 3b. Clearly, for t = 735 nm the cutoff retracts to 600 nm, still larger than the ideal-metal cutoff (~500 nm) for t = 750 nm in Fig. 2b. This establishes that the transmission of a hole in Ag at longer wavelengths (>500 nm) is controlled by cylindrical surface waves, with a decay length that we estimate to be more than 1 μm and will discuss in another work. The decay length should also depend on the diameter, which limits the extension of the cylindrical wave into the vacuum of the hole, as well as on the thickness.

iii) Comparison with Experiments

Ref. 2 presented an ample number of experiments on single holes for different values of d and t. These provide a good set of experimental data to contrast with our calculations. Fig. 4a presents the experimental data (ref. 2, Fig. 2c) together with calculations for the same parameters as in the experiments. The agreement is strikingly good in all cases. Importantly, we also plot the enhancement for the case d = 270 nm, defined as the transmission of real Ag divided by the transmission predicted by the ideal-waveguide theory [14]. This enhancement can reach up to 1000.
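The decay-length estimate can be illustrated as follows: if the long-wavelength transmission falls as T(t) ∝ exp(−t/L), two thicknesses suffice to extract L. The transmission values below are hypothetical placeholders of ours, chosen only to show the ~1 μm order of magnitude:

```python
import math

def decay_length(t1, T1, t2, T2):
    """Decay length L from T(t) ~ exp(-t/L) measured at two thicknesses."""
    return (t2 - t1) / math.log(T1 / T2)

# Hypothetical long-wavelength transmissions at t = 340 nm and t = 735 nm:
L = decay_length(340.0, 0.30, 735.0, 0.20)
print(f"estimated decay length: {L:.0f} nm")
```

With a mild drop in transmission between the two thicknesses, the extracted L is of order 1 μm, which is why the cutoff only retracts slowly with t in Fig. 3b.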
Analogously, Fig. 4b presents similar results for the case d = 200 nm, where the enhancement is of the order of 10000. This proves that the enhancements we calculate for single holes are already of the order of those measured for hole arrays in ref. 1 and claimed there to be due to the SPPs of the periodic arrays. Calculations and experiments [2] prove that hole arrays are not necessary to obtain such enhancements. This tends to rule out the SPPs of the arrays as the physical reason for the enhancement from arrays; rather, the enhancement from a single hole is due to the cylindrical surface waves on the walls of the cavity drilled in the metal. It is clear that the influence of SPPs, if they exist, is marginal for the large values observed in the transmission from hole arrays. It is also worth noticing that the transmission of a single hole shows weak oscillations, which we tentatively assign to the different cylindrical surface-wave indices n indicated in Fig. 3a.

II.B. The Ag Case for Arrays of Holes

We now discuss the transmission of light through an array of holes, following the same procedure as above. Since a single hole gives such an enhancement beyond the cutoff wavelength, it is no surprise that a periodic array also gives a very large enhancement. The result for an array is produced by the interference of the waves emerging from the holes; the transmission therefore shows, at some frequencies, enhancements over the single-hole transmission and, at other frequencies, depressions. The same ideas have been described in ref. 2, but our FDTD calculations prove them all at once. To this end we have performed such calculations for periodic arrays of holes and compared them with existing experimental results [2,3]. Fig. 5a shows the transmission results for the periodic array with P = 600 nm of Fig. 1 of ref. 3.
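The order of magnitude of the enhancement factor can be reproduced with a crude below-cutoff waveguide estimate. This is our own sketch: it keeps only the evanescent decay of the TE11 mode and ignores aperture coupling, and the Ag transmissions T_Ag used below are rough placeholders for the computed values:

```python
import math

def waveguide_T(lam, d, t):
    """Rough below-cutoff transmission of the TE11 mode of an ideal circular
    waveguide of length t: exp(-2*kappa*t), kappa = sqrt(k_c^2 - k^2) with
    k_c = 3.68/d. Aperture coupling is ignored (order-of-magnitude only)."""
    k_c = 3.68 / d
    k = 2.0 * math.pi / lam
    kappa = math.sqrt(max(k_c * k_c - k * k, 0.0))
    return math.exp(-2.0 * kappa * t)

# Hypothetical computed Ag transmissions T_Ag at lambda = 700 nm, t = 340 nm:
for d, T_Ag in ((270.0, 0.8), (200.0, 0.2)):
    enh = T_Ag / waveguide_T(700.0, d, 340.0)
    print(f"d = {d:.0f} nm: enhancement ~ {enh:.0f}")
```

This gives enhancements of order 10³ for d = 270 nm and of order 10⁴ for d = 200 nm, in line with the values quoted above.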
The agreement is again strikingly remarkable, without fitting any parameter, just taking the permittivity of Ag [8]. The three experimental peaks at λ ≈ 700 nm, 550 nm and 430 nm are excellently described, not only in peak position but also in measured amplitude. For comparison we also plot the existing theoretical calculations of ref. 3. The experiments [2] showed that arrays of N×N holes yield practically the same results for N > 9, so infinite arrays give, as shown by the calculations, practically the same answer at normal incidence. We would like to explain briefly how the peak positions arise in the periodic arrays. Once the cylindrical surface plasmons are excited, there is a comparatively large transmission per hole. These plasmons radiate waves at the surface, which then interfere. The peak positions are given by the value of the period P. The hole diameter intervenes in the peak intensities: when, for a given frequency, a large single-hole intensity falls at the position of an ideal interference peak, that interference peak shows a pronounced maximum; when these conditions do not match, the peak of the array is much smaller. As an illustration, Fig. 5c presents calculations for d = 200 nm and P = 600 nm. The enhanced peak at around λ ≈ 690 nm is strongly reduced (compared with Figs. 5a and 5b for the same P), because the single hole has little intensity at this wavelength, as shown in Fig. 4b (compare with Fig. 4a for a different d). This is also in excellent agreement with the data of ref. 2. To provide further information, as a prediction, Fig. 6a presents a set of calculations for P = 750 nm, 870 nm and 1050 nm with the same d = 270 nm and t = 340 nm. In agreement with the discussion above, the intensity peaks move according to the resulting interference: their peak wavelengths shift with the period P.
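The stated role of the period can be turned into a simple prediction rule under the simplifying assumption, ours and not the paper's, that the peak wavelengths scale strictly linearly with P:

```python
peaks_600 = [700.0, 550.0, 430.0]   # measured peak wavelengths for P = 600 nm

def scaled_peaks(P):
    """Peak wavelengths if they scale linearly with the period P
    (a simplifying assumption of ours, not a result of the paper)."""
    return [p * P / 600.0 for p in peaks_600]

for P in (750.0, 870.0, 1050.0):
    print(P, [round(x) for x in scaled_peaks(P)])
```

This is only a literal reading of the interference argument above; the peak values for P = 600 nm are the measured ones.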
Another series of transmissions is calculated (Fig. 6b) fixing P = 1200 nm and t = 340 nm but varying d = 270 nm, 300 nm and 360 nm. This time the peak structure stays at the same positions because P is fixed, while the peaks change their intensity because d varies. Therefore P and d determine the peak positions and intensities, respectively. Moreover the thickness t also counts, because the material absorbs and the plasmons have a finite decay length. Actually the propagation length, the plasmon speed, retardation effects, etc. [15] all play a role and show up in the experiments.

II.C. The W Case for Single Holes

In the frequency region discussed, Ag is a special case because of its ability to support surface waves that provide extra transmission with respect to an ideal metal. We would like to see how holes in a different real metal transmit for the same hole structure. For W, in the whole visible frequency range, the real part of the permittivity is positive and approximately constant: it behaves as a dielectric, and SPP waves cannot be supported by this metal. Let us not be so definitive, because this may need further discussion; but let us accept, at least, that SPP waves such as those existing in Ag owing to the permittivity of Equ. 1 cannot be sustained at the surface. The question is then: should the transmission of holes in W be very different from the Ag case? To study the case of W, its frequency-dependent permittivity is set in the calculations as

ε(ω) = εr + iσ/(ωε0)   (3)

a model used for a conductor, with εr > 0 the real part of the permittivity and σ the conductivity. For optical transmission, a material governed by this model behaves as a dielectric with strong attenuation: the wave penetrates the material while losing energy because of the attenuation.
It should be noticed that in an ideal metal there is neither attenuation nor penetration, and penetration actually plays an important role in the transmission of the hole. The experimental permittivity data [9] are shown in Fig. 7. The transmission of a single hole in W (d = 300 nm, t = 400 nm, the same parameters as for the arrays) is presented in Fig. 8, together with the transmission of the same hole in an ideal metal. Their transmission profiles are similar to each other but very different from the Ag case of Figs. 2b and 4: the strong extra transmission beyond the cutoff of the Ag case does not exist, consistent with the absence in W of cylindrical surface plasmons at long wavelengths to transmit the wave. Another fact to be noticed is that the transmission is smaller than that of an ideal metal for λ < 700 nm. This is understandable: as mentioned above, the wave in the hole constantly penetrates into the metal, and the energy is constantly dissipated by the attenuation of W. We verified with further calculations that the transmission becomes smaller and smaller, until it dies out completely, as the thickness of the W film grows. For λ > 700 nm, however, the transmissivity of W overtakes that of the ideal metal. This is due to penetration, which makes the effective hole size bigger. Losses and penetration of the wave thus interplay during the transmission. On the other hand, there is a similarity between the W case and the ideal-metal case: if we calculate for a metal with the same model of Equ. 3 but with σ increased 200 times, the resulting transmission is almost identical to the ideal-metal case. To understand this result we turn to the complex refractive index n + ik: the large imaginary part of the permittivity makes εi ≫ εr, so that n ≈ k ≫ 1. The decay length of the wave penetrating into the metal is then greatly reduced, and the reflectivity is ~1; that is why this model gives the same transmission as an ideal metal. In principle, light transmission through a hole in a W film can be treated in the same way as a waveguide, but with large attenuation.
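The n ≈ k ≫ 1 argument is easy to verify numerically. Writing εi = σ/(ωε0) for the imaginary part of Equ. 3, the complex index follows from a square root; the numerical values below are hypothetical, chosen only to contrast moderate and very large conductivity:

```python
import cmath

def index_from_eps(eps_r, eps_i):
    """Complex refractive index n + i*k from eps = eps_r + i*eps_i,
    with eps_i = sigma/(omega*eps0) as in Equ. 3."""
    return cmath.sqrt(eps_r + 1j * eps_i)

# Moderate attenuation (W-like, hypothetical numbers): n and k comparable.
m1 = index_from_eps(5.0, 20.0)
# Conductivity ~200 times larger: eps_i >> eps_r, hence n ~ k >> 1, a tiny
# skin depth and near-unity reflectivity, i.e. ideal-metal-like behaviour.
m2 = index_from_eps(5.0, 4000.0)
for m in (m1, m2):
    R = abs((m - 1) / (m + 1)) ** 2   # normal-incidence reflectivity
    print(f"n = {m.real:.1f}, k = {m.imag:.1f}, R = {R:.3f}")
```

In the large-εi limit n ≈ k ≈ √(εi/2), so the penetration depth shrinks and R → 1, reproducing the ideal-metal-like behaviour described above.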
All this is a qualitative discussion. For fixed d and t of the order of the experimental ones, the ideal-waveguide picture is not well defined; both the ideal metal and W then provide more transmission than expected from long ideal waveguides. This needs further discussion, and the case |ε| → ∞ may be an interesting one for the t values discussed in this paper, but we do not want to draw conclusions on it here.

II.D. The W Case for Arrays of Holes

We also performed calculations for the transmission of hole arrays in a W film. The result is compared with the experiment of ref. 2 for the same structure, d = 300 nm, t = 400 nm, P = 600 nm. As shown in Fig. 9, the calculation fits the experiment well over the whole profile, in both peak positions and intensities. The peak position at λ ~ 700 nm is the same as those in Figs. 5a and 5b, with the same period P but a totally different material, hole size and thickness. This strongly confirms the regularity discussed above for the Ag case: the transmission peaks of hole arrays are determined only by the array period, the same conclusion as in ref. 2. However, the intensity of the peaks in the W case is six times smaller than for Ag, simply because the single-hole transmissivity of W is smaller, by a similar factor, than that of the Ag case in Fig. 2b.

Discussions and Conclusions

From the calculations and observations made above we reach the following conclusions:

1. One obvious point, yet often overlooked, is that one has to use the experimental dielectric properties of the metal in any theoretical consideration, as well as the actual parameters (d and t) of the hole used.

4. Dielectric-like metals, such as W in the visible, with the permittivity given in Fig. 7, behave in a particular way: absorption and penetration of the wave in the metal reduce the transmission at short wavelengths but increase it at long wavelengths with respect to an ideal metal (see Fig. 8).

5.
If the SPPs in the Ag cases we discussed have an influence on the transmission of the hole arrays, it appears to be marginal, i.e. a small factor; the experiments prove this, as do our calculations.

7. It appears that, at the frequencies discussed, Ag supports SPPs and W does not. However, the existence of other possible surface waves that could enhance the field at the surface is not yet clear. Returning to the SPP enhancements and the marginal role they play in the experiments at hand, we believe this is because the hole structure has many large Fourier components, which reduce the enhancement (see refs. 11-13)

(b) Transmission of holes with fixed P = 1200 nm but for different hole diameters d = 270 nm, 300 nm and 360 nm. The comparison shows that the peak positions remain fixed because P is the same, while the peak intensity is influenced by the hole diameter.

Figure 7. The experimental W permittivity dispersion data (points) [9] and the fit (lines) by the dispersion relation of Equ. 3. εr is the real part of the permittivity, εi is the imaginary part.

We do not have a cylinder of infinite length, as discussed in refs. 5 and 6, but a cylindrical metallic cavity of a certain thickness t. However, by looking at the boundary conditions, the same kind of modes should exist, and they show up in the transmission of our full solution of the Maxwell equations. In Fig. 4a modulations of the transmission can be identified, which may correspond to the surface plasmon modes excited at the cavity surface. The peak with the longest wavelength corresponds to the smallest n: n = 1 identifies the longest-wavelength surface plasmon mode, i.e. there is a cutoff for the surface plasmon modes. As shown in Fig. 3a, with fixed cylindricality parameter α, the possible surface plasmon modes that can be excited at the crossing points with the photon line are limited in wavelength. From Fig. 3a, the wavelengths of the surface plasmons that can be excited in the cylindrical holes lie in the range 470 nm - 630 nm.
The 630 nm value corresponds approximately to the cutoff of the transmission calculated with the permittivity, Fig. 2b. There are also weak oscillations in the structure of the transmission, in both experiments and calculations, that we may tentatively assign to the different plasmon indices n, in Fig. 3a and in Fig. 4 as well.

The existing theoretical calculations performed in ref. 3 claim a rather good agreement. In fact there is no such agreement, because those calculations only show two peaks, at 630 nm and 460 nm, which are shifted from the three experimental peaks; to be more explicit, the peaks of the theory of ref. 3 correspond not to the experimental peaks but to the minima. Fig. 5b shows the comparison with the experiments of ref. 2 for P = 600 nm, d = 250 nm and t = 340 nm, and again the agreement is remarkable in the peak positions, the intensities and the enhancement with respect to the single-hole intensities. Our calculations are for an infinite array of holes, while in the experiments the hole arrays are finite; however, as noted above, finite and infinite arrays give practically the same results.

2. Because of the previous point, analyses of enhancements in terms of idealized calculations with approximations, such as those based on the Bethe theory or on long ideal-metal waveguides, cannot be used, because they produce misleading conclusions, even if the experiments are interesting and right.

3. Plasmonic metals, in particular the paradigmatic Ag, have a long cutoff wavelength in the transmission because of the cylindrical surface plasmons discussed above and in earlier references 5-7. So far, the t values used in the experiments are shorter than the decay length of the plasmons. For larger values of t the transmission is drastically reduced, and only the lossy waveguide modes remain. More experiments with larger t should be performed to clear up this point; moreover, we have made predictions of what may happen.

6. The transmission peak positions of the arrays are given by the period P and are material independent.
Their intensity amplitudes are large only if the single-hole transmission is large; a paradigmatic example of this is given in Figs. 5a, 5c and 6.

Many large Fourier components reduce the enhancement of the field amplitude at the surface. However, there are surface profiles of the hole structures that may produce the desired effect. If this is possible there could be a multiplicative effect: one factor due to the SPPs and the other to the cylindrical waves. In this sense, working with Ag, W and Cr, carefully choosing the structures and the frequencies, may give surprises. More experimental data are also needed, changing the structures and the values of d and t. For example, what happens for square holes?

Figure Captions

Figure 1. Schematics of the cylindrical hole structures in a metal film used in the optical transmission calculations: (a) a single hole with diameter d and thickness t; (b) an array of holes with period P.

Figure 2a. The experimental Ag permittivity dispersion data (dots) [8] and the fit (lines) by the Drude dispersion model of Equ. 1. ε1 is the real part of the permittivity, ε2 is the imaginary part.

Figure 2b. Transmission coefficient of single holes in different kinds of film. The diameter of the hole is d = 270 nm. For the ideal metal, the two cases t = 340 nm and t = 750 nm are compared; for the Ag case, t = 340 nm. The thin line is calculated from the theory of long ideal-metal waveguides.

Figure 3a. Dispersion relations λsp calculated from Equ. 2 for the SPP waves on the surface of an Ag cylinder with a given cylindricality α, determined by d = 270 nm and the Ag bulk plasmon wavelength λp = 325 nm. The index n labels the possible SPP modes of the cylindrical surface. The crossing points between the straight photon line and the SPP dispersion lines indicate the SPP modes that can be excited. For large index n, the dispersion lines tend to a common line.

Figure 3b. Transmission coefficient of single holes in Ag film with fixed diameter (d = 270 nm) and different thicknesses t = 340 nm, 525 nm and 735 nm.
Figure 4. Transmission coefficient of single holes in Ag film (t = 340 nm). (a) Transmission of single holes with d = 250 nm (solid line) and d = 300 nm (dash-dotted line) from experiment [2], with d = 250 nm (-▓-) and d = 270 nm (-•-) from the FDTD calculation, and from ideal-metal waveguide theory (thick line). The enhancement factor (dashed line) is obtained by dividing the FDTD transmission by the waveguide-theory transmission. The arrows indicate the positions of the different cylindrical plasmon excitation modes; notice that similar oscillations appear in the experimental data with a small shift. (b) Transmission of holes with d = 200 nm from experiment (solid line) [2], from the FDTD calculation (-•-), and from ideal-metal waveguide theory (thick line). The enhancement factor (dashed line) is obtained by dividing the FDTD transmission by the waveguide-theory transmission.

Figure 5. Transmission coefficient of periodic arrays (P = 600 nm) of holes in Ag film. (a) Transmission of holes (d = 270 nm, t = 225 nm) from the experiments (solid line) and from the theoretical calculations of Fig. 1 of ref. 3 (dashed line). The line (-•-) shows our FDTD results, in excellent agreement with the experiments. (b) Transmission of holes (d = 250 nm, t = 340 nm) from the experiments (solid line) and the corresponding enhancement factor (dashed line) relative to the single-hole transmission (Figs. 2a and 2b of ref. 2). The line (-•-) shows our FDTD calculation, together with its enhancement factor relative to the single-hole transmission (dotted line). (c) Transmission of holes (d = 250 nm, t = 340 nm) from the experiments (dashed line) (Fig. 2c of ref. 2) and from our FDTD calculations (solid line).

Figure 6. Transmission coefficient of periodic arrays of holes in Ag film (t = 340 nm). (a) Transmission of holes with fixed d = 270 nm but different periods P = 750 nm, 870 nm and 1050 nm. The comparison shows that the peak positions correspond strictly to the P values.

Figure 8. Transmission coefficient of single holes in different kinds of film.
The transmission of a single hole in W with d = 300 nm and t = 400 nm is compared with that of the same hole in an ideal metal.

Figure 9. Transmission coefficient of periodic arrays of holes in W film with d = 200 nm, t = 340 nm and P = 600 nm. Experimental data (solid line) and calculation (-•-) are compared, showing good agreement.

Acknowledgements: We thank the European EU-FP6 Project Molecular Imaging LSHG-CT-2003-503259 for support.

References

[1] T. W. Ebbesen, H. J. Lezec, H. F. Ghaemi, T. Thio and P. A. Wolff, "Extraordinary optical transmission through sub-wavelength hole arrays", Nature (London) 391, 667-669 (1998).
[2] H. J. Lezec and T. Thio, "Diffracted evanescent wave model for enhanced and suppressed optical transmission through sub-wavelength hole arrays", Opt. Exp. 12, 3629-3651 (2004).
[3] J. Bravo-Abad, A. Degiron, F. Przybilla, C. Genet, F. J. García-Vidal, L. Martín-Moreno and T. W. Ebbesen, "How light emerges from an illuminated array of sub-wavelength holes", Nature Physics 2, 120-123 (2006).
[4] H. A. Bethe, "Theory of diffraction by small holes", Phys. Rev. 66, 163-182 (1944).
[5] Hocheol Shin, Peter B. Catrysse and Shanhui Fan, "Effect of the plasmonic dispersion relation on the transmission properties of subwavelength cylindrical holes", Phys. Rev. B 72, 085436 (2005).
[6] C. A. Pfeiffer, E. N. Economou and K. L. Ngai, "Surface polaritons in a circularly cylindrical interface: surface plasmons", Phys. Rev. B 10, 3038-3051 (1974).
[7] S. S. Martinos and E. N. Economou, "Excitation of surface plasmons in cylinders by electrons", Phys. Rev. B 24, 6908-6914 (1981).
[8] P. B. Johnson and R. W. Christy, "Optical constants of the noble metals", Phys. Rev. B 6, 4370 (1972).
[9] J. H. Weaver, C. Krafka, D. W. Lynch and E. E. Koch, "Optical Properties of Metals", Physik Daten/Physics Data No. 18-1 (Fach-Informations-Zentrum Energie Physik Mathematik GmbH, Karlsruhe, 1981).
[10] A. Taflove, "Advances in Computational Electrodynamics: The Finite-Difference Time-Domain Method", Artech House (1998).
[11] H. Raether, "Surface Plasmons on Smooth and Rough Surfaces and on Gratings", Springer Tracts in Modern Physics, Vol. 111, Springer-Verlag, Berlin Heidelberg (1988).
[12] N. García, "Exact calculations of p-polarized electromagnetic fields incident on grating surfaces: surface polariton resonances", Opt. Comm. 45, 307 (1983).
[13] N. García, G. Diaz, J. H. Saenz and C. Ocal, "Intensities and field enhancement of light scattered from periodic gratings: study of Ag, Au and Cu surfaces", Surf. Sci. 143, 342 (1984).
[14] J. D. Jackson, "Classical Electrodynamics", 3rd edition, Wiley (1999).
[15] M. Bai, C. Guerrero, S. Ioanid, E. Paz, M. Sanz and N. García, "Measuring the speed of a surface plasmon", Phys. Rev. B 69, 115416-115421 (2004).
[]
[ "DiFT: Differentiable Differential Feature Transform for Multi-View Stereo", "DiFT: Differentiable Differential Feature Transform for Multi-View Stereo" ]
[ "Kaizhang Kang ", "Chong Zeng ", "Hongzhi Wu ", "Kun Zhou ", "Kaizhang Kang ", "Chong Zeng ", "Hongzhi Wu ", "Kun Zhou ", "\nState Key Lab of CAD&CG\nZhejiang University\nChina\n", "\nState Key Lab of CAD&CG, Zhejiang University and ZJU-FaceUnity Joint Lab of Intelligent Graphics\nChina\n", "\nState Key Lab of CAD&CG\nZhejiang University\nChina; Kun Zhou\n", "\nState Key Lab of CAD&CG, Zhejiang University and ZJU-FaceUnity Joint Lab of Intelligent Graphics\nChina\n" ]
[ "State Key Lab of CAD&CG\nZhejiang University\nChina", "State Key Lab of CAD&CG, Zhejiang University and ZJU-FaceUnity Joint Lab of Intelligent Graphics\nChina", "State Key Lab of CAD&CG\nZhejiang University\nChina; Kun Zhou", "State Key Lab of CAD&CG, Zhejiang University and ZJU-FaceUnity Joint Lab of Intelligent Graphics\nChina" ]
[]
Fig. 1. From input images densely captured with a rotational motion under pre-optimized illumination (left), we automatically learn to transform the differential structural cues into spatially discriminative and view-invariant per-pixel features at each view (center), which can be directly fed to any multi-view stereo technique for enhanced 3D reconstruction of an object with challenging appearance (second to the right). We also explore additional applications like computational stylization (right).We present a novel framework to automatically learn to transform the differential cues from a stack of images densely captured with a rotational motion into spatially discriminative and view-invariant per-pixel features at each view. These low-level features can be directly fed to any existing multi-view stereo technique for enhanced 3D reconstruction. The lighting condition during acquisition can also be jointly optimized in a differentiable fashion. We sample from a dozen of pre-scanned objects with a wide variety of geometry and reflectance to synthesize a large amount of high-quality training data. The effectiveness of our features is demonstrated on a number of challenging objects acquired with a lightstage, comparing favorably with state-of-the-art techniques. Finally, we explore additional applications of geometric detail visualization and computational stylization of complex appearance.
10.48550/arxiv.2203.08435
[ "https://arxiv.org/pdf/2203.08435v1.pdf" ]
247,476,016
2203.08435
b747c3522031705ab9b6544526704286c35cbb8a
DiFT: Differentiable Differential Feature Transform for Multi-View Stereo. Kaizhang Kang, Chong Zeng, Hongzhi Wu, Kun Zhou. State Key Lab of CAD&CG, Zhejiang University, China; State Key Lab of CAD&CG, Zhejiang University and ZJU-FaceUnity Joint Lab of Intelligent Graphics, China. DOI: 10.1145/nnnnnnn.nnnnnnn. CCS Concepts: • Computing methodologies → 3D imaging; Shape modeling. Additional Key Words and Phrases: low-level features, feature learning, computational illumination. ACM Reference Format:
Fig. 1. From input images densely captured with a rotational motion under pre-optimized illumination (left), we automatically learn to transform the differential structural cues into spatially discriminative and view-invariant per-pixel features at each view (center), which can be directly fed to any multi-view stereo technique for enhanced 3D reconstruction of an object with challenging appearance (second to the right). We also explore additional applications like computational stylization (right).
We present a novel framework to automatically learn to transform the differential cues from a stack of images densely captured with a rotational motion into spatially discriminative and view-invariant per-pixel features at each view. These low-level features can be directly fed to any existing multi-view stereo technique for enhanced 3D reconstruction. The lighting condition during acquisition can also be jointly optimized in a differentiable fashion. We sample from a dozen pre-scanned objects with a wide variety of geometry and reflectance to synthesize a large amount of high-quality training data.
The effectiveness of our features is demonstrated on a number of challenging objects acquired with a lightstage, comparing favorably with state-of-the-art techniques. Finally, we explore additional applications of geometric detail visualization and computational stylization of complex appearance. INTRODUCTION Image-based 3D reconstruction of physical objects with complex appearance is a key problem in computer vision and graphics. It has many important applications, ranging from quality inspection, product design, e-commerce to cultural heritage,. It is known that images captured with densely sampled views exhibit unique differential structural cues that can be used to deduce more accurate and complete 3D information [Johannsen et al. 2017], in comparison with standard multi-view stereo methods [Furukawa and Hernández 2015] that do not explicitly consider such properties. For example, the trace of an unoccluded 3D point is a straight line in a stack of images captured with an equidistant 1D translational motion between the camera and the object; and the slope of this line directly corresponds to the depth. In the case of a rotational motion, the trace of a point is a helix, whose radius and pitch can be transformed to obtain a 3D position. However, it remains an open problem to identify as many traces and compute their differentiable properties as accurately as possible for 3D reconstruction of a general object. Specifically, it is highly challenging to handle the appearance that varies considerably with the lighting and/or view conditions, textureless regions and complex occlusions. Dense depth estimation from a lightfield introduces heuristic [Wanner and Goldluecke 2013] or learning-based [Heber and Pock 2016] solutions, often assuming a Lambertian reflectance. Differential photometric stereo manually derives the relationship between depths and derivatives of image pairs undergoing differential motions [Chandraker et al. 2013;Wang et al. 2016a]. 
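The slope-to-depth relation mentioned above for a translational motion can be made concrete with a small sketch. This is our own illustrative code, not the paper's; the pinhole focal length and per-view baseline are assumed parameters:

```python
# Hedged sketch (not the paper's code): under a pinhole model with focal
# length f (in pixels) and a per-view translational baseline b, a point at
# depth Z shifts by d = f * b / Z pixels between neighboring views, so the
# slope of its straight-line trace in the image stack encodes its depth.

def depth_from_epi_slope(disparity_per_view, focal_px, baseline_per_view):
    # Invert d = f * b / Z to recover the depth Z.
    return focal_px * baseline_per_view / disparity_per_view

# A point 2 m away, seen with f = 800 px and a 5 mm step, shifts 2 px/view:
print(depth_from_epi_slope(2.0, 800.0, 0.005))  # 2.0
```

For a rotational motion, as used in this paper, the analogous trace is a helix rather than a line, and no such closed-form one-liner applies.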
The closest work to ours is [Kang et al. 2021], which learns to convert the photometric information into low-level multi-view stereo features. Their approach relies on efficient illumination sampling, and does not exploit differential cues in the view domain. In this paper, we present a novel differentiable framework to automatically learn to extract the geometric information from differential cues in a stack of images captured with an equiangular rotational motion, and transform into spatially distinctive and view-invariant per-pixel features at each view, in an end-to-end fashion. The illumination condition during acquisition can also be jointly optimized with the feature transform. We focus on learning high-quality local features, and delegate subsequent processing (e.g., spatial aggregation/view selection) to any existing learning-/non-learning-based multi-view stereo pipeline. In addition, our framework is flexible, as it can adapt to various factors, including the type of motion and the lighting layout, in a data-driven manner. The effectiveness of the framework is demonstrated with a lightstage, on a variety of challenging objects with complex appearance. Our reconstruction results compare favorably with state-of-the-art techniques. Moreover, we perform an extensive study on the impact of different factors over the reconstruction quality, and extend the framework to a fixed illumination condition. Finally, we explore additional applications of visualization of minor geometric details as well as computational stylization of high-dimensional appearance. RELATED WORK 2.1 Depth from Densely Sampled Views A straightforward solution is to treat the problem as standard multiview stereo [Bishop et al. 2009;Vaish et al. 2006]. However, it does not explicitly exploit the rich, depth-related structures in densely sampled near-by views, leading to sub-optimal geometric reconstruction [Johannsen et al. 2017]. 
In addition, the narrow baseline between neighboring views may require special treatments [Jeon et al. 2015]. From the seminal work of [Bolles et al. 1987], substantial research efforts have been devoted to dense depth estimation from a lightfield that densely samples the view domain [Wu et al. 2017b]. A typical pipeline consists of three steps. The first step extracts useful local structural cues [Wanner and Goldluecke 2013;Zhang et al. 2016] from the input data (e.g., EPIs or virtual views sampled on circles [Heber et al. 2013]). Here Lambertian reflectance is usually assumed. Additional care must be taken to handle view-varying appearance variations, where it is difficult to compute reliable local features [Sulc et al. 2022;Tao et al. 2014]. The next step aggregates local information to compute a global representation. Sophisticated algorithms are proposed to handle challenging cases like complex occlusions [Chen et al. 2014;Wang et al. 2016b]. The final step is to generate the depth according to its relationship with the global information [Johannsen et al. 2016;Wanner and Goldluecke 2012]. Recently, deep learning is adopted to convert the input data to depth-related orientation cues [Heber and Pock 2016], or even to a disparity map in an end-to-end fashion [Shin et al. 2018]. In differential photometric stereo [Chandraker et al. 2013;Wang et al. 2016a], the relationship among the depth, the normal and derivatives of image pairs undergoing differential motions in the presence of diffuse plus single-lobe SVBRDFs are manually analyzed. Building upon this relationship, efficient methods are proposed to optimize for a depth map. In comparison, our framework is the first to automatically learn efficient, low-level geometric features from the 3D image stack captured with densely sampled views, in the presence of complex appearance. Existing work either is based on manual derivations, or only works well with Lambertian reflectance. 
Moreover, we jointly optimize active illumination condition during acquisition to pack more geometric information into physical measurements, essentially improving the signal-to-noise-ratio. Third, we focus on learning modular features and delegate local-to-global processing to powerful multi-view stereo techniques, making it possible to harness the recent or even future development in that field. Features for Multi-View Stereo A typical pipeline in multi-view stereo [Furukawa and Hernández 2015] starts with computing local features from each input image. It then matches the features across different views, and exploits their correspondences to compute 3D points via triangulation. The quality of features directly affects the number and the reliability of correspondence matches, which is critical for the completeness and accuracy of the reconstructed shape. Over the past decades, the research on feature design has gradually evolved from hand-crafted ones [Fan et al. 2015;Mikolajczyk and Schmid 2005], traditional-learning-based ones [Ke and Sukthankar 2004;Simonyan et al. 2014;Trzcinski et al. 2013] to deeplearning-based ones [Tian et al. 2017;Zagoruyko and Komodakis 2015;Zbontar and LeCun 2015]. However, the majority of work assumes Lambertian reflectance and attempts to filter out, rather than exploit, the complex appearance variations. The closest work to ours is [Kang et al. 2021], which learns to convert the photometric information into spatially discriminative and view-invariant low-level features that are amenable for multi-view stereo. Their approach requires sufficient photometric information, and does not work as good as ours in the case of a single lighting condition. Moreover, it does not exploit the rich differential cues available in densely sampled views. ACQUISITION DEVICE We conduct physical experiments in a box-shaped lightstage with a size of 80cm×80cm×80cm, as illustrated in the left of Fig. 2. 
A 24MP Basler a2A5328-15ucPRO vision camera takes photographs of a 3D object placed on a digital turntable in the lightstage, from an angle of about 45° above the horizontal plane. The sample object is illuminated with 24,576 high-intensity LEDs mounted to all six faces of the box. The LED pitch is 1cm, and the intensity is quantized with 8 bits and controlled with house-made circuits. We calibrate the intrinsic and extrinsic parameters of the camera, as well as the positions, orientations and the angular intensity distribution of each LED.

RENDERING EQUATION
The following equation describes the relationship among the image measurement from a surface point p, the reflectance and the intensity of each LED of the device, which is crucial for optimization in a differentiable framework. Here we focus on a single channel for brevity.

$$B(I, \mathbf{x}_p, \mathbf{n}_p, \mathbf{t}_p, \omega_o) = \sum_{l} I(l) \int \frac{1}{\|\mathbf{x}_l - \mathbf{x}_p\|^2}\, \Psi(\mathbf{x}_l, -\omega_i)\, V(\mathbf{x}_l, \mathbf{x}_p)\, f(\omega_i'; \omega_o', p)\, (\omega_i \cdot \mathbf{n}_p)^{+} (-\omega_i \cdot \mathbf{n}_l)^{+}\, d\mathbf{x}_l, \qquad (1)$$

where $l$ is the index of a planar light source, and $I(l)$ is its intensity in the range of [0, 1], the collection of which will be referred to as a lighting pattern. In addition, $\mathbf{x}_p$/$\mathbf{n}_p$/$\mathbf{t}_p$ is the position/normal/tangent of p, while $\mathbf{x}_l$/$\mathbf{n}_l$ is the position/normal of a point on the light whose index is $l$. We denote $\omega_i$/$\omega_o$ as the lighting/view direction, with $\omega_i = \frac{\mathbf{x}_l - \mathbf{x}_p}{\|\mathbf{x}_l - \mathbf{x}_p\|}$. $\Psi(\mathbf{x}_l, \cdot)$ is the angular distribution of the light intensity. $V$ is a binary visibility function between $\mathbf{x}_l$ and $\mathbf{x}_p$. The operator $(\cdot)^{+}$ computes the dot product between two vectors, and clamps a negative result to zero. $f(\cdot; \omega_o', p)$ is a 2D BRDF slice, which is a function of the lighting direction. We use the anisotropic GGX model [Walter et al. 2007] to represent $f$.

OVERVIEW
We first acquire a series of images of a physical object, rotating from 0 to 360° with a constant angular interval under a pre-optimized lighting pattern.
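As a numerical illustration of Eq. 1, the following minimal sketch treats each LED as a single point sample and sets the angular distribution Ψ, the visibility V and the BRDF slice f to constants. It is our own simplified code, not the paper's differentiable renderer; only the structure of the equation (sum over lights, inverse-square falloff, two clamped cosines) is mirrored:

```python
import numpy as np

# Simplified point-light evaluation of Eq. 1 (our own sketch): Psi = V = 1,
# and the BRDF slice f is replaced by a constant albedo. Function and
# parameter names are illustrative assumptions.

def measurement(I, x_l, n_l, x_p, n_p, albedo=1.0):
    B = 0.0
    for I_l, xl, nl in zip(I, x_l, n_l):       # sum over light sources l
        w_i = xl - x_p
        r2 = np.dot(w_i, w_i)                  # 1 / ||x_l - x_p||^2 falloff
        w_i = w_i / np.sqrt(r2)
        cos_p = max(np.dot(w_i, n_p), 0.0)     # (w_i . n_p)^+
        cos_l = max(np.dot(-w_i, nl), 0.0)     # (-w_i . n_l)^+
        B += I_l * albedo * cos_p * cos_l / r2
    return B

# One unit-intensity light one unit above a point, facing it head-on:
B = measurement([1.0], [np.array([0.0, 0.0, 1.0])], [np.array([0.0, 0.0, -1.0])],
                np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(B)  # 1.0
```

Because every term is a differentiable function of the light intensities I(l), gradients can flow from a loss on the rendered measurements back to the lighting pattern, which is what enables the joint illumination optimization described later.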
From high-quality training data synthesized with a wide variety of pre-captured objects, our network automatically learns to transform a local rank-3 tensor in the spatial-angular domain (i.e., the differential structural cues) to a spatially discriminative and view/lighting-invariant low-level feature vector. The procedure is repeated for each pixel in each image, resulting in multi-view high-dimensional feature maps. Finally, the maps are converted to RGB images and sent to a state-of-the-art multi-view stereo technique to reconstruct a 3D shape. Fig. 2 illustrates the process.

OUR NETWORK
6.1 Input/Output
For each valid pixel at each input view, we assemble its neighborhood in the spatial-angular domain into a rank-3 tensor of $w \times h \times d$, with the pixel of interest sitting at the center ($w = h = d = 5$ in most experiments). This tensor is the main input to our network. Similar to [Kang et al. 2021], the view specification of the current image is also fed to the network in the continuous form of $[\cos(\theta), \sin(\theta)]$, where $\theta$ is the absolute rotation angle of the turntable in our setup. The idea is to encourage the network to exploit this additional view information to enhance feature quality. Our output is a 10D unit vector, representing a spatially distinctive and view-invariant feature at the pixel of interest.

Architecture
Our main network is designed to exploit the differential structural cues in the input tensor. It consists of 11 fc layers and 1 normalization layer, and employs leaky ReLU for nonlinear activation. The first layer can be viewed as filters that extract different structural cues from the input tensor (Fig. 6). Note that the view specification (Sec. 6.1) is supplied to the network after 5 layers. The idea is to perform a low-level, view-independent transform first, and then further convert into a view-dependent feature with the additional view information.
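The input assembly of Sec. 6.1 can be sketched as follows. This is our own illustrative code; border handling, view wrap-around and the exact memory layout are assumptions:

```python
import numpy as np

# Hedged sketch of the network-input assembly (not the paper's code): for a
# pixel (y, x) at view v, gather its spatial neighborhood across consecutive
# views of the rotationally captured image stack. Border pixels and views
# near 0/360 degrees would need wrap-around handling, omitted here.

def input_tensor(stack, v, y, x, W=5, H=5, D=5):
    """stack: (num_views, rows, cols) image stack -> (D, H, W) tensor."""
    hv, hy, hx = D // 2, H // 2, W // 2
    return stack[v - hv:v + hv + 1, y - hy:y + hy + 1, x - hx:x + hx + 1]

stack = np.random.rand(360, 64, 64)   # 360 views at a 1-degree interval
t = input_tensor(stack, v=10, y=32, x=32)
print(t.shape)  # (5, 5, 5)
```

The pixel of interest sits at the center of the tensor, i.e. `t[2, 2, 2]` equals `stack[10, 32, 32]` in this example.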
The final normalization layer produces a unit feature vector as output for training stability, as common in related literature [Schroff et al. 2015; Wu et al. 2017a]. Please refer to Fig. 3 for an illustration. In addition, the lighting pattern during acquisition is related to the input tensor (and therefore the loss function) in a differentiable way, according to Eq. 1. This allows us to optimize the illumination in conjunction with our main network for further quality improvement, as detailed in Sec. 6.3. To constrain the intensity of each light to the feasible range of [0, 1], we use two unconstrained parameters $a$ and $b$ to express one LED intensity as $\frac{1}{2}\left(\frac{a}{\sqrt{a^2+b^2}}+1\right)$.

Data Generation
We generate training tensors by rendering 14 pre-captured objects (Fig. 4) with 3D shapes and 6D SVBRDFs represented as texture maps of GGX parameters. Fig. 5 shows some examples. Specifically, for each object, we virtually place it on the turntable, and rotate it from 0 to 360° with the same angular interval as in the physical experiments. For each rotated view, we render the position/normal/tangent/BRDF parameters, and store the results as attribute maps. Next, for a valid pixel at a particular view, we assemble its attribute tensor by putting together its neighborhood in the spatial-angular domain from all attribute maps. Finally, this attribute tensor is used to render an input tensor of the same size for the main network under a given lighting pattern in a differentiable manner, using Eq. 1.

Loss Function
Towards these goals, we propose the following loss function defined on a batch of samples:
$$\mathcal{L} = \mathcal{L}_{pos} - \lambda\, \mathcal{L}_{neg}, \qquad (2)$$
Here $t$ denotes an input tensor and $f(t)$ its transform by our main network. $\mathcal{L}$ consists of two terms: $\mathcal{L}_{pos}$ to encourage the view-invariance of features, and $\mathcal{L}_{neg}$ to increase spatial distinctiveness. Specifically, $\mathcal{L}_{pos}$ measures the Euclidean feature distances between positive pairs (i.e., input tensors at two different views that belong to the same 3D surface point).
$\mathcal{L}_{neg}$ measures the feature distances between negative pairs (i.e., tensors that belong to points within a spatial neighborhood of $s_{neg} \times s_{neg}$ at the same view). We set $\lambda$ to 0.01 in most experiments. To prepare a batch of samples for training, we first randomly select a surface point $p_0$ on one of the pre-captured objects, along with a pre-rendered view (Sec. 6.3). Next, we randomly sample valid pixels in the $s_{neg} \times s_{neg}$ spatial neighborhood of the projection of $p_0$ at the current view, and store the corresponding surface points as $\{p_1, p_2, ..., p_{k-1}\}$. For each $p_i$, we additionally sample another visible view along with the current one; the corresponding tensors at the two views form a positive pair. On the other hand, for each pair of $\{p_i, p_j\}_{i \neq j}$, the two corresponding tensors at the current view form a negative pair. Note that our negative pairs are "harder" than in [Kang et al. 2021], as the tensors corresponding to spatially nearby points could be quite similar. This makes learning with a relative distance loss [Kang et al. 2021; Tian et al. 2017] rather difficult in our pilot study. As a result, we design the current simpler loss based on absolute distances.

Fig. 3. Network architecture. Our network takes as input a rank-3 tensor and produces a normalized 10D feature as output.

Fig. 4. The collection of all pre-captured objects used to generate our training data. The diffuse albedos, as shown here, along with other material and geometric properties are sampled to produce attribute tensors that can be relit using arbitrary illumination conditions during network training.

Training
Our network is implemented with PyTorch, and trained with the Adam optimizer with mini-batches of 12 and a momentum of 0.9. Xavier initialization is applied, with a learning rate of $1 \times 10^{-4}$. To increase the robustness in processing physical measurements, we perturb each rendered pixel in a training tensor with a multiplicative Gaussian noise ($\mu = 1$, $\sigma = 1\%$).
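A minimal numpy sketch of the loss in Eq. 2 (our own illustrative code with assumed symbol names; the paper's implementation is in PyTorch):

```python
import numpy as np

# L = L_pos - lambda * L_neg (Eq. 2): L_pos averages Euclidean distances over
# positive pairs (same surface point seen from two views), L_neg over negative
# pairs (nearby surface points at the same view). Features are unit vectors.

def pair_dist(a, b):
    return np.linalg.norm(a - b, axis=-1).mean()

def dift_loss(pos_a, pos_b, neg_a, neg_b, lam=0.01):
    return pair_dist(pos_a, pos_b) - lam * pair_dist(neg_a, neg_b)

f = np.eye(3)  # three toy one-hot "features" standing in for 10D unit vectors
# Identical positives (distance 0), mutually orthogonal negatives (sqrt(2)):
print(dift_loss(f, f, f, np.roll(f, 1, axis=0)))  # -0.01 * sqrt(2), about -0.0141
```

Minimizing this pushes positive distances toward zero while pushing negative distances apart, directly encoding the view-invariance and spatial-distinctiveness goals on absolute rather than relative distances.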
IMPLEMENTATION DETAILS
At runtime, we separately transform the RGB channels of the input images, resulting in a three-channel high-dimensional feature vector at each pixel location in each view. All channels are further concatenated into a single vector, and then projected to a 3D space in the range of $[0, 1]^3$ via principal component analysis. Next, the results are quantized with 8 bits and stored as conventional RGB images. For each photograph, we mask out the background using an additional image with only the back-face LEDs on [Gardner et al. 2003]. Finally, the masked RGB images are fed as input to COLMAP [Schönberger et al. 2016] for 3D reconstruction.

RESULTS AND DISCUSSIONS
We acquire the geometry of 4 physical objects that vary in shape and appearance. Unless otherwise noted, it takes 8 minutes to take 360 photographs of each object with a 1° rotational interval, under a pre-optimized lighting condition in our setup. No high-dynamic-range imaging is employed. The ground-truth shapes are obtained with a professional 3D scanner [Shining3D 2022], after applying powder to the surfaces of the physical samples, which is necessary to reduce the adversarial specular reflections. Each captured image is downsampled to reduce noise and cropped to exclude the background to a size of about 300K effective pixels. All computation is performed on a workstation with dual Intel Xeon 4210 CPUs, 256GB DDR4 memory and 4 NVIDIA GeForce RTX 3090 GPUs. It takes 71 hours to generate positive/negative pairs and compute the corresponding attribute tensors, sampled from the collection of all pre-captured objects, in a pre-processing pass (Sec. 6.3). This results in 1.2TB of training data.

Fig. 6. Visualization of our learned filters in the spatial-angular domain. Each column of images represents a filter of 5×5×5, displayed as $d = 5$ images of $w \times h$ = 5×5 at each row. Positive/negative values are indicated as red/green, respectively. Only 16 filters are shown due to limited space.
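The runtime feature-to-RGB projection described under IMPLEMENTATION DETAILS can be sketched as follows. This is our own code, not the paper's: PCA is computed via SVD, and the rescaling and quantization details are assumptions:

```python
import numpy as np

# Hedged sketch: per-pixel high-dimensional features (the concatenated RGB
# channels) are reduced to 3D via PCA, rescaled to [0, 1]^3, and quantized
# to 8 bits so they can be stored as ordinary RGB images for the multi-view
# stereo back-end.

def features_to_rgb(feats):
    """feats: (num_pixels, dim) -> (num_pixels, 3) uint8 image data."""
    centered = feats - feats.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ Vt[:3].T                       # top-3 principal axes
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    proj = (proj - lo) / np.maximum(hi - lo, 1e-12)  # rescale to [0, 1]
    return (proj * 255).round().astype(np.uint8)

rgb = features_to_rgb(np.random.default_rng(1).normal(size=(1000, 30)))
print(rgb.shape, rgb.dtype)  # (1000, 3) uint8
```

Routing the features through ordinary 8-bit RGB images is what lets an unmodified back-end such as COLMAP consume them without any interface changes.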
Next, the network training takes 38 hours for 500K iterations. At runtime, it takes 15 minutes to compute our DiFT features from input photographs and project to RGB images with unoptimized code, and 30 minutes to reconstruct the final 3D shape with COLMAP, a state-of-the-art non-learning-based multi-view stereo technique [Schönberger et al. 2016]. We visualize the weights in the first layer of the network in Fig. 6, which are essentially learned tensor filters. It is interesting to see the variety of the filters, including edge and ring-shaped ones, as a result of DiFT training. Moreover, the non-zero weights in views other than the center one demonstrates the effectiveness of the network in exploiting differential cues in the spatial-angular domain. Comparisons We compare in Fig. 7 the reconstruction results of 4 physical objects captured with our device, using DiFT against three closely related methods. EPFT is the closest work to ours, which learns efficient features from photometric information [Kang et al. 2021]. For a fair comparison, we train EPFT with a single lighting pattern, the same as in DiFT. CasMVSNet is a state-of-the-art deep-learning-based multi-view stereo technique [Gu et al. 2020]. COLMAP, on the other hand, is the representative non-learning-based stereo work [Schönberger et al. 2016], as well as the back-end of EPFT and DiFT. All four approaches take as input 360 photographs. DiFT and EPFT employ a pre-optimized lighting pattern during acquisition. For CasMVSNet and COLMAP, one of two commonly used lighting conditions is employed for different objects: a full-on pattern physically reduces the view variance of complex appearance, while a 4-point pattern introduces more shading cues. Please refer to the accompanying video for the input photographs of DiFT along with the corresponding feature maps at all captured views. Note that the same set of parameters are used in all 3D reconstruction experiments. 
In the figure, quantitative errors in accuracy/completeness (%) at a 0.5mm threshold are reported (as A/C). For accuracy, we achieve consistently the highest scores among all approaches, due to the efficient exploitation of useful differential cues in the spatial-angular domain. For completeness, we rank first in Bust, and are second to CasMVSNet for other objects. The reason is that in our experiments, CasMVSNet tends to output overly dense points that are visually pleasing, but with a lower accuracy. A visual comparison of all results also confirms this point. Moreover, compared with EPFT trained with relatively independent negative pairs, our negative samples are from a close spatial neighborhood, leading to desirable higher-frequency features as shown in the figure, for enhanced reconstruction. Evaluations We evaluate the impact of an extensive set of factors over the final reconstruction. For all figures mentioned in this subsection, images in the first and second column are for visual comparison of learned features with a small baseline, and the third column is for large baseline comparisons. Quantitative results on 3D reconstruction are also listed on the right in vertical text. Moreover, the second row in Fig. 8 shows the common baseline result, which is omitted in all other figures due to limited space. In Fig. 8, we first study the impact of . When is small, the loss on positive pairs dominates, leading to degenerated features that are less spatially distinctive. On the other hand, when is large, the loss on negative pairs kicks in to produce high-frequency spatial features, at the cost of view stability. The current choice of 0.01 is selected after balancing the two factors. We test two interesting tensor sizes in Fig. 9, 5×5×1 and 1×1×5. The former discards the differential cues in the neighboring views and degenerates to a pure image-domain technique, resulting in features that are less sensitive to surface details and thus a less satisfactory shape. 
The latter does not take the spatial neighborhood into account and solely relies on the angular domain, producing a better reconstruction with a surprisingly limited tensor size. This demonstrates the effectiveness of cues in densely sampled views. In Fig. 11, we further investigate the neighborhood size used in negative pair sampling (Sec. 6.4). Due to effective spatial propagation, different neighborhood sizes end up with similar results. In Fig. 12, we experiment with different angular sampling rates. While a bigger angular interval (2 • /4 • ) corresponds to a larger receptive field of the tensor, the coherence (i.e., differential structures) among neighboring views is reduced and the number of captured views as well. These two factors lead to less complete results. Finally, the framework is extended to a fixed 4-point lighting pattern ( Fig. 10). As expected, the quality is not as good as the baseline, due to fewer degrees of freedom in the optimization. Nevertheless, the experiment demonstrates the flexibility of our framework to adapt to different configurations. It will be interesting to extend to more complex illumination conditions, such as multiple patterns at each view and alternating patterns that change with the view, to further exploit the rich photometric information in these cases. Additional Applications Here we briefly describe two additional applications. First, despite the presence of complex appearance, DiFT features visualize/magnify the intricate geometric details using as few as 5 input images at neighboring views (Fig. 13). Such details might be difficult to spot in the original photograph, or even after applying a conventional image-domain Laplacian filter. As shown in the figure, our features bring out the natural growth pattern on the surface of the conch, which might be helpful in fields like biology or archaeology. In Fig. 
14, we convert the feature maps with simple per-pixel operations into cartoon-/sketch-like stylization results that are stable across different views. DiFT may offer a unique perspective to computational stylization in the presence of complex appearance [Bousseau et al. 2013].

LIMITATIONS AND FUTURE WORK
Our work is subject to a few limitations. First, the training data depend on the availability of high-quality digitized objects with separate representations of shape and appearance, a key resource that is still lacking nowadays. In addition, global illumination is not considered in training tensor generation, though more complete simulations can be performed at the expense of a substantially increased computation burden. Finally, the current framework supports reflectance only. For future work, it will be promising to extend DiFT to handle 1D/2D translational motions, or even irregular motions, as long as the view specification for each image can be accurately calibrated and reparameterized to be used with deep learning. It will also be interesting to reconstruct the 6D appearance from differential cues in conjunction with the geometry. Following Sec. 8.3, we hope that DiFT will serve as a fundamental low-level feature descriptor that might find useful applications in a variety of fields beyond computer graphics.

Fig. 14. Cartoon-/sketch-like stylization with our features.

Ideal features in multi-view stereo should possess the following properties [Tian et al. 2017; Zagoruyko and Komodakis 2015; Zbontar and LeCun 2015]: (1) the features of the same point at different views are invariant; (2) the features of different points are discriminative; (3) it is efficient to compare features.

Fig. 5. Examples of training tensors. They are generated by first sampling geometric and material attributes from a rotating digitized object, and then rendering with a given lighting condition.
Each column of images represents a tensor of 5×5×5, displayed as $d = 5$ images of $w \times h$ = 5×5 at each row.

Fig. 8. Impact of $\lambda$ over features. Reconstruction errors are reported on the right.

Fig. 9. Impact of tensor size over features. Reconstruction errors are reported on the right.

Fig. 10. Impact of illumination over features. Reconstruction errors are reported on the right.

Fig. 11. Impact of neighborhood size ($s_{neg} \times s_{neg}$) of negative samples over features. Reconstruction errors are reported on the right.

Fig. 7. Comparisons with related work. From the left column to the right: our features, EPFT features [Kang et al. 2021], photographs with a fixed lighting condition, and the results using DiFT, EPFT, CasMVSNet [Gu et al. 2020] and COLMAP [Schönberger et al. 2016]. The third column is input to both CasMVSNet and COLMAP, which use a full-on pattern for the top two objects and a 4-point lighting for the rest. Quantitative accuracy/completeness (A/C, %):

Object   DiFT (Ours)     EPFT            CasMVSNet       COLMAP
Bust     A:47.8/C:78.4   A:45.7/C:70.6   A:36.7/C:74.6   A:45.9/C:55.1
Train    A:41.5/C:49.9   A:40.5/C:43.7   A:34.1/C:74.4   A:39.5/C:35.8
Cat      A:61.6/C:76.2   A:55.7/C:66.0   A:44.9/C:86.8   A:31.7/C:24.6
Baymax   A:41.9/C:41.6   A:38.1/C:37.9   A:33.8/C:73.0   A:28.2/C:16.2

Fig. 12. Impact of angular sampling rate over features. Reconstruction errors are reported on the right. (Image# = 180: A=51.5/C=63.4; Image# = 90: A=51.3/C=40.1.)

Fig. 13. Comparison between DiFT and the image-domain Laplacian. Our features better bring out intricate geometric details. Please zoom in on a computer screen for a better visualization. (Panels: Photo, DiFT, Laplacian; Photo, Style#1, Style#2.)

• Trovato and Tobin, et al., Vol. 1, No. 1, Article. Publication date: March 2022.

REFERENCES
Tom E Bishop, Sara Zanetti, and Paolo Favaro. 2009. Light field superresolution. In ICCP. IEEE, 1-9.
Robert C Bolles, H Harlyn Baker, and David H Marimont. 1987. Epipolar-plane image analysis: An approach to determining structure from motion. International Journal of Computer Vision 1, 1 (1987), 7-55.
Adrien Bousseau, James P O'Shea, Frédo Durand, Ravi Ramamoorthi, and Maneesh Agrawala. 2013. Gloss perception in painterly and cartoon rendering. ACM TOG 32, 2 (2013), 1-13.
Manmohan Chandraker, Jiamin Bai, and Ravi Ramamoorthi. 2013. On Differential Photometric Reconstruction for Unknown, Isotropic BRDFs. PAMI 35, 12 (2013), 2941-2955.
Can Chen, Haiting Lin, Zhan Yu, Sing Bing Kang, and Jingyi Yu. 2014. Light field stereo matching using bilateral statistics of surface cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1518-1525.
Bin Fan, Zhenhua Wang, Fuchao Wu, et al. 2015. Local image descriptor: modern approaches. Vol. 108. Springer.
Yasutaka Furukawa and Carlos Hernández. 2015. Multi-view stereo: A tutorial. Foundations and Trends in Computer Graphics and Vision 9, 1-2 (2015), 1-148.
Andrew Gardner, Chris Tchou, Tim Hawkins, and Paul Debevec. 2003. Linear light source reflectometry. ACM Trans. Graph. 22, 3 (2003), 749-758.
Xiaodong Gu, Zhiwen Fan, Siyu Zhu, Zuozhuo Dai, Feitong Tan, and Ping Tan. 2020. Cascade cost volume for high-resolution multi-view stereo and stereo matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2495-2504.
Stefan Heber and Thomas Pock. 2016. Convolutional networks for shape from light field. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3746-3754.
Stefan Heber, Rene Ranftl, and Thomas Pock. 2013. Variational shape from light field. In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition. Springer, 66-79.
Hae-Gon Jeon, Jaesik Park, Gyeongmin Choe, Jinsun Park, Yunsu Bok, Yu-Wing Tai, and In So Kweon. 2015. Accurate depth map estimation from a lenslet light field camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1547-1555.
O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. Jeon, I. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. Tai, Q. Wang, T. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu. 2017. A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms. In CVPRW. 1795-1812.
Ole Johannsen, Antonin Sulc, and Bastian Goldluecke. 2016. What sparse light field coding reveals about scene structure. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3262-3270.
Kaizhang Kang, Cihui Xie, Ruisheng Zhu, Xiaohe Ma, Ping Tan, Hongzhi Wu, and Kun Zhou. 2021. Learning Efficient Photometric Feature Transform for Multi-view Stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 5956-5965.
Yan Ke and Rahul Sukthankar. 2004. PCA-SIFT: A more distinctive representation for local image descriptors. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2. IEEE, II-II.
Krystian Mikolajczyk and Cordelia Schmid. 2005. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence 27, 10 (2005), 1615-1630.
Johannes Lutz Schönberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. 2016. Pixelwise View Selection for Unstructured Multi-View Stereo. In ECCV.
Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In CVPR. 815-823.
Changha Shin, Hae-Gon Jeon, Youngjin Yoon, In So Kweon, and Seon Joo Kim. 2018. Epinet: A fully-convolutional neural network using epipolar geometry for depth from light field images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4748-4757.
Shining3D. 2022. EinScan Pro 2X Plus Handheld Industrial Scanner. Retrieved January 2022 from https://www.einscan.com/handheld-3d-scanner/2x-plus/
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Learning local feature descriptors using convex optimisation. TPAMI 36, 8 (2014), 1573-1585.
Antonin Sulc, Ole Johannsen, and Bastian Goldluecke. 2022. Recovery of geometry, natural illumination, and BRDF from a single light field image. JOSA A 39, 1 (2022), 72-85.
Michael W Tao, Ting-Chun Wang, Jitendra Malik, and Ravi Ramamoorthi. Depth estimation for glossy surfaces with light-field cameras. In European Conference on Computer Vision.
SpringerMichael W Tao, Ting-Chun Wang, Jitendra Malik, and Ravi Ramamoorthi. 2014. Depth estimation for glossy surfaces with light-field cameras. In European Conference on Computer Vision. Springer, 533-547. L2-net: Deep learning of discriminative patch descriptor in euclidean space. Yurun Tian, Bin Fan, Fuchao Wu, CVPR. Yurun Tian, Bin Fan, and Fuchao Wu. 2017. L2-net: Deep learning of discriminative patch descriptor in euclidean space. In CVPR. 661-669. Boosting binary keypoint descriptors. Tomasz Trzcinski, Mario Christoudias, Pascal Fua, Vincent Lepetit, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionTomasz Trzcinski, Mario Christoudias, Pascal Fua, and Vincent Lepetit. 2013. Boosting binary keypoint descriptors. In Proceedings of the IEEE conference on computer vision and pattern recognition. 2874-2881. Reconstructing Occluded Surfaces Using Synthetic Apertures: Stereo, Focus and Robust Measures. V Vaish, M Levoy, R Szeliski, C L Zitnick, Sing Bing Kang, In CVPR. 2V. Vaish, M. Levoy, R. Szeliski, C.L. Zitnick, and Sing Bing Kang. 2006. Reconstructing Occluded Surfaces Using Synthetic Apertures: Stereo, Focus and Robust Measures. In CVPR, Vol. 2. 2331-2338. Microfacet Models for Refraction through Rough Surfaces. Bruce Walter, Stephen R Marschner, Hongsong Li, Kenneth E Torrance, Rendering Techniques (Proc. EGWR). Bruce Walter, Stephen R. Marschner, Hongsong Li, and Kenneth E. Torrance. 2007. Microfacet Models for Refraction through Rough Surfaces. In Rendering Techniques (Proc. EGWR). SVBRDF-invariant shape and reflectance estimation from light-field cameras. Ting-Chun Wang, Manmohan Chandraker, Alexei A Efros, Ravi Ramamoorthi, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionTing-Chun Wang, Manmohan Chandraker, Alexei A Efros, and Ravi Ramamoorthi. 2016a. 
SVBRDF-invariant shape and reflectance estimation from light-field cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 5451-5459. Depth estimation with occlusion modeling using light-field cameras. Ting-Chun Wang, Alexei A Efros, Ravi Ramamoorthi, IEEE transactions. 38Ting-Chun Wang, Alexei A Efros, and Ravi Ramamoorthi. 2016b. Depth estimation with occlusion modeling using light-field cameras. IEEE transactions on pattern analysis and machine intelligence 38, 11 (2016), 2170-2181. Globally consistent depth labeling of 4D light fields. Sven Wanner, Bastian Goldluecke, 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEESven Wanner and Bastian Goldluecke. 2012. Globally consistent depth labeling of 4D light fields. In 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 41-48. Variational light field analysis for disparity estimation and super-resolution. Sven Wanner, Bastian Goldluecke, IEEE transactions. 36Sven Wanner and Bastian Goldluecke. 2013. Variational light field analysis for disparity estimation and super-resolution. IEEE transactions on pattern analysis and machine intelligence 36, 3 (2013), 606-619. Sampling matters in deep embedding learning. Chao-Yuan, R Wu, Alexander J Manmatha, Philipp Smola, Krahenbuhl, ICCV. Chao-Yuan Wu, R Manmatha, Alexander J Smola, and Philipp Krahenbuhl. 2017a. Sampling matters in deep embedding learning. In ICCV. 2840-2848. Light field image processing: An overview. Gaochang Wu, Belen Masia, Adrian Jarabo, Yuchen Zhang, Liangyong Wang, Qionghai Dai, Tianyou Chai, Yebin Liu, IEEE Journal of Selected Topics in Signal Processing. 11Gaochang Wu, Belen Masia, Adrian Jarabo, Yuchen Zhang, Liangyong Wang, Qionghai Dai, Tianyou Chai, and Yebin Liu. 2017b. Light field image processing: An overview. IEEE Journal of Selected Topics in Signal Processing 11, 7 (2017), 926-954. Learning to compare image patches via convolutional neural networks. 
Sergey Zagoruyko, Nikos Komodakis, CVPR. Sergey Zagoruyko and Nikos Komodakis. 2015. Learning to compare image patches via convolutional neural networks. In CVPR. 4353-4361. Computing the stereo matching cost with a convolutional neural network. Jure Zbontar, Yann Lecun, CVPR. Jure Zbontar and Yann LeCun. 2015. Computing the stereo matching cost with a convolutional neural network. In CVPR. 1592-1599. Robust depth estimation for light field via spinning parallelogram operator. Shuo Zhang, Hao Sheng, Chao Li, Jun Zhang, Zhang Xiong, Computer Vision and Image Understanding. 145Shuo Zhang, Hao Sheng, Chao Li, Jun Zhang, and Zhang Xiong. 2016. Robust depth estimation for light field via spinning parallelogram operator. Computer Vision and Image Understanding 145 (2016), 148-159.
[]
[ "Entropy-Driven Microstructure Evolution Predicted with the Steepest-Entropy-Ascent Quantum Thermodynamic Framework" ]
[ "Jared Mcdonald", "Michael R Von Spakovsky", "William T Reynolds Jr" ]
[ "Materials Science and Engineering Department\nVirginia Tech\n24061BlacksburgVAUSA", "Mechanical Engineering Department\nVirginia Tech\n24061BlacksburgVAUSA", "Materials Science and Engineering Department\nVirginia Tech\n24061BlacksburgVAUSA" ]
[]
A Potts model and the Replica Exchange Wang-Landau algorithm is used to construct an energy landscape for a crystalline solid containing surfaces and grain boundaries. The energy landscape is applied to an equation of motion from the steepest-entropy-ascent quantum thermodynamic (SEAQT) framework to explore the kinetics of three distinct kinds of microstructural evolution: polycrystalline sintering, precipitate coarsening, and grain growth. The steepest entropy ascent postulate predicts unique kinetic paths for these non-equilibrium processes without needing any detailed information about the underlying physical mechanisms of the processes. A method is also proposed for associating the kinetic path in state space to a set of smoothly evolving microstructural descriptors. The SEAQT-predicted kinetics agree well with available experimental kinetics for ZrO2 sintering, Al3Li precipitate coarsening, and grain growth in nanocrystalline Pd. The computational cost associated with calculating the energy landscape needed by the approach is comparable to a Monte Carlo simulation. However, the subsequent kinetic calculations from the SEAQT equation of motion are quite modest and save considerable computational resources by obviating the need for averaging multiple kinetic Monte Carlo runs. arXiv:2108.11924v3 [cond-mat.mtrl-sci] 24 May 2022
10.1016/j.actamat.2022.118163
[ "https://export.arxiv.org/pdf/2108.11924v3.pdf" ]
247,476,417
2108.11924
266ffd9eafbcfe37e592aaff4af1daf846ff28fc
Entropy-Driven Microstructure Evolution Predicted with the Steepest-Entropy-Ascent Quantum Thermodynamic Framework

Jared Mcdonald, Materials Science and Engineering Department, Virginia Tech, Blacksburg, VA 24061, USA
Michael R. Von Spakovsky, Mechanical Engineering Department, Virginia Tech, Blacksburg, VA 24061, USA
William T. Reynolds Jr., Materials Science and Engineering Department, Virginia Tech, Blacksburg, VA 24061, USA

(Dated: 2021-08-22)

I.
INTRODUCTION

The impetus for microstructural evolution lies in one of Clausius's seminal statements of the second law of thermodynamics: the entropy of an isolated system at constant energy tends to a maximum. Historically, this maximum entropy principle was rarely used in materials science because it is impossible to relate entropy directly to measurable microstructural parameters. Instead, changes during processes like particle sintering, grain growth, and precipitate coarsening were typically modeled by the conjugate principle of minimizing energy at constant entropy. The kinetics of these processes were expressed as a linear function of a driving force typically taken to be a local free-energy change associated with reducing the area of surfaces and grain boundaries. In contrast to such kinetic descriptions, the steepest-entropy-ascent quantum thermodynamic (SEAQT) framework provides a practical vehicle for applying Clausius's maximum entropy principle. The framework identifies unique kinetic paths to stable equilibrium without the need for the usual near- and local-equilibrium assumptions. The SEAQT approach is based upon entropy calculated directly from a discrete energy landscape that covers all possible microstructures of a system. The energy landscape is determined from an appropriate model that depends upon the nature of the physical system [1-16]. The model can either be quantum mechanically based or quantum mechanically inspired (e.g., solid-state, Ising, Heisenberg, or Potts models). The contribution presented here applies the SEAQT framework to an energy landscape to describe the kinetics of three kinds of microstructural evolution to demonstrate the generality and flexibility of the approach. The energy landscape is based on a Potts model, which is developed using the Replica Exchange Wang-Landau algorithm [17, 18] with a Hamiltonian defined for a solid with surfaces and grain boundaries.
The algorithm is used to numerically generate the energy landscape and corresponding density of states for the system. The state of the system is expressed as a probability density distribution at each instant of time over the energy levels of the system's energy landscape [19], and expected values of the energy and entropy are calculated directly from these time-dependent probabilities. The probability density distributions are uniquely predicted by the SEAQT equation of motion, which provides the time evolution of the probabilities that describe the occupancy of the energy levels of the landscape and, thus, the nonequilibrium state of the system in time. Different starting points or initial conditions on the energy landscape give rise to qualitatively and quantitatively different kinetic paths along which the system evolves. The results for three different initial conditions are given here to demonstrate the phenomenological behavior corresponding to polycrystalline sintering, precipitate coarsening (or Ostwald ripening), and grain growth. The results include time evolutions of the system microstructure as well as the microstructure's average grain size, the number of grain boundaries and surface boundaries, and the relative density. Even though the three evolutions involve distinctly different phenomenological behaviors, they are obtained with a single model and a single energy landscape without reference to kinetic mechanisms or assumed limiting rates; state evolution in each case is driven simply by the principle of steepest entropy ascent (SEA). 
This principle has been postulated as a fundamental law of nature [20] and is used by the SEAQT equation of motion [21] to maximize the entropy production at each instant along the nonequilibrium path the system takes through state space. It is significant that the SEA principle in this framework is not merely a constrained optimization with an objective function and a set of decision variables; it is instead the application of a variational principle that leads to a unique thermodynamic path through state space. The former, for example, provides for a maximum growth velocity, as in the case of Martyushev's application of the maximum entropy production principle (MEP) to dendrite growth [22], or for a stability point uniquely identified as a minimum/maximum in the entropy production, as in Kirkaldy's application to eutectic spacing [23], but has nothing to say about the unique non-equilibrium transient thermodynamic path taken. In contrast, the variational principle of the SEAQT framework assumes that nature always seeks a thermodynamic path that satisfies an extremum. This is analogous to the variational principle used in classical mechanics that determines the unique trajectory of a particle from an infinite number of possible trajectories by finding the "least action" (i.e., minimizing the difference of the kinetic and potential energies represented by the Lagrangian). The result is the set of Euler-Lagrange equations and, as a consequence, the equation of motion of Newtonian physics. It is such a variational principle based on SEA that leads to the SEAQT equation of motion and the ability to predict the unique non-equilibrium transient thermodynamic path taken by a system. The question, of course, arises as to why the extremum in this case should be the maximization as opposed to the minimization of the entropy production. At first glance, the two would seem to contradict each other. However, they do not.
As has been very clearly shown, one can arrive at both linear and nonlinear non-equilibrium thermodynamics from Ziegler's maximum entropy production principle [24-28], which yields as a particular case Onsager's linear result. This contrasts with Prigogine's minimum entropy production principle [29], which is a particular case of the Onsager-Gyarmati principle of linear non-equilibrium thermodynamics in which a stationary process is in the presence of free thermodynamic forces (e.g., a free boundary or indeterminate diffusion as in reference [23]). However, there is evidence to suggest that in nonstationary processes, and in stationary processes well removed from equilibrium, nature chooses to operate with fixed forces at any given instant of time and, thus, maximizes the entropy production at each moment [12, 20]. It is this idea which forms the basis of Ziegler's principle as well as that of Beretta [30-35]: that the direction nature chooses at every instant of time is that of steepest entropy ascent. A final comment about the generality of the SEA principle is illustrated with the work of Kirkaldy [36-38] in which, using a variational principle, he applies the minimum entropy principle to diffusional growth in solid-solid transformations. Cahn and Mullins [39] challenge the generality of this principle and Kirkaldy's use of it with two simple examples: i) one-dimensional steady state heat conduction and ii) one-dimensional steady state mass diffusion in the presence of an externally maintained temperature gradient. In both examples, Cahn and Mullins correctly show that in this particular case, minimum entropy production does not correspond to the correct temperature or concentration profile that would result from the transient thermal and mass diffusion equations and Fourier's and Fick's laws.
Thus, for the case of thermal diffusion, the temperature profile, which should be linear at steady state, is shown by Cahn and Mullins, using a variational principle, to be logarithmic for the case of minimum entropy production. However, all that this particular example and the other one demonstrate is that minimum entropy production, as implemented by Cahn and Mullins via a variational principle, does not correspond to the correct steady state profile, which can, however, be determined via the SEA principle as implemented in the SEAQT equation of motion. In fact, this is done for the first of these two examples by Li, von Spakovsky, and Hin in Appendix B of [5], where, using the SEAQT equation of motion, the authors predict the correct steady state linear temperature profile for the case of constant thermal conductivity. They also show that when the thermal conductivity is not constant, the steady state temperature profile is somewhat nonlinear, as would be expected. This is done using the SEA principle only; no assumption of a particular kinetic mechanism, i.e., Fourier's law as used in the transient thermal diffusion equation, is made. For the case of mass diffusion in the presence of a fixed temperature gradient, the expected steady state linear concentration profile is predicted by Li and von Spakovsky in [2] using the SEAQT equation of motion. Again, this is done without an a priori assumption of a particular kinetic mechanism, i.e., in this case, Fick's law as employed in the transient mass diffusion equation. Finally, the remainder of the paper is organized as follows. Section II A describes the energy landscape and the Replica Exchange Wang-Landau method for developing this landscape. Section II B presents the SEAQT equation of motion and discusses how it is formulated for this application, and Section II C outlines how the system's state space is linked to the system's microstructure.
Sections III A, III B, and III C present the results of the models for sintering, precipitate coarsening, and grain growth, respectively, and Section IV provides some summary conclusions.

II. METHOD

A. Energy Landscape

The system microstructure is described by a 2-dimensional grid of pixels [40-44] whose energy is given by a q-spin Potts model. This model defines a variable, q, that monitors the spin of each pixel in the system. The q integers can range from 0 to several hundred, depending upon what physical entity the q phases represent. In this work, each location has an associated integer q value which represents either a void (when q = 0) or a grain orientation (q ≥ 1). The larger the maximum number of q values is, the larger the number of allowed grain orientations. Surface energy arises when a void pixel is adjacent to a solid pixel (i.e., when a pixel with q = 0 is adjacent to a pixel with q ≥ 1). Grain boundary energy arises between pixels with different positive q values. The system is represented by a square L × L lattice where L is the linear size in pixels. The total energy, E, of the system is the sum of the energies of all the surface and grain boundaries and is represented mathematically by the following Potts model interaction Hamiltonian [40-44]:

$$E = \frac{1}{2} \sum_{n=1}^{N} \sum_{z=1}^{Z} J \,\bigl(1 - \delta(q_n, q_z)\bigr) \tag{1}$$

In this equation, E is determined by summing over the number of lattice sites (pixels), N (= L²), and the number of neighbors to each site, Z. The Potts coupling constant or interaction energy, J, switches between a surface energy, γ_s, and a grain boundary energy, γ_gb, depending upon the identities of q_n and q_z. The δ on the right side represents the Kronecker delta, which returns a value of 1 if the neighboring z-th grain to n is of the same orientation (i.e., q_n = q_z), or 0 if it is not (q_n ≠ q_z) [40-44]. Thus, the only contributions to the total energy come from boundaries in the system.
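As a concrete illustration, Eq. (1) can be evaluated directly on a small grid. The sketch below counts each nearest-neighbor bond once (equivalent to the 1/2 double-counting prefactor in Eq. (1)), assumes free (non-periodic) boundaries, and uses illustrative values for γ_s and γ_gb; it is a minimal sketch, not the authors' code.

```python
import numpy as np

def potts_energy(grid, gamma_s=1.0, gamma_gb=0.5):
    """Boundary energy of a q-state Potts microstructure in the spirit of Eq. (1).

    q = 0 marks a void pixel; q >= 1 marks a grain orientation.  Each unlike
    nearest-neighbor pair contributes gamma_s if either pixel is a void (a
    surface bond) and gamma_gb otherwise (a grain boundary bond).  Bonds are
    counted once, which is equivalent to the 1/2 prefactor in Eq. (1).
    """
    E = 0.0
    rows, cols = grid.shape
    for n in range(rows):
        for m in range(cols):
            q = grid[n, m]
            for dn, dm in ((0, 1), (1, 0)):      # right and down neighbors
                nn, mm = n + dn, m + dm
                if nn < rows and mm < cols:
                    qz = grid[nn, mm]
                    if q != qz:                  # the (1 - delta) term
                        E += gamma_s if (q == 0 or qz == 0) else gamma_gb
    return E

# Two grains of different orientation meeting along one row:
bicrystal = np.array([[1, 1],
                      [2, 2]])
print(potts_energy(bicrystal))   # two grain-boundary bonds -> 2 * gamma_gb = 1.0

# A solid column next to a void column:
half_void = np.array([[0, 1],
                      [0, 1]])
print(potts_energy(half_void))   # two surface bonds -> 2 * gamma_s = 2.0
```

A single-orientation grid returns zero, consistent with the text: only boundaries contribute to the total energy.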
Equation (1) gives the energy of any arbitrary configuration or state of solid grains bounded by surfaces and/or grain boundaries. The energy landscape, or energy eigenstructure, represents all the energies of all possible system configurations. For lattices of appreciable size, many of these configurations have the same energy; stated conversely, most energy levels are degenerate. In order to calculate system properties like the entropy, it is essential to know the degeneracy of each energy level. This information is typically represented as a density of states for each level. The density of states for the system can be obtained from the Wang-Landau method [45, 46]. The Wang-Landau method is a non-Markovian approach for estimating the degeneracies from a flat histogram generated via a Monte Carlo walk through all the possible energy levels of the system. The algorithm estimates the degeneracies from the fact that Monte Carlo transitions between individual energy levels occur with a probability given by 1/g(E_j), where g(E_j) is the estimated degeneracy of the E_j energy level. Repeated Monte Carlo sweeps through the energy spectrum refine the accuracy of the estimates. The "replica exchange" [17, 18] variant of the Wang-Landau method greatly accelerates the algorithm by subdividing the energy spectrum into multiple windows, utilizing multiple Monte Carlo walkers over the energy windows, and passing information among them. The basis of the Replica Exchange Wang-Landau code employed here is given in Vogel, Li, and Landau [47]. The density of states calculated with the Replica Exchange Wang-Landau algorithm for an L × L lattice with L = 34 is shown in Figure 1. The lattice was chosen to have 50% of the pixels as voids (q = 0) and the remaining 50% with values of q ≥ 1. The plot represents the natural log of the number of states or configurations as a function of the state energy.
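The flat-histogram idea behind the Wang-Landau estimate of g(E_j) can be checked on a system small enough to enumerate exactly: a 2 × 2 grid with two grain orientations (no voids) has 16 configurations distributed over the energies E = 0, 2, 4 (counting unlike bonds with J = 1), with exact degeneracies 2, 12, and 2. The single-walker toy below uses a fixed number of moves per modification-factor stage instead of a flatness test and omits the replica-exchange windowing of [47]; it is an assumption-laden sketch, not the production algorithm.

```python
import math
import random

def bond_count(state):
    """Number of unlike nearest-neighbor bonds on a 2x2 grid (J = 1)."""
    a, b, c, d = state  # laid out as [[a, b], [c, d]]
    return sum(x != y for x, y in ((a, b), (a, c), (b, d), (c, d)))

def wang_landau_2x2(seed=0):
    """Estimate ln g(E) for the 2x2, two-orientation Potts grid."""
    random.seed(seed)
    ln_g = {0: 0.0, 2: 0.0, 4: 0.0}   # running estimates of ln g(E)
    state = [1, 1, 1, 1]
    E = bond_count(state)
    f = 1.0                            # modification factor added to ln g
    while f > 1e-5:
        for _ in range(20000):         # fixed moves per stage (toy choice)
            i = random.randrange(4)
            old_spin, old_E = state[i], E
            state[i] = random.choice((1, 2))
            new_E = bond_count(state)
            # Accept with probability min(1, g(E_old)/g(E_new)), so visits
            # to a level occur with probability proportional to 1/g(E).
            if random.random() < math.exp(ln_g[old_E] - ln_g[new_E]):
                E = new_E              # move accepted
            else:
                state[i] = old_spin    # move rejected; stay at the old level
            ln_g[E] += f               # penalize the level just visited
        f /= 2.0                       # refine the modification factor
    return ln_g

ln_g = wang_landau_2x2()
# Only degeneracy ratios are meaningful (absolute normalization is arbitrary).
print(math.exp(ln_g[2] - ln_g[0]))   # exact ratio is g(2)/g(0) = 12/2 = 6
print(math.exp(ln_g[4] - ln_g[0]))   # exact ratio is g(4)/g(0) = 2/2  = 1
```

The recovered ratios converge toward the exact enumeration as the modification factor shrinks, which is the essential content of the method described above.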
For this energy landscape, the surface energy and the grain boundary energy are assumed to be isotropic, and there are a maximum of 50 distinct grain orientations. This figure represents all the energies of all the possible states (configurations) of the physical system and the degeneracy of each energy level. Each eigenenergy along the abscissa has a unique degeneracy.

B. SEAQT Equation of Motion

The SEAQT equation of motion is used to predict the non-equilibrium thermodynamic behavior of the system. This equation requires no a priori knowledge of the kinetic mechanisms involved because it predicts the evolution of system properties on the principle of steepest entropy ascent. The energy transitions between levels of the landscape provide the underlying first-principles basis for the kinetic phenomena that a given system experiences. It is these transitions that the SEAQT equation of motion captures with the principle of steepest entropy ascent (or equivalently, maximum entropy production), satisfying in the process the first and second laws of thermodynamics and the postulates of quantum mechanics, provided the quantum mechanical features of the system have been included in the energy landscape. In the case of our application here, the quantum features are not needed and, thus, are not included. From a phenomenological and continuum standpoint, the constitutive laws and kinetic mechanisms typically used by traditional deterministic and stochastic material science models have an underlying second-law-of-thermodynamics basis rooted in Onsager's linear theory of nonequilibrium thermodynamics, in which a direct correlation between paired generalized forces and fluxes provides an estimate of the rate of entropy production. That linear phenomenological continuum approach cannot, however, from purely thermodynamic considerations, be shown to be applicable in regions other than those close to equilibrium.
That limitation does not apply to the SEAQT framework, which has been shown thermodynamically to be applicable throughout the non-equilibrium region [2, 4, 31]. Another advantage is that the SEAQT framework inherently captures the effects of coupled [5, 11] and concurrent [12] phenomena when constitutive relationships such as Fick's law, Fourier's law, Ohm's law, etc., and simple rate-limiting models break down. The approach calculates the entropy production directly from the distribution of the system energy among the levels of a discrete energy landscape, so there is no need to build a microstructural model with field equations involving fluxes and the corresponding thermodynamic forces associated with gradients in chemical potential, temperature, or electric potential. As a consequence, the accuracy of the SEAQT approach comes entirely from the details of the energy landscape rather than the details of assumed kinetic mechanisms, which may or may not be generally applicable. The SEAQT framework uses one universal kinetic model of energy transitions captured by an equation of motion that satisfies the laws and postulates of thermodynamics and quantum mechanics.

FIG. 1. There are 629,997 discrete energy levels for the system. The horizontal axis is scaled by the energy of the maximum level, and the vertical axis is the natural log of the degeneracy of the E_j energy eigenlevels. (a) shows the entire energy landscape; the individual energy eigenlevels are represented by points that are too close to differentiate, so that the density of states appears like a solid, continuous region. (b) is a greatly enlarged segment of (a) that reveals the individual energy levels. The energy eigenlevels are arranged in arcs that correspond to iso-grain-boundary-areas (an example arc of iso-grain-boundary-area is highlighted in orange) and iso-surface-areas (an example arc of iso-surface-area is highlighted in yellow).
Changing the application, i.e., the system, simply requires building a different energy landscape. For the case of a simple quantum system, the equation of motion is expressed as [48-50]

$$\frac{d\hat{\rho}}{dt} = \frac{1}{i\hbar}\left[\hat{\rho}, \hat{H}\right] + \frac{1}{\tau(\hat{\rho})}\,\hat{D}(\hat{\rho}) \tag{2}$$

In this expression, t is time, ℏ the modified Planck constant, Ĥ the Hamiltonian operator, D̂ the dissipation operator, τ the relaxation parameter, and ρ̂ the density operator, which represents the thermodynamic state of the system (i.e., the distribution of eigenstates that comprise the thermodynamic state) at each instant of time. Note that [·, ·] represents the Poisson bracket. The term on the left-hand side of the equation and the first term on the right side, the so-called symplectic term, constitute the time-dependent part of the von Neumann form of the Schrödinger equation of motion used in quantum mechanics to predict the reversible evolution of pure states (i.e., zero-entropy states). The second term on the right is there to capture evolutions involving the nonzero-entropy states of irreversible processes. Now, since the energy landscape considered here is only quantum-inspired and as a result contains no quantum information, the density operator reduces to a probability density distribution and the symplectic term is zero (because there are no quantum correlations) and, thus, Ĥ and ρ̂, which is diagonal in the energy eigenvalue basis of the Hamiltonian, commute [1, 2, 4, 48-50]. Furthermore, D̂, which was originally postulated by Beretta [31, 32], can be derived via a variational principle that preserves the energy and occupational probabilities by a constrained-gradient descent in Hilbert space along the direction of steepest entropy ascent at each instant of time. For the case considered here, when the only two generators of the motion are the Hamiltonian and the identity operators, the equation of motion (Eq.
(2) reduces to [48][49][50]:

$$\frac{dp_j}{dt} = \frac{1}{\tau}\,
\frac{\begin{vmatrix} -p_j \ln \frac{p_j}{g_j} & p_j & E_j p_j \\ \langle S \rangle & 1 & \langle E \rangle \\ \langle ES \rangle & \langle E \rangle & \langle E^2 \rangle \end{vmatrix}}
{\begin{vmatrix} 1 & \langle E \rangle \\ \langle E \rangle & \langle E^2 \rangle \end{vmatrix}} \qquad (3)$$

Here, p_j represents the occupation probability of the j-th energy eigenlevel, E_j, and g_j its associated degeneracy. The degeneracy is the number of possible system configurations for a given energy eigenvalue. In the SEAQT framework, the von Neumann form for the entropy, S_j = −p_j ln(p_j/g_j), is used because it satisfies the necessary characteristics for the entropy required by thermodynamics [51] and it provides a simple means of directly calculating the entropy of the system in any of its possible states. Additionally, ⟨·⟩ represents the expectation value of a property of the system such as the energy, ⟨E⟩, the entropy, ⟨S⟩, the energy squared, ⟨E²⟩, or the product of the energy and entropy [48][49][50], e.g.,

$$\langle E^2 \rangle = \sum_j E_j^2\, p_j \qquad (4)$$

$$\langle ES \rangle = -\sum_j E_j\, p_j \ln \frac{p_j}{g_j} \qquad (5)$$

Although the equation of motion is only applicable to an isolated system, any set of systems may be treated as an isolated composite of subsystems. This enables interactions among subsystems, such as the exchange of heat and mass, to be taken into account. For example, the equation of motion for an isolated composite of two subsystems A and B experiencing a heat interaction is given by [1]:

$$\frac{dp^A_j}{dt} = \frac{1}{\tau}\,
\frac{\begin{vmatrix} -p^A_j \ln \frac{p^A_j}{g^A_j} & p^A_j & 0 & E^A_j p^A_j \\ \langle S^A \rangle & 1 & 0 & \langle E^A \rangle \\ \langle S^B \rangle & 0 & 1 & \langle E^B \rangle \\ \langle ES \rangle & \langle E^A \rangle & \langle E^B \rangle & \langle E^2 \rangle \end{vmatrix}}
{\begin{vmatrix} 1 & 0 & \langle E^A \rangle \\ 0 & 1 & \langle E^B \rangle \\ \langle E^A \rangle & \langle E^B \rangle & \langle E^2 \rangle \end{vmatrix}} \qquad (6)$$

Using the cofactors of the first line of the numerator's determinant, C_1, C^A_2, and C_3, and assuming the hypoequilibrium condition developed in [1], the equation of motion for subsystem A can be written compactly as [1]:

$$\frac{dp^A_j}{dt^*} = p^A_j\left( -\ln\frac{p^A_j}{g^A_j} - \frac{C^A_2}{C_1} - E^A_j\,\frac{C_3}{C_1} \right) = p^A_j\left[ \left( S^A_j - \langle S^A \rangle \right) - \left( E^A_j - \langle E^A \rangle \right)\frac{C_3}{C_1} \right] \qquad (7)$$

where the dimensionless time $t^* = \int_0^t \frac{1}{\tau(\mathbf{p}(t'))}\, dt'$ is used to replace t and τ. The relaxation parameter, τ, from the equation of motion describes the system's dynamic speed along the kinetic path from the initial state to stable equilibrium.
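When subsystem B is a large reservoir, the ratio C_3/C_1 becomes the constant β^R = 1/(k_B T^R) (Eq. (8) below), and the resulting system of ordinary differential equations can be integrated directly in the dimensionless time t*. The following sketch uses a hypothetical four-level landscape and plain forward-Euler stepping; it illustrates the structure of the equation, not the paper's 629,997-level calculation:

```python
import numpy as np

# Hypothetical four-level landscape (illustrative stand-in for the paper's
# 629,997-level landscape)
E = np.array([0.0, 1.0, 2.0, 3.0])     # energy eigenlevels E_j
g = np.array([1.0, 3.0, 3.0, 1.0])     # degeneracies g_j
beta_R = 1.0                           # reservoir 1/(k_B T^R), with k_B = 1

def seaqt_rhs(p):
    """Reservoir form of the equation of motion in dimensionless time t*."""
    S_j = -np.log(p / g)                        # -ln(p_j / g_j)
    S_avg, E_avg = np.sum(p * S_j), np.sum(p * E)
    return p * ((S_j - S_avg) - (E - E_avg) * beta_R)

# Start far from equilibrium: nearly all probability in the highest level
p = np.array([1e-6, 1e-6, 1e-6, 1.0])
p = p / p.sum()

dt = 1e-3                               # forward-Euler step in t*
for _ in range(100_000):
    p = p + dt * seaqt_rhs(p)

# The fixed point is the reservoir's canonical distribution
p_eq = g * np.exp(-beta_R * E)
p_eq = p_eq / p_eq.sum()
```

Because the right-hand side vanishes when p_j ∝ g_j exp(−β^R E_j), the integration relaxes to the reservoir's canonical distribution while conserving total probability, consistent with the guaranteed approach to stable equilibrium described in the text.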
In the most general case, τ is a function of the time-dependent occupation probabilities p_j represented by the vector p. It can be estimated from ab-initio calculations of the transition rate among energy levels in the system, or, in the absence of such detailed information, it can be used as a fitting parameter to scale the predicted SEAQT kinetics to experimental data [10]. If the size of subsystem B is assumed to be significantly larger than A, subsystem B can be treated as a reservoir, denoted by R, and the previous equation reduces to [1,4]

$$\frac{dp^A_j}{dt^*} = p^A_j\left[ \left( S^A_j - \langle S^A \rangle \right) - \left( E^A_j - \langle E^A \rangle \right)\beta^R \right] \qquad (8)$$

where β^R = C_3/C_1 reduces to 1/(k_B T^R), with T^R representing the temperature of the reservoir. With this formulation, constant-temperature kinetic processes can be simulated [1,2]. Eq. (8) is a system of equations that is solved simultaneously to yield the time-dependent occupation probabilities of the energy eigenlevels. These probabilities collectively describe one unique path, i.e., the steepest-entropy-ascent path. This is analogous to solving an Euler-Lagrange equation; the solution is guaranteed to yield the extremal path that one is seeking. This is an important advantage of the SEAQT framework. Because the path is in state space, it already accounts for all possible configurations, including all the local spatial and temporal fluctuations that appear in microstructural space. Solving Eq. (8) results in the globally maximum entropy production and is, thus, guaranteed to end at equilibrium and cannot get trapped "along the way" in microstructures associated with local free-energy minima. The solution provides the non-equilibrium thermodynamic state probability distribution at each instant of time and culminates with the probability distribution at equilibrium. These distributions are then used to calculate the thermodynamic, transport, and material properties of the system at each instant of time. The system of equations, Eq.
(8), varies in size but can be quite large, as is the case for our particular problem in which there are 629,997 equations, i.e., one equation for each energy eigenlevel in the energy landscape. However, solving this many equations is not typically a problem since Eq. (8) represents a system of ordinary, first-order differential equations, and the coefficients constitute a sparse matrix because most of the probabilities are effectively zero at any given instant of time. These characteristics make the problem mathematically much more tractable than classical moving-boundary problems (which are inherently multi-dimensional, partial differential equations). To solve Eq. (8) for the present problem, we used a standard MATLAB numerical solver function, which typically required less than 30 minutes on a standard laptop computer. To solve the system of equations represented by Eq. (8), an initial condition defined by a distribution of occupied energy levels is needed. Eigenstructure information was used to generate initial microstructures for the three types of microstructural evolution considered (sintering, coarsening, and grain growth). For sintering, the initial microstructure was chosen to be an agglomeration of un-sintered particles; for coarsening, it was a distribution of small precipitates within a single grain; and for grain growth, the initial microstructure was a collection of small grains within a contiguous solid without voids. Partially canonical distributions along with a perturbation function are then used to calculate the initial probability distributions needed for the SEAQT equation of motion.
The partially canonical probabilities of the initial condition, the p^pe_j, are calculated from

$$p^{pe}_j = \frac{\delta_j\, g_j \exp(-\beta^{pe} E_j)}{\sum_j \delta_j\, g_j \exp(-\beta^{pe} E_j)} \qquad (9)$$

where δ_j takes a value of 1 or 0 depending on whether or not the j-th energy eigenlevel is assumed to be initially occupied, and g_j and E_j are the degeneracy and energy eigenvalue of the j-th eigenlevel. In this equation, β^pe is an unknown determined by adding an energy constraint to the system of equations for the p^pe_j. Once the initial p^pe_j are known, an initial non-equilibrium distribution (i.e., initial state) is found using a perturbation function that utilizes the partially canonical probabilities and those of a corresponding canonical distribution.

C. Linking State Space to Microstructure

A distinguishing feature of the SEAQT framework is that it works in state space, i.e., Hilbert space or Fock space. The kinetic path is calculated from the component of the entropy gradient perpendicular to the manifold that conserves the generators of motion (e.g., the Hamiltonian and the identity operators). Consequently, it does not depend upon an actual mechanism or even a microstructure to determine how the system evolves. However, to extend its usefulness and help validate the SEAQT framework, the kinetic path information in state space must be connected to the physical microstructures of the evolving material. This is challenging because the degeneracies of some energy levels can be beyond enormous (Figure 1 indicates there are more than 10^1300 configurations for the most degenerate levels of the present energy landscape!) and the microstructures corresponding to a single energy level can be quite different from each other. This situation poses two problems.
The first is that it is impossible to store all the representative microstructures from such an astronomically large collection of possibilities, and the second is that, even if one could, randomly selected microstructures along the smooth kinetic path would not necessarily be at all related to each other in time. However, it is possible to select representative microstructures that are consistent with a smooth evolution of microstructure by introducing one or more microstructural parameters in the description of the states. These descriptors can be used to select from among the many degenerate configurations only those that are consistent with a given initial state's evolution to some final stable equilibrium state. Each microstructural descriptor is chosen to reflect an important physical characteristic. For example, relative density, grain boundary length, and surface length are appropriate descriptors to track sintering kinetics. In the cases of precipitate coarsening and grain growth, appropriate descriptors are the average precipitate size (area) and grain size (area), respectively. Once an appropriate set of descriptors is selected, state space is linked to the system's microstructural evolution with the following procedure. First, the Replica Exchange Wang-Landau algorithm is run to establish the energy levels and the density of states for the energy landscape. At the same time, the values of one or more microstructure descriptors are calculated and recorded each time an energy level is visited by the algorithm. Each energy level is characterized by arithmetically averaging the descriptor(s) over the recorded visits to the level. Second, an initial state on the energy landscape is selected and the SEAQT equation of motion is solved to find the kinetic path through state space.
The energy levels along this path with non-zero occupation probabilities typically represent a very small subset of all the available energy levels; they are the only levels for which microstructure information need be stored. Third, the Replica Exchange Wang-Landau code is re-run to record representative microstructures only for this small subset of energy levels, keeping those whose microstructural descriptor values are close to the averaged value for the energy level. This subset of representative microstructures is indexed by energy level and one or more arithmetically-averaged descriptor value(s). Lastly, at each moment of time along the SEAQT kinetic path, the occupation probabilities are used to calculate the system energy (an expectation value) and weight-averaged descriptor values. These values at each time are used to select the closest microstructure from the collection of stored representatives. The resulting time sequence of microstructures evolves smoothly and continuously along the kinetic path through state space. Additional properties, such as the relative density and precipitate size distribution of the system microstructure, can, of course, also be tracked in time. To scale the results predicted by the SEAQT equation of motion to real time, the relaxation parameter, τ, from Equation (8) can be linked to experimental data or to some dynamical model of the phenomena involved. The former approach is used here. Thus, τ as a function of the real time, t, is determined from experimental data found in the literature. Finally, grain area calculations for the individual energy levels are based upon the percolation algorithm found in [52]. The algorithm functions by assigning tags to individual pixels in the modeled domain. When particles of like orientation are checked, the system assigns the smallest available tag to both grains and adds the counted area of the grains to this tag.
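As a simplified stand-in for the single-pass tag-propagation algorithm of [52], the grain-area bookkeeping can be sketched with a flood fill over 4-connected pixels of like orientation; the small lattice below is hypothetical:

```python
from math import pi, sqrt

def grain_areas(lattice):
    """Label 4-connected clusters of equal, non-zero q-spin and return the
    pixel area of each grain (a flood-fill stand-in for the tag-based
    percolation algorithm referenced in the text)."""
    n_rows, n_cols = len(lattice), len(lattice[0])
    seen = [[False] * n_cols for _ in range(n_rows)]
    areas = []
    for i in range(n_rows):
        for j in range(n_cols):
            if lattice[i][j] == 0 or seen[i][j]:
                continue  # skip voids (q-spin 0) and already-tagged pixels
            spin, area, stack = lattice[i][j], 0, [(i, j)]
            seen[i][j] = True
            while stack:
                x, y = stack.pop()
                area += 1
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if (0 <= nx < n_rows and 0 <= ny < n_cols
                            and not seen[nx][ny] and lattice[nx][ny] == spin):
                        seen[nx][ny] = True
                        stack.append((nx, ny))
            areas.append(area)
    return areas

# Hypothetical 3 x 4 lattice: two grains (spins 1 and 2) and voids (0)
lattice = [
    [1, 1, 0, 2],
    [1, 0, 2, 2],
    [0, 0, 2, 2],
]
areas = grain_areas(lattice)                 # pixel areas of the two grains
radii = [sqrt(a / pi) for a in areas]        # circular-grain radii, r = sqrt(A/pi)
```

The circular-grain approximation r = √(A/π) then converts each tagged area to a radius, matching the way grain and precipitate sizes are reported in the text.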
This tagging function allows for efficient calculation of grain size in a single pass over a given lattice. The grain boundary length was calculated as the sum of the lengths of the pixel edges making up grain boundaries in the system. The initial value was adjusted to match the minimum grain size of the experimental data used to test the model. Additionally, the precipitate and grain sizes were calculated by assuming they are approximately circular in shape. The grain (or precipitate) area was easily tracked in the model by simply summing the area of the pixels with q-spins greater than zero, and this area was then used to approximate the grain size from the relationship r = √(A/π).

III. RESULTS

A. Sintering

The evolution of microstructure during sintering is shown in Figure 2, which presents a time series of microstructures of the material subsystem (the thermodynamic system is a composite of the material plus the thermal reservoir of Equation (8)). Part (a) of the figure is the initial microstructure. Each pixel in the figure represents a powder particle 10 nm on a side. The pixel colors indicate crystal orientations, and gray pixels are voids. Different colors adjacent to each other indicate either two grains separated by a grain boundary or a grain with a surface. The maximum number of possible orientations is 50. The physical parameters chosen for the simulation correspond to sintered zirconia with a surface energy of 2.570 J/m² and a grain boundary energy of 0.987 J/m² [53,54]. Initially, the energy of the material subsystem is distributed over a narrow set of energy levels associated with many small powder particles. As the material subsystem in Figure 2 moves toward stable equilibrium, the steepest entropy ascent principle distributes the subsystem energy more uniformly over the available energy levels.
Since the energy of the subsystem in this model arises only from surfaces and grain boundaries [55], the removal of boundaries during sintering is accomplished by transferring heat from the material subsystem to the thermal reservoir. When the evolution of states predicted by the SEAQT equation of motion is converted to microstructures, the common physical features of sintering and grain growth are evident. For example, the initial small, single-crystal powder particles agglomerate to form larger polycrystalline particles with necks between them (Figure 2b), and these polycrystalline particles gradually grow in size (Figures 2c-e). The smaller powder particles eventually disappear entirely as a single solid mass appears (Figure 2f). Also, the grains within each of the polycrystalline particles grow in size during the process (Figures 2b-d). Within the larger particles, one grain orientation eventually grows at the expense of all the others, and at stable equilibrium, the entire solid becomes a single crystal with minimum surface area (the flat surfaces in Figure 2f result from periodic boundary conditions on the simulation domain). The change in relative density associated with this microstructural evolution is shown in Figure 3. The relative density descriptor is calculated from averaged configurations of the thermodynamic states represented by the probability distributions predicted by the SEAQT equation of motion. The predicted relative density in Figure 3 deviates only slightly from the experimental results of [53]. The largest deviation, which occurs at later sintering times for zirconia compacted at 700 MPa (Figure 3a), suggests equilibrium was not reached because the experimental relative density failed to reach 100%. Note that the SEAQT framework tracks system evolution through state space rather than through a microstructural space, as is done with approaches like kinetic Monte Carlo (KMC).
This creates important differences between the two methods, notably with regard to how representative configurations of the microstructure are selected and evolve. In KMC models, individual microstructures, or snapshots, are sequentially linked in time and are used to approximate material processes. Many KMC models also commonly begin with an idealized starting structure and may utilize algorithms to reduce the computational complexity [44,56]. In contrast, the SEAQT framework utilizes state-based properties to track the kinetic path so that the selected microstructures are linked by evolving properties, i.e., energy and entropy, rather than being explicitly linked in time. In addition, to maintain the generality of the model within the domain of the system, unconstrained particle exchange is allowed. This difference means that greater microstructural variety may be present for a given state in the SEAQT framework, and the average morphology for a given energy level has the potential to differ significantly from a similar level visited in a KMC simulation. In other words, the average state-based morphology in the SEAQT framework possesses a greater number of similar permutations than a similar state visited in a KMC model. Thus, under KMC, a specific morphology may exert an inordinate influence on the state's canonical properties. In addition, averaged state properties, such as grain size and particle size distribution, may vary between KMC simulations that visit the same energy level. This sampling problem makes it necessary to statistically average multiple KMC runs to obtain representative properties. It is avoided in the SEAQT framework because all the evolving properties are expressed as time-dependent expected values. The 2-dimensional relative density is the ratio of the area occupied by non-void pixels to the total pixel area.
In Monte Carlo sintering simulations [40,44,56], this is commonly calculated from the number of occupied and vacant lattice sites within a given simulation region, which is typically taken from the interior of the coarsening system. In our state-based approach, sintering agglomeration can take place anywhere and is not constrained to any particular region. For this reason, relative density was calculated in this work by eliminating the unattached particles in each simulation configuration and calculating the fraction of solid sites (non-zero Potts q-spins) in the remaining regions of agglomerating particles. This procedure yielded a relative density directly comparable to experimental densities, as shown in Figure 3. Overall, the experimental kinetics are described closely by the steepest entropy ascent principle. The predicted relative density in Figure 3 has an asymmetric S-shaped curve that reflects changing stages in the state of the subsystem as well as the underlying microstructure evolution. The initial stage before significant grain growth corresponds to an initial incubation-like period of slow particle consolidation. Afterwards, the material transitions to a rapid densification stage with concurrent grain growth. This is followed by a final asymptotic stage during which the rate of property evolution decreases significantly. Microstructurally, this stage is characterized by a reorientation and agglomeration of the largest grains and a steady reduction of the remaining individual grains in the subsystem. As already mentioned above, these results correspond closely with the yttria-stabilized zirconia sintering results used for comparison [53]. The relative density increase exhibits similar beginning and end lags and an intermediate stage of significant growth. However, the final stage of the experimental results for the yttria-stabilized zirconia still contains multiple grain boundaries not present in the SEAQT results.
This can be attributed to the conformational freedom of the model; no constraints were placed on the model to prevent it from achieving stable (single-crystal) equilibrium. The dissipation parameter, τ, in the SEAQT equation of motion can be adjusted to fit predicted kinetics to experimental data. The predicted and experimental densities of zirconia powder during pressureless isothermal sintering at 1100 °C, after compacting at pressures of 700 MPa and 1000 MPa [53], are compared in Figure 3. The time-dependent τ functions used to match these two sets of experimental data are shown in Figure 4. Physically, τ reflects how fast the system moves along the kinetic path in state space. Now, one of the distinct advantages of working in state space is that the steepest-entropy-ascent (or, equivalently, maximum entropy) principle is able to identify the unique kinetic path a system follows without any prior knowledge of the physical mechanisms involved. This is illustrated schematically in Figure 5a, which represents a plot of the energy of the material subsystem as a function of its entropy. The solid bounding curve represents the set of equilibrium states; the ground state is the energy at the point S = 0. The two red points represent arbitrary non-equilibrium states. The information contained in the SEAQT equation of motion provides the unique path from an initial state to stable equilibrium (the dashed gray curve) that maximizes entropy ascent at each point in time. In the sintering case, the entropy of the material subsystem decreases from the initial state as surfaces and grain boundaries are removed from the solid and heat is transferred from the subsystem to the reservoir. However, the net entropy of the overall composite system (subsystem plus reservoir) increases along the path to equilibrium (see Figure 5b), and this entropy ascent spontaneously drives the sintering process.
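The behavior sketched in Figure 5b is straightforward to verify numerically: along the kinetic path given by the reservoir form of the equation of motion, the composite entropy production, i.e., the subsystem's entropy change plus β^R times the heat delivered to the reservoir, is nonnegative and nondecreasing even when the subsystem's own entropy falls. The four-level landscape below is a hypothetical illustration, not the paper's landscape:

```python
import numpy as np

# Hypothetical four-level landscape coupled to a reservoir (illustrative)
E = np.array([0.0, 1.0, 2.0, 3.0])     # eigenlevels E_j
g = np.array([1.0, 3.0, 3.0, 1.0])     # degeneracies g_j
beta_R = 1.0                           # reservoir 1/(k_B T^R), with k_B = 1

def rhs(p):
    """Reservoir form of the SEAQT equation of motion in dimensionless t*."""
    S_j = -np.log(p / g)
    return p * ((S_j - np.sum(p * S_j)) - (E - np.sum(p * E)) * beta_R)

def entropy(p):
    return float(np.sum(-p * np.log(p / g)))   # subsystem entropy <S>

p = np.array([0.05, 0.05, 0.05, 0.85])          # non-equilibrium initial state
S0, E0 = entropy(p), float(np.sum(p * E))

sigma = []   # running composite (subsystem + reservoir) entropy production
dt = 1e-3
for _ in range(50_000):
    p = p + dt * rhs(p)
    # reservoir gains entropy beta_R * (heat received) = beta_R * (E0 - <E>)
    sigma.append(entropy(p) - S0 + beta_R * (E0 - float(np.sum(p * E))))
```

In this toy run each increment of `sigma` is proportional to the variance of S_j − β^R E_j over the current distribution, so the composite entropy production rises monotonically to its equilibrium value, mirroring Figure 5b.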
It is worth noting that most computational tools for finding stable states (density functional theory, kinetic Monte Carlo methods, and molecular dynamics, for example) minimize system energy without regard to how energy is dissipated. Thus, these tools cannot predict the thermodynamic path between an arbitrary initial state and a stable equilibrium state. Perhaps most surprisingly, the SEAQT equation of motion predicts the microstructural evolution sequence (Figure 2) without any explicit assumptions about how the surfaces and grain boundaries physically behave during the sintering process.

B. Precipitate Coarsening

Appropriate regions of the energy landscape (Section II A and Figure 1) can be used to describe precipitate coarsening kinetics. Specifically, we represent a precipitate phase as one non-zero q-spin of the Potts model (or one color) contained within a parent phase, or matrix, which is designated by another non-zero q-spin (a second color). By choosing an initial state in a region of the energy landscape with only two spins and no free surfaces, the distribution of spins can represent precipitate particles undergoing coarsening within a matrix phase. The SEAQT-predicted coarsening kinetics are compared against experimental data for the coarsening of Al₃Li precipitates (designated δ′) in an Al-Li solid solution (the α matrix) [57][58][59][60]. This alloy system is convenient because δ′ precipitates are nearly spherical and are thus comparable to the morphologies expected from the isotropic boundaries assumed in the energy landscape. The representative microstructures calculated during precipitate coarsening are shown in Figure 6; the initial state is Figure 6(a). In the actual alloy, the initial state for the coarsening process is a dispersion of small precipitates produced by nucleating δ′ precipitates within a supersaturated grain of α. Coarsening takes place during annealing at 225 °C [57][58][59][60].
Only one α grain is considered, so there is only one matrix orientation. The energetics of nucleation in this system ensures that only one δ′ orientation appears within any particular α grain, so the precipitate coarsening kinetics can be described with only two colors separated by an interphase boundary between the α and δ′ phases. In the model microstructure (Figure 6), the α phase is represented by the purple pixels and the δ′ phase by the yellow pixels. The δ′:α boundary energy, γ, is quite small in the Al-Li system [61]; it is assumed to be 0.005 J/m² in the SEAQT coarsening simulation. An increase in the average size of precipitates is evident in the simulation microstructures of Figure 6. The elongated precipitate with planar boundaries that appears at the longest two times is a consequence of the periodic boundary conditions imposed on the simulation domain. Although the coarsening path through state space is not tracking a particular microstructure configuration, the SEAQT framework is able to capture the approximately circular growth of the δ′ precipitates (Figure 6(b) and (c)), the disappearance of small precipitates (Figure 6(d) and (e)), and the eventual dominance of large ones (Figure 6(e) and (f)). Analytical models for coarsening in 2-dimensions [62] suggest the mean precipitate radius should follow a t^{1/3} dependence. Figure 7 presents the expected value of the precipitate radius, R, predicted by the SEAQT model as a function of t^{1/3} for four different interphase boundary energies. In each case, the precipitate radius approximates a t^{1/3} dependence, but only over a very limited range of times. Overall, the SEAQT-predicted radius exhibits an S-shaped time dependence and slows asymptotically as equilibrium is approached. This does not necessarily contradict analytical coarsening models.
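Whether a predicted R(t) actually follows the t^{1/3} law over a given window can be quantified by the slope of a least-squares fit of ln R against ln t. The data below are synthetic, generated to obey the law exactly, so the recovered exponent is 1/3 by construction:

```python
import numpy as np

def growth_exponent(t, R):
    """Least-squares slope of ln R versus ln t; a value near 1/3 indicates
    LSW-type coarsening over the fitted window."""
    slope, _ = np.polyfit(np.log(t), np.log(R), 1)
    return slope

# Synthetic radii obeying R = (K t)^(1/3) exactly (K chosen arbitrarily)
t = np.linspace(1.0, 100.0, 50)
R = (2.0 * t) ** (1.0 / 3.0)
n_exp = growth_exponent(t, R)
```

Applied to simulated radii, the same fit over different time windows would expose where the S-shaped SEAQT curve locally approximates the t^{1/3} law and where it departs from it.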
The absence of a universal t^{1/3} dependence is most likely a consequence of the fact that the SEAQT model is not constrained by the assumptions typically made to reach an analytical expression (e.g., small precipitate volume fractions, a mean-field approximation for the average solute distribution, a cutoff precipitate size, etc.). The different coarsening rates for the four different boundary energies demonstrate that changes in the energy landscape (even relatively small changes in the energy scale) can alter the steepest-entropy-ascent path and lead to different overall kinetics. Coarsening is commonly characterized using precipitate size distributions. The size distributions of the SEAQT simulation are reported here using the precipitate size, R, and the log of the frequency of that size. To reduce noise in the statistics arising from the limited domain size, only precipitates larger than 10% of the largest precipitate were included in the distribution. The SEAQT-simulated precipitate size distributions at the initial state and the final equilibrium state, as well as at four intermediate non-equilibrium states along the system's kinetic path, are shown in Figure 8. A bimodal size distribution develops as larger precipitates begin to evolve from the initial distribution. The distribution of small precipitates does not change much over the course of the simulation. The radius of the larger precipitates gradually shifts to larger and larger sizes, and eventually ends at a radius of 67 nm (the largest size available with this particular simulation domain). It is interesting to use the energy landscape to explore coarsening behavior from a very different starting precipitate distribution. Figure 9 shows the evolution of two isolated δ′ precipitates.
FIG. 5. The SEAQT equation of motion provides a unique path from an initial state (red point on the right) to stable equilibrium along the dashed gray curve. Techniques like density functional theory and kinetic Monte Carlo methods minimize energy without regard to energy dissipation, so there is no thermodynamic path between an initial state and final equilibrium. (Right) The entropy production that drives the subsystem from a non-equilibrium initial state to equilibrium along the kinetic path determined by the SEAQT equation of motion. Entropy of the composite system increases despite the fact that heat transferred out of the subsystem into the reservoir reduces the entropy of the material subsystem.

To describe two-particle coarsening mechanistically and compare it with multiple-particle coarsening, it would be necessary to reformulate classical Greenwood-Lifshitz-Slyozov-Wagner theory to reflect the new geometry. A feature of the SEAQT framework is that it works from an energy landscape that incorporates all the possible microstructural geometries. For example, Figure 1 includes states consisting of two, three, four, ..., n precipitates and all kinds of different size distributions. Simulating two-particle coarsening simply involves constructing an initial probability distribution associated with several energy levels near this desired microstructural configuration. The set of evolving microstructures in Figure 9 was generated from a two-precipitate initial microstructure averaged from three similar energy eigenlevels. The microstructural sequence was obtained with the procedure of Section II C using boundary length as the descriptor. The periodic boundary conditions were relaxed to allow the precipitates to adopt any shape; this choice affects the microstructure evolution, but not the energy landscape or the kinetic path. For a fixed precipitate fraction, the system of two precipitates begins with much less boundary length than the original case shown in Figure 6(a).
Nevertheless, both systems must coarsen to the same thermodynamic equilibrium state. Solving the equation of motion to find the time-dependent occupation probabilities in the two cases makes such a comparison straightforward. The expected energy during coarsening is shown in Figure 10 (the expected entropy can be calculated in a similar fashion). The black curve represents the expected energy during coarsening of the δ′ precipitates shown in Figure 6, and the dashed red curve corresponds to coarsening of the two δ′ precipitates shown in Figure 9. The initial expected energy of the two-precipitate microstructure, Figure 9(a), is lower than that of the many-precipitate microstructure, Figure 6(a), but the final equilibrium configurations are essentially the same (Figures 9(f) and 6(f)). Overall coarsening kinetics are not expected to depend explicitly on the number of precipitates, so the similarity in the way the energy evolves with time in the two cases of Figure 10 is reasonable. While the microstructures and precipitate size distributions predicted by the SEAQT framework during coarsening are qualitatively reasonable (Figures 6-8), they deviate in at least two significant ways from δ′ coarsening in an Al-Li alloy [60]. Most obviously, the limited number of pixels in the simulation makes it necessary to represent the initial precipitate microstructure as an over-simplified array of individual squares rather than as a distribution of circles. Also, the use of only first-nearest-neighbor interactions in the Potts model biases the energy landscape in a way that shifts the predicted precipitate distribution to smaller sizes. Both of these shortcomings can be addressed by computing a more accurate energy landscape with finer energy resolution.
However, because the landscape was used for two other applications (sintering in Section III A and grain growth in Section III C), and these applications do not need as much energy resolution, no additional computational effort was made to refine the energy landscape and improve agreement with experimental coarsening data.

C. Grain Growth

The kinetics of grain growth predicted by the SEAQT framework from the aforementioned energy landscape (Section II A and Figure 1) can be simulated by starting with an initial state that represents a fully dense solid with a collection of different grain orientations (different non-zero q-spins) represented by different pixel colors. The landscape of Figure 1 includes up to a maximum of 50 different grain orientations. The representative microstructures predicted by the SEAQT equation of motion are shown in Figure 11 for a sequence of annealing times. The physical size of the initial grains represented by the pixel size was set at 1 nm on a side, which corresponds approximately to the initial grain size in a nanocrystalline Pd system undergoing grain growth at room temperature [63]. The surface energy used for this system is 1.47 J/m², taken from a weighted average found in Tran [64]. The grain boundary energy utilized is 0.8 J/m² [63]. This system was chosen for its similarities with the microstructural descriptors calculated previously. The initial state begins with a near-maximum number of grain boundaries, i.e., adjacent pixels all have different colors and, thus, represent grains of different orientations. To consider only grain boundary changes independent of any changes in surface energy, the initial state is a single solid block of individual grains selected to have a minimum number of surface pixels. Because of periodic boundary conditions, this yields planar top and bottom surfaces in Figure 11. There are no internal voids for this simulation of pure grain growth.
Figure 11(a) is a very different initial condition from that of sintering (Figure 2(a)) and precipitate coarsening (Figure 6(a)), but each of these simply represents a different starting state that can be found on the same energy landscape of Figure 1. The microstructural changes predicted by the SEAQT framework in Figure 11 follow the general expectations of grain growth: the average grain size increases in Figures 11(b) through 11(e) as small grains coalesce into larger grains and the larger grains continue to grow at the expense of the smaller grains. At stable equilibrium (Figure 11(f)), the system consists of a single grain with minimum surface area. The average grain size (a microstructural parameter linked to the energy levels) and the grain size distribution are the descriptors used to characterize the evolution of grain growth with time. The evolution of the average grain area, a descriptor calculated from averaged configurations of the states represented by the probability distributions, is compared in Figure 12 with experimental grain growth data from nanocrystalline Pd [63]. The predicted average grain size has an overall "S-shape" that matches the experimental growth kinetics quite well at short times. At longer times (beyond annealing times of 20 hours) the experimental data deviate from the predicted kinetics, but there are a couple of possible explanations for this deviation. Grain growth in thin, transmission electron microscopy samples is expected to slow as the grain size approaches the sample thickness (on the order of 100 nm) and thermal grooving begins to pin grain boundary migration. Ames et al. [63] also note that Pd grain growth begins with an initial slow grain growth followed by a period of rapid abnormal grain growth, and ends with a potential reoccurrence of the initial steady grain growth.
The abnormal grain growth violates the statistical self-similarity postulate for a sintering system, which states that a system evolving in time should retain the statistical similarity of its geometric features. Nonetheless, the abnormal behavior is generic in nanograined systems and may be caused by the release of microstrain as grain size increases [63].

D. Discussion

The compatibility of the SEAQT framework predictions of microstructural evolution with those of traditional kinetic models is complicated by a few issues. As stated previously, a representative lattice for a given energy level determined from the Replica Exchange Wang-Landau algorithm will appear visually distinct from one determined along a given kinetic path of a KMC-modeled system. One specific example is the presence of individual single-pixel grains that remain in the conformational space of the simulation. The presence of these peculiar grains is due to the nature of the Replica Exchange Wang-Landau process, which estimates the density of states independent of a modeled kinetic path. To reach these energy levels, physically unexpected but computationally easier transitions often occur. Transitioning to higher energy levels by placing a grain currently in contact with a larger coarsening mass into a vacant void site is often easier than complex conformational changes of the existing structure. Multiple similar transitions can leave individual grains located significantly far from the majority of the coarsening mass. Thus, small grains, in proportion to the larger coarsening mass, are partially ignored in the present descriptor calculations, such as those for the relative density and precipitate size distribution, since their variable locations can nonphysically bias the output values. Future work to counteract this behavior could bias transitions to reduce the probability of particle and vacant site exchanges or restrict translational movement during grain transitions.
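The descriptor calculations discussed above require identifying grains (connected clusters of equal orientation) and optionally excluding stray single-pixel grains. The paper's references include the Hoshen-Kopelman labeling algorithm for this kind of task; the flood-fill version below is a simpler stand-in, with hypothetical helper names:

```python
def label_grains(lattice):
    """Return the sizes of 4-connected clusters of equal orientation
    (non-periodic flood fill for simplicity); each cluster is a grain."""
    n, m = len(lattice), len(lattice[0])
    seen = [[False] * m for _ in range(n)]
    sizes = []
    for i in range(n):
        for j in range(m):
            if seen[i][j]:
                continue
            val, stack, size = lattice[i][j], [(i, j)], 0
            seen[i][j] = True
            while stack:
                x, y = stack.pop()
                size += 1
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    u, v = x + dx, y + dy
                    if (0 <= u < n and 0 <= v < m and not seen[u][v]
                            and lattice[u][v] == val):
                        seen[u][v] = True
                        stack.append((u, v))
            sizes.append(size)
    return sizes

def filter_small(sizes, min_size):
    """Drop clusters below min_size, mimicking the exclusion of stray
    single-pixel grains from the descriptor statistics."""
    return [s for s in sizes if s >= min_size]
```

Filtering the returned size list before computing a size distribution or an average grain area is one way to keep far-flung single-pixel grains from biasing the descriptors, as described above.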
Another way to further distinguish between these variable microstructural formations would be the addition of other energetic terms. One example is the addition of 2nd-nearest-neighbor energetic considerations. This addition would allow grain agglomeration to be differentiated energetically more easily because of the higher number of grain sites considered in the energy interaction of a single grain. It would also distinguish lattices with a large number of grains surrounded by vacant sites, attributable to the higher energy associated with the transition. This and other additional energetic considerations would, however, increase the computational cost through the larger number of energy levels and could potentially limit the size of the simulated system.

The data presented for the precipitate coarsening and grain growth cases should be regarded as demonstrating qualitative trends. Although the data align closely with expected results, they are intended here only to demonstrate the applicability and flexibility of the SEAQT framework rather than a particular quantitative result. Such a result could be obtained with a more precise energy landscape. For example, with regard to modeling the precipitate coarsening kinetics, the volume fraction of the δ precipitate in the simulated or experimental systems of the literature [57, 60, 61, 65-69] is significantly lower than in the system shown here. Commonly simulated values for the volume fraction do not exceed 30% [60]. In experimental work the volume fraction is typically below 12% [66]. The constraints on this fraction are either physical, due to solubility limits on the amount of precipitate phase formed, or conform to kinetic theory, which relies on a lower density to match experimental results more closely.
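The effect of a 2nd-nearest-neighbor term can be seen in a toy energy function: with a diagonal coupling g2 in addition to the nearest-neighbor boundary energy g1, configurations with the same number of first-neighbor boundaries but different local agglomeration acquire different energies. Both couplings and the helper name are illustrative, not the paper's parameters:

```python
def lattice_energy(lattice, g1, g2=0.0):
    """Boundary energy of a periodic orientation lattice with a
    1st-neighbor coupling g1 and an optional 2nd-(diagonal-)neighbor
    coupling g2; each site contributes its down, right, and two
    downward-diagonal bonds (periodic wrap)."""
    n = len(lattice)
    e = 0.0
    for i in range(n):
        for j in range(n):
            s = lattice[i][j]
            if s != lattice[(i + 1) % n][j]:              # down
                e += g1
            if s != lattice[i][(j + 1) % n]:              # right
                e += g1
            if s != lattice[(i + 1) % n][(j + 1) % n]:    # down-right
                e += g2
            if s != lattice[(i + 1) % n][(j - 1) % n]:    # down-left
                e += g2
    return e
```

Setting g2 = 0 recovers a nearest-neighbor-only model; a nonzero g2 enlarges the set of distinct energy levels, which is exactly the computational-cost trade-off noted above.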
Thus, the main inadequacy of using the current eigenstructure or landscape with the SEAQT equation of motion to predict precipitate coarsening kinetics is the higher volume fraction of the precipitate phase, which increases the potential for interactions between individual coalesced precipitates. The resulting formations can impede the formation of spherical precipitates and skew the expected size distribution. Improving the calculation of precipitate coarsening kinetics will require calculating a new, more accurate eigenstructure that accounts for precipitate interactions or has a smaller solute concentration. In that case, a new eigenstructure could also be constructed to remove boundary interactions as a free parameter. Removing this degree of freedom from the model, and thus reducing the number of occupied precipitate sites, would also allow for the possibility of simulating larger systems with higher spatial resolution. Additionally, this would make a more efficient estimation of the density of states possible due to the reduction in the number of available energy levels.

Another limitation of the present energy landscape or eigenstructure is its limited spatial resolution. The mass conservation constraint used in the model in Section III C forces the modeling of grain growth behavior to occur along a path of minimum surface boundaries. This is intended to mimic grain growth in a system with a defined number of vacant sites. However, because of the lack of spatial resolution for this particular landscape, the dimensional configurations are effectively limited to a single spatial direction. This can prevent the formation of spherical grains and bias the particle size distribution. A more robust method of calculating grain growth would require a new eigenstructure where grain boundary energy is the only energetic parameter.
Removing the degree of freedom of surface boundaries, as is done in the precipitate coarsening case, allows for the calculation of a larger number of available sites. Additionally, other types of kinetics, like recrystallization, could be modeled by including additional energy terms for stored plastic deformation in the energy eigenstructure.

Finally, it is important to note that most of the computational resources required by the SEAQT framework are needed to generate the energy landscape (Figure 1) via the Replica Exchange Wang-Landau algorithm. This algorithm calculates the energy levels and their associated degeneracies via a non-Markovian Monte Carlo walk through the system's energy spectrum. The computational time to do this depends upon the number of energy levels and the degrees of freedom of the problem of interest. However, a great benefit of the framework is that the landscape only needs to be calculated once. The kinetics are subsequently obtained from the SEAQT equation of motion, which is a system of first-order ordinary differential equations, applied to the energy landscape for a specific initial condition. This is a relatively modest problem and can easily be repeated for any number of different initial conditions. Furthermore, since the SEAQT equation of motion produces a single kinetic curve for each initial condition and expresses properties and descriptors as expected values, there is no need to repeatedly simulate any particular kinetic path and average the results, as must be done with traditional KMC approaches. Thus, the SEAQT framework is effectively computationally comparable to a couple of Monte Carlo simulations.

IV.
CONCLUSIONS

The principle of steepest entropy ascent is applied to a simple energy landscape to describe the kinetics of three related physical processes (sintering, precipitate coarsening, and grain growth) under one framework, without assuming the system is in local- or near-equilibrium and without ad hoc assumptions about the rate-controlling mechanisms of the processes. The computationally efficient Replica Exchange Wang-Landau algorithm is used to generate an energy landscape and the degeneracies associated with the energy levels, while a method is proposed for linking microstructural descriptors to state space. Once an accurate energy landscape and descriptors are constructed, the SEAQT framework is used to find a unique kinetic path via an equation of motion in the form of a system of first-order, ordinary differential equations. With respect to the comparisons of theory with experiment:

1. The SEAQT-predicted kinetics qualitatively agree with the available experimental kinetics for ZrO2 sintering, Al3Li precipitate coarsening, and grain growth in nanocrystalline Pd.

2. The predicted kinetics can be brought into quantitative agreement by simply adjusting the SEAQT relaxation parameter, τ.

3. The kinetic path through state space predicted by the SEAQT framework can be linked directly to the microstructure through one or more descriptors averaged over the occupied energy levels.

4. The computational burden associated with applying the SEAQT framework to the three physical processes is limited primarily to constructing the energy landscape; solving the SEAQT equation of motion is straightforward and requires limited computational resources.

FIG. 1. Density of states calculated with the Replica Exchange Wang-Landau algorithm for a 34 × 34 lattice consisting of 50% solid with surfaces and grain boundaries.

FIG. 2. Sequence of representative microstructures during sintering.
Each image represents a weighted average of the expected states obtained from the SEAQT equation of motion and a descriptor that links the energy levels to the microstructure. The descriptor in this case is the average grain size. Panel a) is the initial state and panels b) through f) are example microstructures along the kinetic path to stable equilibrium.

FIG. 3. The relative density (a microstructural descriptor linked to energy levels) during sintering calculated from the SEAQT equation of motion. Figures (a) and (b) compare the predicted and experimental density of zirconia powder during pressureless isothermal sintering at 1100 °C after compacting with a pressure of 700 MPa and 1000 MPa, respectively [53]. The differences in compaction pressures affect the starting relative density values and are modeled by averaging the relative density for 3000+ states at two separate initial probability evolution conditions.

FIG. 4. Scaling factor τ versus real time in minutes. The variance in the plotted values for τ affects the rapidity of certain phases in the microstructural evolution in the real system versus the initial simulation results.

FIG. 5. (Left) A schematic plot of the energy versus entropy of the material subsystem. The bounding solid curve represents the set of equilibrium states and the two red points are possible non-equilibrium states.

FIG. 6. Sequence of representative microstructures during coarsening of Li-rich δ precipitates (yellow) in an aluminum matrix (purple). Each image represents a weighted average of the expected states obtained from the SEAQT equation of motion and a descriptor that links the energy levels to the microstructure.

FIG. 7. Expected radius of Li-rich δ precipitates during coarsening predicted by the SEAQT equation of motion: (a) a range of times from an initial state of many small precipitates to stable equilibrium, and (b) a selected intermediate time period over which t^(1/3) kinetics are roughly linear. The four curves in each figure represent coarsening kinetics for γ = 0.005 J/m² (black curves), and 2, 3 and 10 times this boundary energy (red, blue, and orange curves, respectively).

FIG. 8. The SEAQT-predicted relative frequency of Li-rich δ precipitates in an aluminum matrix at different stages of coarsening: (a) the initial distribution, (b) after annealing 13.3 hrs at 225 °C, (c) after annealing 22.7 hrs at 225 °C, (d) after annealing 26.2 hrs at 225 °C, (e) after annealing 29.4 hrs at 225 °C, and (f) after annealing 32.9 hrs at 225 °C. The vertical axis in each panel represents the frequency of precipitate sizes averaged over multiple representative lattices for each energy level. A log scale is utilized to reveal the small frequencies.

FIG. 9. Coarsening of two isolated precipitates.

FIG. 11. Sequence of representative microstructures during grain growth of nanocrystalline Pd. Each image represents a weighted average of the expected states obtained from the SEAQT equation of motion and a descriptor that links the energy levels to the microstructure. The descriptor in this case is the average grain size. Panel (a) shows the microstructure of the initial state of many small grains, panels (b) through (e) provide the microstructures for increasing annealing times, and panel (f) is the microstructure of a single crystal at stable equilibrium.

ACKNOWLEDGMENTS

The authors thank an anonymous referee for insightful comments and helpful suggestions. We acknowledge Advanced Research Computing at Virginia Tech for providing computational resources and technical support that have contributed to the results reported within this paper. JM acknowledges support from the Department of Education through the Graduate Assistance in Areas of National Need Program (grant number P200A180016).

REFERENCES

- G. C. Li and M. R. von Spakovsky, "Steepest-entropy-ascent quantum thermodynamic modeling of the relaxation process of isolated chemically reactive systems using density of states and the concept of hypoequilibrium state," Physical Review E, vol. 93, no. 1, 2016.
- G. C. Li and M. R. von Spakovsky, "Generalized thermodynamic relations for a system experiencing heat and mass diffusion in the far-from-equilibrium realm based on steepest entropy ascent," Physical Review E, vol. 94, no. 3, 2016.
- G. C. Li and M. R. von Spakovsky, "Modeling the nonequilibrium effects in a nonquasi-equilibrium thermodynamic cycle based on steepest entropy ascent and an isothermal-isobaric ensemble," Energy, vol. 115, pp. 498-512, 2016.
- G. Li and M. R. von Spakovsky, "Steepest-entropy-ascent model of mesoscopic quantum systems far from equilibrium along with generalized thermodynamic definitions of measurement and reservoir," Physical Review E, vol. 98, p. 042113, Oct 2018.
- G. Li, M. R. von Spakovsky, and C. Hin, "Steepest entropy ascent quantum thermodynamic model of electron and phonon transport," Physical Review B, vol. 97, no. 2, p. 024308, 2018.
- G. Li and M. R. von Spakovsky, "Study of Nonequilibrium Size and Concentration Effects on the Heat and Mass Diffusion of Indistinguishable Particles using Steepest-Entropy-Ascent Quantum Thermodynamics," Journal of Heat Transfer, vol. 139, no. 12, p. 122003, 2017.
- G. Li, M. R. von Spakovsky, F. Shen, and K. Lu, "Multiscale Transient and Steady-State Study of the Influence of Microstructure Degradation and Chromium Oxide Poisoning on Solid Oxide Fuel Cell Cathode Performance," Journal of Non-Equilibrium Thermodynamics, vol. 43, no. 1, pp. 21-42, 2018.
- R. Yamada, M. R. von Spakovsky, and W. T. Reynolds Jr., "A method for predicting non-equilibrium thermal expansion using steepest-entropy-ascent quantum thermodynamics," Journal of Physics: Condensed Matter, vol. 30, no. 32, p. 325901, 2018.
- R. Yamada, M. R. von Spakovsky, and W. T. Reynolds Jr., "Methodology of an application of the steepest-entropy-ascent quantum thermodynamic framework to physical phenomena in materials science," Computational Materials Science, vol. 166, pp. 251-264, 2019.
- R. Yamada, M. R. von Spakovsky, and W. T. Reynolds Jr., "Predicting the continuous and discontinuous phase decompositions using the steepest-entropy-ascent quantum thermodynamics modeling," Phys. Rev. E, vol. 99, no. 5, p. 052121, 2019.
- R. Yamada, M. R. von Spakovsky, and W. T. Reynolds Jr., "Low-temperature atomistic spin relaxation and non-equilibrium intensive properties using steepest-entropy-ascent quantum-inspired thermodynamics modeling," Journal of Physics: Condensed Matter, vol. 31, p. 505901, 2019.
- R. Yamada, M. R. von Spakovsky, and W. T. Reynolds Jr., "Kinetic pathways of ordering and phase separation using classical solid state models within the steepest-entropy-ascent quantum thermodynamic framework," Acta Materialia, vol. 182, pp. 87-99, 2020.
- J. A. Montañez Barrera, C. E. Damian-Ascencio, M. R. von Spakovsky, and S. Cano-Andrade, "Steepest-entropy-ascent quantum thermodynamic modeling of decoherence in two different microscopic composite systems," Physical Review A, vol. 101, p. 052336, 2020.
- S. Cano-Andrade, G. P. Beretta, and M. R. von Spakovsky, "Steepest-entropy-ascent quantum thermodynamic modeling of decoherence in two different microscopic composite systems," Physical Review A, vol. 91, no. 1, p. 013848, 2015.
- A. Kusaba, G. Li, P. Kempisty, M. R. von Spakovsky, and Y. Kangawa, "CH4 adsorption probability on GaN(0001) and (000-1) during MOVPE and its relationship with carbon contamination in the films," Materials, vol. 16, no. 6, p. 972, 2019.
- M. R. von Spakovsky, C. S. Schlosser, J. B. Martin, and E. Josyula, "Predicting the chemical kinetics of air at high temperatures using steepest-entropy-ascent quantum thermodynamics," in AIAA 2020 Aviation Forum, pp. AIAA-2020-3274, American Institute of Aeronautics and Astronautics, 2020.
- T. Vogel, Y. W. Li, T. Wust, and D. P. Landau, "Generic, hierarchical framework for massively parallel Wang-Landau sampling," Physical Review Letters, vol. 110, no. 21, 2013.
- T. Vogel, Y. W. Li, T. Wust, and D. P. Landau, "Scalable replica-exchange framework for Wang-Landau sampling," Physical Review E, vol. 90, no. 2, 2014.
- Note that the basis for such a probability density distribution in the quantal formulation of the SEAQT framework is the density or so-called "state" operator, which is based on a homogeneous ensemble [70].
- L. M. Martyushev, "Maximum entropy production principle: history and current status," Physics-Uspekhi, vol. 64, pp. 558-583, Sep 2021.
- G. P. Beretta, "The fourth law of thermodynamics: steepest entropy ascent," Philosophical Transactions of the Royal Society A, vol. 378, no. 2170, p. 20190168, 2020.
- L. Martyushev and A. Soboleva, "Phenomenological model of nonequilibrium solidification," Physica A: Statistical Mechanics and its Applications, vol. 392, no. 22, pp. 5757-5763, 2013.
- J. Kirkaldy and R. Sharma, "Stability principles for lamellar eutectoid(ic) reactions," Acta Metallurgica, vol. 28, no. 7, pp. 1009-1021, 1980.
- H. Ziegler, "Some extremum principles in irreversible thermodynamics, with application to continuum mechanics," Progress in Solid Mechanics, vol. 4, pp. 93-193, 1963.
- H. Ziegler, "Thermodynamik und rheologische Probleme," Archive of Applied Mechanics, vol. 25, no. 1, pp. 58-70, 1957.
- H. Ziegler, "Chemical reactions and the principle of maximal rate of entropy production," Zeitschrift für angewandte Mathematik und Physik ZAMP, vol. 34, no. 6, pp. 832-844, 1983.
- H. Ziegler, An Introduction to Thermomechanics. Amsterdam: North-Holland, 1st ed., 1983.
- H. Ziegler and C. Wehrli, "On a principle of maximal rate of entropy production," Journal of Non-Equilibrium Thermodynamics, vol. 12, no. 3, pp. 229-243, 1987.
- I. Prigogine, Introduction to Thermodynamics of Irreversible Processes. New York: Interscience, 3rd ed., 1967.
- G. P. Beretta, On the General Equation of Motion of Quantum Thermodynamics and the Distinction between Quantal and Nonquantal Uncertainties. PhD thesis, Massachusetts Institute of Technology, 1981.
- G. P. Beretta, E. P. Gyftopoulos, J. L. Park, and G. N. Hatsopoulos, "Quantum thermodynamics - a new equation of motion for a single constituent of matter," Nuovo Cimento della Societa Italiana di Fisica B, vol. 82, no. 2, pp. 169-191, 1984.
- G. P. Beretta, E. P. Gyftopoulos, and J. L. Park, "Quantum thermodynamics - a new equation of motion for a general quantum system," Nuovo Cimento della Societa Italiana di Fisica B, vol. 87, no. 1, pp. 77-97, 1985.
- G. P. Beretta, "Nonlinear model dynamics for closed-system, constrained, maximal-entropy-generation relaxation by energy redistribution," Physical Review E, vol. 73, no. 2, p. 026113, 2006.
- G. P. Beretta, "Nonlinear quantum evolution equations to model irreversible adiabatic relaxation with maximal entropy production and other nonunitary processes," Reports on Mathematical Physics, vol. 64, no. 1/2, pp. 139-168, 2009.
- G. P. Beretta, "Steepest entropy ascent model for far-nonequilibrium thermodynamics: Unified implementation of the maximum entropy production principle," Physical Review E, vol. 90, no. 4, p. 042113, 2014.
- J. S. Kirkaldy, "The thermodynamic description of heterogeneous dissipative systems by variational methods. I. A formulation of the principle of minimum rate of entropy production with application to certain stationary heterogeneous convective systems; II. A variational principle applicable to non-stationary (unconstrained) heterogeneous dissipative systems," Canadian J. of Phys., vol. 38, pp. 1343, 1356, 1960.
- J. S. Kirkaldy, "Crystal growth and the thermodynamics of irreversible processes," Canadian J. of Phys., vol. 37, p. 739, 1959.
- J. S. Kirkaldy, "Theory of diffusional growth in solid-solid transformations," in Decomposition of Austenite by Diffusional Processes, pp. 39-123, The Metallurgical Society and the American Institute of Mining, Metallurgical, and Petroleum Engineers, 1964.
- J. W. Cahn and W. W. Mullins, "Theory of diffusional growth in solid-solid transformations: Discussion," in Decomposition of Austenite by Diffusional Processes, pp. 123-130, The Metallurgical Society and the American Institute of Mining, Metallurgical, and Petroleum Engineers, 1964.
- Y. Zhang, X. H. Xiao, and J. Zhang, "Kinetic Monte Carlo simulation of sintering behavior of additively manufactured stainless steel powder particles using reconstructed microstructures from synchrotron x-ray microtomography," Results in Physics, vol. 13, 2019.
- S. Hara, A. Ohi, and N. Shikazono, "Sintering analysis of sub-micron-sized nickel powders: Kinetic Monte Carlo simulation verified by FIB-SEM reconstruction," Journal of Power Sources, vol. 276, pp. 105-112, 2015.
- R. Bjork, H. L. Frandsen, V. Tikare, E. Olevsky, and N. Pryds, "Strain in the mesoscale kinetic Monte Carlo model for sintering," Computational Materials Science, vol. 82, pp. 293-297, 2014.
- V. Tikare, M. Braginsky, D. Bouvard, and A. Vagnon, "Numerical simulation of microstructural evolution during sintering at the mesoscale in a 3D powder compact," Computational Materials Science, vol. 48, no. 2, pp. 317-325, 2010.
- M. Braginsky, V. Tikare, and E. Olevsky, "Numerical simulation of solid state sintering," International Journal of Solids and Structures, vol. 42, no. 2, pp. 621-636, 2005.
- F. Wang and D. P. Landau, "Efficient, multiple-range random walk algorithm to calculate the density of states," Physical Review Letters, vol. 86, pp. 2050-2053, Mar 2001.
- F. Wang and D. P. Landau, "Determining the density of states for classical statistical models: A random walk algorithm to produce a flat histogram," Phys. Rev. E, vol. 64, p. 056101, Oct 2001.
- T. Vogel, Y. W. Li, and D. P. Landau, "A practical guide to replica-exchange Wang-Landau simulations," Journal of Physics: Conference Series, vol. 1012, p. 012003, Apr 2018.
- G. P. Beretta, "Nonlinear model dynamics for closed-system, constrained, maximal-entropy-generation relaxation by energy redistribution," Physical Review E, vol. 73, no. 2, p. 026113, 2006.
- G. P. Beretta, "Nonlinear quantum evolution equations to model irreversible adiabatic relaxation with maximal entropy production and other nonunitary processes," Reports on Mathematical Physics, vol. 64, no. 1-2, pp. 139-168, 2009.
- G. C. Li and M. R. von Spakovsky, "Steepest-entropy-ascent quantum thermodynamic modeling of the relaxation process of isolated chemically reactive systems using density of states and the concept of hypoequilibrium state," Physical Review E, vol. 93, no. 1, 2016.
- E. P. Gyftopoulos and E. Cubukcu, "Entropy: Thermodynamic definition and quantum expression," Physical Review E, vol. 55, no. 4, pp. 3851-3858, 1997.
- J. Hoshen and R. Kopelman, "Percolation and cluster distribution. I. Cluster multiple labeling technique and critical concentration algorithm," Physical Review B, vol. 14, no. 8, pp. 3438-3445, 1976.
- M. Trunec and K. Maca, "Compaction and pressureless sintering of zirconia nanoparticles," Journal of the American Ceramic Society, vol. 90, no. 9, pp. 2735-2740, 2007.
- A. Tsoga and P. Nikolopoulos, "Surface and grain-boundary energies in yttria-stabilized zirconia (YSZ-8 mol %)," Journal of Materials Science, vol. 31, no. 20, pp. 5409-5413, 1996.
- Note: there is no thermal vibrational contribution from the solid in the subsystem energy given by Equation (1). However, a harmonic oscillator term could be added if the actual temperature of the solid is an important consideration.
- R. Bjork, V. Tikare, H. L. Frandsen, and N. Pryds, "The effect of particle size distributions on the microstructural evolution during sintering," Journal of the American Ceramic Society, vol. 96, no. 1, pp. 103-110, 2013.
- D. B. Williams and J. W. Edington, "The precipitation of δ (Al3Li) in dilute aluminium-lithium alloys," Metal Science, vol. 9, no. 1, pp. 529-532, 1975.
- B. Noble and S. E. Bray, "Use of the Gibbs-Thompson relation to obtain the interfacial energy of delta' precipitates in Al-Li alloys," Materials Science and Engineering A: Structural Materials Properties Microstructure and Processing, vol. 266, no. 1-2
Bray, "Use of the Gibbs-Thompson relation to obtain the interfacial energy of delta' precip- itates in Al-Li alloys," Materials Science and Engineer- ing a-Structural Materials Properties Microstructure and Processing, vol. 266, no. 1-2, pp. 80-85, 1999. The development of microstructures in Al-Li alloys. O Jensrud, N Ryum, Materials Science and Engineering. 642O. Jensrud and N. Ryum, "The development of mi- crostructures in Al-Li alloys," Materials Science and En- gineering, vol. 64, no. 2, pp. 229-236, 1984. Experimental, computational and theoretical studies of delta ' phase coarsening in Al-Li alloys. B A Pletcher, K G Wang, M E Glicksman, Acta Materialia. 6016B. A. Pletcher, K. G. Wang, and M. E. Glicksman, "Experimental, computational and theoretical studies of delta ' phase coarsening in Al-Li alloys," Acta Materi- alia, vol. 60, no. 16, pp. 5803-5817, 2012. The surface-energy of metastable Al3Li precipitates from coarsening kinetics. J J Hoyt, S Spooner, Acta Metallurgica et Materialia. 394J. J. Hoyt and S. Spooner, "The surface-energy of metastable Al3Li precipitates from coarsening kinetics," Acta Metallurgica et Materialia, vol. 39, no. 4, pp. 689- 693, 1991. Dynamics of late stage phase separations in two dimensions. J A Marqusee, The Journal of Chemical Physics. 812J. A. Marqusee, "Dynamics of late stage phase sepa- rations in two dimensions," The Journal of Chemical Physics, vol. 81, no. 2, pp. 976-981, 1984. Unraveling the nature of room temperature grain growth in nanocrystalline materials. M Ames, J Markmann, R Karos, A Michels, A Tschope, R Birringer, Acta Materialia. 5616M. Ames, J. Markmann, R. Karos, A. Michels, A. Tschope, and R. Birringer, "Unraveling the nature of room temperature grain growth in nanocrystalline ma- terials," Acta Materialia, vol. 56, no. 16, pp. 4255-4266, 2008. Surface energies of elemental crystals. R Tran, Z Xu, B Radhakrishnan, D Winston, W Sun, K A Persson, S P Ong, Scientific Data. 31160080R. Tran, Z. Xu, B. 
Radhakrishnan, D. Winston, W. Sun, K. A. Persson, and S. P. Ong, "Surface energies of ele- mental crystals," Scientific Data, vol. 3, no. 1, p. 160080, 2016. Expected energy as a function of coarsening time. The black curve represents the case of many δ precipitates shown in Figure 6, and the dashed red curve corresponds to the two δ precipitates shown in Figure 9. of delta'-Al3Li precipitates: phase-field simulation in 2D and 3D. V Vaithyanathan, L Q Chen, Scripta Materialia. 4210Coarsening kinetics FIG. 10V. Vaithyanathan and L. Q. Chen, "Coarsening kinetics FIG. 10. Expected energy as a function of coarsening time. The black curve represents the case of many δ precipitates shown in Figure 6, and the dashed red curve corresponds to the two δ precipitates shown in Figure 9. of delta'-Al3Li precipitates: phase-field simulation in 2D and 3D," Scripta Materialia, vol. 42, no. 10, pp. 967-973, 2000. . M Groza, J Shackelford, J Lavernia; E;, M. Groza, J.; Shackelford J.; Lavernia; E; Materials Processing Handbook. Taylor & Framcis Group. Powers, Powers, Ma- terials Processing Handbook. Taylor & Framcis Group, 2007. Coarsening of delta' (Al3Li) precipitates in an Al-2.8Li-0.3Mn alloy. B P Gu, G L Liedl, J H Kulwicki, T H Sanders, Materials Science and Engineering. 701-2B. P. Gu, G. L. Liedl, J. H. Kulwicki, and T. H. Sanders, "Coarsening of delta' (Al3Li) precipitates in an Al-2.8Li- 0.3Mn alloy," Materials Science and Engineering, vol. 70, no. 1-2, pp. 217-228, 1985. Decomposition of an Al-Li alloy -the early stages observed by hrem. G Schmitz, P Haasen, Acta Metallurgica Et Materialia. 409G. Schmitz and P. Haasen, "Decomposition of an Al-Li alloy -the early stages observed by hrem," Acta Metal- lurgica Et Materialia, vol. 40, no. 9, pp. 2209-2217, 1992. Ostwald ripening in Al-Li alloys: A test of theory. B A Pletcher, K G Wang, M E Glicksman, International Journal of Materials Research. 10311B. A. Pletcher, K. G. Wang, and M. E. 
Glicksman, "Ost- wald ripening in Al-Li alloys: A test of theory," Inter- national Journal of Materials Research, vol. 103, no. 11, pp. 1289-1293, 2012. A unified quantum theory of mechanics and thermodynamics. Part III. Irreducible quantal dispersions. G N Hatsopoulos, E P Gyftopoulos, Foundations of Physics. 65G. N. Hatsopoulos and E. P. Gyftopoulos, "A uni- fied quantum theory of mechanics and thermodynamics. Part III. Irreducible quantal dispersions," Foundations of Physics, vol. 6, no. 5, pp. 561-570, 1976. The average grain size of nanocrytalline Pd during grain growth at room temperature. The black curve represents the grain size predicted by the SEAQT equation of motion, and the red curve represents data from reference. Fig, 63The predicted curve is extended beyond the largest grain size accessible with the original experimentFIG. 12. The average grain size of nanocrytalline Pd during grain growth at room temperature. The black curve represents the grain size predicted by the SEAQT equation of motion, and the red curve represents data from reference [63]. The predicted curve is extended beyond the largest grain size accessible with the original experiment.
Title: Biharmonic Maps from a 2-Sphere
Authors: Ze-Ping Wang, Ye-Lin Ou, Han-Chun Yang
arXiv: 1310.0562v2 [math.DG], 10 Dec 2013
DOI: 10.1016/j.geomphys.2013.12.005
PDF: https://arxiv.org/pdf/1310.0562v2.pdf
Date: 12/08/2013
1991 Mathematics Subject Classification: 58E20, 53C12

Abstract. Motivated by the rich theory of harmonic maps from a 2-sphere, we study biharmonic maps from a 2-sphere in this paper. We first derive the biharmonic equation for rotationally symmetric maps between rotationally symmetric 2-manifolds. We then apply the equation to obtain a classification of biharmonic maps in a family of rotationally symmetric maps between 2-spheres. We also find many examples of proper biharmonic maps defined locally on a 2-sphere. Our results seem to suggest that any biharmonic map S² → (Nⁿ, h) be a weakly conformal immersion.
1. Introduction

In this paper, we work in the category of smooth objects, so all manifolds, tensor fields, maps, etc. are assumed to be smooth. A harmonic map is a map φ : (M, g) → (N, h) between Riemannian manifolds that is a critical point of the energy functional defined by E(φ) = (1/2)∫_Ω |dφ|² v_g, where Ω is a compact domain of M. The Euler-Lagrange equation of the energy functional gives the harmonic map equation ([ES])

(1) τ(φ) ≡ Tr_g ∇dφ = 0,

where τ(φ) is called the tension field of the map φ.

A biharmonic map is a map φ : (M, g) → (N, h) between Riemannian manifolds that is a critical point of the bienergy functional defined by E₂(φ) = (1/2)∫_Ω |τ(φ)|² v_g, where Ω is a compact domain of M. The Euler-Lagrange equation of this functional gives the biharmonic map equation ([Ji1])

(2) τ₂(φ) := Tr_g (∇^φ ∇^φ − ∇^φ_{∇^M}) τ(φ) − Tr_g R^N(dφ, τ(φ)) dφ = 0,

where R^N denotes the curvature operator of (N, h), defined by R^N(X, Y)Z = [∇^N_X, ∇^N_Y]Z − ∇^N_{[X,Y]}Z. Clearly, any harmonic map (τ(φ) ≡ 0) is always a biharmonic map. We call a biharmonic map that is not harmonic a proper biharmonic map.

The study of biharmonic maps (as a special case of k-polyharmonic maps with k = 2) was proposed by Eells-Lemaire in [EL] (Section (8.7)). Jiang [Ji1], [Ji2], [Ji3] made a first effort to study such maps by calculating the first and second variational formulas of the bienergy functional and specializing to biharmonic isometric immersions, which nowadays are called biharmonic submanifolds. Very interestingly, the notion of biharmonic submanifolds was also introduced by B. Y. Chen [Ch], in a different way, in his study of finite type submanifolds in Euclidean spaces. Since 2000, the study of biharmonic maps has been attracting growing attention and has become an active area of research with much progress. We refer the reader to [BK], [BFO2], [BMO2], [LOn1], [MO], [NUG], [Ou1], [Ou4], [OL], [Oua], and the references therein for some recent geometric study of general biharmonic maps. For some recent progress on biharmonic submanifolds see [CI], [Ji2], [Ji3], [Di], [CMO1], [CMO2], [BMO1], [BMO3], [Ou3], [OT], [OW], [NU], [TO], [CM], [AGR] and the references therein. For biharmonic conformal immersions and submersions see [Ou2], [Ou5], [BFO1], [LO], [WO] and the references therein.

In this paper, we study biharmonic maps from a 2-sphere. One of our motivations comes from the following observations.
There are many interesting examples and a rich theory of harmonic maps from a 2-sphere:

• Chern-Goldberg [CG]: any harmonic immersion f : S² → (Nⁿ, h) has to be minimal, or equivalently, a conformal immersion;
• Sacks-Uhlenbeck [SU] and Wood [Wo1]: any harmonic map f : S² → (Nⁿ, h) with n ≥ 3 has to be a conformal branched minimal immersion;
• Smith [Sm]: any homotopy class of maps S² → S² has a harmonic map representative;
• there exist harmonic embeddings of S² into S³ equipped with an arbitrary metric ([Sm1]);
• there are many beautiful explicit constructions that can be used to produce harmonic maps from S² into projective spaces, Grassmannian manifolds, and Lie groups (see, e.g., Uhlenbeck [Uh], Burstall-Salamon [BS], Burstall-Wood [BW] and Wood [Wo2], [Wo3]);
• Fernández [Fe]: the dimension of the space of harmonic maps from the 2-sphere to the 2n-sphere is 2d + n², and there is an explicit algebraic method to construct all harmonic maps from the 2-sphere to the n-sphere.

It would be interesting to know whether any of the above results can be generalized to the case of proper biharmonic maps. Knowing that the only known example of a proper biharmonic map from S² is the biharmonic isometric immersion S²(1/√2) → S³ [CMO1] (or a composition of this with a totally geodesic map from S³ into another manifold; see, e.g., [Ou1]), we would especially like to know the answer to the question: does there exist a proper biharmonic map ϕ : S² → (Nⁿ, h) that is NOT a conformal immersion?

In this paper, we study the biharmonicity of rotationally symmetric maps from S². We obtain a classification of biharmonic maps in a family of rotationally symmetric maps between 2-spheres. We are able to find many examples of locally defined proper biharmonic maps from S². Very interestingly, we find that none of these locally defined proper biharmonic maps allows an extension to a biharmonic map defined globally on S².
Our results seem to suggest the following conjecture: any biharmonic map ϕ : S² → (Nⁿ, h) is a weakly conformal immersion.

2. Biharmonic equations for rotationally symmetric maps

In this section, we derive biharmonic equations for a large class of maps that includes rotationally symmetric maps between rotationally symmetric manifolds. We will need the following lemma, which gives the equation of biharmonic maps in local coordinates.

Lemma 2.1 ([OL]). Let φ : (M^m, g) → (N^n, h) be a map between Riemannian manifolds with φ(x¹, ..., x^m) = (φ¹(x), ..., φ^n(x)) with respect to local coordinates (x^i) on M and (y^α) on N. Then φ is biharmonic if and only if it is a solution of the following system of PDEs:

(3) Δτ^σ + 2g(∇τ^α, ∇φ^β)Γ̄^σ_{αβ} + τ^α Δφ^β Γ̄^σ_{αβ} + τ^α g(∇φ^β, ∇φ^ρ)(∂_ρ Γ̄^σ_{αβ} + Γ̄^ν_{αβ} Γ̄^σ_{νρ}) − τ^ν g(∇φ^α, ∇φ^β) R̄^σ_{βαν} = 0,  σ = 1, 2, ..., n,

where τ¹, ..., τ^n are the components of the tension field of the map φ, ∇ and Δ denote the gradient and Laplace operators defined by the metric g, and Γ̄^σ_{αβ} and R̄^σ_{βαν} are the components of the connection and the curvature of the target manifold.

Lemma 2.2. The map ϕ : (M², dr² + σ²(r)dθ²) → (N², dρ² + λ²(ρ)dφ²) with ϕ(r, θ) = (ρ(r), cr + kθ + a₂) is biharmonic if and only if it solves the system

(4)  x'' + (σ'/σ)x' − (c² + k²/σ²)(λλ'(ρ))'(ρ)·x − (2c y' + y²)λλ'(ρ) = 0,
     c y'' + y y' + 2c²(λ'(ρ)/λ)x' + 2c²(σ'λ'(ρ)/(σλ) + ρ'(λλ'(ρ))'(ρ)/λ²)·x = 0,
     x = τ¹ = ρ'' + (σ'/σ)ρ' − (c² + k²/σ²)λλ'(ρ),
     y = τ² = 2cρ'λ'(ρ)/λ + cσ'/σ.

Proof. One can easily compute the connection coefficients of the domain and the target surfaces:

Γ¹₁₁ = 0, Γ¹₁₂ = 0, Γ¹₂₂ = −σσ', Γ²₁₁ = 0, Γ²₁₂ = σ'/σ, Γ²₂₂ = 0,
Γ̄¹₁₁ = 0, Γ̄¹₁₂ = 0, Γ̄¹₂₂ = −λλ'(ρ), Γ̄²₁₁ = 0, Γ̄²₁₂ = λ'(ρ)/λ, Γ̄²₂₂ = 0.
We can also check that the nonzero components of the Riemannian curvature of the target surface are given by

R̄¹₂₁₂ = −λλ''(ρ), R̄¹₂₂₁ = λλ''(ρ), R̄²₁₁₂ = λ''(ρ)/λ, R̄²₁₂₁ = −λ''(ρ)/λ, all other R̄^l_{kij} = 0,

and that the tension field of the map ϕ has components

(5) τ¹ = g^{ij}(ϕ¹_{ij} − Γ^k_{ij}ϕ¹_k + Γ̄¹_{αβ}ϕ^α_i ϕ^β_j) = ρ'' + (σ'/σ)ρ' − (c² + k²/σ²)λλ'(ρ),
(6) τ² = g^{ij}(ϕ²_{ij} − Γ^k_{ij}ϕ²_k + Γ̄²_{αβ}ϕ^α_i ϕ^β_j) = 2cρ'λ'(ρ)/λ + cσ'/σ.

Using the notations x = τ¹ and y = τ² and performing a further computation, we have

(7) Δτ¹ = g^{ij}(τ¹_{ij} − Γ^k_{ij}τ¹_k) = x'' + (σ'/σ)x',
(8) Δτ² = g^{ij}(τ²_{ij} − Γ^k_{ij}τ²_k) = y'' + (σ'/σ)y',
(9) 2g(∇τ^α, ∇ϕ^β)Γ̄¹_{αβ} = −2cλλ'(ρ)y',
(10) 2g(∇τ^α, ∇ϕ^β)Γ̄²_{αβ} = 2(cx' + ρ'y')λ'(ρ)/λ,
(11) τ^α Δϕ^β Γ̄¹_{αβ} = −(cσ'λλ'(ρ)/σ)y,
(12) τ^α Δϕ^β Γ̄²_{αβ} = (λ'(ρ)/λ)[(cσ'/σ)x + (ρ'' + (σ'/σ)ρ')y],
(13) τ^α g(∇ϕ^β, ∇ϕ^ρ)∂_ρΓ̄¹_{αβ} = −cρ'(λλ'(ρ))'(ρ)y,
(14) τ^α g(∇ϕ^β, ∇ϕ^ρ)Γ̄^v_{αβ}Γ̄¹_{vρ} = −(c² + k²/σ²)λ'²(ρ)x − cρ'λ'²(ρ)y,
(15) τ^α g(∇ϕ^β, ∇ϕ^ρ)∂_ρΓ̄²_{αβ} = cρ'(λ'(ρ)/λ)'(ρ)x + ρ'²(λ'(ρ)/λ)'(ρ)y,
(16) τ^α g(∇ϕ^β, ∇ϕ^ρ)Γ̄^v_{αβ}Γ̄²_{vρ} = cρ'(λ'(ρ)/λ)²x + ρ'²(λ'(ρ)/λ)²y − (c² + k²/σ²)λ'²(ρ)y,
(17) −τ^v g(∇ϕ^α, ∇ϕ^β)R̄¹_{βαv} = −(c² + k²/σ²)λλ''(ρ)x + cρ'λλ''(ρ)y,
(18) −τ^v g(∇ϕ^α, ∇ϕ^β)R̄²_{βαv} = cρ'(λ''(ρ)/λ)x − ρ'²(λ''(ρ)/λ)y.

Substituting (7)–(18) into (3), we conclude that the map ϕ is biharmonic if and only if

x'' + (σ'/σ)x' − (c² + k²/σ²)(λλ'(ρ))'(ρ)x − 2cλλ'(ρ)y' − c(σ'λλ'(ρ)/σ + 2ρ'λ'²(ρ))y = 0,
y'' + (σ'/σ + 2ρ'λ'(ρ)/λ)y' + [(λ'(ρ)/λ)(ρ'' + (σ'/σ)ρ') − (c² + k²/σ²)λ'²(ρ)]y + 2c(λ'(ρ)/λ)x' + c(σ'λ'(ρ)/(σλ) + 2ρ'λ''(ρ)/λ)x = 0,

which is equivalent to the system (4). Thus we obtain the lemma.

Remark 1.
(i) With σ = 1, c = 0, k = 1, our Lemma 2.2 recovers Proposition 3.1 in [OL];
(ii) one can also check that our Lemma 2.2 recovers the case m = n = 2 of Theorem 5.4 in [BMO2].

Some other straightforward applications of Lemma 2.2 can be stated as follows.

Corollary 2.3. The map ϕ : (M², dr² + σ²(r)dθ²) → (N², dρ² + λ²(ρ)dφ²), ϕ(r, θ) = (ρ(r), kθ + a₂), is biharmonic if and only if it solves the system

(19) x'' + (σ'/σ)x' − (k²(λλ')'(ρ)/σ²)x = 0,  x = τ¹ = ρ'' + (σ'/σ)ρ' − k²λλ'(ρ)/σ².

In particular, the rotationally symmetric map ϕ : (T², dr² + dθ²) → (S², h = dρ² + sin²ρ dφ²) from a flat torus into a sphere with ϕ(r, θ) = (ρ(r), κθ) is biharmonic if and only if

(20) ρ'''' − 2κ² cos(2ρ)ρ'' + 2κ² sin(2ρ)ρ'² + (κ⁴/4) sin(4ρ) = 0.

Remark 2. Note that Equation (20) was obtained in [MR] by using a 1-dimensional variational approach. It was also observed in [MR] that for ρ = π/4, 3π/4, the maps ϕ give proper biharmonic maps.

Corollary 2.4. The rotationally symmetric map ϕ : (S², dr² + sin²r dθ²) → (S², dρ² + sin²ρ dφ²) with ϕ(r, θ) = (ρ(r), kθ) is biharmonic if and only if the function ρ = ρ(r) solves the system

(21) x'' + cot r·x' − (k² cos 2ρ/sin²r)x = 0,  x = τ¹ = ρ'' + cot r·ρ' − k² sin 2ρ/(2 sin²r),

or, equivalently,

(22) d²x/dt² = k²x cos(2ρ),  x(t) = cosh²t·(d²ρ/dt² − k² sin ρ cos ρ),

where t = ln|tan(r/2)|.

Proof. Equation (21) is obtained by applying Corollary 2.3 with σ = sin r, λ = sin ρ, a₂ = 0, whilst Equation (22) follows from Equation (21) via the transformation t = ln|tan(r/2)|.

3. A classification of biharmonic maps between 2-spheres

Note that there are many smooth maps between spheres. For example, the following family of maps was studied in Peng-Tang [PT]. Let f_k : S^n → S^n (k > 0) be defined by

(23) (cos r, sin r·X) ↦ (cos(kr), sin(kr)·X),

where 0 ≤ r ≤ π and X ∈ S^{n−1} ⊂ R^n.
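As a quick symbolic sanity check (our addition, not part of the original paper), one can verify that the constant solutions ρ = π/4, 3π/4 noted in Remark 2 satisfy the flat-torus biharmonicity equation (20), while a generic constant does not; a minimal sympy sketch:

```python
import sympy as sp

r, kappa = sp.symbols('r kappa', positive=True)

def lhs20(R):
    """Left-hand side of Equation (20) for a (possibly constant) profile R(r)."""
    return (sp.diff(R, r, 4) - 2*kappa**2*sp.cos(2*R)*sp.diff(R, r, 2)
            + 2*kappa**2*sp.sin(2*R)*sp.diff(R, r)**2
            + (kappa**4/4)*sp.sin(4*R))

# For a constant profile all derivatives vanish and (20) reduces to sin(4*rho) = 0,
# which holds for rho = pi/4 and rho = 3*pi/4 but fails for, e.g., rho = pi/3.
print(sp.simplify(lhs20(sp.pi/4)))    # 0
print(sp.simplify(lhs20(3*sp.pi/4)))  # 0
print(sp.simplify(lhs20(sp.pi/3)))    # -sqrt(3)*kappa**4/8
```

These constant solutions are proper biharmonic, since their tension component τ¹ = −κ² sin(2ρ)/2 = ∓κ²/2 is nonzero.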
It was proved in [PT] that f_k is a k-form and that the Brouwer degree of f_k is

deg f_k = k, if n is odd; 1, if n is even and k is odd; 0, otherwise.

Note also that, with respect to geodesic polar coordinates, this family of maps can be described as f_k(r, θ) = (kr, θ), a family of rotationally symmetric maps between spheres.

It is easy to check that the quadratic polynomial map F : R³ → R³ defined by F(x, y, z) = (x² − y² − z², 2xy, 2xz) restricts to a map between spheres f = F|_{S²} : S² → S². Using geodesic polar coordinates (r, θ) on the domain sphere and (ρ, φ) on the target sphere, we have a local expression of the map given by

(24) f : (S², dr² + sin²r dθ²) → (S², dρ² + sin²ρ dφ²), f(r, θ) = (2r, θ).

It follows that this restriction of the polynomial map is a rotationally symmetric map between 2-spheres belonging to the family (23): f_k : S² → S². One can further check that the tension field of the map f is τ(f) = 2 sin 2r ∂/∂ρ, so it is NOT a harmonic map. It would be interesting to know whether there exists any proper biharmonic map in the family of rotationally symmetric maps f_k. Our next theorem gives a classification of biharmonic maps in a class of maps S² → S² which includes the family f_k of rotationally symmetric maps as a subset.

Theorem 3.1. The rotationally symmetric map ϕ : (S², dr² + sin²r dθ²) → (S², dρ² + sin²ρ dφ²) with ϕ(r, θ) = (ar + a₁, kθ) and a ≠ 0 is biharmonic if and only if a² = 1, k² = 1, and a₁ = 0 or a₁ = π, i.e., the map is actually harmonic.

Proof. For ρ = ar + a₁, it follows from Corollary 2.4 that ϕ is biharmonic if and only if it solves the system

(25) x'' + cot r·x' − (k² cos 2ρ/sin²r)x = 0,  x = τ¹ = a cot r − k² sin 2ρ/(2 sin²r),  ρ = ar + a₁.
A straightforward computation using the last two equations of (25) gives

(26) x' = −a/sin²r − k²(a sin r cos 2ρ − cos r sin 2ρ)/sin³r,
     x'' = 2a cos r/sin³r − [(k² − 2k²a²) sin²r sin 2ρ + 3k² cos²r sin 2ρ − 4k²a sin r cos r cos 2ρ]/sin⁴r.

Substituting (26) into Equation (25) we have

[2a sin r cos r + 2(2k²a² − k²) sin²r sin 2ρ − 4k² cos²r sin 2ρ + 4k²a sin r cos r cos 2ρ + k⁴ sin 2ρ cos 2ρ]/(2 sin⁴r) = 0,  ρ = ar + a₁,

which is equivalent to

2a sin r cos r + 2(2k²a² − k²) sin²r sin 2ρ − 4k² cos²r sin 2ρ + 4k²a sin r cos r cos 2ρ + k⁴ sin 2ρ cos 2ρ = 0,  ρ = ar + a₁.

By a further computation, we can rewrite the above equation as

(27) a sin 2r + (2k²a² − 3k²) sin 2ρ − (2k²a² + k²) cos 2r sin 2ρ + 2k²a sin 2r cos 2ρ + k⁴ sin 2ρ cos 2ρ = 0,  ρ = ar + a₁.

Write f(r) = a sin 2r + (2k²a² − 3k²) sin 2ρ − (2k²a² + k²) cos 2r sin 2ρ + 2k²a sin 2r cos 2ρ + k⁴ sin 2ρ cos 2ρ; then we have

f'(r) = 2a cos 2r + 2a(2k²a² − 3k²) cos 2ρ + 2k² sin 2r sin 2ρ + (2k²a − 4k²a³) cos 2r cos 2ρ + 2k⁴a cos 4ρ,
f''(r) = −4a sin 2r − 4a²(2k²a² − 3k²) sin 2ρ + (8k²a⁴ − 4k²a² + 4k²) cos 2r sin 2ρ + 8k²a³ sin 2r cos 2ρ − 8k⁴a² sin 4ρ,
f'''(r) = −8a cos 2r − 8a³(2k²a² − 3k²) cos 2ρ + (−32k²a⁴ + 8k²a² − 8k²) sin 2r sin 2ρ + (16k²a⁵ + 8k²a³ + 8k²a) cos 2r cos 2ρ − 32k⁴a³ cos 4ρ.

Equation (27) implies that, for any r, we have

(28) f(r) = 0, f'(r) = 0, f''(r) = 0, f'''(r) = 0,  ρ = ar + a₁.

Since a ≠ 0, one can easily check that if k = 0, then Equation (25) has no solution. So from now on we assume that k ≠ 0.
Substituting r₀ = π/2, ρ₀ = aπ/2 + a₁ into Equation (28) we have

(29) f(r₀) = 2k² sin ρ₀ cos ρ₀ (4a² − 2 + k² cos 2ρ₀) = 0,
     f'(r₀) = 2a{−1 − k⁴ + (4k²a² − 4k²) cos 2ρ₀ + 2k⁴ cos²2ρ₀} = 0,
     f''(r₀) = 8k² sin ρ₀ cos ρ₀ (−4a⁴ + 4a² − 1 − 4k²a² cos 2ρ₀) = 0,
     f'''(r₀) = 8a{1 + (−4k²a⁴ + 2k²a² − k²) cos 2ρ₀ − 8k⁴a² cos²2ρ₀ + 4k⁴a²} = 0,
     2ρ₀ = aπ + 2a₁.

Noting that r ∈ (0, π) and ρ(r) ∈ (0, π), we conclude that k sin ρ₀ ≠ 0. We solve Equation (29) in the following two cases.

Case (i): cos ρ₀ ≠ 0. In this case, Equation (29) becomes

(30) 4a² − 2 + k² cos 2ρ₀ = 0,
     −1 − k⁴ + (4k²a² − 4k²) cos 2ρ₀ + 2k⁴ cos²2ρ₀ = 0,
     −4a⁴ + 4a² − 1 − 4k²a² cos 2ρ₀ = 0,
     1 + (−4k²a⁴ + 2k²a² − k²) cos 2ρ₀ − 8k⁴a² cos²2ρ₀ + 4k⁴a² = 0,
     2ρ₀ = aπ + 2a₁.

By substituting the first equation of (30) into the second, the third, and the fourth, we have

(31) k⁴ = 16a⁴ − 8a² − 1,
     12a⁴ − 4a² − 1 = 0,
     −112a⁶ + 112a⁴ − 24a² − 1 + 4k⁴a² = 0.

Solving the second equation of (31) we have a² = 1/2. Substituting a² = 1/2 into the first equation of (31) yields k⁴ = −1, which shows that Equation (31), and hence (30), has no real solution in this case.

Case (ii): cos ρ₀ = 0, i.e., ρ₀ = aπ/2 + a₁ = π/2. In this case, Equation (29) reduces to

−1 + k⁴ − 4k²a² + 4k² = 0,  1 + k² + 4k²a⁴ − 2k²a² − 4k⁴a² = 0,

which is equivalent to

(32) 4k²(a² − 1) = k⁴ − 1,  4k²a²(a² − 1) = 4k⁴a² − 2k²a² − k² − 1.

We can easily check that if a² = 1, then Equation (32) has the solution k² = 1. In this case, we have a₁ = 0 or a₁ = π, and hence τ¹ = a cot r − k² sin 2ρ/(2 sin²r) = 0. This implies that the map ϕ : (S², dr² + sin²r dθ²) → (S², dρ² + sin²ρ dφ²) with ϕ(r, θ) = (r, kθ) or ϕ(r, θ) = (−r + π, kθ) is a harmonic map. If a² ≠ 1, then k² ≠ 1.
In this case, we solve Equation (32) for a² in terms of k to have

(33) a² = (k² + 1)/(3k⁴ − 2k² + 1).

Substituting (33) into the first equation of (32) we have

(34) 3k⁶ + 13k⁴ − k² + 1 = 0.

Setting k² = t, the above equation becomes

(35) 3t³ + 13t² − t + 1 = 0.

Consider the function φ(t) = 3t³ + 13t² − t + 1 defined on the interval [0, +∞). It is an elementary exercise to check that its absolute minimum value over [0, +∞), attained at t = (−13 + √178)/9, is (4/243)(1247 − 89√178) > 0. It follows that Equation (35) has no positive solution. This implies that Equation (34) has no real solution, and hence Equation (32) has no solution in this case.

Summarizing the results in Cases (i) and (ii), we obtain the theorem.

Corollary 3.2. The globally defined smooth map f = F|_{S²} : S² → S², obtained from the restriction of the polynomial map F : R³ → R³, F(x, y, z) = (x² − y² − z², 2xy, 2xz), is neither a harmonic nor a biharmonic map.

Proof. As we mentioned at the beginning of the section, the map f is a rotationally symmetric map with f(r, θ) = (2r, θ). So, by our Theorem 3.1, the map f is not a biharmonic map. Substituting a = 2, k = 1 into the second equation of (25), we obtain the first component of the tension field τ¹ = 2 sin 2r ≠ 0. It follows that the map f is not a harmonic map either.

4. Locally defined biharmonic maps from a 2-sphere

We first prove the following proposition, which shows that for a special class of maps between rotationally symmetric manifolds, the biharmonic map equation reduces to the biharmonic function equation. This will be used to construct many locally defined proper biharmonic maps from a 2-sphere into itself.

Proposition 4.1. The map ϕ : (M², dr² + σ²(r)dθ²) → (N², dρ² + λ²(ρ)dφ²) with ϕ(r, θ) = (ρ(r), a₂) is biharmonic if and only if Δ²_M ρ = 0, i.e., ρ(r) is a biharmonic function on (M², dr² + σ²(r)dθ²), which can be determined by the integral

(36) ρ(r) = ∫ (1/σ(r)) [ ∫ (C₁σ(r)∫(dr/σ(r)) + C₂σ(r)) dr + C₃ ] dr + C₄,

where C₁, C₂, C₃ and C₄ are constants. Furthermore, the map is proper biharmonic if C₁² + C₂² ≠ 0.

Proof. Using Corollary 2.3, it follows that ϕ : (M², dr² + σ²(r)dθ²) → (N², dρ² + λ²(ρ)dφ²) with ϕ(r, θ) = (ρ(r), a₂) is biharmonic if and only if it solves the system

(37) x'' + (σ'/σ)x' = 0,  x = τ¹ = ρ'' + (σ'/σ)ρ'.
Noting that x = ρ'' + (σ'/σ)ρ' = Δ_M ρ and Δ_M x = x'' + (σ'/σ)x', we conclude that Equation (37) is equivalent to Δ²_M ρ = 0. This gives the first statement of the proposition. To solve Equation (37), we integrate the first equation to obtain

x = C₁∫(dr/σ(r)) + C₂.

Substituting this into the second equation of (37) we have

ρ'' + (σ'/σ)ρ' = C₁∫(dr/σ(r)) + C₂,

which is solved by

ρ(r) = ∫ (1/σ(r)) [ ∫ (C₁σ(r)∫(dr/σ(r)) + C₂σ(r)) dr + C₃ ] dr + C₄,

where C₁, C₂, C₃ and C₄ are constants. Thus, we obtain the proposition.

Remark 3. Applying Proposition 4.1, we can conclude that the map ϕ : (R²∖{0}, dr² + r²dθ²) → (N², dρ² + λ²(ρ)dφ²) with ϕ(r, θ) = (ρ(r), a₂) is biharmonic if and only if ρ(r) = c₁r²ln r + c₂r² + c₃ln r + c₄, which is a result in (b) of Proposition 5.5 (for m = n = 2) in [BMO2].

As another straightforward application of Proposition 4.1, we have the following corollary, which gives many locally defined proper biharmonic maps between 2-spheres.

Corollary 4.2. For constants C₁, C₂, C₃, C₄ with C₁² + C₂² ≠ 0, the rotationally symmetric map ϕ : (S², dr² + sin²r dθ²) → (S², dρ² + sin²ρ dφ²) with ϕ(r, θ) = (ρ(r), a₂) is proper biharmonic if

(38) ρ(r) = ∫ (1/sin r) [ ∫ (C₁ sin r ∫(dr/sin r) + C₂ sin r) dr + C₃ ] dr + C₄.

Remark 4. We would like to point out that Corollary 4.2 provides many examples of locally defined proper biharmonic maps between two 2-spheres. However, none of them can be extended to a globally defined map S² → S². This can be seen from the fact that each of the maps provided by Corollary 4.2 is determined by a locally defined biharmonic function on S². No locally defined biharmonic function can be extended to the whole sphere S², as it is well known that any globally defined biharmonic function on S² has to be a constant.
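The reduction to biharmonic functions is easy to test symbolically. As an illustrative check (our addition, not part of the paper), the punctured-plane solution of Remark 3 can be verified with sympy, using the radial Laplacian Δf = f'' + f'/r of the metric dr² + r²dθ²:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
c1, c2, c3, c4 = sp.symbols('c1 c2 c3 c4')

# Radial part of the Laplacian of the metric dr^2 + r^2 dtheta^2 on R^2 \ {0}
def lap(f):
    return sp.diff(f, r, 2) + sp.diff(f, r)/r

# Candidate profile from Remark 3
rho = c1*r**2*sp.log(r) + c2*r**2 + c3*sp.log(r) + c4

# Proposition 4.1: the map is biharmonic iff Delta^2 rho = 0
print(sp.simplify(lap(lap(rho))))  # 0

# tau^1 = Delta rho equals 4*c1*(log(r) + 1) + 4*c2, so the map is
# proper biharmonic precisely when c1 and c2 are not both zero.
print(sp.expand(lap(rho)))
```

The same check applies verbatim to Corollary 4.2 with the Laplacian f'' + cot r·f' of the round metric, once the nested integrals in (38) are carried out.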
In the rest of this section, we will show that the equations for biharmonic maps from S² into some special choices of rotationally symmetric manifolds can be solved completely. First, let us prove the following lemma.

Lemma 4.3. Let λ²(ρ) = Aρ² + 2C₀ρ + C > 0, where C₀, A, C, k, a₂ are constants. Then the map ϕ : (M², dr² + σ²(r)dθ²) → (N², dρ² + λ²(ρ)dφ²) defined by ϕ(r, θ) = (ρ(r), kθ + a₂) is biharmonic if and only if

(39) x'' + (σ'/σ)x' − (k²A/σ²)x = 0,  x = τ¹ = ρ'' + (σ'/σ)ρ' − k²(Aρ + C₀)/σ².

Proof. For λ²(ρ) = Aρ² + 2C₀ρ + C > 0, we have λλ'(ρ) = Aρ + C₀ and (λλ'(ρ))'(ρ) = A. By Corollary 2.3, the map ϕ : (M², dr² + σ²(r)dθ²) → (N², dρ² + λ²(ρ)dφ²) with ϕ(r, θ) = (ρ(r), kθ + a₂) is biharmonic if and only if it solves the system

(40) x'' + (σ'/σ)x' − (k²A/σ²)x = 0,  x = τ¹ = ρ'' + (σ'/σ)ρ' − k²(Aρ + C₀)/σ².

Thus, we obtain the lemma.

Theorem 4.4. The rotationally symmetric map ϕ : (S², dr² + sin²r dθ²) → (R², dρ² + (ρ + 1)dφ²) with ϕ(r, θ) = ((1/4)(ln tan(r/2))² − ln sin r + 1, θ) is a proper biharmonic map.

Proof. First, we prove the following claim.

Claim: Let C₀, C be constants such that λ²(ρ) = 2C₀ρ + C > 0. Then the map ϕ : (M², dr² + σ²(r)dθ²) → (N², dρ² + λ²(ρ)dφ²) with ϕ(r, θ) = (ρ(r), kθ + a₂) is biharmonic if and only if

(41) ρ(r) = ∫ (1/σ(r)) [ ∫ (C₁σ(r)∫(dr/σ(r)) + C₂σ(r) + k²C₀/σ(r)) dr + C₃ ] dr + C₄,

where C₁, C₂, C₃ and C₄ are constants.

Proof of the Claim: For λ²(ρ) = 2C₀ρ + C, we apply Lemma 4.3 with A = 0 to conclude that the map ϕ : (M², dr² + σ²(r)dθ²) → (N², dρ² + λ²(ρ)dφ²) with ϕ(r, θ) = (ρ(r), kθ + a₂) is biharmonic if and only if it solves the system

(42) x'' + (σ'/σ)x' = 0,  x = τ¹ = ρ'' + (σ'/σ)ρ' − k²C₀/σ².

Integrating the first equation of (42) we obtain x = C₁∫(dr/σ(r)) + C₂.
Substituting this into the second equation of (42) and multiplying both sides of the resulting equation by σ(r), we have

(43) (ρ'σ)' = C₁σ∫(dr/σ(r)) + C₂σ + k²C₀/σ.

Integrating this second-order ODE, we obtain the Claim.

To prove the theorem, we first notice that the biharmonic maps given in the Claim are proper biharmonic maps for C₁² + C₂² ≠ 0, since the first component of the tension field is τ¹ = x = C₁∫(dr/σ(r)) + C₂. Now, we apply the Claim with σ(r) = sin r, λ²(ρ) = ρ + 1 and C = C₂ = C₄ = k = 1, C₀ = 1/2, C₁ = C₃ = 0 to conclude that the rotationally symmetric map ϕ : (S², dr² + sin²r dθ²) → (R², dρ² + (ρ + 1)dφ²) with ϕ(r, θ) = (ρ(r), θ) is a proper biharmonic map if and only if

(44) ρ(r) = ∫ (1/sin r) [ ∫ (sin r + 1/(2 sin r)) dr ] dr + 1.

A further integration of the above integral gives the required result. Thus, we complete the proof of the theorem.

Remark 5. We would like to point out that the proper biharmonic map ϕ : (S², dr² + sin²r dθ²) → (R², dρ² + (ρ + 1)dφ²) with ϕ(r, θ) = ((1/4)(ln tan(r/2))² − ln sin r + 1, θ) is actually defined on the sphere with two points (the north and the south poles) deleted, since r ≠ 0, π.

Theorem 4.5. Let λ²(ρ) = ρ² + 2C₀ρ + C > 0, where C₀, C are constants. Then the map ϕ : (S², dr² + sin²r dθ²) → (N², dρ² + λ²(ρ)dφ²) with ϕ(r, θ) = (ρ(r), θ) is biharmonic if and only if

(45) ρ(r) = (C₁ − C₂ + C₃)|cot(r/2)| + (2C₁ ln|tan(r/2)| + C₄)|tan(r/2)| − (C₁|tan(r/2)| + C₂|cot(r/2)|) ln(1 + tan²(r/2)) − C₀,

where C₁, C₂, C₃, C₄ are constants. Furthermore, when C₁² + C₂² ≠ 0, the rotationally symmetric maps determined by (45) are proper biharmonic maps.

Proof. Using Lemma 4.3 with σ(r) = sin r, A = 1, k = 1, and a₂ = 0, we conclude that ϕ is biharmonic if and only if it solves the system

(46) x'' + cot r·x' − x/sin²r = 0,  x = τ¹ = ρ'' + cot r·ρ' − ρ/sin²r − C₀/sin²r.
To solve this system of ODEs we introduce a new variable by letting $t = \ln|\tan\frac r2|$. It follows that

(47) $\rho' = \frac{1}{\sin r}\frac{d\rho}{dt}, \qquad \rho'' + \cot r\,\rho' = \frac{1}{\sin^2 r}\frac{d^2\rho}{dt^2} = \frac{(1+e^{2t})^2}{4e^{2t}}\frac{d^2\rho}{dt^2},$

and hence the system (46) becomes

(48) $\frac{d^2x}{dt^2} - x = 0, \qquad x = \frac{(1+e^{2t})^2}{4e^{2t}}\left(\frac{d^2\rho}{dt^2} - \rho - C_0\right).$

It is easy to see that the general solution of the first equation of (48) is

(49) $x = C_1 e^{-t} + C_2 e^{t}.$

Substituting this into the second equation of (48) we obtain

(50) $\frac{d^2\rho}{dt^2} - \rho = \frac{4e^{2t}}{(1+e^{2t})^2}\big(C_1 e^{-t} + C_2 e^{t}\big) + C_0.$

Using the method of variation of parameters we obtain the general solution of (50) as

(51) $\rho(t) = C_3 e^{-t} + C_4 e^{t} + u_1(t)e^{-t} + u_2(t)e^{t},$

where the parameters $u_1, u_2$ are determined by

(52) $u_1'(t) = -\frac{e^{t}}{2}\left(\frac{4e^{2t}}{(1+e^{2t})^2}\big(C_1e^{-t}+C_2e^{t}\big) + C_0\right) = -C_1\frac{2e^{2t}}{(1+e^{2t})^2} - C_2\frac{2e^{4t}}{(1+e^{2t})^2} - \frac{C_0}{2}e^{t},$

(53) $u_2'(t) = \frac{e^{-t}}{2}\left(\frac{4e^{2t}}{(1+e^{2t})^2}\big(C_1e^{-t}+C_2e^{t}\big) + C_0\right) = \frac{2C_1}{(1+e^{2t})^2} + C_2\frac{2e^{2t}}{(1+e^{2t})^2} + \frac{C_0}{2}e^{-t}.$

Integrating these first-order ODEs we obtain

(54) $u_1(t) = \frac{C_1}{1+e^{2t}} - \frac{C_2}{1+e^{2t}} - C_2\ln(1+e^{2t}) - \frac{C_0}{2}e^{t},$

(55) $u_2(t) = 2C_1 t + \frac{C_1}{1+e^{2t}} - C_1\ln(1+e^{2t}) - \frac{C_2}{1+e^{2t}} - \frac{C_0}{2}e^{-t}.$

Substituting these into (51) we obtain the general solution of (50) as

(56) $\rho(t) = (C_1 - C_2 + C_3)e^{-t} + (2C_1 t + C_4)e^{t} - (C_1 e^{t} + C_2 e^{-t})\ln(1+e^{2t}) - C_0.$

Noting that $t = \ln|\tan\frac r2|$ we have

$\rho(r) = (C_1 - C_2 + C_3)\big|\cot\tfrac r2\big| + \big(2C_1\ln\big|\tan\tfrac r2\big| + C_4\big)\big|\tan\tfrac r2\big| - \big(C_1\big|\tan\tfrac r2\big| + C_2\big|\cot\tfrac r2\big|\big)\ln\big(1+\tan^2\tfrac r2\big) - C_0,$

where $C_1, C_2, C_3, C_4$ are constants. This completes the proof of the theorem.

Example 1. The map $\varphi : (S^2, dr^2+\sin^2(r)\,d\theta^2) \longrightarrow (N^2, d\rho^2+(\rho^2+2C_0\rho+C)\,d\phi^2)$ with $C > 0$ and $\varphi(r,\theta) = \big(\big|\cot\tfrac r2\big|\big[1 + \ln\big(1+\tan^2\tfrac r2\big)\big],\ \theta\big)$ is a proper biharmonic map. This is obtained from Theorem 4.5 with $C_0 = C_1 = C_3 = C_4 = 0$, $C_2 = -1$, so that (45) becomes $\rho(r) = \big|\cot\tfrac r2\big|\big[1 + \ln\big(1+\tan^2\tfrac r2\big)\big]$.

Remark 6.
(i) Note that the stereographic projections $\phi : (S^2\setminus\{N\}, dr^2+\sin^2(r)\,d\theta^2) \longrightarrow (\mathbb{R}^2, d\rho^2+\rho^2\,d\phi^2)$ with $\phi(r,\theta) = (\cot\frac r2, \theta)$ and $\phi : (S^2\setminus\{S\}, dr^2+\sin^2(r)\,d\theta^2) \longrightarrow (\mathbb{R}^2, d\rho^2+\rho^2\,d\phi^2)$ with $\phi(r,\theta) = (\tan\frac r2, \theta)$ are among the maps in the family provided by Theorem 4.5. It is well known that these are harmonic maps.

(ii) We notice that the solution does not depend on $C$ in the prescribed metric $d\rho^2 + (\rho^2+2C_0\rho+C)\,d\phi^2$ on the target manifold. One can easily check that the Gauss curvature of the metric $d\rho^2 + (\rho^2+2C_0\rho+C)\,d\phi^2$ is given by $K = \frac{C_0^2 - C}{(\rho^2+2C_0\rho+C)^2}$. This allows us to construct examples of local proper biharmonic maps from a 2-sphere into a surface with curvature of any fixed sign by a suitable choice of $C$.

(iii) Note that none of the locally defined proper biharmonic maps from $S^2$ given in Theorem 4.5 can be extended to a global map $\varphi : (S^2, dr^2+\sin^2(r)\,d\theta^2) \longrightarrow (N^2, d\rho^2+(\rho^2+2C_0\rho+C)\,d\phi^2)$. If otherwise, we could choose $C$ so that $C > C_0^2$ and hence the Gauss curvature of the target surface would be negative, as we mentioned in (ii). This would contradict a theorem of Jiang stating that any biharmonic map from a compact manifold into a non-positively curved manifold has to be harmonic.

As a closing remark, we would like to point out that our results (Theorems 3.1, 4.4, 4.5, Corollary 4.2, and Remarks 4, 5, 6) seem to suggest the following Conjecture: any biharmonic map $S^2 \longrightarrow (N^n, h)$ is a weakly conformal immersion.

… consider the function $\varphi(t) = 3t^3 + 13t^2 - t + 1$ defined on the interval $[0, +\infty)$. It is an elementary exercise to check that the absolute minimum value of this function over the interval $[0, \infty)$, attained at $t = \frac{\sqrt{178}-13}{9}$, is $\frac{4}{243}\big(1247 - 89\sqrt{178}\big) > 0$. It follows that Equation (35) has no positive solution. This implies that Equation (34) has no real solution and hence Equation (32) has no solution in this case.

References
L. Alias, S. Garcia-Martinez and M. Rigoli, Biharmonic hypersurfaces in complete Riemannian manifolds, Pacific J. Math. 263 (2013), no. 1, 1-12.
P. Baird and D. Kamissoko, On constructing biharmonic maps and metrics, Ann. Global Anal. Geom. 23 (2003), no. 1, 65-75.
P. Baird, A. Fardoun and S. Ouakkas, Conformal and semi-conformal biharmonic maps, Ann. Global Anal. Geom. 34 (2008), 403-414.
P. Baird, A. Fardoun and S. Ouakkas, Liouville-type theorems for biharmonic maps between Riemannian manifolds, Adv. Calc. Var. 3 (2010), 49-68.
A. Balmus, S. Montaldo and C. Oniciuc, Classification results for biharmonic submanifolds in spheres, Israel J. Math. 168 (2008), 201-220.
A. Balmus, S. Montaldo and C. Oniciuc, Biharmonic maps between warped product manifolds, J. Geom. Phys. 57 (2007), no. 2, 449-466.
A. Balmus, S. Montaldo and C. Oniciuc, Biharmonic PNMC submanifolds in spheres, arXiv:1110.4258, preprint 2011.
F. Burstall and S. Salamon, Tournaments, flags and harmonic maps, Math. Ann. 277 (1987), 249-265.
F. Burstall and J. C. Wood, The construction of harmonic maps into complex Grassmannians, J. Diff. Geom. 23 (1986), 255-297.
R. Caddeo, S. Montaldo and C. Oniciuc, Biharmonic submanifolds of S^3, Internat. J. Math. 12 (2001), no. 8, 867-876.
R. Caddeo, S. Montaldo and C. Oniciuc, Biharmonic submanifolds in spheres, Israel J. Math. 130 (2002), 109-123.
B. Y. Chen, Some open problems and conjectures on submanifolds of finite type, Soochow J. Math. 17 (1991), no. 2, 169-188.
B. Y. Chen and S. Ishikawa, Biharmonic pseudo-Riemannian submanifolds in pseudo-Euclidean spaces, Kyushu J. Math. 52 (1998), no. 1, 167-185.
B. Y. Chen and M. Munteanu, Biharmonic ideal hypersurfaces in Euclidean spaces, Diff. Geom. Appl. 31 (2013), no. 1, 1-16.
S. S. Chern and S. I. Goldberg, On the volume-decreasing property of a class of real harmonic mappings, Amer. J. Math. 97 (1975), 133-147.
I. Dimitric, Submanifolds of E^m with harmonic mean curvature vector, Bull. Inst. Math. Acad. Sinica 20 (1992), no. 1, 53-65.
J. Eells and L. Lemaire, Selected topics in harmonic maps, CBMS 50, Amer. Math. Soc. (1983).
J. Eells and J. H. Sampson, Harmonic mappings of Riemannian manifolds, Amer. J. Math. 86 (1964), 109-160.
J. Eells and A. Ratto, Harmonic maps and minimal immersions with symmetries. Methods of ordinary differential equations applied to elliptic variational problems, Annals of Mathematics Studies 130, Princeton University Press, Princeton, NJ, 1993.
J. Eells and J. C. Wood, The existence and construction of certain harmonic maps, Symposia Mathematica, Vol. XXVI (Rome, 1980), pp. 123-138, Academic Press, London-New York, 1982.
L. Fernández, The dimension and structure of the space of harmonic 2-spheres in the m-sphere, 175 (2012), no. 3, 1093-1125.
T. Hasanis and T. Vlachos, Hypersurfaces in E^4 with harmonic mean curvature vector field, Math. Nachr. 172 (1995), 145-169.
G. Y. Jiang, 2-harmonic maps and their first and second variational formulas, Chin. Ann. Math. Ser. A 7 (1986), 389-402.
G. Y. Jiang, Some non-existence theorems of 2-harmonic isometric immersions into Euclidean spaces, Chin. Ann. Math. Ser. 8A (1987), 376-383.
G. Y. Jiang, 2-harmonic isometric immersions between Riemannian manifolds, Chinese Ann. Math. Ser. A 7 (1986), no. 2, 130-144.
E. Loubeau and C. Oniciuc, On the biharmonic and harmonic indices of the Hopf map, Trans. Amer. Math. Soc. 359 (2007), no. 11, 5239-5256.
E. Loubeau and Y.-L. Ou, Biharmonic maps and morphisms from conformal mappings, Tohoku Math. J. 62 (2010), no. 1, 55-73.
S. Montaldo and C. Oniciuc, A short survey on biharmonic maps between Riemannian manifolds, Rev. Un. Mat. Argentina 47 (2006), no. 2, 1-22 (2007).
S. Montaldo and A. Ratto, A general approach to equivariant biharmonic maps, Mediterr. J. Math., to appear. DOI 10.1007/s00009-012-0207-3.
N. Nakauchi and H. Urakawa, Biharmonic submanifolds in a Riemannian manifold with non-positive curvature, Results Math., 2011. DOI 10.1007/s00025-011-0209-7.
N. Nakauchi, H. Urakawa and S. Gudmundsson, Biharmonic maps into a Riemannian manifold of non-positive curvature, preprint 2012, arXiv:1201.6457.
Y.-L. Ou, p-Harmonic morphisms, biharmonic morphisms, and nonharmonic biharmonic maps, J. Geom. Phys. 56 (2006), no. 3, 358-374.
Y.-L. Ou, On conformal biharmonic immersions, Ann. Global Anal. Geom. 36 (2009), no. 2, 133-142.
Y.-L. Ou, Biharmonic hypersurfaces in Riemannian manifolds, Pacific J. Math. 248 (2010), no. 1, 217-232.
Y.-L. Ou, Some constructions of biharmonic maps and Chen's conjecture on biharmonic hypersurfaces, J. Geom. Phys. 62 (2012), 751-762.
Y.-L. Ou, Biharmonic conformal immersions into 3-dimensional manifolds, arXiv:1209.2104, preprint 2012.
Y.-L. Ou and S. Lu, Biharmonic maps in two dimensions, Ann. Mat. Pura Appl. 192 (2013), 127-144.
Y.-L. Ou and L. Tang, On the generalized Chen's conjecture on biharmonic submanifolds, Michigan Math. J. 61 (2012), 531-542.
Y.-L. Ou and Z.-P. Wang, Constant mean curvature and totally umbilical biharmonic surfaces in 3-dimensional geometries, J. Geom. Phys. 61 (2011), 1845-1853.
S. Ouakkas, Biharmonic maps, conformal deformations and the Hopf maps, Differential Geom. Appl. 26 (2008), no. 5, 495-502.
C. Peng and Z. Tang, Dilation of maps between spheres, Pacific J. Math. 204 (2002), no. 1, 209-222.
J. Sacks and K. Uhlenbeck, The existence of minimal immersions of 2-spheres, Ann. of Math. 113 (1981), 1-24.
F. Smith, On the existence of embedded minimal 2-spheres in the 3-sphere, endowed with an arbitrary metric, Thesis, University of Melbourne, 1983.
R. T. Smith, Harmonic mappings of spheres, Amer. J. Math. 97 (1975), 364-385.
L. Tang and Y.-L. Ou, Biharmonic hypersurfaces in a conformally flat space, Results Math. 64 (2013), no. 1, 91-104.
K. Uhlenbeck, Harmonic maps into Lie groups (classical solutions of the chiral model), J. Diff. Geom. 30 (1989), no. 1, 1-50.
Z.-P. Wang and Y.-L. Ou, Biharmonic Riemannian submersions from 3-manifolds, Math. Z. 269 (2011), 917-925.
J. C. Wood, Harmonic maps and complex analysis, Proc. Summer Course in Complex Analysis, Trieste, 1975 (IAEA, Vienna, 1976), vol. III, 289-308.
J. C. Wood, The explicit construction and parametrization of all harmonic maps from the two-sphere to a complex Grassmannian, J. Reine Angew. Math. 386 (1988), 1-31.
J. C. Wood, Explicit construction and parametrization of harmonic two-spheres in the unitary group, Proc. London Math. Soc. s3-58 (1989), no. 3, 608-624.
A simplified electro-chemical lithium-ion battery model applicable for in situ monitoring and online control

Yuxuan Gu (1), Jianxiao Wang (2), Yuanbo Chen (1), Zhongwei Deng (3), Hongye Guo (1), Kedi Zheng (1), Qixin Chen (1,*)

(1) Department of Electrical Engineering, Tsinghua University, Beijing 100084, China
(2) School of Electrical and Electronic Engineering, North China Electric Power University, Beijing 102206, China
(3) College of Mechanical and Vehicle Engineering, Chongqing University, Chongqing 400044, China

Keywords: discrete-time state-space equations; electro-chemical model; lithium-ion battery; model simplification; state estimation

Abstract: The penetration of lithium-ion batteries (LIBs) in transport, energy and communication systems is increasing rapidly. A meticulous LIB model applicable for precise in situ monitoring and convenient online control is sought to bridge the gap between research and applications. On the basis of the classic pseudo-two-dimensional (P2D) model, a simplified electro-chemical model for LIBs that is adaptive to variant working environments and materials is proposed. Specifically, a bottom-up approach is adopted to decompose the complex P2D model into decoupled sub-models, including the time-variant parameter model, solution-phase migration model, solid-phase diffusion model, reaction distribution model and output model. The simplification schemes of different sub-models are developed independently and finally reassembled. For ease of online simulation and control in real-world implementations, a discrete-time state-space realization of the proposed model is derived. A full-cycle simulation framework, including the initialization process, stabilization method and closed-loop correction scheme, is designed as well. Numerical experiments for the commonly used NCM and LFP cells in different operating scenarios demonstrate that the proposed model can accurately predict battery output along with the spatial distribution of internal states with limited computation resources, which provides opportunities for degradation analysis and meticulous management of LIBs in practice.

* Corresponding author. ORCID: 0000-0002-3733-8641 (Q. Chen)

DOI: 10.2139/ssrn.4164370; arXiv: 2111.03288 (https://arxiv.org/pdf/2111.03288v2.pdf)
Introduction

Lithium-ion batteries (LIBs) have become the dominant energy source in various applications, such as electric vehicles and grid-level energy storage. The advantages of LIBs include high power and energy density, long lifespan, high efficiency, low self-discharge, and no memory effect. With the widespread usage of LIBs, security and economic concerns are rising rapidly as well. Generally, LIBs are monitored and controlled by a battery management system (BMS) to achieve safe, efficient and reliable operation. The BMS estimates the state of charge (SOC), state of power (SOP) and state of health (SOH) of LIBs based on the battery model and output measurements and then generates optimal control actions. For an advanced BMS, the underlying battery model should have the following features. First, it should be able to provide information on internal states such as potentials, Li+ concentrations, reaction rates, etc., for meticulous management. Second, it should be easily converted to state-space representations for online control. Last, it should be simple, with low requirements on the processor and memory, since an LIB pack can usually contain hundreds of cells. Existing LIB models can be categorized into three groups: data-driven, empirical, and electro-chemical.
Data-driven models (black-box models) are usually fitted on experimental data by statistical methods to predict battery dynamics [1]. Empirical models usually refer to the equivalent circuit model (ECM), which uses a series of resistors and capacitors to mimic a battery [2,3,4]. Electro-chemical models use a set of partial differential and algebraic equations (PDAEs) to depict chemical and physical processes at the microscale inside the battery cell [5]. Compared with the former two groups, electro-chemical models can give mechanistic interpretations of the battery and are adaptive to a wide range of working scenarios. However, electro-chemical models are usually complex and difficult to transform into state-space models for control, which impedes their widespread usage. A representative of electro-chemical models is the so-called P2D model, which was proposed by Doyle et al. [5] and later became the original source of subsequent models. Based on porous electrode theory and concentrated solution theory, the P2D model depicts the diffusion/migration of ions in the electrode/electrolyte and their intercalation at the solid-solution interface. Since the seminal work of [5], many works have refined the P2D model, such as by incorporating the double-layer capacitance [6,7,8], constant-phase-element dynamics [7], ageing factors [8], and varying parameters [9,10]. However, these refinements further increase the complexity. To enable the practical usage of electro-chemical models, a plethora of works have focused on model reduction techniques, which can be categorized into three approaches: numerical, analytical, and hybrid. Numerical approaches focus on developing highly efficient computation methods for the PDAEs in the P2D model. Mathematically, the PDAEs are spatially discretized into ordinary differential and algebraic equations and then solved iteratively.
For the discretization process, finite-difference [11], control-volume formulation [9], Crank-Nicolson [12], forward time-central space approximation [13] and asymptotic reduction [14] methods have been proposed. Ref. [15] used proper orthogonal decomposition to solve the whole model, which was also used by [16] to calculate solid-phase potentials. Ref. [17] developed a solution scheme based on singular perturbation and averaging theory. However, numerically reduced models still have high orders (30-100). In addition, the lack of a control-oriented formulation precludes online implementation of these models. Analytical approaches aim to find approximate expressions for the concerned states in the battery, which are obtained by either intuitive assumptions or rigorous derivation. To approximate the solution-phase Li+ concentration, constants [18,19], parabolic or cubic polynomials [20,21,22,23,24,25] and residue grouping [16,26,27] are commonly used. Sinusoidal and exponential functions were also tried in [28,29]. To approximate the solid-phase surface Li+ concentration, existing research can be categorized into three approaches. The first approach simplifies the transfer function of the solid-phase surface Li+ concentration in the frequency domain and then obtains its reduced state-space realization. The representative method is the Padé approximation, which was first proposed by Forman et al. [30] and then widely used in subsequent research [10,16,22,26,28,31]. It uses a rational polynomial to approximate the original transfer function by Taylor expansion. Since the Padé approximation is accurate only at low frequencies, Refs. [32,33] determined the coefficients of the rational polynomials by fitting the frequency response over a wider frequency band. Refs. [21,34] determined the coefficients by fitting the state trajectories in the time domain.
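To make the first approach concrete: the quantity being reduced is the transfer function from surface flux to surface Li+ concentration for Fickian diffusion in a spherical particle of radius R and diffusivity D_s, commonly written in the transcendental form G(s) = (R/D_s) tanh(beta)/(tanh(beta) - beta) with beta = R sqrt(s/D_s); its leading low-frequency behaviour is the integrator-plus-constant -3/(Rs) - R/(5 D_s), exactly the kind of low-order rational form a Padé approximant retains. The sketch below compares the two numerically. The formula is the standard one for spherical diffusion, but the parameter values are illustrative only and not taken from this paper or its references:

```python
import cmath

R, Ds = 1e-5, 1e-14      # particle radius [m] and solid diffusivity [m^2/s], illustrative

def G_exact(s):
    # transcendental transfer function from surface flux to surface concentration
    beta = R * cmath.sqrt(s / Ds)
    return (R / Ds) * cmath.tanh(beta) / (cmath.tanh(beta) - beta)

def G_lowfreq(s):
    # leading low-frequency terms: an integrator plus a constant offset
    return -3.0 / (R * s) - R / (5.0 * Ds)

rels = []
for w in (1e-7, 1e-6, 1e-5):  # rad/s, well below the diffusion corner Ds/R^2 = 1e-4
    s = 1j * w
    rels.append(abs(G_exact(s) - G_lowfreq(s)) / abs(G_exact(s)))
print(rels)  # relative error grows with frequency but stays small in this band
```

A higher-order Padé approximant appends further rational terms in s, trading model order for usable bandwidth; that trade-off is what the works cited above negotiate.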
Upon state-space realization, these models are generally equivalent to the combination of several first-order inertial processes. Note also that Refs. [33,35] have in recent years used fractional-order representations to replace the first-order processes to achieve higher accuracy. The second approach uses the realization algorithm (xRA) to directly obtain the state-space representation [7,36]. The xRA generates a discrete-time state-space realization whose unit-pulse response is similar to that of the original transfer function. The third approach works directly in the time domain and assumes a polynomial distribution of Li+ concentrations along the r-axis [23,24,25,29,37,38]. To approximate the reaction rate, constants, stepwise lines [33], parabolic polynomials [39,40,41] and cubic polynomials [42] have been proposed. To conclude, analytical approaches have generally focused on approximating Li+ concentrations but have rarely discussed the reaction rate in detail. Hybrid approaches treat some part of the model with numerical methods and the remaining part with analytical methods, e.g., applying the Padé approximation for the solid phase and finite differences for the solution phase [43,44]. Upon reviewing the aforementioned research and attempting several highly cited models, some problems emerge to be solved. First, for solution-phase Li+ concentrations, previous works have mainly focused on the spatial distribution. To develop a control-oriented model, time-trajectory modelling and discrete-time state-space realization require consideration. Second, for solid-phase Li+ concentrations, we find that the simplified state-space representations obtained by the Padé approximation, frequency-response or time-series fitting, or the xRA are likely to suffer from oscillations. Developing an accurate and stable method is necessary for practical usage.
Third, for the reaction rate distribution, polynomial approximations used by existing models are based on intuitive assumptions and are not adaptive to various scenarios. As the key to in situ monitoring, degradation prediction and lumped-state estimation (SOC, SOP, SOH) [45], a rigorous mathematical formula is sought. Fourth, for the whole cell, a model considering coupled electrical, chemical, physical and thermal dynamics along with time-variant parameters, and a full-cycle simulation framework containing the initialization process, stabilization method and closed-loop correction scheme, are desired for real-world applications. To bridge the gaps mentioned above, a high-fidelity simplified electro-chemical model along with a simulation framework are proposed in this work. Parameters sensitive to temperatures or concentrations are extracted and then modelled by the Arrhenius law and empirical formulas. By taking the ensemble average strategy, the solution-phase migration is simplified with two coupled first-order inertial processes derived from mass conservation and Fick's law. For solid-phase diffusion, we also take the ensemble average strategy and find suitable time constants of first-order inertial processes to approximate the surface Li+ concentrations that achieve a balance between accuracy, stability and simplicity.

Nomenclature
α_a  anodic transfer coefficient (dimensionless)
α_c  cathodic transfer coefficient (dimensionless)
η    over-potential of reaction (V)
κ    ionic conductivity (S/m)
κ_D  diffusional conductivity (J/C)
Φ    electrical potential (V)
ρ    density (kg/m^3)
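Each first-order inertial process used here has an exact zero-order-hold discrete-time realization, which is what makes this family convenient for the discrete-time state-space form pursued in this work. A minimal sketch, in which the time constant and sample time are illustrative numbers rather than parameters of the proposed model:

```python
import math

tau, dt = 50.0, 1.0              # time constant [s], sample time [s] (illustrative)
a = math.exp(-dt / tau)          # exact zero-order-hold discretization factor

x, u = 0.0, 1.0                  # state and (constant) input
xs = [x]
for _ in range(200):
    # first-order inertial process: tau * dx/dt + x = u
    x = a * x + (1 - a) * u
    xs.append(x)

# compare against the continuous-time step response x(t) = 1 - exp(-t/tau)
errs = [abs(xs[k] - (1 - math.exp(-k * dt / tau))) for k in range(len(xs))]
print(max(errs))  # exact discretization: error at machine precision
```

Because the discretization is exact for piecewise-constant inputs, the accuracy of such a sub-model is governed entirely by how well the chosen time constants capture the underlying diffusion dynamics, not by the sampling scheme.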
For the output, the terminal voltage is derived based on the obtained reaction rate distribution formula, and the cell temperature is derived based on a lumped thermal model. The above sub-models are assembled to obtain the final simplified model and its discrete-time state-space realization. For full-cycle simulation of the battery, a initialization process, stabilization method and closed-loop correction scheme composed of basic operators or simple optimization that require low computational resources are designed. For validation, the proposed model is compared with the full-order P2D model, a classic ESP model and a well-cited advanced ESP model [21,40] under different working scenarios for commonly used NCM and LFPO cells. The contributions of this work are fourfold. • A bottom-up approach is designed to construct the simplified electro-chemical lithium-ion battery model by decomposing the sophisticated P2D model into decoupled sub-models, including the time-variant parameter model, solution-phase migration model, solid-phase diffusion model, reaction distribution model and output model, which makes the model not only adaptive to variant working environments and materials but also reserves potential for future upgrades. • Decoupled sub-models are simplified independently according their specific characteristics. The ensemble average strategy is used to derive the simplified solution-phase migration model and solid-phase diffusion model. The rigorous mathematical expression of the reaction rate distribution is derived by simultaneously solving chemical equations and electrical equations. The terminal voltage is derived based on the obtained reaction rate distribution formula, and the cell temperature is derived based on a lumped thermal model. • Decoupled sub-models are assembled to obtain the final battery model. For ease of online simulation and control in real-world implementations, a discrete-time state-space realization of the model is derived. 
A full-cycle simulation framework including the initialization process, stabilization method and closed-loop correction scheme that requires low computational resources is designed. • For validation, comprehensive numerical experiments are conducted. Specifically, the proposed model is tested under different scenarios for commonly used NCM and LFPO cells, including galvanostatic current and dynamic current protocols, low-temperature and high-temperature environments, and low and high C-rates. Comparison against two highly cited ESP models reveals the superiority of this work, which provides opportunities for degradation analysis and meticulous management of batteries in practice. The rest of this paper is organized as follows: Section 2 introduces the bottom-up approach to construct the simplified lithium-ion battery model. Section 3 describes the discrete-time state-space realization of the model and designs a full-cycle simulation framework, including an initialization process, stabilization method and closed-loop correction scheme. Section 4 presents the results of numerical experiments. Section 5 draws conclusions. Bottom-up modelling approach The simplified model is used to provide internal states of the battery for in situ monitoring, so we start from the classic full-order P2D model instead of the ECM. By taking a bottom-up approach, the sophisticated P2D model is first decomposed into decoupled sub-models and then reassembled to obtain the final simplified model. Generally, the P2D model is appropriate for a battery with the following settings: • The electrodes have a porous structure where the solid phase is mainly composed of active particles and the solution phase is filled by the electrolyte. The separator is a micro-porous polymer membrane that insulates electrons but allows ions to pass.
• During charge, lithium ions deintercalate from active particles in the positive electrode, migrate through the electrolyte and pass through the separator, finally intercalating into active particles in the negative electrode. Meanwhile, electrons are transported through the external circuit from the positive current collector to the negative one. During discharge, ions and electrons are transported in the reverse directions. • The diffusion of Li + in the active particle and the migration of Li + in the electrolyte obey Fick's second law, i.e., the cell should be made of intercalation electrode materials such as LiNi$_x$Mn$_y$Co$_{1-x-y}$O$_2$, LiFePO$_4$, LiCoO$_2$, LiMn$_2$O$_4$, and graphite (LiC$_6$), and the electrolyte should satisfy concentrated solution theory, such as the commonly used PC-EC-DMC solvent [32]. According to existing industrial practice, many commercial LIBs meet the above requirements, thus guaranteeing the applicability of the P2D model and its derivatives. Typically, whether the battery cell is cylindrical or prismatic, its micro-structure is sandwich-like, i.e., the terminals of the cell are current collectors connecting the external circuit, between which lie three domains in order: negative electrode, separator and positive electrode, as shown in Fig. 1. The basic formulas of the P2D model are given below, where Eqs. (1) and (2) depict the diffusion of Li + in the solution phase and solid phase, respectively, and Eqs. (3)-(5) establish the spatial distribution of the chemical reaction rates and potentials across the thickness direction:
$$\epsilon_e^{\pm}\,\frac{\partial c_e}{\partial t} = \frac{\partial}{\partial x}\Big(D_e^{\mathrm{eff}}\,\frac{\partial c_e}{\partial x}\Big) + a^{\pm}\,(1 - t_+^{0})\, j^{\pm}, \tag{1}$$
$$\frac{\partial c_s}{\partial t} = \frac{D_s}{r^2}\,\frac{\partial}{\partial r}\Big(r^2\,\frac{\partial c_s}{\partial r}\Big), \tag{2}$$
$$j = \frac{i_0}{F}\Big[\exp\Big(\frac{\alpha_a F \eta}{RT}\Big) - \exp\Big(-\frac{\alpha_c F \eta}{RT}\Big)\Big], \tag{3}$$
$$\frac{\partial}{\partial x}\Big(\sigma^{\mathrm{eff}}\,\frac{\partial \Phi_s}{\partial x}\Big) = a F j, \tag{4}$$
$$\frac{\partial}{\partial x}\Big(\kappa^{\mathrm{eff}}\,\frac{\partial \Phi_e}{\partial x} + \kappa_D^{\mathrm{eff}}\,\frac{\partial \ln c_e}{\partial x}\Big) = -a F j. \tag{5}$$
In a bottom-up approach, we separately establish six simplified analytical sub-models, depicting the temperature- or concentration-dependent parameters, solution-phase migration, solid-phase diffusion, reaction rate distribution, potential distribution and thermal conservation.
Finally, they are reassembled to form the final LIB model, which is applicable for in situ monitoring and online control simultaneously. Parameters Generally, parameters involved in an electro-chemical battery model can be categorized into two groups: parameters related to the manufacturing and parameters related to the physical or chemical properties of the battery material, as shown in Fig. 2a. To improve the model fidelity, we further divide property parameters into constant and time-variant parameters. The splitting criterion is whether the parameter is affected by the Li + concentration or the temperature. In this part, the modelling of time-variant parameters is introduced in detail. First, we introduce the modelling of variant parameters in the solution phase. Existing commercial LIBs commonly use a similar mixture solvent as the electrolyte, e.g., PC-EC-DMC. Thus, many studies have investigated such electrolyte solvents deeply and have provided empirical formulas to describe their diffusion and conductivity properties [20,23,46,47,48], which are adopted in this work as well. Second, we introduce the modelling of time-variant parameters in the solid phase. Different from the electrolyte, the specific values of the solid-phase parameters differ across the materials used by a specific battery. To maintain the generality of the proposed model, general expressions that capture the basic characteristics of commonly used intercalation materials are designed. According to previous research, the solid-phase diffusion coefficient $D_s$ is related to the Li + concentration in the active particle and the cell temperature [49]. For simplicity, we decouple the impact of the two factors. By denoting the bulk-averaged concentration of Li + in an active particle by $\bar c_s$, a linear approximation formula is adopted to describe the relation between $D_s$ and $\bar c_s$ first, as shown in the first formula of Eq. (8):
$$D_s = k\,\frac{\bar c_s}{c_{s,\max}} + b, \qquad k = k_{\mathrm{ref}}\exp\Big(-\frac{E_{a,k}}{R}\Big(\frac{1}{T} - \frac{1}{T_{\mathrm{ref}}}\Big)\Big), \qquad b = b_{\mathrm{ref}}\exp\Big(-\frac{E_{a,b}}{R}\Big(\frac{1}{T} - \frac{1}{T_{\mathrm{ref}}}\Big)\Big).$$
(8) Next, the thermodynamic variation of $D_s$ is introduced by applying the Arrhenius law to the linearity coefficients $k$ and $b$, as shown in the last two formulas of Eq. (8). Last, we introduce the modelling of variant parameters depicting the intercalation reaction occurring at the interface of the solid phase and solution phase. The chemical kinetics parameter, the reaction rate coefficient $k_r$, determines how fast the reaction takes place and obeys the Arrhenius law when the temperature varies. Denoting the activation energy by $E_{a,r}$ and the value at the reference temperature by $k_{r,\mathrm{ref}}$, the formula of $k_r$ is given by:
$$k_r = k_{r,\mathrm{ref}}\exp\Big(-\frac{E_{a,r}}{R}\Big(\frac{1}{T} - \frac{1}{T_{\mathrm{ref}}}\Big)\Big). \tag{9}$$
The chemical thermodynamics parameter, the equilibrium potential $U_{\mathrm{OCP}}$, determines whether the reaction can take place. Different from $k_r$, $U_{\mathrm{OCP}}$ is mainly determined by the Li + stoichiometry at the surface of active particles. However, the relationship between $U_{\mathrm{OCP}}$ and the stoichiometry is non-linear and complex. Thus, we construct look-up tables for commonly used active materials in this work, as shown in Fig. 2b. Once the surface Li + stoichiometry is determined, the corresponding $U_{\mathrm{OCP}}$ can be obtained by interpolation in the curves. The original data are extracted from the experiment [50]. Note that $U_{\mathrm{OCP}}$ is also slightly affected by the temperature. However, previous studies found that the order of magnitude of the change in $U_{\mathrm{OCP}}$ with temperature, $\mathrm{d}U_{\mathrm{OCP}}/\mathrm{d}T$, is approximately $10^{-4}$ V/K [47]. This impact is neglected in this work for simplicity. The remaining parameters are assumed to be constant values. In the long term, some of them can vary with degradation; however, this work mainly focuses on real-time simulation, and degradation identification and analysis will be investigated in future research. Note that the modelling of the parameters introduced above is appropriate for the negative electrode, positive electrode and separator domains, so the superscripts +, − and sep are omitted for notation simplicity.
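The Arrhenius-type corrections in Eqs. (8) and (9) share one functional form, $X = X_{\mathrm{ref}}\exp(-(E_a/R)(1/T - 1/T_{\mathrm{ref}}))$. A minimal sketch of how such time-variant parameters can be evaluated (the function names and the numeric values are illustrative, not the paper's calibrated parameters):

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def arrhenius(x_ref, e_act, temp, temp_ref=298.15):
    """Arrhenius correction as in Eq. (9): scale a parameter known at the
    reference temperature to the current cell temperature."""
    return x_ref * math.exp(-(e_act / R_GAS) * (1.0 / temp - 1.0 / temp_ref))

def solid_diffusivity(theta_avg, k_ref, b_ref, e_act_k, e_act_b, temp):
    """Linear-in-stoichiometry model of Eq. (8): D_s = k*theta_avg + b,
    with both linearity coefficients Arrhenius-corrected;
    theta_avg is the bulk-averaged Li+ stoichiometry."""
    k = arrhenius(k_ref, e_act_k, temp)
    b = arrhenius(b_ref, e_act_b, temp)
    return k * theta_avg + b
```

At the reference temperature the correction factor is exactly one, and for a positive activation energy the parameter grows with temperature, which is the qualitative behaviour the model relies on.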
Solution-phase migration model Based on the law of material conservation, the migration of Li + in the electrolyte along the thickness is depicted by Eq. (1). In this section, we introduce the simplified migration model derived from Eq. (1). First, integrate the LHS and RHS of Eq. (1) along the x-axis over the electrode domain yields: 1 ± ∫ ± 0 ± ± ± ( , ) = ef f,± ( ) ( , ) | | | | ± 0 ± + (1 − 0 + ) ∫ ± 0 ± ± ( , ) .(10) As commonly adopted in existing research [21,23], parabolic polynomials are used in this work to approximate the spatial distribution of , i.e., ( , ) = − ( ) 2 + − ( ) for ∈ [0 − , − ] in the negative electrode and ( , ) = + ( )( − + ) 2 + + ( ) for ∈ [0 + , + ] in the positive electrode. Since the separator domain is very thin compared with the electrode domain, we apply linear approximation to represent ( , ) in this domain to avoid high complexity, i.e., ( , ) = sep ( ) + sep ( ) for ∈ [0 sep , sep ]. Now, we note that the numerator of the first term on the LHS of Eq. (10) is equal to the total quantity of Li + in the solution phase among the positive and negative electrode domains, denoted by ± ( ), respectively. Substituting the expressions of in the negative electrode and positive electrode into ± ( ) yields: ∫ ± 0 ± ± ± ( , ) = ± ( ) = ± ± 1 3 ± ( )( ± ) 3 + ± ( ) ±(11) We now turn to the RHS of Eq. (10). The first term equals 2 ± ± ef f,± ( ) ± ( ) by substituting the expressions of into the original formula. According to Faraday's law, the second term equals the difference between current densities in the solid phase at two sides of the electrode, i.e., ∫ ± ( ) dt = 2 ± ± ef f,± ( ) ± ( ) ∓ (1 − 0 + ) ( ).(12) Based on the material conservation law, the Li + concentration and flux are continuous at the boundaries between the negative electrode, the separator and the positive electrode, i.e., Eqs. 
(11) and (13) can be compacted to matrix form: ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣( − ) 3 ∕3 − 0 0 0 0 + 3 ∕3 + ⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦ ⎡ ⎢ ⎢ ⎢ ⎣ − ( ) − ( ) + ( ) + ( ) ⎤ ⎥ ⎥ ⎥ ⎦ = ⎡ ⎢ ⎢ ⎢ ⎣ 0 0 − ( )∕ − − + ( )∕ + + ⎤ ⎥ ⎥ ⎥ ⎦(14) We denote the matrix on the LHS of Eq. where −1 1,3 ( ) represents the element on the 1st row and 3rd column of the matrix −1 ( ). Since only cations (i.e., Li + ) are involved in the reaction, the mass conservation of anions always holds. By electroneutrality, the total quantity of cations in the solution phase, ,0 = ( − − − + + + + ) ,0 = − ( ) + + ( ), is constant at any time. Substituting this into Eq. For notational simplicity, Eq. (16) is written in: ± ( ) ± ( ) = − ± ( ) + ± ( ).(17) where ± ( ) and ± ( ) can be derived from Eq. (16). The time trajectories of ± are modelled by two coupled first-order inertial processes. Once ± are obtained, ( , ) at any point in the x-axis can be calculated by Eq. (14). To improve the model fidelity, the original diffusion coefficient of the electrolyte solvent is corrected by the Bruggeman relation for porous electrodes, i.e., ef f = ( ) . Solid-phase diffusion model For intercalated active materials, Li + diffuses along the radial direction of active particles according to Eq. (2). However, this adds another spatial coordinate, , to the model and increases the complexity; i.e., the solid-phase Li + concentration varies with , and synchronously. Actually, only the average Li + concentration and the surface Li + concentration of the active particle are considered in practice because the former determines the remaining charge in the battery, while the latter determines the reaction rate in the battery. To this end, many works have proposed different methods to simplify the solid-phase diffusion model as reviewed above, and their basic idea is similar, i.e., approximate the difference between the average and surface by a series of inertial processes. 
However, after attempting existing methods, we found that two challenges still remain. First, for in situ monitoring of the battery, a series of points must be selected along the thickness direction, and the diffusion model must be constructed at every point independently, which means that the model should be reduced to be as simple as possible. Second, we find that existing approximated models whose time constants are derived from the Padé approximation or the frequency response optimization are likely to suffer from oscillations. The same situation also arises for the xRA and volume-averaging methods. This problem may be caused by the relatively small time constants obtained from these methods. Thus, a simple first-order inertial process with tuned time constants is used to realize a trade-off between accuracy, stabilization, and simplicity. In this part, we introduce the derivation of the coefficients in the approximated inertial process. First, we derive the expression of the average solid-phase concentration. Multiplying the two sides of Eq. (2) by $4\pi r^2$ and then integrating both sides along the r-axis yields:
$$\frac{1}{4\pi R_s^2}\,\frac{\partial}{\partial t}\int_0^{R_s} 4\pi r^2\, c_s(r,x,t)\,\mathrm{d}r = D_s(x,t)\,\frac{\partial c_s(r,x,t)}{\partial r}\bigg|_{r=R_s}. \tag{18}$$
Note that the integral on the LHS of Eq. (18), $\int_0^{R_s} 4\pi r^2 c_s(r,x,t)\,\mathrm{d}r$, equals the total quantity of Li + in the active particle, which can also be represented by the bulk-averaged Li + concentration, denoted by $\bar c_s$: $\int_0^{R_s} 4\pi r^2 c_s(r,x,t)\,\mathrm{d}r = \frac{4}{3}\pi R_s^3\,\bar c_s(x,t)$. Based on material conservation, the Li + flux at the surface of the active particle is proportional to the pore-wall flux $j$, i.e., $D_s(x,t)\,\partial c_s(r,x,t)/\partial r\,|_{r=R_s} = -j(x,t)$. Substituting these two terms into Eq. (18) yields:
$$\frac{\partial \bar c_s(x,t)}{\partial t} = -\frac{3}{R_s}\, j(x,t). \tag{19}$$
To describe the surface solid-phase concentration, we first introduce an intermediate variable $q$ to depict the difference between the average and surface Li + concentrations. By Laplace transformation, the closed-form expression of $q$ in the frequency domain can be derived from Eq.
(2): in the steady state, $q(x,t) \to q_{\infty}(x,t) = -R_s\, j(x,t)/(5 D_s(x,t))$, indicating that $q$ gradually approaches $-R_s j/(5 D_s)$ in the time domain. For model simplicity, the transition of $q$ to its steady state is approximated by a first-order inertial process. The physical interpretation of this process is that it takes time for Li + inside the active particle to diffuse to the surface. Thus, the time constant is set in proportion to the ratio between the radius squared and the diffusion coefficient: $\tau_s(x,t) = A R_s^2/D_s(x,t)$. The transition equation is expressed by:
$$\tau_s(x,t)\,\frac{\partial q(x,t)}{\partial t} = -q(x,t) - \frac{R_s\, j(x,t)}{5 D_s(x,t)}, \tag{20}$$
where $A$ is a dimensionless coefficient fitting the approximated process to the actual process. Once $q$ is obtained, the surface solid-phase concentration, denoted by $c_{s,\mathrm{surf}}$, can be calculated directly:
$$c_{s,\mathrm{surf}}(x,t) = \bar c_s(x,t) + q(x,t). \tag{21}$$
The specific value of $A$ varies in different studies, e.g., $A = 1/35$ in the Padé approximation, $A = 1/30$ in the volume-averaging method, $A = 0.04356$ or $0.03459$ in [21], and $A = 0.0214$ in the frequency response optimization (in the frequency band $[10^{-4}, 10^{4}]$ Hz). By testing the above settings, we find that the results commonly suffer from oscillation except for [21], which indicates that a smaller $A$ is likely to bring instability to the model. However, a larger $A$ makes the model less accurate, especially under dynamic currents. In this work, $A$ is determined by fitting the experimental data to realize a trade-off between accuracy and stabilization. The simplified solid-phase model is appropriate for both the negative electrode and positive electrode, so the superscripts + and − are omitted for notation simplicity. Note also that in the following text, the average and surface solid-phase concentrations are sometimes replaced by the average and surface Li + stoichiometry for notational simplicity, denoted by $\bar\theta$ and $\theta_{\mathrm{surf}}$, respectively. The transformations between them are simple: $\bar\theta = \bar c_s/c_{s,\max}$ and $\theta_{\mathrm{surf}} = c_{s,\mathrm{surf}}/c_{s,\max}$, where $c_{s,\max}$ is the maximum concentration the active particle can store.
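Per checkpoint, the simplified solid-phase model therefore reduces to one average-concentration balance plus one relaxing difference state. A sketch of a single explicit update using the exact exponential solution of the first-order inertial process (the variable names and the positive-flux-equals-deintercalation sign convention are assumptions of this sketch, not the paper's exact implementation):

```python
import math

def step_particle(c_avg, q, j_flux, dt, radius, d_s, a_coeff):
    """One update of the simplified solid-phase model at a checkpoint:
    - average balance, Eq. (19): d(c_avg)/dt = -3*j/R  (j: molar flux),
    - difference state q = c_surf - c_avg relaxing towards -R*j/(5*D_s)
      with time constant tau = A*R^2/D_s (Eq. (20), exact exponential step),
    - surface concentration, Eq. (21): c_surf = c_avg + q."""
    tau = a_coeff * radius ** 2 / d_s
    q_ss = -radius * j_flux / (5.0 * d_s)
    c_avg_new = c_avg - 3.0 * j_flux / radius * dt
    decay = math.exp(-dt / tau)
    q_new = q_ss + (q - q_ss) * decay
    return c_avg_new, q_new, c_avg_new + q_new
```

At zero flux the difference state decays to zero and the surface concentration relaxes back to the average, which is the rest behaviour the approximation is meant to reproduce.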
Reaction rate distribution model Generally, the reaction rate is non-uniform along the thickness direction of the battery cell. However, it remains a challenge to express the spatial distribution of because it is determined by Eqs. (3)-(5) simultaneously. Thus, different from simplifying (1) and (2), as introduced in the above section, we need to consider the coupling between these formulas and design the specific simplification strategy. Chemical system First, we start from Eq. (3) and simplify the chemical system of the battery. Since Eq. (3) brings non-linearity to the model and increases its complexity, several approximations have been proposed for simplicity, e.g., the Tafel equation, linear current-potential equation, and hyperbolic sine approximation [51]. However, some methods sacrifice generality to some extent, especially under high currents. In this work, an adaptive linear approximation method that automatically adjusts the linear coefficients according to the actual applied current is designed. Thus, the simplified expression can be adaptive to critic conditions without too much loss of accuracy. For the applied current , we first calculate the average pore-wall flux in the negative electrode and positive electrode:̄ ± ( ) = ∓ ( ) ± ± ± . Then, we apply the first-order Taylor expansion at̄ ± on the inverse function of Eq. (3): ( , ) = 2 ( ) ln ⎛ ⎜ ⎜ ⎜ ⎝ ( , ) + √ 2 2 ( , ) + 4 2 0 ( ) 2 0 ( ) ⎞ ⎟ ⎟ ⎟ ⎠ ⇒ ( , ) ≈ ( ) ( , ) −̄ ( ) + ( ).(22) The closed-form expression of the linear coefficient ( ) in Eq. (22) is given by: ( ) = ( ) 0 ( ) √ 2̄ 2 ( ) 4 2 0 ( ) + 1 + ̄ ( ) 2 0 ( ) ̄ ( ) 2 0 ( ) √ 2̄ 2 ( ) 4 2 0 ( ) + 1 + 2̄ 2 ( ) 4 2 0 ( ) + 1 .(23) In the formula above, the exchange current densities in the negative electrode and positive electrode are expressed by ± 0 = ± (̄ ± ) ( ± ,max −̄ ± ) (̄ ± ) , wherē ± and̄ ± refer to the average concentrations across the electrode, respectively. 
Generally, the anodic and cathodic transfer coefficients, $\alpha_a$ and $\alpha_c$, are set at 0.5 because the proportions of the anodic and cathodic directions of the total intercalation reaction are assumed to be equal [52]. To build the coupling between the chemical equation and the electrical equations, we need to introduce variables directly related to the potential rather than using the intermediate variable $\eta$. By definition, the over-potential in Eq. (22) also equals $\Phi_{s-e} - U_{\mathrm{OCP}}$, where $\Phi_{s-e}$ denotes the potential difference between the solid phase and solution phase at the surface of the active particle. Substituting this equality into Eq. (22) and implementing the differential operation yields:
$$\frac{\partial \Phi_{s-e}(x,t)}{\partial x} = k(t)\,\frac{\partial j(x,t)}{\partial x} + \frac{\partial U_{\mathrm{OCP}}(x,t)}{\partial x}. \tag{24}$$
The formula above retains a term to be addressed, i.e., the differential of $U_{\mathrm{OCP}}$. As introduced in the text above, $U_{\mathrm{OCP}}$ is determined by the surface Li + stoichiometry $\theta_{\mathrm{surf}}$. Thus, the expression of $\partial U_{\mathrm{OCP}}(x,t)/\partial x$ can be fitted based on the knowledge of $\theta_{\mathrm{surf}}(x,t)$ along the thickness direction. To achieve a balance between complexity and accuracy, four points evenly distributed in each electrode along the thickness direction are selected as checkpoints; i.e., the coordinates of the checkpoints in the negative and positive electrodes are $0^{\pm}$, $L^{\pm}/3$, $2L^{\pm}/3$, and $L^{\pm}$. This helps the proposed model be applicable for in situ monitoring at four checkpoints in each electrode while remaining simple enough for online control in practical use. Through numerical experiments, we find that using a cubic polynomial can achieve an acceptable performance. The coefficients of the polynomial are fitted on $U_{\mathrm{OCP}}$ at the four checkpoints in each electrode. Denote the analytical expression of $U_{\mathrm{OCP}}$ by $U_{\mathrm{OCP}}(x,t) = a(t)x^3 + b(t)x^2 + c(t)x + d(t)$ and substitute it into Eq. (24):
$$\frac{\partial \Phi_{s-e}(x,t)}{\partial x} \approx k(t)\,\frac{\partial j(x,t)}{\partial x} + 3a(t)x^2 + 2b(t)x + c(t). \tag{25}$$
Notably, Eqs. (22)-(25) are appropriate for both the negative electrode and positive electrode, so the superscripts + and − are omitted for notation simplicity.
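With $\alpha_a = \alpha_c = 0.5$, the inverse B-V relation of Eq. (22) collapses to an asinh, and the adaptive linear coefficient is its derivative evaluated at the average flux. A sketch, assuming $j$ is the molar pore-wall flux; the slope below is the exact derivative of the asinh form, not transcribed from the paper's Eq. (23):

```python
import math

F_CONST = 96485.33  # Faraday constant, C/mol
R_GAS = 8.314       # gas constant, J/(mol K)

def overpotential(j, i0, temp):
    """Inverse Butler-Volmer relation of Eq. (22) for alpha_a = alpha_c = 0.5:
    eta = (2RT/F) * asinh(F*j / (2*i0)), with j the molar pore-wall flux
    and i0 the exchange current density."""
    return (2.0 * R_GAS * temp / F_CONST) * math.asinh(F_CONST * j / (2.0 * i0))

def linearize_bv(j_bar, i0, temp):
    """Adaptive linearization around the average flux j_bar:
    eta ~= k*(j - j_bar) + b, where k is d(eta)/dj at j_bar."""
    k = 2.0 * R_GAS * temp / math.sqrt(4.0 * i0 ** 2 + (F_CONST * j_bar) ** 2)
    b = overpotential(j_bar, i0, temp)
    return k, b
```

Because the slope is re-evaluated at the applied-current-dependent average flux, the linearization stays accurate at both low and high C-rates, which is the point of the adaptive scheme.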
Electrical system By analysing the chemical system, we have obtained the relationship between $\Phi_{s-e}$ and $j$. Now, we turn to simplifying the electrical system inside the battery. Note that Eq. (4) depicts the relationship between $\Phi_s$ and $j$, while Eq. (5) depicts the relationship between $\Phi_e$ and $j$. Thus, we try to couple these two equations to derive the relationship between $\Phi_{s-e}$ and $j$ in the electrical system. First, Eq. (4) can be decomposed into two equations by introducing a new variable representing the current density in the solid phase, denoted by $i_s$:
$$\frac{\partial \Phi_s(x,t)}{\partial x} = -\frac{i_s(x,t)}{\sigma^{\mathrm{eff}}}, \qquad \frac{\partial i_s(x,t)}{\partial x} = -a F\, j(x,t). \tag{26}$$
The left formula above is derived based on Ohm's law, and the right formula is derived based on Faraday's law. Similarly, Eq. (5) can be decomposed into two equations by introducing a new variable representing the current density in the solution phase, denoted by $i_e$:
$$\frac{\partial \Phi_e(x,t)}{\partial x} = -\frac{i_e(x,t)}{\kappa^{\mathrm{eff}}(t)} - \frac{\kappa_D^{\mathrm{eff}}(t)}{\kappa^{\mathrm{eff}}(t)}\,\frac{\partial \ln c_e(x,t)}{\partial x}, \qquad \frac{\partial i_e(x,t)}{\partial x} = a F\, j(x,t). \tag{27}$$
The second term in the left formula above represents the concentration polarization potential in the electrolyte, and the effective diffusional conductivity $\kappa_D^{\mathrm{eff}}$ is derived from concentrated solution theory, expressed by $\kappa_D^{\mathrm{eff}} = \frac{2RT\kappa^{\mathrm{eff}}}{F}(t_+^0 - 1)\big(1 + \frac{\partial \ln f_{\pm}}{\partial \ln c_e}\big)$, where $f_{\pm}$ is the mean molar activity coefficient. Generally, the term $\frac{\partial \ln f_{\pm}}{\partial \ln c_e}$ is assumed to be constant [32]. However, in this work, to improve model fidelity, a parabolic polynomial is used to fit the relationship between $\frac{\partial \ln f_{\pm}}{\partial \ln c_e}$ and $c_e$ based on the experimental data in [50]. The solid-phase and solution-phase conductivities $\sigma$ and $\kappa$ are corrected by the Bruggeman correction, i.e., $\sigma^{\mathrm{eff}} = \sigma\,\epsilon_s^{\mathrm{brug}}$, $\kappa^{\mathrm{eff}} = \kappa\,\epsilon_e^{\mathrm{brug}}$. Additionally, since Eqs. (26)-(27) are appropriate for both negative and positive electrodes, the superscripts + and − are omitted for notation simplicity. Integrating the right formulas in Eqs. (26)-(27) with the boundary conditions at the current collectors and the separator yields:
$$i_s(x,t) = \begin{cases} -a^{-}F\int_{0^{-}}^{x} j^{-}(\xi,t)\,\mathrm{d}\xi + I(t)/A^{-}, & x \in [0^{-}, L^{-}];\\[2pt] a^{+}F\int_{x}^{L^{+}} j^{+}(\xi,t)\,\mathrm{d}\xi + I(t)/A^{+}, & x \in [0^{+}, L^{+}], \end{cases}$$
$$i_e(x,t) = \begin{cases} a^{-}F\int_{0^{-}}^{x} j^{-}(\xi,t)\,\mathrm{d}\xi, & x \in [0^{-}, L^{-}];\\[2pt] -a^{+}F\int_{x}^{L^{+}} j^{+}(\xi,t)\,\mathrm{d}\xi, & x \in [0^{+}, L^{+}]. \end{cases}$$
Subtracting the left formulas in Eqs. (26)-(27) and substituting Eq.
(28) yields: Φ − ( , ) = ⎧ ⎪ ⎨ ⎪ ⎩ − ( ) − ef f,− + − 1 ef f ,− + 1 ef f,− ( ) ∫ 0 − ( , )d + ef f,− ( ) ef f,− ( ) ln( ( , )) , ∈ [0 − , − ]; − ( ) + ef f,+ − + 1 ef f ,+ + 1 ef f,+ ( ) ∫ + ( , )d + ef f,+ ( ) ef f,+ ( ) ln( ( , )) , ∈ [0 + , + ].(29) The last term in (29) still makes the derivation of analytical expressions intractable. According to numerical experiments, we find that ln( ( , )) can be approximated by: ln( ( , )) = ⎧ ⎪ ⎨ ⎪ ⎩ 2 − ( ) − ( ) 2 + − ( ) ≈ 2 − ( ) − ( ) , ∈ [0 − , − ]; 2 + ( )( − + ) + ( )( − + ) 2 + + ( ) ≈ 2 + ( )( − + ) + ( ) , ∈ [0 + , + ].(30) Mathematical representation By analysing the chemical system and electrical system, two independent equations depicting the relationship between Φ − and are obtained. We simultaneously solve them to derive the expression of . We denote the integration of ( , ) over the electrode by ( , ), i.e., − ( , ) = ∫ 0 − − ( , ) for the negative electrode and + ( , ) = ∫ + + ( , ) for the positive electrode. Combining (25) and (29) yields: ∓ ± 1 ( ) ± ( , ) ± ± 2 ( ) 2 ± ( , ) 2 + ± 3 ( ) 2 + ± 4 ( ) + ± 5 ( ) = 0.(31) In the formula above, ± 1 ( ) = ± 1 ef f ,± + 1 ef f ,± ( ) , ± 2 ( ) = ± ( ) + ± , ± 3 ( ) = −3 ± ( ), ± 4 ( ) = 2 ± ( ) ef f,± ( ) ± ( ) ef f ,± ( ) − 2 ± ( ), − 5 ( ) = − ( ) − ef f,− − − ( ), + 5 ( ) = − ( ) + ef f,+ − + ( ) − 2 + ( ) ef f ,+ ( ) + ( ) ef f,+ ( ) + . The boundary conditions of are equivalent to : − (0 − , ) = 0, − ( − , ) = ( ) − − , + (0 + , ) = − ( ) + + , + ( + , ) = 0.(32) The expression of can be obtained by applying the differential operation to : ± ( , ) = ± ± 1 ( ) √ √ √ √ ± 1 ( ) ± 2 ( ) exp ⎛ ⎜ ⎜ ⎝ − √ √ √ √ ± 1 ( ) ± 2 ( ) ⎞ ⎟ ⎟ ⎠ ∓ ± 2 ( ) √ √ √ √ ± 1 ( ) ± 2 ( ) exp ⎛ ⎜ ⎜ ⎝ √ √ √ √ ± 1 ( ) ± 2 ( ) ⎞ ⎟ ⎟ ⎠ − 2 ± 3 ( ) ± 1 ( ) − ± 4 ( ) ± 1 ( ) .(33) where ± 1,2 ( ) can be obtained by substituting the boundary conditions into Eq. 
(31): − 1 ( ) − 2 ( ) = ⎡ ⎢ ⎢ ⎣ 1 1 exp − √ − 1 − 2 − exp √ − 1 − 2 − ⎤ ⎥ ⎥ ⎦ −1 ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ − 5 − 1 + 2 − 2 − 3 − 1 2 − 5 − 1 + 2 − 2 − 3 − 1 2 + − 3 ( − ) 2 − 1 + − 4 − − 1 + − − ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ + 1 ( ) + 2 ( ) = − ⎡ ⎢ ⎢ ⎢ ⎣ 1 1 exp − √ + 1 + 2 + exp √ + 1 + 2 + ⎤ ⎥ ⎥ ⎥ ⎦ −1 ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ + + + + 5 + 1 + 2 + 2 + 3 + 1 2 + 5 + 1 + 2 + 2 + 3 + 1 2 + + 3 ( + ) 2 + 1 + + 4 + + 1 ⎤ ⎥ ⎥ ⎥ ⎥ ⎦(34) Once the reaction rate across the electrode domain is obtained, the spatial distribution of potentials and current densities in the solid-phase and solution-phase can all be derived through Eqs. (26)- (27), the in situ monitoring of the battery cell can be realized, providing detailed information to upper-level applications. Output model The measurable output of the battery cell includes the terminal voltage and surface temperature. This part introduces the calculation of these two measurable states. Terminal voltage Since the solid phase of the electrode is directly connected to the current collector, the terminal voltage equals the potential difference between Φ ( + , ) and Φ (0 − , ). However, directly calculating Φ through the expression of presented above is impossible since this requires two potential reference points, one for the negative electrode and one for the positive electrodes. However, when viewing the battery as a whole system, only one potential reference point can be selected. To solve this problem, we start from Φ − and Φ to calculate indirectly because Φ is equivalent to Φ − + Φ as well. 
By denoting the ohmic resistance between the current collector and electrode by , is expressed by: ( ) = Φ − ( + , ) + Φ ( + , ) − Φ − (0 − , ) − Φ (0 − , ) − ( ).(35) In the formula above, Φ − at the boundary can be directly calculated according to the B-V equation: ⎧ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎩ Φ − ( + , ) = + OCP ( + , ) + + ( + , ) + 2 ( ) ln ⎛ ⎜ ⎜ ⎝ ( + , ) 2 + 0 ( ) + √ ( + , ) 2 + 0 ( ) 2 + 1 ⎞ ⎟ ⎟ ⎠ , Φ − (0 − , ) = − OCP (0 − , ) + − (0 − , ) + 2 ( ) ln ⎛ ⎜ ⎜ ⎝ (0 − , ) 2 − 0 ( ) + √ (0 − , ) 2 − 0 ( ) 2 + 1 ⎞ ⎟ ⎟ ⎠ .(36) We now turn to the potential drop in the solution phase, which is composed of two parts, namely, the ohmic potential drop and polarization potential drop, corresponding to the first and second terms on the RHS of Eq. (27). The polarization potential drop between = + and = 0 − , denoted by ΔΦ , , can be expressed by the sum of the polarization potential drops in each domain: ΔΦ , ( ) = − ef f,− ef f,− ln ( − , ) (0 − , ) − ef f ,sep ef f ,sep ln ( sep , ) (0 sep , ) − ef f ,+ ef f ,+ ln ( + , ) (0 + , ) .(37) The ohmic potential drop, denoted by ΔΦ , ℎ , can be obtained by substituting Eq. (33) into Eq. (27). In the electrode domain, ΔΦ , ℎ is expressed by: ΔΦ ± , ℎ ( ) = ± ± ef f,± ( ) − ± 1 ( ) √ √ √ √ ± 2 ( ) ± 1 ( ) ⎛ ⎜ ⎜ ⎝ exp ⎛ ⎜ ⎜ ⎝ − √ √ √ √ ± 1 ( ) ± 2 ( ) ± ⎞ ⎟ ⎟ ⎠ − 1 ⎞ ⎟ ⎟ ⎠ + ± 2 ( ) √ √ √ √ ± 2 ( ) ± 1 ( ) ⎛ ⎜ ⎜ ⎝ exp ⎛ ⎜ ⎜ ⎝ √ √ √ √ ± 1 ( ) ± 2 ( ) ± ⎞ ⎟ ⎟ ⎠ − 1 ⎞ ⎟ ⎟ ⎠ ± ± 3 ( ) 3 ± 1 ( ) ( ± ) 3 ± ± 4 ( ) 2 ± 1 ( ) ( ± ) 2 ± ± 5 ( ) ± 1 ( ) + 2 ± 2 ( ) ± 3 ( ) ± 1 ( ) 2 ± .(38) In the separator domain, ΔΦ sep , ℎ ( ) = − sep ( ) ef f,sep ( ) sep . Thus, the total solution-phase potential drop between + and 0 − equals: Φ ( + , ) − Φ (0 − , ) = ΔΦ sep , ℎ ( ) + ΔΦ − , ℎ ( ) + ΔΦ + , ℎ ( ) + ΔΦ , ( ).(39) By substituting (39) into (35), ( ) can be obtained. Cell temperature The temperature can significantly affect the operating characteristics of the battery. 
To track the trajectory of the battery temperature during its operation, a lumped thermal model is developed to predict the cell temperature based on the following assumptions. First, the temperature distribution is uniform at any instant in time, i.e., the surface temperature is always equal to the core temperature [53]. Second, the enthalpy of mixing and phase-change heat are neglected [54]. Third, the reversible entropy change of the reaction is neglected [55]. Many studies have proposed much more sophisticated thermal models than the lumped model. However, they were not adopted in this work for two main reasons. First, we aim to use the proposed model in upper-level applications, e.g., online control or operation optimization. Thus, the proposed model is not expected to be very complex, provided that its basic properties meet the requirements. Second, in the latest real-world applications, the temperature sensor is quite advanced and can be deployed at the cell level; thus, the predicted temperature can be corrected according to the measurement in real time. Considering the target applications of this work, a lumped thermal model that can approximately track the temperature is acceptable. We care more about depicting those states that cannot be directly measured, such as potentials and concentrations. In a lumped thermal model, the energy conservation equation is written as follows:
$$m c_p\,\frac{\mathrm{d}T(t)}{\mathrm{d}t} = h A_{\mathrm{surf}}\big(T_{\mathrm{amb}}(t) - T(t)\big) + Q(t). \tag{40}$$
The first term on the RHS of the formula above accounts for the heat transfer rate from the cell to the environment, and the second term refers to the heat generated by the reaction, calculated by:
$$Q(t) = -a^{-}F\int_{0^{-}}^{L^{-}} j^{-}(x,t)\,U^{-}_{\mathrm{OCP}}(x,t)\,\mathrm{d}x + a^{+}F\int_{0^{+}}^{L^{+}} j^{+}(x,t)\,U^{+}_{\mathrm{OCP}}(x,t)\,\mathrm{d}x - V(t)I(t) \approx \big(\bar U^{+}_{\mathrm{OCP}}(t) - \bar U^{-}_{\mathrm{OCP}}(t) - V(t)\big)\,I(t), \tag{41}$$
where $\bar U^{\pm}_{\mathrm{OCP}}$ are the average values of $U^{\pm}_{\mathrm{OCP}}$ at $x = 0^{\pm}, L^{\pm}/3, 2L^{\pm}/3, L^{\pm}$. For notation simplicity, (40) is represented by:
$$\tau_T\,\frac{\mathrm{d}T(t)}{\mathrm{d}t} = -T(t) + u_T(t), \tag{42}$$
where $\tau_T = m c_p/(h A_{\mathrm{surf}})$ and $u_T(t) = Q(t)/(h A_{\mathrm{surf}}) + T_{\mathrm{amb}}(t)$.
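Since the lumped thermal model is a single first-order inertial process, it can be stepped with the same exact exponential update used for the other inertial states. A minimal sketch (argument names are illustrative):

```python
import math

def thermal_step(temp, q_gen, t_amb, dt, m_cp, h_a_surf):
    """Exact exponential update of the lumped thermal model, Eqs. (40)-(42):
    tau_T * dT/dt = -T + u_T, with tau_T = m*c_p/(h*A_surf) and
    u_T = Q/(h*A_surf) + T_amb; Q is held constant over the step."""
    tau = m_cp / h_a_surf
    u = q_gen / h_a_surf + t_amb
    decay = math.exp(-dt / tau)
    return temp * decay + u * (1.0 - decay)
```

With no heat generation the cell temperature relaxes monotonically towards the ambient temperature, and a positive heat source shifts the steady state above ambient by $Q/(hA_{\mathrm{surf}})$.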
The entire bottom-up approach to construct the simplified model is shown in Fig. 3a. Closed-loop simulation framework After introducing the modelling approach of the lithium-ion battery, the next step is to design the simulation framework so that the proposed work can be applied in practical scenarios such as online control or real-time monitoring. Discrete-time state-space realization To enable real-time simulation of the model, a discrete-time state-space representation is necessary. Since the original simplified model is continuous on the time horizon, the following assumptions are declared before discretization. First, the model inputs, including $I(t)$ and $T_{\mathrm{amb}}(t)$, are treated as intensity variables; i.e., they are updated at the start of every simulation step and remain constant until the end of the current step. Second, the intensity variables, including $j$, $i_s$, $i_e$ and $\eta$, and the time-variant parameters of Section 2.1 are treated similarly to the inputs; they are calculated based on the internal states of the battery at the start of every simulation step and assumed to remain constant until the end of the current step. Third, the inertial variables, including $Q^{\pm}$, $\bar c_s$, $c_{s,\mathrm{surf}}$ and $T$, are updated based on the intensity variables taking effect in the current step. Their values at the end of the current step are calculated by the discrete state-space equations. Fourth, the potential variables or parameters, including $\Phi_s$, $\Phi_e$, $U_{\mathrm{OCP}}$ and $V$, are updated at the end of every simulation step. We denote the time stamps at the start and end of the $k$-th simulation step by $t_{k-1}$ and $t_k$ and denote the current time interval by $\Delta t = t_k - t_{k-1}$. When the last simulation step stops at $t_{k-1}$, the intensity variables and time-variant parameters are first evaluated from the internal states at $t_{k-1}$, as shown in Fig. 3b. Once the intensity variables and parameters are known, the inertial variables at the end of the simulation step can be updated via the state-space equations in discrete-time form, as given below:
$$Q^{\pm}(t_k) = Q^{\pm}(t_{k-1})\exp\Big(-\frac{\Delta t}{\tau^{\pm}(t_k)}\Big) + u^{\pm}(t_k)\Big(1 - \exp\Big(-\frac{\Delta t}{\tau^{\pm}(t_k)}\Big)\Big),$$
$$\bar c_s(x, t_k) = \bar c_s(x, t_{k-1}) - \frac{3\,\Delta t}{R_s^{\pm}}\, j^{\pm}(x, t_k),$$
$$c_{s,\mathrm{surf}}(x,t_k) = \bar c_s(x,t_k) + \big(c_{s,\mathrm{surf}}(x,t_{k-1}) - \bar c_s(x,t_{k-1})\big)\exp\Big(-\frac{\Delta t}{\tau_s(x,t_k)}\Big) - \frac{R_s^{\pm}\, j^{\pm}(x,t_k)}{5 D_s^{\pm}(x,t_k)}\Big(1 - \exp\Big(-\frac{\Delta t}{\tau_s(x,t_k)}\Big)\Big),$$
$$T(t_k) = T(t_{k-1})\exp\Big(-\frac{\Delta t}{\tau_T(t_k)}\Big) + u_T(t_k)\Big(1 - \exp\Big(-\frac{\Delta t}{\tau_T(t_k)}\Big)\Big), \tag{43}$$
where $x = 0^{\pm}, L^{\pm}/3, 2L^{\pm}/3, L^{\pm}$. Finally, the potential variables and parameters $U^{\pm}_{\mathrm{OCP}}(x,t_k)$, $V(t_k)$, $\Phi_s(x,t_k)$ and $\Phi_e(x,t_k)$ are updated via Eqs. (35)-(39). Then, the above steps are repeated for the next interval. Initializing process At the simulation start, the battery's initial states should be determined. First, the parameters involved in constructing the battery model should be determined. Considering the data sources, parameters can be categorized into three types: determined by the material properties, determined by the manufacturing, and assumed to fit the battery characteristics. The benchmark and adopted values of all parameters in this paper for simulating the LFPO cell and NCM cell are listed in Table 1. Second, we acquire the working region of the battery, including the low cut-off and high cut-off voltages $V_{\min}$ and $V_{\max}$. By conducting a full-cycle low-current charge and discharge in the working region, the total capacity of the battery cell can be obtained. Then, the stoichiometry region of active particles in the positive electrode and negative electrode, denoted by $\theta^{\pm}_{\max}$ and $\theta^{\pm}_{\min}$, can be obtained by solving four coupled non-linear equations. The first equation refers to the situation in which the battery is fully charged, when the lithium stoichiometry of the active particles in the negative electrode reaches its upper bound and that in the positive electrode reaches its lower bound. The second equation refers to the situation in which the battery is fully discharged. The third and fourth equations ensure charge conservation. Since both $U^{\pm}_{\mathrm{OCP}}$ are monotonic functions, the above equations have a unique solution.
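Because both OCP curves are monotone, the stoichiometry window of this second step can be found by nested scalar bisection once the excursions $\Delta\theta^{\pm}$ are fixed by charge conservation. A sketch with hypothetical OCP curves standing in for the look-up tables of Fig. 2b (the curve shapes and all numbers are illustrative, not measured data):

```python
import math

def bisect(f, lo, hi, iters=100):
    """Scalar bisection; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if flo * fm <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fm
    return 0.5 * (lo + hi)

# Hypothetical monotone OCP curves (illustrative shapes only).
def u_pos(theta):  # positive electrode OCP, decreasing in stoichiometry
    return 4.3 - 0.9 * theta

def u_neg(theta):  # graphite-like negative electrode OCP
    return 0.1 + 0.8 * math.exp(-15.0 * theta)

def stoich_window(v_max, v_min, d_theta_pos, d_theta_neg):
    """Solve the coupled conditions: fully charged,
    u_pos(th_p_min) - u_neg(th_n_max) = v_max; fully discharged,
    u_pos(th_p_min + d_theta_pos) - u_neg(th_n_max - d_theta_neg) = v_min.
    The excursions d_theta come from charge conservation; monotonicity
    of both curves gives a unique root."""
    def th_p_min_of(th_n_max):
        target = v_max + u_neg(th_n_max)
        return bisect(lambda th: u_pos(th) - target, 0.0, 1.0)

    def residual(th_n_max):
        th_p_min = th_p_min_of(th_n_max)
        return (u_pos(th_p_min + d_theta_pos)
                - u_neg(th_n_max - d_theta_neg) - v_min)

    th_n_max = bisect(residual, d_theta_neg + 0.05, 1.0)
    th_p_min = th_p_min_of(th_n_max)
    return th_p_min, th_p_min + d_theta_pos, th_n_max - d_theta_neg, th_n_max
```

The outer bisection searches the negative-electrode upper bound; the inner one inverts the positive-electrode curve, so only one-dimensional root finding is ever needed.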
Third, the commonly used SOC-OCV curve can be derived once the stoichiometry bounds of the active particles in the negative and positive electrodes are known:

U_OCV(SOC) = U^+_OCP(θ^+_max − SOC (θ^+_max − θ^+_min)) − U^−_OCP(θ^−_min + SOC (θ^−_max − θ^−_min)).    (45)

Fourth, the initial values of the inertial states of the battery are determined, including c_e^±, c̄_s, c_ss and T (as shown in Fig. 3b). By measuring the open circuit voltage of the cell, the initial state of charge SOC_0 can be obtained by interpolation in the SOC-OCV curve obtained in the third step. Thus, c_ss and c̄_s are initialized as:

c_ss(x, t_0) = c̄_s(x, t_0) = c^+_{s,max} [θ^+_max − SOC_0 (θ^+_max − θ^+_min)],  x = 0^+, L^+/3, 2L^+/3, L^+,

c_ss(x, t_0) = c̄_s(x, t_0) = c^−_{s,max} [θ^−_min + SOC_0 (θ^−_max − θ^−_min)],  x = 0^−, L^−/3, 2L^−/3, L^−.    (46)

At the start, Li^+ in the solution phase is assumed to be uniformly distributed along the thickness direction of the cell; thus, the solution-phase states are initialized as:

c_e^±(t_0) = ε^± L^± c_{e,0}.    (47)

Finally, the cell temperature T(t_0) is initialized at the ambient temperature, i.e., T(t_0) = T_amb(t_0).

3.3 Stabilizing method

In Section 3.1, the reaction rate j is modelled as an intensity variable, which is assumed to remain constant within one simulation step (the other three intensity variables are all determined by j). This assumption is unavoidable when discretizing the system, but it introduces additional error into the model. Through numerical experiments, we find that it is appropriate for most working conditions of different batteries. However, when the slope of U_OCP is very large (as shown in the red dotted box in Fig. 2b), this assumption is likely to make the model oscillate. Because calculating j requires U_OCP according to Eq. (24), when the slope of U_OCP is small we can calculate j based on U_OCP at (x, t_{k-1}); but when U_OCP varies greatly, it changes significantly within [t_{k-1}, t_k], and the assumption is no longer accurate. Under such circumstances, the model might be unstable. To solve this problem, two measures are taken in this work.
First, we reduce the risk of oscillation at the root of the modelling. Specifically, we refine the value of the fitted coefficient in Eq. (20). As mentioned in Section 2.3, a smaller coefficient is likely to cause oscillation. This is because a smaller coefficient leads to a smaller time constant, so the term exp(−Δt/τ_s(x, t_k)) in Eq. (43) is near 0; then, in every updating step, more weight is allocated to the term containing j. Since j varies significantly within [t_{k-1}, t_k] under extreme conditions, the model oscillates under the influence of j. However, an excessively large coefficient can decrease the accuracy, because it deviates from the true diffusion characteristics (if the stability problem were not taken into account, an ideal coefficient would be obtained by frequency response optimization, as mentioned in Section 2.3). Considering the above points, the coefficient is set by experiment to 1/28 for the graphite and NCM active particles and 1/9 for the LFPO active particles, realizing a trade-off between accuracy and stabilization. Nevertheless, after testing the proposed model under various working conditions for different batteries, we still find that oscillation can occur under some extreme working conditions. Thus, to ensure the practicability of this work, we develop a second measure to handle the oscillation; i.e., the Savitzky-Golay filter (SGF) [59] is applied to eliminate the oscillation after it happens. The SGF can smooth a sequence in a moving window with little resolution loss. In addition, the fitting weights of the moving window are calculated in advance, which makes it highly efficient for online implementations. The hyperparameters of the SGF are the order n_SG and the moving-window length l_SG. The framework and formulas for applying the SGF are given below.

Figure 4: Framework of applying the SGF to eliminate the oscillation. Once oscillation is detected, the data to be filtered are extracted from the battery model and passed through the SG filter.
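Both stabilization measures revolve around the exp(−Δt/τ) weights of the first-order updates (the pattern shared by Eq. (43)): a small time constant throws almost all the weight onto the newly computed target, so any error in that target passes straight into the state. A minimal sketch of the update pattern, with illustrative names:

```python
import math

# Shared first-order inertial update:
#   x_k = x_{k-1} * exp(-dt/tau) + target_k * (1 - exp(-dt/tau)).
# Names are illustrative. The step is exact when the target is constant over
# the step, so it stays bounded for any dt; the smaller tau is, the more
# weight falls on the (possibly noisy) target.
def inertial_step(x_prev, target, dt, tau):
    a = math.exp(-dt / tau)
    return a * x_prev + (1.0 - a) * target

x_slow, x_fast = 0.0, 0.0
for _ in range(200):
    x_slow = inertial_step(x_slow, 1.0, dt=1.0, tau=50.0)   # heavy smoothing
    x_fast = inertial_step(x_fast, 1.0, dt=1.0, tau=0.05)   # follows target
# x_fast locks onto the target almost immediately; x_slow approaches it slowly.
```

Because this is an exponential-integrator step, the integrator itself cannot blow up; the oscillation risk in the text comes from the target (computed from j) varying within a step, not from the discretization scheme.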
M = [ p^{n_SG}  p^{n_SG−1}  ⋯  p^{0} ],   p = ( (1 − l_SG)/2, (3 − l_SG)/2, …, (l_SG − 1)/2 )^T,

[ ĉ_ss(x, t_{k−l_SG+1}), …, ĉ_ss(x, t_k) ]^T = M (M^T M)^{−1} M^T [ c_ss(x, t_{k−l_SG+1}), …, c_ss(x, t_k) ]^T.    (48)

The projection matrix M (M^T M)^{−1} M^T can be calculated and stored in advance to reduce the computational cost. During the simulation, once oscillation is detected, only the newest l_SG data points are filtered. Notably, only c_ss is selected as the state to be filtered, because the ill approximation of j affects c_ss first according to the analysis above. By numerical experiments, we find that filtering only c_ss eliminates the oscillation effectively.

3.4 Closed-loop correction scheme

Due to incorrect initialization or error accumulation, the model accuracy cannot always remain high during long-term continuous simulation. Thus, a real-time closed-loop correction scheme is designed in this work, which adaptively corrects the battery states based on the measurable output. In previous research, Kalman filters were widely adopted to handle this problem. However, upon attempting different variants of Kalman filters, we found that they are not appropriate for the proposed model: there are nearly 30 internal battery states, which places a huge burden on computation (e.g., computing the Jacobian matrix in an extended Kalman filter or the square root of the sigma-point matrix in an unscented Kalman filter), making the model impractical. Moreover, conventional Kalman filters ignore the high correlation between different states, which can be exploited to simplify the correction of our model.
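The projection M (MᵀM)⁻¹Mᵀ of Eq. (48) can be computed once and stored. A sketch for order n_SG = 2 and a 5-point window (the paper uses a larger window; 5 keeps the sketch short), using plain Gauss-Jordan elimination for the small normal-equations inverse:

```python
# Precompute the Savitzky-Golay projection W = M (M^T M)^-1 M^T of Eq. (48).
def sg_projection(window, order):
    """Return the window x window SG projection matrix."""
    pts = [(2 * i - window + 1) / 2.0 for i in range(window)]  # (1-l)/2 ... (l-1)/2
    M = [[p ** k for k in range(order, -1, -1)] for p in pts]
    n = order + 1
    # normal equations A = M^T M, inverted by Gauss-Jordan with partial pivoting
    A = [[sum(M[r][i] * M[r][j] for r in range(window)) for j in range(n)] for i in range(n)]
    inv = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        inv[col], inv[piv] = inv[piv], inv[col]
        d = A[col][col]
        A[col] = [v / d for v in A[col]]
        inv[col] = [v / d for v in inv[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
                inv[r] = [a - f * b for a, b in zip(inv[r], inv[col])]
    # W = M inv M^T
    MT = list(zip(*M))
    Minv = [[sum(M[r][i] * inv[i][j] for i in range(n)) for j in range(n)] for r in range(window)]
    return [[sum(Minv[r][i] * MT[i][c] for i in range(n)) for c in range(window)] for r in range(window)]

W = sg_projection(window=5, order=2)
```

For this window the centre row of the projection is the classic Savitzky-Golay kernel (−3, 12, 17, 12, −3)/35, and the projection reproduces any quadratic sequence exactly, which is why the filter smooths with little resolution loss.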
By conducting numerical experiments in open loop, we found that once the model is initialized at the correct SOC, it can track the true trajectories of the other states accurately on the long time-scale (as presented in Sections 4.2-4.3). This suggests that the key to correcting our model is the SOC, i.e., the average Li^+ concentration c̄_s. Based on this idea, a heuristic correction method is proposed. We denote the time stamp at the start of correction by t_c and the measured terminal voltage by V̂_t(t_c). Since we aim to find the appropriate SOC for the current time, the other terms in the terminal voltage are eliminated, and only the equilibrium potential directly related to the SOC remains:

Û_OCV(t_c) = V̂_t(t_c) − Φ_e(L^+, t_c) + Φ_e(0^−, t_c) − η(t_c),    (49)

where Φ_e(0^−, t_c) − Φ_e(L^+, t_c) is calculated by Eq. (39) and η(t_c) is calculated by:

η(t_c) = η^+(L^+, t_c) − η^−(0^−, t_c)
       = (2RT(t_c)/F) ln( j(L^+, t_c)/(2 j_0^+(t_c)) + sqrt( [j(L^+, t_c)/(2 j_0^+(t_c))]^2 + 1 ) )
       − (2RT(t_c)/F) ln( j(0^−, t_c)/(2 j_0^−(t_c)) + sqrt( [j(0^−, t_c)/(2 j_0^−(t_c))]^2 + 1 ) ).    (50)

Eq. (49) shows that, to let the predicted V_t approach the true value V̂_t, the open circuit voltage of the battery should approach Û_OCV. Similar to the construction of the simplified solid-phase diffusion and solution-phase migration models, an ensemble-average strategy is adopted here. To match Û_OCV, the solid-phase stoichiometry in both the negative electrode and the positive electrode needs modification. We denote the ensemble-average correction quantity in the negative electrode by Δθ^−(t_c), meaning that the stoichiometries corresponding to c̄_s(x, t_c) and c_ss(x, t_c) (x = 0^−, L^−/3, 2L^−/3, L^−) should incorporate Δθ^−(t_c) after correction. Similarly, in the positive electrode, we have Δθ^+(t_c). The correction quantities should satisfy two conditions. First, the total Li in the two electrodes should remain unchanged. Second, the open circuit voltage should be equal to Û_OCV.
Thus, Δθ^−(t_c) and Δθ^+(t_c) can be obtained by solving the non-linear equations below:

ε^+ L^+ c^+_{s,max} Δθ^+(t_c) + ε^− L^− c^−_{s,max} Δθ^−(t_c) = 0,

U^+_OCP(θ(L^+, t_c) + Δθ^+(t_c)) − U^−_OCP(θ(0^−, t_c) + Δθ^−(t_c)) = Û_OCV(t_c).    (51)

Since both U^±_OCP are monotonic functions, the above equations have a unique solution. Then, the correction quantities of the solid-phase concentrations can be computed by Δc^±_s(t_c) = c^±_{s,max} Δθ^±(t_c). After obtaining the correction terms at t_c, one problem remains. Mathematically, directly adding these terms to the current c̄_s and c_ss is equivalent to adding an instantaneous process to the solid-phase diffusion model. However, as analysed in Section 3.3, this introduces instability into the model and leads to oscillation. To maintain model stability, when the ideal correction quantities Δc^±_s are obtained at t_c, the actual correction quantities, denoted by Δĉ^±_s, are determined jointly by the historical actual correction quantities and the latest ideal correction quantities via a first-order inertial process:

Δĉ^±_s(t_k) = exp(−Δt/τ^±_Δ) Δĉ^±_s(t_{k-1}) + [1 − exp(−Δt/τ^±_Δ)] Δc^±_s(t_k),    (52)

where τ^±_Δ are appropriate time constants that control the stability of the correction scheme. In this work, τ_Δ is set to 0.2 for the positive electrode and 60 for the negative electrode. Then, the solid-phase concentrations are corrected by the actual correction quantities:

c̄_s(x, t_k) ← c̄_s(x, t_k) + Δĉ^±_s(t_k),  c_ss(x, t_k) ← c_ss(x, t_k) + Δĉ^±_s(t_k),  x = 0^±, L^±/3, 2L^±/3, L^±.    (53)

It is also noteworthy that, since solving the correction terms also consumes computing resources, it is recommended to activate the correction step only when the predicted voltage error exceeds a given threshold V_error, which is set to 0.02 V in this work. The steps of the entire simulation framework are given below.

Algorithm 1: Simulation steps.
Require: Manufacturing and material information of the battery.
1: Set the values of the parameters in Table 1.
14: if |V_t(t_k) − V̂_t(t_k)| > V_error then
15: Calculate the ideal correction term Δc^±_s(t_k) based on c_ss(x, t_k) and V̂_t(t_k) via Eq. (51).
16: else
17: Set the ideal correction term Δc^±_s(t_k) = 0.
18: end if
19: Calculate the actual correction term Δĉ^±_s(t_k) based on Δc^±_s(t_k) and Δĉ^±_s(t_{k-1}) via Eq. (52).
20: Correct c̄_s(x, t_k) and c_ss(x, t_k) based on Δĉ^±_s(t_k) via Eq. (53).
21: if Oscillation is detected and k ≥ l_SG then
22: Update c_ss(x, t_{k−l_SG+1}) ∼ c_ss(x, t_k) via Eq. (48).
23: end if
24: end for

Figure 5: Complete simulation framework.

4. Numerical experiments

To evaluate the performance of this work, numerical experiments are designed and conducted for validation. The proposed model is compared against two highly cited simplified models: a classic ESP model and a recently proposed advanced ESP model [21, 40]. The benchmark is a full-order P2D model that contains 51 elements on the x-axis in each electrode, 11 elements on the x-axis in the separator, 18 elements in each active particle along the r-axis, and 1847 elements in total. All the models and simulation programs are written and run on the MATLAB R2021A platform. The hardware for computation is a 2.11 GHz Intel Core i5-10210U processor with 16 GB of RAM. Note that the proposed model contains only basic operators, so it is convenient to rewrite the model in other programming languages, such as Python and Java.

4.1 Designs

As mentioned above, the proposed model contains constant parameters and time-variant parameters expressed by functions. The values of the constant parameters are listed in Table 1. The coefficients of the functions depicting the time-variant parameters are fitted to material experiment data [50] and listed in Table 2. To test the proposed model comprehensively, the working conditions to simulate should satisfy three requirements. First, they should cover a wide range of current amplitudes and ambient temperatures. Second, they should contain various working profiles, including galvanostatic and dynamic currents.
Third, they should start at different initial points. Considering the above points, eight scenarios were designed for each type of battery, as listed in Table 3. To evaluate the model performance in simulating internal states and output, the mean absolute error (MAE), root mean squared error (RMSE) and R-squared (R²) are applied. Taking the voltage as an example, for time steps t_1, …, t_N, the three metrics are calculated by:

R² = 1 − Σ_{k=1}^N (V_t(t_k) − V̂_t(t_k))² / Σ_{k=1}^N (V_t(t_k) − V̄_t)²,

MAE = (1/N) Σ_{k=1}^N |V_t(t_k) − V̂_t(t_k)|,

RMSE = sqrt( (1/N) Σ_{k=1}^N (V_t(t_k) − V̂_t(t_k))² ),    (54)

where V_t(t_k) is the true value and V̂_t(t_k) is the predicted value.

Table 3: Simulation scenarios for testing.

No. | Protocol | Current amplitude | Ambient temperature | LFPO | NCM523 | NCM811
1 | galvanostatic | 1C-rate | 298 K | SOC_0 = 1 | SOC_0 = 1 | SOC_0 = 1
2 | galvanostatic | 2C-rate | 298 K | SOC_0 = 1 | SOC_0 = 1 | SOC_0 = 1
3 | galvanostatic | 4C-rate | 298 K | SOC_0 = 1 | SOC_0 = 1 | SOC_0 = 1
4 | CCCV | −1C-rate ~ 0C-rate | 298 K | SOC_0 = 0 | SOC_0 = 0 | SOC_0 = 0
5 | ACC | −5C-rate ~ 5C-rate | 298 K | SOC_0 = 1 | SOC_0 = 0.7 | SOC_0 = 0.7
6 | RC | 0C-rate ~ 5C-rate | 298 K | SOC_0 = 1 | SOC_0 = 1 | SOC_0 = 1
7 | galvanostatic | 1C-rate | 273 K | SOC_0 = 1 | SOC_0 = 1 | SOC_0 = 1
8 | galvanostatic | 1C-rate | 313 K | SOC_0 = 1 | SOC_0 = 1 | SOC_0 = 1

First, the running time of the simplified model in the different working scenarios is listed in Table 4. The computational efficiency of the simplified model is significantly increased, as expected, ensuring that the proposed model is practical in real-world applications. Although the operating time of the batteries in the dynamic current scenarios (e.g., ACC and RC) is shorter than in the galvanostatic scenarios, their simulation sometimes takes longer. This is because under dynamic current we raise the sampling frequency, resulting in more total simulation steps than under galvanostatic currents.
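The metrics of Eq. (54) translate directly into code; a minimal sketch with illustrative voltage sequences:

```python
import math

# Accuracy metrics of Eq. (54); y_true / y_pred are illustrative sequences.
def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

R² compares the residual error against the variance of the true sequence, so identical sequences give exactly 1, while a constant bias that is large relative to the signal's own variation drives R² down sharply.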
4.2 State monitoring

The proposed model can provide information on the internal chemical states of the battery, including c_e, c̄_s, c_ss and j, where the latter three are key states that significantly affect the operating characteristics of the battery. Specifically, c̄_s determines the remaining charge of the battery (SOC), c_ss determines the extreme instantaneous power the battery can provide or absorb (SOP), and j determines the heat generation and degradation processes inside the battery (SOH). All four states together determine the electrical variables inside and outside the battery, e.g., the overpotentials, the solid- and solution-phase potentials Φ_s and Φ_e, and the terminal voltage V_t. The accuracy metrics listed in the tables below are averaged over the eight scenarios.

The prediction accuracy of c_e is shown in Table 5. Here, the results of the other two ESP models are not given, because we assume a parabolic distribution of c_e across the thickness direction. Both the RMSE and the MAE are smaller than 100; since the baseline value of c_e is 1200 mol/m³, this level of error is acceptable. To clearly demonstrate the distribution characteristics of c_e, we plot c_e along the thickness direction in the negative electrode of the NCM811 cell under the dynamic current protocols (scenario nos. 5-6), as shown in Figs. 7a-7b. At x = 0^−, L^−/3 and 2L^−/3, the predicted c_e fits the true value well. However, at x = L^−, i.e., the point at the boundary between the electrode and the separator, the error is slightly larger, especially under the RC protocol, indicating that we should be cautious when using the model to predict c_e near the separator. Actually, the main contribution of this work is not the use of the parabolic polynomial to depict c_e.

The prediction accuracy of j is shown in Table 6. Note that j is the most important state inside the battery. This is not only because it couples the chemical system and the electrical system and determines the trajectories of c̄_s, c_ss, Φ_s, Φ_e, etc., on the short time-scale, but also because it reflects the degradation pressure that affects the battery status over long time-scales.
The most important contribution of this work is that we find a simple way to approximate j, avoiding vast computational costs. We compare the proposed model with an advanced ESP [21, 40] that also considers the spatial distribution of j and a classic ESP that assumes a uniform distribution of j. The table shows that at different points along the thickness direction, our model performs better than the advanced ESP. Moreover, the low accuracy of the classic ESP proves the necessity of considering a non-uniform distribution. To clearly demonstrate the distribution characteristics of j, we plot j along the thickness direction in the positive and negative electrodes of the NCM523 cell under the dynamic current protocols (scenario nos. 5-6), as shown in Figs. 8a-8d. Generally, under dynamic currents, the model can accurately predict j along the thickness. However, under a very large current, the prediction error of j at the interface between the negative electrode and the separator is larger, indicating that we should be cautious when applying the model to estimate j at x = L^− under extreme currents. Since the reaction at x = L^− is the most violent compared with other locations, accurate monitoring of j there is especially meaningful for analysing the degradation inside the battery. Thus, we plot the results of estimating j(L^−, t) under galvanostatic protocols for the LFPO, NCM523 and NCM811 cells in Figs. 8e-8g. As the ambient temperature varies from 273 K to 313 K and the current rate varies from 1C-rate to 4C-rate, the proposed model always gives accurate results, proving its effectiveness and importance. According to existing research [60], the degradation of LIBs mainly occurs in graphite-containing negative electrodes; thus, we mainly focus on j in the negative electrode above. Turning to the positive electrode, the estimation of j for the NCM523 and NCM811 cells is evaluated in Table 6, and its accuracy is even higher than that for the graphite electrode.
Note that the LiFePO4 electrode is not simulated like the other electrodes. This is because the particle radius of LiFePO4 is approximately 100 times smaller than those of NCM and graphite, which makes the peak of the reaction rate across the electrode very narrow and high, as shown in Fig. 8h. Since we select only 4 points along the x-axis for simplicity, such granularity cannot capture the extremely uneven distribution of j in the LiFePO4 electrode, and the same is true for the advanced ESP [21, 40]. However, we can observe from Fig. 8h that within a full-cycle operation, the peak of j moves steadily from the separator to the current collector, and the values of the time integral of j(x, t) are almost the same for every point on the x-axis. When we conduct degradation analysis, we focus more on the integral of j(x, t) over a specific time period than on its instantaneous values. Thus, a uniform distribution of j is adopted for the LiFePO4 electrode in this work.

Table 6: Prediction accuracy of j in the electrodes of the LFPO, NCM523 and NCM811 cells.

The prediction accuracy of c̄_s is shown in Table 7. For ease of comparison, it is replaced by the normalized value θ, i.e., the average stoichiometry in the solid phase. Generally, our model performs better at more points along the thickness direction for the three types of cells. However, we also notice that at some points, e.g., at L^+/3 of the NCM523 positive electrode, 2L^−/3 of the NCM523 negative electrode and L^− of the NCM811 negative electrode, the accuracy of our model is slightly lower than that of the advanced ESP or the classic ESP. There might be two reasons for this phenomenon. First, only one first-order inertial process is used to approximate the diffusion of Li^+ in the solid phase for simplicity, while two independent first-order inertial processes are used in the advanced ESP model.
Second, as mentioned above, to reduce the risk of being trapped in oscillations, we made a trade-off between accuracy and stabilization when determining the fitted coefficient. Actually, before finally setting the coefficient to 1/9 for the LFPO electrode and 1/28 for the other electrodes, we tried values derived from the xRA method, Padé approximations and frequency response optimization. Although these methods generally perform better under dynamic protocols (e.g., ACC/RC), they were found to be trapped in oscillations under galvanostatic protocols at very low or high SOCs. Thus, to ensure the applicability of the proposed model when the battery is cycled over the full SOC range, we sacrifice accuracy to some extent and select the coefficient to achieve better stabilization. Moreover, considering the high requirements of simplicity in real-world applications, we retain only one first-order inertial process to avoid unnecessary model complexity. The results support our choice: although our model does not perform best everywhere, its absolute accuracy is acceptable. To clearly demonstrate the performance of the model in predicting θ, its trajectories along the thickness direction of the positive and negative electrodes of the three types of cells are plotted in Figs. 9a-9h. Figs. 9a-9d show that, under dynamic current protocols, the accuracy of θ at points in the middle of the electrode is higher than at the boundaries for the negative electrode. For the positive electrode, the difference between points at different locations is not very prominent.

Figure 9, panels (g)-(h): (g) at the negative electrode boundary of NCM811, galvanostatic; (h) at points along the thickness of the positive electrode in LFPO, galvanostatic.

[Table column headers: R², RMSE (×10⁻⁷), MAE (×10⁻⁷); Graphite (LFPO), x = 0⁻, L⁻/3, 2L⁻/3, L⁻.]

This indicates that, when estimating the SOC of the battery under dynamic loads based on the information of θ, it is better to select points in the middle
of the electrode. Since we have observed that the error of θ at the interface between the electrode and the separator is higher, we plot the corresponding trajectories under galvanostatic protocols in Figs. 9e-9h. Under galvanostatic current, as the ambient temperature varies from 273 K to 313 K and the current rate varies from 1C to 4C, the model always provides accurate results in both the negative and positive electrodes of all three types of cells.

Table 7: Prediction accuracy of θ in the electrodes of the LFPO, NCM523 and NCM811 cells.

Figure 10, panel (h): at the positive electrode boundary of NCM811, galvanostatic.

Our model performs better along the thickness direction, especially in the negative electrode. As mentioned above, side reactions such as SEI generation and lithium plating mainly occur in negative electrodes [60]. Thus, a higher accuracy in estimating c_ss in the negative electrode is pivotal and meaningful for predicting the SOP or conducting degradation analysis of the battery. To clearly demonstrate the performance of the model in predicting c_ss, its trajectories along the thickness direction of the positive and negative electrodes of the three types of cells are plotted in Figs. 10a-10h. Figs. 10a-10d show that, under dynamic current protocols, the accuracy of c_ss in the negative electrode at the current-collector side (x = 0^−) is higher than at the separator side (x = L^−). This is reasonable, since we estimate j better at x = 0^−. For the positive electrode, the difference between points at different locations is not very prominent, which is similar to the case of θ. This indicates that, when estimating the SOP of the battery under dynamic loads based on the information of c_ss in the negative electrode, it is better to select points at the current-collector side. The trajectories of c_ss under galvanostatic protocols are plotted in Figs. 10e-10h.
Under galvanostatic current, as the ambient temperature varies from 273 K to 313 K and the current rate varies from 1C-rate to 4C-rate, the model always gives accurate results in both the negative and positive electrodes of all three types of cells.

[Table column headers: R², RMSE (×10⁻⁴), MAE (×10⁻⁴); Graphite (LFPO), x = 0⁻, L⁻/3, 2L⁻/3, L⁻.]

4.3 Output prediction

After reviewing the model prediction performance on the different internal states, we analyse the output prediction performance in this part. The outputs, including V_t and T, are both predicted by the model. Between them, we care more about the accurate prediction of V_t than that of T, for two reasons. First, as explained in Section 2.5.2, a lumped thermal model is developed to predict T for simplicity, which can give only approximate predictions; thus, we only expect the trajectory of T to meet the basic requirements. Second, the V_t signal can directly help us conduct parameter identification and develop online control strategies; thus, it was often considered in previous research. Actually, when we review existing models of LIBs, regardless of whether they are electro-chemical models or EC models, the mapping between the input current and the output voltage is always the key problem to discuss. Thus, in this paper we follow this convention and focus more on V_t.

The prediction accuracy of V_t is given in Table 9. Generally, our model performs better for all types of cells. In terms of the MAE, the prediction accuracy for LFPO is approximately 25% higher than that of the advanced ESP, and that for the NCM cells is approximately 100% higher than that of the advanced ESP. The V_t values of the three cells under the dynamic current protocols and the galvanostatic protocols are plotted in Figs. 11a-11b and Figs. 11c-11e, respectively. High agreement can be observed for the three cells in the different scenarios.
Near the end of discharge, the accuracy decreases somewhat, especially at low ambient temperature (273 K), indicating that caution should be taken when using the model at low temperature and that it is better not to over-discharge the battery. In Figs. 11f-11g, the predicted T is plotted, and the maximum error under the dynamic current protocols is approximately 0.2 K, which basically meets the requirements of practical use. In Fig. 11h, we plot the case in which oscillation occurs. Actually, this is the only case we found that exhibits oscillation. As analysed above, the oscillation occurs because the assumption that j remains constant within a simulation step is no longer correct. When the LFPO cell is near the end of discharge, the slope of U_OCP is very large. Additionally, under high ambient temperature (313 K), the reaction is very active. Thus, the variation of j can be very substantial within a short time interval, ultimately resulting in oscillation. Both our model and the advanced ESP exhibit oscillation. Although the probability of this situation is not high, it is still necessary to deploy a suitable stabilizer to ensure the reliability of the model. In this work, we set the hyperparameters of the SGF to n_SG = 2 and l_SG = 49. The solid yellow line in Fig. 11c shows V_t after filtering; the oscillation is eliminated effectively.

4.4 Closed-loop correction

Although the state monitoring and output prediction agree well with those of the full-order P2D model, a closed-loop framework is necessary for real-world applications, since the initialization error and the model error accumulate during continuous simulation. To evaluate the reliability of the proposed closed-loop correction scheme, we set the initial SOC of the battery cell to wrong values. Specifically, the initial SOC of the galvanostatic discharge protocols is set to 0.8, where the true value is 1.
The initial SOC of the CCCV protocol is set to 0.2, where the true value is 0. The initial SOC of the dynamic current protocols is set to 0.5, where the true value is 1 for the LFPO cell and 0.7 for the NCM cells. Figs. 12a-12b plot V_t under the dynamic current protocols. After a short oscillation, V_t quickly converges to the true value. The same phenomenon is observed under the galvanostatic protocols in Figs. 12c-12e. As the current rate varies from 1C to 4C and the ambient temperature varies from 273 K to 313 K, the correction scheme always performs well. In addition, we note that the error increment at the end of discharge (shown in Figs. 11c-11e) is also eliminated. Figs. 12f-12h plot the trajectories of θ. The wrongly initialized θ at the interface between the negative electrode and the separator is quickly corrected to the true value, verifying the effectiveness of the proposed scheme.

Figure 12, panel (h): at the negative electrode boundary of NCM811 under different galvanostatic protocols, wrong initialization.

5. Conclusions

This paper proposes a simplified electro-chemical model along with a specific simulation framework that enables the in situ monitoring and online control of commonly used NCM and LFPO batteries. A bottom-up approach is designed to construct the model, which not only makes the model adaptive to varying working environments and materials but also reserves potential for future upgrades. Comprehensive numerical experiments validate the effectiveness and superiority of this work, which provides opportunities for degradation analysis and meticulous management of batteries in practice. However, three limitations of this work remain. First, the proposed model is derived from the P2D model; thus, it is suitable only for batteries that can be described by the P2D model. Since LIB technology is under rapid development, the proposed model might not be suitable for future LIBs based on advanced material technologies and will need further upgrades.
Second, a lumped thermal model is used to predict the cell temperature. However, the lack of information on the temperature distribution can sometimes leave severe problems such as thermal runaway undetected. Since the temperature distribution is closely related to the reaction rate distribution, we use the reaction information as a proxy signal for thermal runaway as a matter of expediency in this work. Regardless, it would be better to incorporate a thermal model that can estimate the spatial temperature distribution with low complexity. Third, implementing this work in real-world applications requires the values of all the parameters involved in the model. However, some of the parameters cannot be directly measured. Thus, a specific parameter identification method should be developed to ensure the practicability of this work. In future work, we plan to focus on addressing the above three challenges.

Figure 1: LIB cell structure and working mechanism.

Figure 2: Parameters involved in constructing the proposed electro-chemical model. (a) Parameter categories. (b) Equilibrium potentials of commonly used electrode materials at 298 K.

… is the activation energy, and the quantities subscripted "ref" are the values at the reference temperature, i.e., 298 K. ΔΦ_e(t) = Φ_e(L^±, t) − Φ_e(0^±, t). Since the current densities in the solid phase and the solution phase obey KCL, the boundary conditions in the negative and positive electrodes are i_s(0^−, t) = i_s(L^+, t) = I(t)/A^± and i_s(0^+, t) = i_s(L^−, t) = 0; thus, the solution-phase current density at the electrode boundaries follows as the complement of i_s. Substituting the above terms into Eq. (10) yields the interface conditions c_e(L^−, t) = c_e(0^sep, t), c_e(L^sep, t) = c_e(0^+, t), D_eff,−(t) ∂c_e/∂x (L^−, t) = D_eff,sep(t) ∂c_e/∂x (0^sep, t), and D_eff,sep(t) ∂c_e/∂x (L^sep, t) = D_eff,+(t) ∂c_e/∂x (0^+, t). Substituting the parabolic expressions of c_e into the above boundary conditions yields Eq. (14). The quadratic coefficients a^−(t) and a^+(t) can then be expressed accordingly; the value in the frequency domain at ω = 0 equals lim_{s→0} (·), with (·)(0^−, t) = (·)(L^−, t) = 0 and (·)(0^+, t) = (·)(L^+, t) = 0.
Thus, the expressions of a^−(t) and a^+(t) can be obtained by integrating the right-hand sides in Eqs. (26)-(27).

Figure 3: Sketches of the modelling approach and the iterative step.

The values of c_ss(x, t_{k-1}), T(t_{k-1}), c_e^±(t_{k-1}), j(x, t_{k-1}) and c̄_s(x, t_{k-1}) are known, plotted as green boxes with dash-dotted boundaries, as shown in Fig. 3b. Based on these values and the latest inputs I(t_k) and T_amb(t_k), the time-variant parameters are updated via Eqs. (6)-(9) first. The intensity variables are updated via Eqs. (28), (33) and (41) next. They are plotted as green boxes with solid boundaries, as shown in Fig. 3b.

2: Set the cut-off voltages V_min and V_max, and solve the stoichiometry regions of the electrodes, θ^±_min and θ^±_max, via Eq. (44).
3: Calculate the SOC-OCV curve via Eq. (45).
4: Measure the open circuit voltage U_OCV(t_0) and the ambient temperature T_amb(t_0).
5: Calculate SOC_0 by interpolation in the SOC-OCV curve.
6: Initialize c_ss(x, t_0) and c̄_s(x, t_0) via Eq. (46), initialize c_e^±(t_0) via Eq. (47), initialize T(t_0) at T_amb(t_0), initialize Δĉ^±_s(t_0) at 0, and initialize V_t(t_0) at U_OCV(t_0).
7: for each k = 1, 2, ⋯ do
8: Acquire I(t_k), T_amb(t_k) and Δt for the current simulation step.
9: Update the time-variant parameters, including U^±_OCP(x, t_k), based on c_ss(x, t_{k-1}), c̄_s(x, t_{k-1}), c_e(x, t_{k-1}) and T(t_{k-1}) via Eqs. (6)-(9).
10: Calculate the intensity states based on the states at t_{k-1} and I(t_k) via Eq. (33) and Eq. (41).
11: Calculate the inertial states c_e^±(t_k), c_ss(x, t_k), c̄_s(x, t_k) and T(t_k) based on their values at the end of the last step and the intensity-state values in the current step via Eq. (43).
12: Calculate the terminal voltage at the end of the current step, V_t(t_k).
13: Measure the battery voltage at the end of the current step, V̂_t(t_k).

Scenario nos. 1-3 and nos. 7-8 test the model under the galvanostatic discharging protocol; the difference lies in the current amplitude or the ambient temperature. Scenario no.
4 tests the model under the standard constant-current constant-voltage charging protocol (CCCV). Scenario no. 5 tests the model under the alternate charging and discharging protocol (ACC), where the current amplitude also varies when the current direction switches. Scenario no. 6 tests the model under the random discharging current protocol (RC). The dynamic current profiles of the latter three scenarios are shown in Figs. 6a-6c.

Figure 6: Working profiles of protocols with time-variant currents.
Figure 7: Trajectories along the thickness of the negative electrode in the NCM811 battery under the ACC and RC protocols.
Figure 8: Trajectories along the thickness of the positive and negative electrodes in LFPO, NCM523 and NCM811 cells under different protocols.
Figure 9: Trajectories along the thickness of the positive and negative electrodes in LFPO, NCM523 and NCM811 cells under different protocols.
Figure 10: Trajectories along the thickness of the positive and negative electrodes in LFPO, NCM523 and NCM811 cells under different protocols.
Figure 11: Trajectories of the outputs of LFPO, NCM523 and NCM811 cells under different protocols.
Figure 12: Output trajectories of LFPO, NCM523 and NCM811 cells under different protocols when the battery is wrongly initialized.

Table 1: Parameter settings of the lithium-ion battery cell used in this work (superscripts: a = assumed, m = manufactured, p = material properties).
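The galvanostatic, ACC and RC protocols named above can be sketched as simple current-profile generators. Everything here (function names, amplitudes, durations, the sign convention of positive current = discharge) is illustrative and not the paper's actual test settings:

```python
# Illustrative current-profile generators for the test protocols.
# Amplitudes, lengths and the random seed are made-up placeholder values.
import random

def galvanostatic(i_amp, n):
    """Constant-current discharge: the same current at every step."""
    return [i_amp] * n

def acc(i_amp, n, period):
    """Alternate charge/discharge: flip the current sign every `period` steps."""
    return [i_amp if (k // period) % 2 == 0 else -i_amp for k in range(n)]

def rc(i_max, n, seed=0):
    """Random discharging current, uniformly drawn in [0, i_max]."""
    rng = random.Random(seed)
    return [rng.uniform(0.0, i_max) for _ in range(n)]

profile = acc(2.0, n=6, period=2)
print(profile)   # [2.0, 2.0, -2.0, -2.0, 2.0, 2.0]
```

Feeding such profiles to the simulation step by step reproduces the kind of time-variant loading used in scenarios 4-6.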
p = material properties.

Table 1 entries (benchmark → set values):
m: LFPO 3.69×10⁻², NCM523 3.95×10⁻², NCM811 3.85×10⁻²
m (+/−/sep): 7.75×10⁻⁵, 8.1×10⁻⁵, 2×10⁻⁵
m (+/−/sep/surf): 6.1×10⁻², 6.41×10⁻², 6.36×10⁻², 4.4×10⁻³
p (+): LFPO 1.25×10⁻¹⁵ [48], NCM 1-10×10⁻¹⁴ [8, 10, 43, 56] → Eq. (8)
p (−): C 3.9×10⁻¹⁴-5.5×10⁻¹⁴ [17, 20, 41, 47, 48, 54] → Eq. (8)
p: 2.6-7.5×10⁻¹⁰ [17, 21, 32, 41, 43, 47, 54] → Eq. (6)
a (+): LFPO 10.8 [48], NCM 1-68 [8, 26, 43, 57] → 3.8 (all cells)
a (−): 100 [8, 17, 20, 32, 41, 47, 54] → 100
p: 3.46 [24] → Eq. (7)
a (+): 0.36-0.4 [32, 43, 47, 48, 58] → 0.38
a: → 0.0064
a (+): → 1.3×10⁻⁴
a (−): 0.001-0.1 [43, 56, 57] → 3.3×10⁻⁴
a (+): LFPO 0.2-1.7×10⁻⁷ [48], NCM 1-18×10⁻⁶ [10, 43, 56, 58] → LFPO 5.2×10⁻⁸, NCM523/NCM811 5×10⁻⁶
a (−): 1-12.5×10⁻⁶ [8, 17, 20, 21, 41, 47, 48, 54] → 7.5×10⁻⁶
p (+): LFPO 157.7×10⁻³, NCM523 96.5×10⁻³, NCM811 97.3×10⁻³
p (−): 72.06×10⁻³
p (+): LFPO 3.6×10³, NCM523/NCM811 4.8×10³
p (−): 2.24×10³
m (+): 0.27-0.45 [43, 56] → LFPO 0.4461, NCM523 0.4401, NCM811 0.5038
m (−): 0.26-0.5 [43, 56] → LFPO 0.4733, NCM523/NCM811 0.4893
m (sep): 0.4-0.55 [43, 56] → 0.4
m (+): 0.35-0.5 [43, 56] → LFPO 0.4928, NCM523 0.4806, NCM811 0.4258
m (−): 0.4-0.5 [43, 56] → LFPO 0.489, NCM523/NCM811 0.4742
m: 1000-1200 [17, 32, 20, 21, 41, 47] → 1200
a: 746-998 [23, 54] → 1000
a (h): 5-20 [17, 23] → 20
p (+): LFPO 9.65×10⁻⁸ [48], NCM 9.65-96.5×10⁻⁷ [43, 56] → Eq. (9)
p (−): 1.7-9.6×10⁻⁶ [43, 17, 20, 41, 47] → Eq. (9)
a: 1.5-4.1 [17, 20, 21, 47, 48] → 1.5

end if; 24: end for.
Flowchart: START → set parameters → set operating regions → obtain the SOC-OCV curve → measure the initial output → calculate the initial states; then, in a loop: measure the input → update the internal states by Fig. 3b → predict the output → measure the output → calculate the ideal correction term (zero it if oscillation is detected) → calculate the actual correction term → correct the states →
filter the states in the current moving window → if the simulation stops, exit; otherwise repeat the loop.

Table 2: Fitted coefficients in the expressions of the temperature-dependent parameters and of d ln f±/d ln c_e (columns: LFPO(+), NCM523(+), NCM811(+), Graphite(−)):
0, -7349, -7330, 19626
30011, -313, -309, 19626
0, -2.05×10⁻¹⁴, -2.05×10⁻¹⁴, -2.4×10⁻¹⁴
8×10⁻¹⁸, 2.65×10⁻¹⁴, 2.65×10⁻¹⁴, 2.9×10⁻¹⁴
31997, 51997, 51997, 67995
5.3×10⁻⁶, 2.3×10⁻⁶, 2.6×10⁻⁶, 2.3×10⁻⁵
d ln f±/d ln c_e = 0.55(c_e/1000)² + 1.08(c_e/1000) − 0.44

Table 4: Run time (s) for simulations of different cells in the eight scenarios (nos. 1-8).
LFPO: operating time 3465, 1665.5, 768, 4194.7, 1000, 217, 3321, 3508; P2D model 1157.04, 634.79, 483.24, 1393.88, 603.45, 239.04, 585.47, 2176.61; simplified model 1.31, 1.25, 0.96, 1.46, 1.73, 0.72, 1.25, 1.32.
NCM523: operating time 3472, 1653.5, 762, 5261.5, 1000, 217, 3245, 3519; P2D model 337.98, 328.02, 256.73, 341.10, 418.89, 162.14, 307.37, 335.26; simplified model 1.33, 1.27, 0.98, 1.37, 1.63, 0.65, 1.27, 1.37.
NCM811: operating time 3483, 1661.5, 767.7, 5247.8, 1000, 217, 3277, 3528; P2D model 343.35, 329.85, 263.01, 344.77, 417.47, 162.59, 330.28, 366.18; simplified model 1.36, 1.29, 1.00, 1.39, 1.64, 0.65, 1.29, 1.39.

Table 5: Prediction accuracy for LFPO, NCM523 and NCM811 cells (columns: negative electrode at x = 0, L/3, 2L/3, L; positive electrode at x = 0, L/3, 2L/3, L; separator at x = 0, L/2, L).
LFPO: RMSE 19.294, 16.816, 12.113, 12.182 | 66.354, 36.795, 34.220, 39.391 | 13.833, 12.908, 11.744; MAE 16.124, 14.097, 10.078, 9.884 | 63.037, 34.388, 27.890, 33.160 | 11.158, 10.257, 9.268.
NCM523: RMSE 14.537, 10.660, 4.767, 9.147 | 82.830, 46.466, 29.666, 14.579 | 10.499, 9.655, 8.561; MAE 11.905, 8.822, 4.343, 8.088 | 79.458, 44.162, 27.835, 12.840 | 9.459, 8.552, 7.311.
NCM811: RMSE 15.169, 11.217, 4.405, 8.476 | 82.993, 37.941, 23.754, 11.111 | 9.758, 8.896, 7.849; MAE 12.568, 9.400, 4.039, 7.463 | 79.656, 35.973, 22.170, 9.511 | 8.782, 7.854, 6.674.
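The predict-correct cycle of the flowchart above (predict the output, measure it, form a correction term, zero it when oscillation is detected, correct the states, then filter over a moving window) can be sketched as follows. Every function body and number is a hypothetical placeholder; only the control flow follows the text:

```python
# Control-flow sketch of the closed-loop correction described in the
# flowchart. The "state" is a scalar stand-in for the internal state vector,
# and gain/threshold/window are illustrative placeholder values.

def run_loop(measurements, gain=0.5, osc_threshold=1.0, window=3):
    state = 0.0
    corrections, history = [], []
    for y_meas in measurements:
        y_pred = state                 # predict the output (placeholder model)
        ideal = y_meas - y_pred        # ideal correction term
        if abs(ideal) > osc_threshold:
            ideal = 0.0                # zero the term when oscillation is detected
        state += gain * ideal          # correct the states
        history.append(state)
        # filter the states over the current moving window
        state = sum(history[-window:]) / len(history[-window:])
        corrections.append(ideal)
    return state, corrections

final, corr = run_loop([0.2, 0.4, 0.6, 5.0])
```

In this toy run the last measurement (5.0) trips the oscillation check, so its correction term is zeroed and the state is only smoothed, which is the stabilising behaviour the flowchart aims at.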
The proposed model performs better than the other two models at all points along the electrode thickness.

Prediction accuracy of the stoichiometry (rows: model; columns: R², RMSE, MAE at x = 0, L/3, 2L/3, L):
Proposed: R² 0.9952, 0.9975, 0.9992, 0.9973; RMSE 23.10, 17.48, 17.24, 35.24; MAE 18.42, 14.23, 14.48, 27.94.
Advanced ESP [21, 40]: R² 0.9905, 0.9959, 0.9988, 0.9954; RMSE 35.50, 25.52, 27.72, 53.07; MAE 27.61, 19.22, 18.58, 39.82.
Classic ESP: R² 0.9086, 0.9553, 0.9915, 0.8503; RMSE 212.5, 146.9, 77.12, 381.2; MAE 183.8, 126.1, 66.88, 330.7.
Graphite (NCM523):
Proposed: R² 0.9879, 0.9936, 0.9963, 0.9901; RMSE 50.90, 38.90, 41.37, 85.63; MAE 43.23, 33.53, 34.82, 68.37.
Advanced ESP [21, 40]: R² 0.9712, 0.9874, 0.9969, 0.9923; RMSE 59.72, 43.93, 41.96, 71.77; MAE 49.48, 37.52, 34.49, 55.55.
Classic ESP: R² 0.7948, 0.9256, 0.9752, 0.7514; RMSE 396.7, 262.6, 166.8, 693.2; MAE 354.4, 232.4, 153.3, 632.4.
Graphite (NCM811):
Proposed: R² 0.9913, 0.9957, 0.9990, 0.9944; RMSE 67.88, 53.19, 36.34, 64.34; MAE 57.67, 45.76, 31.17, 51.25.
Advanced ESP [21, 40]: R² 0.9813, 0.9929, 0.9986, 0.9947; RMSE 69.15, 51.73, 43.50, 66.10; MAE 58.65, 44.90, 36.59, 51.43.
Classic ESP: R² 0.7485, 0.9026, 0.9844, 0.7719; RMSE 417.3, 282.5, 144.5, 671.3; MAE 375.7, 253.2, 129.0, 607.1.
NCM523:
Proposed: R² 0.9978, 0.9974, 0.9972, 0.9971; RMSE 26.78, 25.98, 24.73, 25.40; MAE 26.24, 25.64, 24.38, 24.96.
Advanced ESP [21, 40]: R² 0.9983, 0.9974, 0.9971, 0.9969; RMSE 21.13, 25.82, 25.96, 26.64; MAE 20.83, 25.50, 25.58, 26.17.
Classic ESP: R² 0.9957, 0.9983, 0.9935, 0.9914; RMSE 67.24, 17.48, 49.42, 59.67; MAE 57.40, 16.67, 46.64, 55.60.
NCM811:
Proposed: R² 0.9979, 0.9972, 0.9968, 0.9964; RMSE 27.57, 29.12, 30.07, 31.48; MAE 27.32, 28.88, 29.75, 31.09.
Advanced ESP [21, 40]: R² 0.9982, 0.9972, 0.9967, 0.9963; RMSE 24.56, 29.01, 30.71, 32.12; MAE 24.35, 28.77, 30.37, 31.71.
Classic ESP: R² 0.9969, 0.9982, 0.9931, 0.9911; RMSE 38.65, 22.31, 44.94, 50.70; MAE 35.79, 22.09, 43.72, 48.91.

The prediction accuracy of the surface stoichiometry is shown in Table 8.
Similarly, the surface stoichiometry is used to represent the surface concentration for ease of comparison.

Panels: (a) stoichiometry at points in the negative electrode of NCM811, ACC; (b) at points in the positive electrode of NCM811, ACC; (c) at points in the negative electrode of NCM523, RC; (d) at points in the positive electrode of NCM523, RC; (e) at the negative electrode boundary of LFPO, galvanostatic; (f) at the negative electrode boundary of NCM523, galvanostatic; (g) at the negative electrode boundary of NCM811, galvanostatic.
Table 8: Prediction accuracy of the surface stoichiometry in electrodes of LFPO, NCM523, and NCM811 cells (columns: R², RMSE (×10⁻⁴), MAE (×10⁻⁴) at x = 0, L/3, 2L/3, L).
Graphite (LFPO):
Proposed: R² 0.9688, 0.9731, 0.9777, 0.9761; RMSE 80.05, 75.01, 79.75, 120.2; MAE 55.22, 52.41, 59.37, 91.10.
Advanced ESP [21, 40]: R² 0.9520, 0.9631, 0.9748, 0.9733; RMSE 105.2, 93.79, 100.6, 134.4; MAE 70.83, 62.06, 64.69, 101.9.
Classic ESP: R² 0.3102, 0.4429, 0.7719, 0.7696; RMSE 420.5, 354.2, 263.92, 518.6; MAE 338.9, 280.1, 212.4, 421.7.
Graphite (NCM523):
Proposed: R² 0.9319, 0.9454, 0.9587, 0.9493; RMSE 165.4, 160.1, 177.1, 253.8; MAE 124.9, 118.8, 121.5, 158.6.
Advanced ESP [21, 40]: R² 0.8803, 0.9168, 0.9529, 0.9458; RMSE 197.0, 183.9, 187.9, 263.5; MAE 152.2, 137.8, 126.8, 171.6.
Classic ESP: R² -2.1964, -1.1228, 0.3186, 0.5626; RMSE 949.8, 830.5, 653.3, 926.6; MAE 737.7, 607.4, 481.4, 773.7.
Graphite (NCM811):
Proposed: R² 0.9392, 0.9510, 0.9626, 0.9533; RMSE 170.8, 162.7, 172.2, 240.9; MAE 130.6, 121.6, 120.6, 153.7.
Advanced ESP [21, 40]: R² 0.8930, 0.9265, 0.9576, 0.9493; RMSE 197.2, 181.9, 182.3, 253.8; MAE 155.1, 138.3, 127.1, 172.6.
Classic ESP: R² -2.1919, -1.1150, 0.3289, 0.5756; RMSE 952.4, 831.5, 646.5, 913.6; MAE 746.5, 614.5, 468.6, 757.7.
NCM523:
Proposed: R² 0.9932, 0.9934, 0.9935, 0.9934; RMSE 29.27, 29.28, 30.20, 30.39; MAE 24.00, 24.80, 25.60, 25.68.
Advanced ESP [21, 40]: R² 0.9928, 0.9928, 0.9927, 0.9927; RMSE 32.12, 31.04, 31.76, 31.94; MAE 27.33, 26.22, 26.79, 26.82.
Classic ESP: R² 0.9561, 0.9544, 0.9461, 0.9432; RMSE 114.7, 61.57, 84.00, 93.55; MAE 96.22, 47.95, 73.13, 82.58.
NCM811:
Proposed: R² 0.9943, 0.9940, 0.9936, 0.9934; RMSE 32.46, 32.79, 33.25, 33.57; MAE 27.10, 27.83, 28.33, 28.59.
Advanced ESP [21, 40]: R² 0.9938, 0.9934, 0.9930, 0.9927; RMSE 35.05, 34.61, 34.96, 35.20; MAE 29.54, 29.34, 29.69, 29.87.
Classic ESP: R² 0.9627, 0.9608, 0.9527, 0.9500; RMSE 82.19, 63.23, 79.76, 85.04; MAE 69.82, 48.14, 68.89, 74.22.
Panels: (a) surface stoichiometry at points in the negative electrode of NCM811, ACC; (b) at points in the positive electrode of NCM811, ACC; (c) at points in the negative electrode of NCM523, RC; (d) at points in the positive electrode of NCM523, RC; (e) at the negative electrode boundary of LFPO, galvanostatic; (f) at the negative electrode boundary of NCM523, galvanostatic; (g) at the negative electrode boundary of NCM811, galvanostatic.
Table 9: Prediction accuracy of the terminal voltage of LFPO, NCM523, and NCM811 cells.
LFPO: Proposed R² 0.979, RMSE 0.01371, MAE 0.00774; Advanced ESP [21, 40] R² 0.983, RMSE 0.01536, MAE 0.00927; Classic ESP R² 0.953, RMSE 0.02143, MAE 0.01205.
NCM523: Proposed R² 0.983, RMSE 0.02459, MAE 0.00843; Advanced ESP [21, 40] R² 0.974, RMSE 0.03023, MAE 0.01617; Classic ESP R² 0.949, RMSE 0.02636, MAE 0.01859.
NCM811: Proposed R² 0.992, RMSE 0.01995, MAE 0.00771; Advanced ESP [21, 40] R² 0.984, RMSE 0.02499, MAE 0.01456; Classic ESP R² 0.955, RMSE 0.02323, MAE 0.01629.

Panels: (a) terminal voltage of LFPO, NCM523 and NCM811 cells, ACC, wrong initialization; (b) terminal voltage of LFPO, NCM523 and NCM811 cells, RC, wrong initialization; (c) terminal voltage of LFPO under different galvanostatic protocols, wrong initialization; (d) terminal voltage of NCM523 under different galvanostatic protocols, wrong initialization; (e) terminal voltage of NCM811 under different galvanostatic protocols, wrong initialization.
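The accuracy tables report R², RMSE and MAE. For reference, these are computed in the standard way; the sketch below is generic and not tied to the paper's data:

```python
# Standard accuracy metrics as used in the tables: coefficient of
# determination R^2, root-mean-square error, and mean absolute error.
import math

def metrics(y_true, y_pred):
    n = len(y_true)
    err = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in err) / n
    rmse = math.sqrt(sum(e * e for e in err) / n)
    mean = sum(y_true) / n
    ss_res = sum(e * e for e in err)                 # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return r2, rmse, mae

r2, rmse, mae = metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
print(round(r2, 3), round(rmse, 3), round(mae, 3))
```

Note that R² can be negative when the model fits worse than the mean of the data, which is exactly what the Classic ESP rows in Table 8 show.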
Panels: (f) stoichiometry at the negative electrode boundary of LFPO under different galvanostatic protocols, wrong initialization; (g) at the negative electrode boundary of NCM523 under different galvanostatic protocols, wrong initialization.

References
S. Li, H. He, J. Li, Big data driven lithium-ion battery modeling method based on SDAE-ELM algorithm and data pre-processing technology, Applied Energy 242 (2019) 1259-1273.
X. Hu, S. Li, H. Peng, A comparative study of equivalent circuit models for Li-ion batteries, Journal of Power Sources 198 (2012) 359-367.
X. Hu, R. Xiong, B. Egardt, Model-Based Dynamic Power Assessment of Lithium-Ion Batteries Considering Different Operating Conditions, IEEE Transactions on Industrial Informatics 10 (2014) 1948-1959.
X. Ding, D. Zhang, J. Cheng, B. Wang, P. C. K.
Luk, An improved Thevenin model of lithium-ion battery with high accuracy for electric vehicles, Applied Energy 254 (2019) 113615.
M. Doyle, T. F. Fuller, J. Newman, Modeling of galvanostatic charge and discharge of the lithium/polymer/insertion cell, Journal of The Electrochemical Society 140 (1993) 1526-1533.
N. Legrand, S. Raël, B. Knosp, M. Hinaje, P. Desprez, F. Lapicque, Including double-layer capacitance in lithium-ion battery mathematical models, Journal of Power Sources 251 (2014) 370-378.
Z. Chu, G. L. Plett, M. S. Trimboli, M. Ouyang, A control-oriented electrochemical model for lithium-ion battery, Part I: Lumped-parameter reduced-order model with constant phase element, Journal of Energy Storage 25 (2019) 100828.
Q. Zhang, D. Wang, B. Yang, X. Cui, X. Li, Electrochemical model of lithium-ion battery for wide frequency range applications, Electrochimica Acta 343 (2020) 136094.
M. Farkhondeh, C. Delacourt, Mathematical Modeling of Commercial LiFePO4 Electrodes Based on Variable Solid-State Diffusivity, Journal of The Electrochemical Society 159 (2011) A177-A192.
Y. Gao, C. Zhu, X. Zhang, B. Guo, Implementation and evaluation of a practical electrochemical-thermal model of lithium-ion batteries for EV battery management system, Energy 221 (2021) 119688.
R. Xiong, L. Li, Z. Li, Q. Yu, H. Mu, An electrochemical model based degradation state identification method of Lithium-ion battery for all-climate electric vehicles application, Applied Energy 219 (2018) 264-275.
C. M. Doyle, Design and simulation of lithium rechargeable batteries, Lawrence Berkeley National Laboratory (2010).
F. Ringbeck, M. Garbade, D. U. Sauer, Uncertainty-aware state estimation for electrochemical model-based fast charging control of lithium-ion batteries, Journal of Power Sources 470 (2020) 228221.
M. G. Hennessy, I. R. Moyles, Asymptotic reduction and homogenization of a thermo-electrochemical model for a lithium-ion battery, Applied Mathematical Modelling 80 (2020) 724-754.
L. Cai, R. E. White, Reduction of model order based on proper orthogonal decomposition for lithium-ion battery simulations, Journal of The Electrochemical Society 156 (2009) A154.
Y. Zhao, S.-Y. Choe, J. Kee, Modeling of degradation effects and its integration into electrochemical reduced order model for Li(MnNiCo)O2/graphite polymer battery for real time applications, Electrochimica Acta 270 (2018) 440-452.
C. Zou, C. Manzie, D. Nesic, A Framework for Simplification of PDE-Based Lithium-Ion Battery Models, IEEE Transactions on Control Systems Technology 24 (2016) 1594-1609.
C. Lyu, Y. Song, J. Zheng, W. Luo, G. Hinds, J. Li, L. Wang, In situ monitoring of lithium-ion battery degradation using an electrochemical model, Applied Energy 250 (2019) 685-696.
Y. Hu, Y. Yin, Y. Bi, S.-Y. Choe, A control oriented reduced order electrochemical model considering variable diffusivity of lithium ions in solid, Journal of Power Sources 468 (2020) 228322.
S. Khaleghi Rahimian, S. Rayman, R. E. White, Extension of physics-based single particle model for higher charge-discharge rates, Journal of Power Sources 224 (2013) 180-194.
X. Han, M. Ouyang, L. Lu, J. Li, Simplification of physics-based electrochemical model for lithium ion battery on electric vehicle. Part I: Diffusion simplification and single particle model, Journal of Power Sources 278 (2015) 802-813.
L. Wu, K. Liu, H. Pang, Evaluation and observability analysis of an improved reduced-order electrochemical model for lithium-ion battery, Electrochimica Acta 368 (2021) 137604.
D. Wang, H. Huang, Z. Tang, Q. Zhang, B. Yang, B. Zhang, A lithium-ion battery electrochemical-thermal model for a wide temperature range applications, Electrochimica Acta 362 (2020) 137118.
C. Li, N. Cui, C. Wang, C. Zhang, Reduced-order electrochemical model for lithium-ion battery with domain decomposition and polynomial approximation methods, Energy 221 (2021) 119662.
Y. Li, M. Vilathgamuwa, S. S. Choi, T. W. Farrell, N. T. Tran, J. Teague, Development of a degradation-conscious physics-based lithium-ion battery model for use in power system planning studies, Applied Energy 248 (2019) 512-525.
Y. Bi, S.-Y. Choe, An adaptive sigma-point Kalman filter with state equality constraints for online state-of-charge estimation of a Li(NiMnCo)O2/Carbon battery using a reduced-order electrochemical model, Applied Energy 258 (2020) 113925.
G. Fan, Systematic parameter identification of a control-oriented electrochemical battery model and its application for state of charge estimation at various operating conditions, Journal of Power Sources 470 (2020) 228153.
T.-S. Dao, C. P. Vyasarayani, J. McPhee, Simplification and order reduction of lithium-ion battery model based on porous-electrode theory, Journal of Power Sources 198 (2012) 329-337.
J. Li, D. Wang, L. Deng, Z. Cui, C. Lyu, L. Wang, M. Pecht, Aging modes analysis and physical parameter identification based on a simplified electrochemical model for lithium-ion batteries, Journal of Energy Storage 31 (2020) 101538.
J. C. Forman, S. Bashash, J. L. Stein, H. K. Fathy, Reduction of an electrochemistry-based li-ion battery model via quasi-linearization and Padé approximation, Journal of The Electrochemical Society 158 (2011) A93.
Z. Deng, X. Hu, X. Lin, L. Xu, J. Li, W. Guo, A reduced-order electrochemical model for all-solid-state batteries, IEEE Transactions on Transportation Electrification 7 (2021) 464-473.
K. A. Smith, C. D. Rahn, C.-Y. Wang, Control oriented 1D electrochemical model of lithium ion battery, Energy Conversion and Management 48 (2007) 2565-2578.
X. Li, G. Fan, K. Pan, G. Wei, C. Zhu, G. Rizzoni, M. Canova, A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part I: Model development and observability analysis, Journal of Power Sources 367 (2017) 187-201.
F. Feng, S. Teng, K. Liu, J. Xie, Y. Xie, B. Liu, K. Li, Co-estimation of lithium-ion battery state of charge and state of temperature based on a hybrid electrochemical-thermal-neural-network model, Journal of Power Sources 455 (2020) 227935.
Y. Wang, M. Li, Z. Chen, Experimental study of fractional-order models for lithium-ion battery and ultra-capacitor: Modeling, system identification, and validation, Applied Energy 278 (2020) 115736.
J. L. Lee, A. Chemistruck, G. L. Plett, One-dimensional physics-based reduced-order model of lithium-ion dynamics, Journal of Power Sources 220 (2012) 430-448.
R. Klein, N. A. Chaturvedi, J. Christensen, J. Ahmed, R. Findeisen, A. Kojic, Electrochemical Model Based Observer Design for a Lithium-Ion Battery, IEEE Transactions on Control Systems Technology 21 (2013) 289-301.
X. Hu, S. Stanton, L. Cai, R. E. White, A linear time-invariant model for solid-phase diffusion in physics-based lithium ion cell models, Journal of Power Sources 214 (2012) 40-50.
Z. Deng, L. Yang, H. Deng, Y. Cai, D. Li, Polynomial approximation pseudo-two-dimensional battery model for online application in embedded battery management system, Energy 142 (2018) 838-850.
X. Han, M. Ouyang, L. Lu, J. Li, Simplification of physics-based electrochemical model for lithium ion battery on electric vehicle. Part II: Pseudo-two-dimensional model simplification and state of charge estimation, Journal of Power Sources 278 (2015) 814-825.
W. Luo, C. Lyu, L. Wang, L. Zhang, A new extension of physics-based single particle model for higher charge-discharge rates, Journal of Power Sources 241 (2013) 295-310.
D. Li, L. Yang, C. Li, Control-oriented thermal-electrochemical modeling and validation of large size prismatic lithium battery for commercial applications, Energy 214 (2021) 119057.
W. Li, Y. Fan, F. Ringbeck, D. Jöst, X. Han, M. Ouyang, D. U. Sauer, Electrochemical model-based state estimation for lithium-ion batteries with adaptive unscented Kalman filter, Journal of Power Sources 476 (2020) 228534.
V. R. Subramanian, V. Boovaragavan, V. Ramadesigan, M. Arabandi, Mathematical model reformulation for lithium-ion battery simulations: Galvanostatic boundary conditions, Journal of The Electrochemical Society 156 (2009) A260.
A. Allam, S. Onori, Online Capacity Estimation for Lithium-Ion Battery Cells via an Electrochemical Model-Based Adaptive Interconnected Observer, IEEE Transactions on Control Systems Technology (2020) 1-16.
L. O. Valøen, J. N. Reimers, Transport properties of LiPF6-based li-ion battery electrolytes, Journal of The Electrochemical Society 152 (2005) A882.
M. Torchio, L. Magni, R. B. Gopaluni, R. D. Braatz, D. M. Raimondo, LIONSIMBA: A Matlab Framework Based on a Finite Volume Model Suitable for Li-Ion Battery Design, Simulation, and Control, Journal of The Electrochemical Society 163 (2016) A1192-A1205.
M. Xu, R. Wang, P. Zhao, X. Wang, Fast charging optimization for lithium-ion batteries based on dynamic programming algorithm and electrochemical-thermal-capacity fade coupled model, Journal of Power Sources 438 (2019) 227015.
S. Renganathan, R. E. White, Semianalytical method of solution for solid phase diffusion in lithium ion battery electrodes: Variable diffusion coefficient, Journal of Power Sources 196 (2011) 442-448.
J. Kalupson, G. Luo, C. Shaffer, Autolion™: A thermally coupled simulation tool for automotive li-ion batteries, SAE Technical Papers 2 (2013); SAE 2013 World Congress and Exhibition, 16-04-2013 through 18-04-2013.
D. Noren, M. Hoffman, Clarifying the Butler-Volmer equation and related approximations for calculating activation losses in solid oxide fuel cell models, Journal of Power Sources 152 (2005) 175-181.
K. E. Thomas, J. Newman, R. M. Darling, Mathematical Modeling of Lithium Batteries, in: W. A. van Schalkwijk, B. Scrosati (Eds.), Advances in Lithium-Ion Batteries, Springer US, Boston, MA, 2002, pp. 345-392.
L. Rao, J. Newman, Heat-generation rate and general energy balance for insertion battery systems, Journal of The Electrochemical Society 144 (1997) 2697-2704.
G. G. Botte, B. A. Johnson, R. E. White, Influence of Some Design Variables on the Thermal Behavior of a Lithium-Ion Cell, Journal of The Electrochemical Society 146 (1999) 914-923.
W. B. Gu, C. Y. Wang, Thermal-Electrochemical Modeling of Battery Systems, Journal of The Electrochemical Society 147 (2000) 2910.
W. Li, D. Cao, D. Jöst, F. Ringbeck, M. Kuipers, F. Frie, D. U. Sauer, Parameter sensitivity analysis of electrochemical model-based battery management systems for lithium-ion batteries, Applied Energy 269 (2020) 115104.
Y. Yin, Y. Hu, S.-Y. Choe, H. Cho, W. T. Joe, New fast charging method of lithium-ion batteries based on a reduced order electrochemical model considering side reaction, Journal of Power Sources 423 (2019) 367-379.
X. Zhao, Y. Yin, Y. Hu, S.-Y. Choe, Electrochemical-thermal modeling of lithium plating/stripping of Li(Ni0.6Mn0.2Co0.2)O2/Carbon lithium-ion batteries at subzero ambient temperatures, Journal of Power Sources 418 (2019) 61-73.
W. H. Press, S. A. Teukolsky, Savitzky-Golay smoothing filters, Computers in Physics 4 (1990) 669-672.
X. Han, L. Lu, Y. Zheng, X. Feng, Z. Li, J. Li, M. Ouyang, A review on the key issues of the lithium ion battery degradation among the whole life cycle, eTransportation 1 (2019) 100005.
[]
[]
[ "Jianlei Kong [email protected] \nSchool of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China\n", "Xiaomeng Fan \nSchool of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China\n", "Xue-Bo Jin [email protected] \nSchool of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China\n", "Min Zuo [email protected]. 
\nSchool of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China\n", "XiaomengJianlei Kong \nSchool of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China\n", "Fan Xuebo \nSchool of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China\n", "Jin \nSchool of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China\n", "Tingli Su \nSchool of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China\n", "Min Zuo \nSchool of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China\n" ]
[ "School of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China", "School of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China", "School of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China", "School of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China", "School of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China", "School of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China", "School of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing 
Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China", "School of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China", "School of Artificial Intelligence\nSchool of E-Commerce and Logistics, Beijing Technology and Business University\nNational Engineering Laboratory for Agri-Product Quality Traceability\nBeijing Technology and Business University\n100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China" ]
[]
Accurate traffic flow prediction, a hotspot of intelligent transportation research, is the prerequisite for mastering traffic conditions and making travel plans. The speed of traffic flow can be affected by road conditions, weather, holidays, etc. Furthermore, the sensors that capture information about traffic flow are interfered with by environmental factors such as illumination, collection time, occlusion, etc. Therefore, the traffic flow in a practical transportation system is complicated, uncertain, and challenging to predict accurately. This paper proposes a deep encoder-decoder prediction framework based on variational Bayesian inference. A Bayesian neural network is constructed by combining variational inference with gated recurrent units (GRU) and used as the deep neural network unit of the encoder-decoder framework to mine the intrinsic dynamics of traffic flow. Then, variational inference is introduced into the multi-head attention mechanism to avoid noise-induced deterioration of prediction accuracy. The proposed model achieves superior prediction performance on the Guangzhou urban traffic flow dataset over the benchmarks, particularly for long-term prediction.
10.48550/arxiv.2212.07194
[ "https://export.arxiv.org/pdf/2212.07194v1.pdf" ]
254,636,215
2212.07194
f78c1820a2061cf2c80a0fc82dcb0d2b3c44ce2e
Jianlei Kong [email protected] School of Artificial Intelligence School of E-Commerce and Logistics, Beijing Technology and Business University National Engineering Laboratory for Agri-Product Quality Traceability Beijing Technology and Business University 100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China Xiaomeng Fan School of Artificial Intelligence School of E-Commerce and Logistics, Beijing Technology and Business University National Engineering Laboratory for Agri-Product Quality Traceability Beijing Technology and Business University 100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China Xue-Bo Jin [email protected] School of Artificial Intelligence School of E-Commerce and Logistics, Beijing Technology and Business University National Engineering Laboratory for Agri-Product Quality Traceability Beijing Technology and Business University 100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China Min Zuo [email protected]. School of Artificial Intelligence School of E-Commerce and Logistics, Beijing Technology and Business University National Engineering Laboratory for Agri-Product Quality Traceability Beijing Technology and Business University 100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China XiaomengJianlei Kong School of Artificial Intelligence School of E-Commerce and Logistics, Beijing Technology and Business University National Engineering Laboratory for Agri-Product Quality Traceability Beijing Technology and Business University 100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China Fan Xuebo School of Artificial Intelligence School of E-Commerce and Logistics, Beijing Technology and Business University National Engineering Laboratory for Agri-Product Quality Traceability Beijing Technology and Business University 100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China Jin School of Artificial Intelligence School of E-Commerce and Logistics, Beijing Technology and Business 
University National Engineering Laboratory for Agri-Product Quality Traceability Beijing Technology and Business University 100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China Tingli Su School of Artificial Intelligence School of E-Commerce and Logistics, Beijing Technology and Business University National Engineering Laboratory for Agri-Product Quality Traceability Beijing Technology and Business University 100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China Min Zuo School of Artificial Intelligence School of E-Commerce and Logistics, Beijing Technology and Business University National Engineering Laboratory for Agri-Product Quality Traceability Beijing Technology and Business University 100048, 100048, 100048Beijing, Beijing, BeijingChina, China, China 1 > REPLACE THIS LINE WITH YOUR MANUSCRIPT ID NUMBER (DOUBLE-CLICK HERE TO EDIT) <Index Terms-Traffic flow predictiontime-series data predictionvariational Bayesian inferencemulti-head attentiondeep learningencoder-decoder Accurate traffic flow prediction, a hotspot for intelligent transportation research, is the prerequisite for mastering traffic and making travel plans. The speed of traffic flow can be affected by roads condition, weather, holidays, etc. Furthermore, the sensors to catch the information about traffic flow will be interfered with by environmental factors such as illumination, collection time, occlusion, etc. Therefore, the traffic flow in the practical transportation system is complicated, uncertain, and challenging to predict accurately. This paper proposes a deep encoder-decoder prediction framework based on variational Bayesian inference. A Bayesian neural network is constructed by combining variational inference with gated recurrent units (GRU) and used as the deep neural network unit of the encoder-decoder framework to mine the intrinsic dynamics of traffic flow. 
Then, the variational inference is introduced into the multi-head attention mechanism to avoid noise-induced deterioration of prediction accuracy. The proposed model achieves superior prediction performance on the Guangzhou urban traffic flow dataset over the benchmarks, particularly when the long-term prediction. I. INTRODUCTION N transportation systems, sensors are widely deployed to monitor information such as vehicle speed and traffic flow-for example, urban traffic speed [1], passenger flow [2], and urban rail traffic data [3] [4]. Traffic flow prediction (TFP) is the foundation of intelligent transportation systems(ITS) and is critical to implementing traffic control. Accurate TFP can help residents in travel planning and traffic control to reduce congestion and accidents. The rapid growth in the number of private vehicles has caused more traffic accidents and traffic jams, significantly increasing the uncertainty and randomness of traffic conditions and making it challenging to predict traffic flow accurately. On the other hand, with the continuous development of computers, sensors, and cloud storage, the collected data about traffic flow can be stored in a complete record according to fixed intervals. Through data processing, the trends for traffic flow can be predicted, providing a basis for regulation and control of the traffic system and a reference for trip planning and decisionmaking [5], for example, helping travelers save money and time with better route guidance. Compared with short-term traffic prediction, long-term travel planning and management are needed, so medium, and long-term TFP is an essential and meaningful research area [6]. Government agencies can develop route planning based on long-term prediction to reduce traffic congestion and accidents. Traffic time series data are usually noisy and highly nonlinear [7]. Deep learning techniques have recently been applied to TFP [8], especially long-term prediction. 
Still, deep learning prediction methods are an open issue. Since traffic sensors introduce uncertainty and noise during data collection, such as abnormal driver operations, sensor failures, and weather changes, the prediction performance of traditional statistical methods degrades. At the same time, the models become overfitted, leading to poor robustness during the training process. In addition, traffic data is periodic and exhibits trends, making it difficult for classic deep neural networks to directly discover the intrinsic features of traffic flow data [9]. Further, for a classical neural network, errors accumulate in long-term prediction, resulting in low prediction accuracy. This paper proposes a deep encoder-decoder prediction framework based on variational Bayesian inference, aiming to overcome data noise and prediction error accumulation. The innovations of this paper are as follows. Firstly, we introduce Bayesian inference into recurrent neural networks and propose a Bayesian gated recurrent unit (BGRU) based on variational inference. The weights and biases of the neural network are changed to Gaussian distributions with mean and variance, which addresses the poor generalization ability and low prediction accuracy of the model caused by data volatility and noise. Secondly, we incorporate the attention mechanism into the encoder-decoder framework. It can extract valid information in different subspaces and more accurately select the hidden states at all time steps for long-term prediction. Meanwhile, the distributed multi-head attention mechanism can reduce the computational cost and improve the computational efficiency and accuracy of the model.
The rest of the paper is arranged as follows. Section II gives an overview of related work on traffic time series modeling and prediction. Section III describes the general architecture of the model and its details. Section IV presents the experimental results and analysis; the results show that the proposed model has good prediction performance compared with baseline methods, and its validity is demonstrated on the Guangzhou urban traffic flow dataset. Finally, we give the study's conclusions and discuss future research in Section V.

II. RELATED WORK

A. Machine Learning and Deep Neural Networks

Machine learning has self-learning and nonlinear fitting capabilities, including support vector regression (SVR), matrix factorization (MF), Gaussian processes (GP), and artificial neural networks (ANN) [10]-[13]. But due to uncertainty and noise, the modeling ability of machine learning is limited and not accurate enough for long-term time-series prediction. In traffic time-series data prediction, ANNs cannot capture changes in the data series when the data changes rapidly within a short period [14]-[16]. In recent years, neural networks have been pushed to new heights with the rise of deep learning. Deep learning methods can fit complex nonlinear data and thus have a strong ability to learn from data [17], as in natural language processing (NLP) [18], image recognition [19], and medical diagnosis [20]. Recurrent neural networks (RNNs) have attracted much attention due to their flexibility in capturing nonlinear relationships in time series data. However, traditional RNNs have difficulty capturing long-term dependencies due to the vanishing gradient problem. In recent years, long short-term memory networks (LSTM) and GRU have overcome this limitation [21]. Oliveira [22] uses MLP and LSTM to predict the traffic flow of an interstate highway in New York, U.S.
Meng [23] proposed an LSTM for traffic speed prediction with a dynamic time-warping model, which performed better than traditional LSTM. Chen [24] designed a hybrid traffic flow prediction model based on LSTM and a sparse auto-encoder, which achieves a compression ratio of 20% for high-dimensional, large-scale traffic data, significantly reducing the computational complexity of TFP. Zheng [25] developed an attention-based Conv-LSTM module to predict spatial and short-term traffic flow. In practice, the performance of the above deep neural networks decreases rapidly in long-term prediction due to their limited capability to model time series data.

B. Encoder-Decoder Framework

The encoder-decoder is a sequence-to-sequence structure [26] built from deep neural networks (e.g., CNN, RNN, or LSTM). The encoder-decoder network breaks through the limitation of the traditional RNN model to fixed-size input and output sequences, and it can extract the features of the input time series data [27]. However, as the length of the input time series increases, later information overwrites earlier information, so the coded vector no longer reflects the whole input and the prediction ability gradually decreases. Therefore, to improve the performance of the encoder-decoder network, an attention mechanism is introduced [28]. This mechanism assigns different attention weights to all time steps and adaptively selects the encoder hidden states, which makes it possible to extract highly time-dependent useful features of the reference sequence. Attention mechanisms have been widely used in time series prediction in recent years. Jin [29] combined wavelet decomposition and bidirectional LSTM networks with attention mechanisms to predict the temperature and humidity of a smart greenhouse.
Lai [30] proposed a deep learning framework for multivariate time series prediction, which exploits the advantages of convolutional and recurrent layers to discover local dependence patterns between multidimensional input variables. Wang [31] integrated the attention mechanism into a seq-to-seq deep learning architecture for long-term traffic flow prediction.

C. Bayesian Neural Network Modeling

A Bayesian neural network (BNN) [32] is an inferential neural network with uncertainty. It uses Bayesian theory and variational inference to introduce prior probabilities into the weights and biases of the neural network, and it continuously adjusts these priors through backpropagation to extract the distributional characteristics hidden in the data and to predict the data distribution. In recent years, Bayesian neural networks have been used in image detection [33], NLP [34], time series prediction [35], and other fields. Zhan [36] applied a variational Bayesian neural network (VBNN) to flood forecasting on the upper Yangtze River. Song [37] used variational methods to construct Bayesian linear layers to predict the maximum tsunami height on the Pacific coastline. Liu [38] used Bayesian long short-term memory networks to implement fault warnings for automobile turbines. The Bayesian neural network can extract and process the hidden information of a time series and gives a better description of the uncertainty of each prediction point, which is a fundamental guideline for the regulation and planning of the predicted system [39].

III. METHODOLOGY

This paper proposes a Bayesian encoder-decoder multi-head attention model (BEDMA), which uses the Bayesian encoder-decoder model as the main structure and takes the Bayesian GRU as the basic unit.
Incorporating the Bayesian attention mechanism, the model constructs a seq-to-seq framework based on a Bayesian encoder layer, a Bayesian attention layer, and a Bayesian decoder layer. In this section, we introduce these components in detail.

A. Bayesian Neural Network

GRU [40] is a neural network proposed by Cho in 2014 as an improvement of LSTM. It merges the forget gate and input gate into the update gate and merges the memory unit and hidden layer into the reset gate, which can better capture dependencies in long series. The forward propagation process of the GRU is as follows:

$z_t = \sigma(W_z [h_{t-1}, x_t] + b_z)$  (1)

$r_t = \sigma(W_r [h_{t-1}, x_t] + b_r)$  (2)

$\tilde{h}_t = \tanh(W_h [r_t \odot h_{t-1}, x_t] + b_h)$  (3)

$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$  (4)

where $W_r$ and $b_r$ are the weights and biases of the reset gate, which decides whether the previous hidden state $h_{t-1}$ is ignored; $W_z$ and $b_z$ are the weights and biases of the update gate, which controls how much information from the previous hidden state carries over to the current hidden state $h_t$; and $W_h$ and $b_h$ are used to calculate the current candidate hidden state $\tilde{h}_t$. The weights $W$ and biases $b$ of the traditional GRU are fixed values, which makes it overfit noisy traffic flow data so that it cannot fit other data or predict future observations well. Inspired by the BNN, we combine variational inference and the GRU into a Bayesian GRU (BGRU), so that its weights and biases become samples from Gaussian distributions, avoiding overfitting on noisy data. A BNN places a probability distribution over the network weights and outputs predictions under that distribution. The BNN can provide a probabilistic solution to the uncertainty problem in the training process of conventional networks.
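As a minimal NumPy sketch (not the authors' PyTorch implementation; function and variable names are illustrative), one GRU step following Eqs. (1)-(4) can be written as:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(h_prev, x_t, Wz, bz, Wr, br, Wh, bh):
    """One GRU step, Eqs. (1)-(4): update gate, reset gate,
    candidate state, and convex combination of old and candidate state."""
    hx = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    z = sigmoid(Wz @ hx + bz)                 # Eq. (1), update gate
    r = sigmoid(Wr @ hx + br)                 # Eq. (2), reset gate
    hx_reset = np.concatenate([r * h_prev, x_t])
    h_tilde = np.tanh(Wh @ hx_reset + bh)     # Eq. (3), candidate state
    return (1.0 - z) * h_prev + z * h_tilde   # Eq. (4), new hidden state
```

Each weight matrix has shape (hidden size, hidden size + input size); iterating this cell over a sequence gives the encoder's hidden states.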
The neural network treats the dataset as a probabilistic model $P(y|x, w)$: given the inputs $x$ and weights $w$, the network assigns a probability to each output $y$. The BNN computes the posterior distribution of the weights given the training data, $P(w|D)$, and predicts via

$P(y|x) = \mathbb{E}_{P(w|D)}[P(y|x, w)]$  (5)

where $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)\}$ is the dataset. An infinite number of weight configurations can be drawn from the posterior distribution, each predicting the unknown label $y$ for a given test item $x$. Therefore, taking an expectation under the posterior distribution of the weights is equivalent to using an infinite ensemble of neural networks. Variational learning finds the parameters $\theta = (\mu, \sigma)$ of a distribution $q(w|\theta)$ over the weights such that the Kullback-Leibler (KL) divergence from the Bayesian posterior is minimized:

$\theta^* = \arg\min_\theta \mathrm{KL}[q(w|\theta) \,\|\, P(w|D)]$
$= \arg\min_\theta \int q(w|\theta) \log \frac{q(w|\theta)}{P(w|D)} \, dw$
$= \arg\min_\theta \int q(w|\theta) \log \frac{q(w|\theta)}{P(w) P(D|w)} \, dw$
$= \arg\min_\theta \mathrm{KL}[q(w|\theta) \,\|\, P(w)] - \mathbb{E}_{q(w|\theta)}[\log P(D|w)]$  (6)

where $P(D)$ can be ignored because it does not depend on $\theta$. Assuming the parameters are independent of each other, the loss function of the network can be approximated using Monte Carlo sampling:

$F(D, \theta) \approx \frac{1}{n} \sum_{i=1}^{n} \left[ \log q(w^{(i)}|\theta) - \log P(w^{(i)}) - \log P(D|w^{(i)}) \right]$  (7)

The BGRU has the same chain structure as the GRU, with the difference that each weight $W$ and bias $b$ in the BGRU is a distribution with trainable parameters. Their distributions are initialized as standard normals, and the optimal means and variances are obtained by training. When using the BGRU, predictions are obtained by sampling from the distribution of the weights.
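The reparameterized weight draw of Eqs. (8)-(9) and the Monte Carlo loss of Eq. (7) can be sketched as follows (NumPy; names are illustrative, and a standard-normal prior is assumed). The softplus transform log(1 + exp(rho)) keeps the standard deviation positive:

```python
import numpy as np

def sample_weight(mu, rho, rng):
    """Reparameterized draw w = mu + softplus(rho) * eps, eps ~ N(0, 1)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.log1p(np.exp(rho)) * eps

def log_normal_pdf(w, mu, sigma):
    """Element-wise log density of N(mu, sigma^2)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (w - mu)**2 / (2 * sigma**2)

def mc_variational_loss(mu, rho, log_lik_fn, rng, n_samples=10):
    """Monte Carlo estimate of Eq. (7):
    F ~ (1/n) sum_i [log q(w_i|theta) - log P(w_i) - log P(D|w_i)],
    with a standard-normal prior P(w); log_lik_fn(w) returns log P(D|w)."""
    total = 0.0
    for _ in range(n_samples):
        w = sample_weight(mu, rho, rng)
        sigma = np.log1p(np.exp(rho))
        log_q = log_normal_pdf(w, mu, sigma).sum()     # log q(w|theta)
        log_prior = log_normal_pdf(w, 0.0, 1.0).sum()  # log P(w)
        total += log_q - log_prior - log_lik_fn(w)     # - log P(D|w)
    return total / n_samples
```

When mu = 0 and softplus(rho) = 1, the variational posterior equals the prior and the KL part of the estimate vanishes exactly.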
The mean of the predictions is used as the final prediction, and the variance of the multiple predictions is used as the confidence interval. The structure of the BGRU is shown in Fig. 1. Taking $W_r$ and $b_r$ as an example, let

$W^{(i)} = \mu^{(i)} + \log(1 + \exp(\rho^{(i)})) \cdot \epsilon$, $\epsilon \sim \mathcal{N}(0, 1)$  (8)

$b^{(i)} = \mu^{(i)} + \log(1 + \exp(\rho^{(i)})) \cdot \epsilon$, $\epsilon \sim \mathcal{N}(0, 1)$  (9)

We use the BGRU as the encoder of the codec structure; the encoder network consists of multiple layers of BGRUs, and its forward propagation mirrors Eqs. (1)-(3) with the sampled weights, ending with

$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$  (12)

The loss function is defined as follows:

$\mathrm{loss}(\theta) = \log q(w|\theta) - \log p(w)$  (14)

where $p(w)$ is a custom prior distribution and $q(w|\theta)$, with $\theta = (\mu, \sigma^2)$, is the posterior distribution. This allows the BGRU to learn the distributional features, while the target given during training is a sequence of deterministic values, so a deterministic error term needs to be incorporated:

$\mathrm{loss}(\theta) = \mathrm{mse}(y, \hat{y}) + \frac{1}{\alpha}\left[\log q(w|\theta) - \log p(w)\right]$  (15)

where $\hat{y}$ is the prediction of the output under the current weight sample, and $\alpha$ is the weight coefficient, equal to the product of the number of training samples and the batch size.

B. Bayesian Multi-head Attention

The encoder-decoder model can extract the input data's information and is widely used in NLP. But the coding information will be lost for a long data series, resulting in the coding vector not reflecting the information of the whole input and thus reducing the prediction accuracy. The attention mechanism is a modification of the encoder-decoder model. Compared with the base encoder-decoder model, the output of the attention mechanism in the encoder is involved in the computation of each step in the decoder.
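One plausible reading of the combined loss in Eq. (15), with the complexity term log q(w|theta) - log p(w) down-weighted by alpha, can be sketched as follows (the exact weighting is an assumption here, as the published equation is ambiguous; names are illustrative):

```python
import numpy as np

def bgru_loss(y_true, y_pred, log_q, log_prior, alpha):
    """Deterministic MSE on the prediction plus the complexity term
    log q(w|theta) - log p(w), scaled down by alpha (assumed to be the
    product of the number of training samples and the batch size)."""
    mse = np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)
    return mse + (log_q - log_prior) / alpha
```

Scaling the complexity term down by a data-dependent constant is the usual way to balance a per-sample fit term against a whole-network KL penalty.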
The encoder-decoder model with attention mechanism removes the bottleneck of fixed-length coding, and little information is lost between the encoder and the decoder. The structure of the encoder-decoder model based on the attention mechanism is shown in Fig. 2; the alignment scores $e_{ij}$ are parameters to be learned, and the softmax function is applied to the $e_{ij}$ to ensure that all attention weights sum to 1. Fig. 3 shows the structure of the Bayesian multi-head attention mechanism, obtained by transforming the linear layers of multi-head attention into Bayesian linear layers. Let $h$ be the number of heads. The encoder output $H$ is transformed $h$ times, and the input of the $i$-th head is $Q_i, K_i, V_i \in \mathbb{R}^{t \times d}$:

$Q_i = H w_{iq}$, $w_{iq} = \mu_i + \log(1 + \exp(\rho_i)) \cdot \epsilon$, $\epsilon \sim \mathcal{N}(0, 1)$  (19)

$K_i = H w_{ik}$, $w_{ik} = \mu_k + \log(1 + \exp(\rho_k)) \cdot \epsilon$, $\epsilon \sim \mathcal{N}(0, 1)$  (20)

$V_i = H w_{iv}$, $w_{iv} = \mu_v + \log(1 + \exp(\rho_v)) \cdot \epsilon$, $\epsilon \sim \mathcal{N}(0, 1)$  (21)

where $\mu_i, \rho_i, \mu_k, \rho_k, \mu_v, \rho_v$ are the parameters to be learned. The attention mechanism is calculated in three steps: first, the similarity between the $Q$ vector and each $K$ is calculated to obtain the corresponding weights; then, these weights are normalized using the softmax function; finally, the weights and the corresponding $V$ are weighted and summed to obtain the attention output. The attention weights and the output $head_i \in \mathbb{R}^{t \times d}$ of the $i$-th head are computed as

$\mathrm{AttentionWeight} = \mathrm{softmax}\!\left( Q K^T / \sqrt{d} \right)$  (22)

$head_i = \mathrm{softmax}\!\left( Q_i K_i^T / \sqrt{d} \right) \cdot V_i$  (23)

where $\cdot$ denotes the dot product and $head_i$ is the weighted Bayesian encoder output. In multi-head attention, $Q$, $K$ and $V$ undergo a linear transformation and are then input to the scaled dot-product attention; this scaled dot product is performed $h$ times.
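The scaled dot-product attention of Eqs. (22)-(23) and the concatenation of heads can be sketched as follows (NumPy; deterministic projection matrices are used instead of Bayesian samples, and the final output projection is omitted; names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Eqs. (22)-(23): weights = softmax(Q K^T / sqrt(d)); head = weights V."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V, weights

def multi_head_attention(H, Wq, Wk, Wv):
    """Project the encoder output H once per head (lists of projection
    matrices), attend in each subspace, and concatenate the heads."""
    heads = []
    for wq, wk, wv in zip(Wq, Wk, Wv):
        out, _ = scaled_dot_product_attention(H @ wq, H @ wk, H @ wv)
        heads.append(out)
    return np.concatenate(heads, axis=-1)
```

Each row of the attention-weight matrix sums to 1, and each head attends to a different learned subspace of the encoder output.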
The so-called multi-head means performing the scaled dot product multiple times: each scaled dot product is one head, the parameters are not shared between different heads, and $Q$, $K$ and $V$ differ for each linear transformation. After the $h$ scaled dot products, the attention results are concatenated, and the encoding vector $C$ obtained after a final linear transformation is the result of multi-head attention. Multi-head attention thus performs the dot product multiple times instead of once, and this repeated process can learn from different subspaces. Multi-head attention uses multiple queries $Q$ to extract multiple pieces of information from the input data $X^i_{input}$ by parallel computation, and each $head_i$ focuses on a different part of the input information. The distributed multi-head attention mechanism can save computational resources, reduce computational costs, and improve the computational efficiency of the model.

C. Model for Traffic Flow Prediction

The model framework is shown in Fig. 4, in which the encoder-decoder is constructed using the BGRU as the basic neural network, and the multi-head attention mechanism based on variational inference is fused into the encoder-decoder. First, a sliding window with stride $s$, input length $t$, and prediction length $\tau$ is applied to the data to obtain the network input sequence $X^i_{input} = [x_{s(i-1)+1}, \ldots, x_{s(i-1)+t}]$. The decoder output is mapped to the prediction through a Bayesian linear output layer,

$\hat{y}_t = \mathrm{relu}(W_y h_t + b_y)$, $W_y = \mu_y + \log(1 + \exp(\rho_y)) \cdot \epsilon$, $\epsilon \sim \mathcal{N}(0, 1)$  (25)

where $h_t$ is the decoder hidden state, $\mu_y, \rho_y$ are the parameters to be learned, and $\tau$ is the predicted length of the target sequence; relu is the activation function $\mathrm{relu}(x) = \max(0, x)$. The prediction sequence $\hat{Y} = [\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_\tau]$ is compared with the target sequence $X^i_{target} = [x_{s(i-1)+t+1}, \ldots, x_{s(i-1)+t+\tau}]$ to calculate the error, and error backpropagation is used to update the model parameters so as to minimize the error between the predicted and target sequences. The model is trained using the training dataset and then evaluated using the validation dataset to minimize overfitting.
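The sliding-window construction of input/target pairs described above can be sketched as follows (illustrative; the paper's stride s, input length t, and prediction length tau map to the stride, input_len, and pred_len arguments):

```python
import numpy as np

def sliding_windows(series, input_len, pred_len, stride=1):
    """Cut a 1-D series into (input, target) pairs: each window of
    input_len points is paired with the next pred_len points."""
    X, Y = [], []
    for start in range(0, len(series) - input_len - pred_len + 1, stride):
        X.append(series[start:start + input_len])
        Y.append(series[start + input_len:start + input_len + pred_len])
    return np.asarray(X), np.asarray(Y)
```

Applied to a traffic-speed series, X feeds the Bayesian encoder and Y is the target sequence used to compute the prediction error.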
All models are optimized with the adaptive moment estimation (Adam) algorithm, which uses momentum and adaptive learning rates to accelerate convergence with high computational efficiency and a small memory footprint. Algorithm 1 summarizes the training process: the MAE loss is minimized with Adam, and training stops early once the stopping criterion is reached.

IV. EXPERIMENTAL AND RESULT ANALYSIS

A. Dataset

The experiments use the Guangzhou urban traffic flow dataset, which consists of traffic speeds on urban highways and main roads from August 1, 2016 to September 30, 2016, with a missing-data rate of 1.29%. Traffic flow data from the first 48 days were used as training samples, and data from the following 13 days as testing samples. Part of the data is visualized in Fig. 5.

B. Evaluation Metrics

The experiments use root mean squared error (RMSE), Pearson's correlation coefficient R, and symmetric mean absolute percentage error (SMAPE) to evaluate the models. RMSE sums the squared distances between the predicted and actual traffic speeds and reflects the dispersion of the prediction; the smaller the RMSE, the smaller the deviation. R measures the correlation between the predicted and actual values; its maximum is 1, and the closer it is to 1, the better the regression fits the actual values. SMAPE measures the relative deviation between the predicted and actual traffic speeds. The three indicators are

    RMSE = sqrt( (1/T) * Σ_t (y_t − ŷ_t)^2 )

    R = Σ_t (y_t − ȳ)(ŷ_t − ŷ̄) / sqrt( Σ_t (y_t − ȳ)^2 * Σ_t (ŷ_t − ŷ̄)^2 )    (26)

    SMAPE = (1/T) * Σ_t |y_t − ŷ_t| / ((|y_t| + |ŷ_t|) / 2)

where y_t is the actual traffic speed, ŷ_t is the prediction, T is the number of samples, ȳ is the average of the actual traffic speeds, and ŷ̄ is the average of the predictions.
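The three evaluation metrics follow directly from their standard definitions; the short series below is made-up data used only to exercise the formulas.

```python
import math

def rmse(y, yhat):
    """Root mean squared error over paired series."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def pearson_r(y, yhat):
    """Pearson correlation coefficient R between actual and predicted."""
    my = sum(y) / len(y)
    mp = sum(yhat) / len(yhat)
    num = sum((a - my) * (b - mp) for a, b in zip(y, yhat))
    den = math.sqrt(sum((a - my) ** 2 for a in y) *
                    sum((b - mp) ** 2 for b in yhat))
    return num / den

def smape(y, yhat):
    """Symmetric mean absolute percentage error (as a fraction)."""
    return sum(abs(a - b) / ((abs(a) + abs(b)) / 2)
               for a, b in zip(y, yhat)) / len(y)

y    = [40.0, 42.0, 39.0, 45.0]   # made-up actual speeds (km/h)
yhat = [41.0, 41.5, 40.0, 44.0]   # made-up predictions
```

A quick sanity check on the implementation: the Pearson R of a series with itself is exactly 1, matching the metric's stated maximum.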
C. Comparison and Analysis

To verify the performance of the model, we compare the proposed model with Linear [41], RNN [42], GRU [43], LSTM [44], En-Decoder [45], Attention [46], and Multi-head attention (Mhatt) [47]. The training parameters are set as follows: the number of iterations is 100, the optimizer is Adam, and the learning rate is 0.001; the number of layers is 2, with 64 units per layer; the batch size is 12; the attention model has one encoder and one decoder, and the number of attention heads is 2. All models were implemented in Python 3.8 with the PyTorch deep learning framework, and all experiments ran on a server with an Ubuntu 20.04 64-bit operating system, an Intel Core i7-6800 CPU @ 3.4 GHz, and an NVIDIA GTX1080Ti 16G GPU. To verify the stability of the models, each model was run 10 times independently. Prediction performance is evaluated with the metrics of Section IV.B.

1) Comparison of Different Prediction Intervals. We developed the BEDMA model for medium- and long-term traffic speed prediction (10 min / 30 min / 60 min); the performance comparison is reported in Table I. The proposed model outperforms the baseline models on every index. Its RMSE and SMAPE are lower than those of the other models, indicating the smallest gap between predicted and actual values, and its R is greater, showing the highest goodness-of-fit. Specifically, Fig. 6 shows that in the short-term (10-minute) prediction the linear model performs worst, while the Mhatt model performs best among the baselines, with an RMSE of 1.4946 and a SMAPE of 0.0503. The proposed BEDMA model performs well in both short- and long-term prediction: its RMSE and SMAPE are lower than those of the Mhatt model by about 1% and 3%, respectively.
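The percentage gains quoted above can be reproduced as relative reductions from the Table I values (Mhatt: RMSE 1.4946, SMAPE 0.0503; BEDMA: RMSE 1.4772, SMAPE 0.0486); the helper name below is ours, used only for this check.

```python
def relative_reduction(baseline, proposed):
    """Relative improvement of `proposed` over `baseline`, as a fraction."""
    return (baseline - proposed) / baseline

# 10-minute interval, Mhatt vs. BEDMA (values from Table I).
rmse_gain = relative_reduction(1.4946, 1.4772)    # ~1.2% lower RMSE
smape_gain = relative_reduction(0.0503, 0.0486)   # ~3.4% lower SMAPE
```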
To further test the model's effectiveness in long-term prediction, we also predicted the traffic speed for the following 30 and 60 minutes. The accuracy of every model decreases as the prediction horizon grows, but BEDMA still outperforms the other baselines and shows better stability in long-term prediction.

2) Ablation Study. To validate the proposed Bayesian encoder-decoder model based on the attention mechanism, its components are evaluated on the traffic speed dataset with the same setup as the comparison experiments. We compared the GRU, Bayesian GRU (BGRU), Mhatt, and BEDMA models at each prediction step (10 min / 30 min / 60 min). As Table II shows, the RMSE of BGRU is 1.3% lower than that of GRU when predicting the traffic flow for the next 10 minutes, and as the prediction step increases, the RMSE of BGRU is 3.1% lower than that of GRU for the 60-minute horizon. These results show that adding variational inference improves the model's fitting ability: it fits the noisy data well and reduces the error. Comparing Mhatt and BEDMA, the RMSE of BEDMA is more than 1.1% lower than that of Mhatt at every prediction time, and its R is higher, which indicates that incorporating variational inference into the attention mechanism improves the feature extraction ability of the model. Comparing GRU and BEDMA, the RMSE of BEDMA is 2%-6.4% lower than that of GRU; combining variational inference with the GRU and attention substantially improves the model's ability to handle noisy traffic flow data and reduces errors. In addition, we visualize the traffic flow prediction results for three roads and analyze the model's performance on the different datasets. Fig. 7 shows the curves of the models for prediction intervals of 10, 30 and 60 minutes. In summary, the experimental results demonstrate that the model has excellent long-term prediction performance across prediction intervals.

3) Performance Under Different Roads. We selected three roads for traffic flow prediction; Table III shows the 10-minute prediction results for the three roads. The error indicators are all at a low level, and the R indexes all reach above 0.97, indicating that the predictions fit the actual traffic flow well. Fig. 8-Fig. 10 show the prediction results separately; the predicted and actual curves fit well. In summary, the experiments demonstrate that the model has excellent prediction performance on different roads.

V. CONCLUSION

In this paper, an attention encoder-decoder model based on a Bayesian neural network is proposed, in which the Bayesian neural network serves as the primary unit within the encoder-decoder framework while incorporating a multi-head attention structure based on variational inference. The model is validated on the Guangzhou urban traffic flow dataset and shows better prediction performance than the baseline models at every prediction interval. For intervals of 10, 30 and 60 minutes, the R evaluation index is 0.9721, 0.9415 and 0.9006, respectively, and the RMSE decreases to 1.4772, 2.2211, and 2.8813. The ablation experiments further confirmed that including variational inference improves the R evaluation metric while reducing each error metric. The framework predicts nonlinear and noisy traffic flow data better than the benchmark models: it fully exploits the internal correlation of the data and accurately predicts the changing trend.
In future work, the model will be further optimized and its application extended to other types of time-series data; at the same time, the introduction of Bayesian inference inevitably increases the computational cost, which also needs to be addressed.

This work was supported in part by the National Key Research and Development Program of China under Grant 2021YFD2100605; in part by the National Natural Science Foundation of China under Grant 62006008, Grant 62173007, and Grant 61903009; and in part by the Beijing Natural Science Foundation under Grant 6214034. (Corresponding authors: Xuebo Jin; Min Zuo.)

Let W^(i)_n denote the n-th sampling weight of the i-th layer and b^(i)_n the corresponding bias; both follow a normal distribution. To ensure that the variance is non-negative and the standard deviation is differentiable, the Gaussian distribution is parameterized through its mean and a softplus-transformed scale.

Fig. 1. BGRU structure, where W_r and b_r are sampling points of a Gaussian distribution with mean μ_z and covariance σ_z.

Fig. 2. Structure of the attention-based encoder-decoder model; the attention weight α_ij of each annotation h_j is computed from the scores e_ij.

Fig. 3. Structure of the Bayesian multi-head attention model. In the multi-head attention mechanism, "multi-head" refers to multiple scaled dot-product computations over the input Query (Q), Key (K) and Value (V); the weights of different heads are independent of each other and are not shared.

The traffic flow data, whose length is that of the input sequence, is transmitted to an encoder consisting of BGRUs. During propagation, the hidden states of the BNN layer are output at each time step to obtain the Bayesian encoder output, which is then transmitted to the Bayesian attention layer. Through the Bayesian attention layer, feature information of the traffic flow data is extracted and weighted with different degrees of attention, further enhancing the information extraction capability.
The outputs of the heads are then concatenated and passed through a Bayesian linear transformation to obtain the encoding vector C, whose transformation weights are parameters to be learned. The Bayesian decoder also consists of multiple layers of BGRUs. The coding vector C is input to the Bayesian decoder; after passing through the layers, the hidden state h' of the last time step of the Bayesian decoder is output, and a nonlinear transformation yields the prediction sequence.

Algorithm 1. Training algorithm of the BEDMA model. Input: the traffic flow data X_input, window_size, epochs, batch_size. Output: the prediction and the parameters {W, b, μ, σ}. The MAE loss is optimized with Adam; training stops when the stopping criterion is reached.

Fig. 4. Bayesian encoder-decoder multi-head attention model framework.
Fig. 5. Data visualization of part of the Guangzhou traffic flow dataset.
Fig. 6. Traffic speed prediction evaluation indexes for different intervals.
Fig. 7. Evaluation metrics of each model for different prediction intervals.
Fig. 8. Prediction result of traffic flow on Road 1 over a 10-minute interval.
Fig. 9. Prediction result of traffic speed on Road 2 over a 10-minute interval.
Fig. 10. Prediction result of traffic speed on Road 3 over a 10-minute interval.

TABLE I
EVALUATION OF DIFFERENT PREDICTION MODELS

                 10 minute                 30 minute                 60 minute
Model            RMSE    SMAPE   R         RMSE    SMAPE   R         RMSE    SMAPE   R
Linear [46]      1.5745  0.0518  0.9689    2.7582  0.0812  0.9337    3.3732  0.1003  0.8677
RNN [33]         1.5734  0.0514  0.9691    2.4028  0.0727  0.9326    3.0949  0.0909  0.8821
GRU [34]         1.5292  0.0503  0.9704    2.2934  0.0689  0.9392    3.1159  0.0919  0.8896
LSTM [35]        1.5082  0.0497  0.9716    2.3102  0.0691  0.9384    3.0768  0.0905  0.8899
En-Decoder [36]  1.5028  0.0499  0.9714    2.2808  0.0684  0.9401    2.9617  0.0877  0.8955
Attention [37]   1.4968  0.0496  0.9714    2.2929  0.0691  0.9380    2.9453  0.0876  0.8969
Mhatt [47]       1.4946  0.0503  0.9718    2.2449  0.0680  0.9403    2.9142  0.0875  0.8969
BEDMA            1.4772  0.0486  0.9721    2.2211  0.0661  0.9415    2.8813  0.0858  0.9006

TABLE II
EXPERIMENTAL RESULTS OF ABLATION STUDY

         10 minute          30 minute          60 minute
Model    RMSE    R          RMSE    R          RMSE    R
GRU      1.5082  0.9716     2.3102  0.9384     3.0768  0.8899
BGRU     1.4881  0.9719     2.2759  0.9409     2.9820  0.8952
Mhatt    1.4946  0.9718     2.2449  0.9403     2.9142  0.8969
BEDMA    1.4774  0.9721     2.2176  0.9413     2.8813  0.9006

TABLE III
PREDICTION PERFORMANCE FOR THREE ROADS

         10 minute          30 minute          60 minute
Model    RMSE    R          RMSE    R          RMSE    R
Road 1   1.3833  0.9746     2.1043  0.9502     2.9234  0.9093
Road 2   1.1261  0.9805     1.7683  0.9584     2.3587  0.9292
Road 3   1.3344  0.9762     2.1045  0.9476     2.7638  0.9202

REFERENCES

[1] C. Ma, Y. Zhao, G. Dai, X. Xu, and S.-C. Wong, "A Novel STFSA-CNN-GRU Hybrid Model for Short-Term Traffic Speed Prediction," IEEE Trans. Intell. Transp. Syst., pp. 1-10, Feb. 2022, doi: 10.1109/TITS.2021.3117835.
[2] C. Gao, J. Xu, Q. Li, and J. Yang, "The Effect of Posted Speed Limit on the Dispersion of Traffic Flow Speed," Sustainability, vol. 11, p. 3594, 2019, doi: 10.3390/su11133594.
[3] Z. Diao et al., "A Hybrid Model for Short-Term Traffic Volume Prediction in Massive Transportation Systems," IEEE Trans. Intell. Transp. Syst., vol. 20, no. 3, pp. 935-946, Mar. 2019, doi: 10.1109/TITS.2018.2841800.
[4] M. Voort, M. Dougherty, and S. Watson, "Combining Kohonen maps with ARIMA time series models to forecast traffic flow," Transp. Res. Pt. C-Emerg. Technol., vol. 4, no. 5, pp. 307-318, 1996.
[5] Y. Hou, P. Edara, and C. Sun, "Traffic flow forecasting for urban work zones," IEEE Trans. Intell. Transp. Syst., vol. 16, no. 4, pp. 1761-1770, Aug. 2015.
[6] C. Ma, G. Dai, and J. Zhou, "Short-Term Traffic Flow Prediction for Urban Road Sections Based on Time Series Analysis and LSTM-BILSTM Method," IEEE Trans. Intell. Transp. Syst., vol. 23, no. 6, pp. 5615-5624, Jun. 2022, doi: 10.1109/TITS.2021.3055258.
[7] T. Fu, "A review on time series data mining," Eng. Appl. Artif. Intell., vol. 24, no. 1, pp. 164-181, Feb. 2011.
[8] Y. Jia, J. Wu, and M. Xu, "Traffic flow prediction with rainfall impact using a deep learning method," J. Adv. Transp., vol. 2017, Art. no. 6575947, Aug. 2017.
[9] Y. Tian and L. Pan, "Predicting short-term traffic flow by long short term memory recurrent neural network," in Proc. IEEE Int. Conf. Smart City/SocialCom/SustainCom (SmartCity), Dec. 2015, pp. 153-158.
[10] M. Castro-Neto, Y.-S. Jeong, M.-K. Jeong, and L.-D. Han, "Online-SVR for short-term traffic flow prediction under typical and atypical traffic conditions," Expert Syst. Appl., vol. 36, no. 3, pp. 6164-6173, Apr. 2009.
[11] H.-F. Yu, N. Rao, and I.-S. Dhillon, "Temporal regularized matrix factorization for high-dimensional time series prediction," in Proc. NIPS, Barcelona, Spain, 2016.
[12] K. Kumar, M. Parida, and V. Katiyar, "Short term traffic flow prediction for a non urban highway using artificial neural network," in Proc. CTRG, Agra, India, vol. 104, pp. 755-764, Dec. 2013.
[13] A. Mohammad, K. Nima, and M.-R. Mohammad, "Predicting hourly air pollutant levels using artificial neural networks coupled with uncertainty analysis by Monte Carlo simulations," Environ. Sci. Pollut. Res., vol. 20, no. 7, pp. 4777-4789, Jul. 2013.
[14] J.-C. Gamboa, "Deep Learning for Time-Series Analysis," arXiv preprint, doi: 10.48550/arXiv.1701.01887.
[15] Y. Lv, Y. Duan, W. Kang, Z. Li, and F. Wang, "Traffic flow prediction with big data: A deep learning approach," IEEE Trans. Intell. Transp. Syst., vol. 16, no. 2, pp. 865-873, Apr. 2015.
[16] X. Feng et al., "Artificial neural networks forecasting of PM2.5 pollution using air mass trajectory based geographic model and wavelet transformation," Atmos. Environ., vol. 107, pp. 118-128, Apr. 2015.
[17] J.-L. Kong et al., "A Graph-Related High-Order Neural Network Architecture via Feature Aggregation Enhancement for Identification Application of Diseases and Pests," Comput. Intell. Neurosci., vol. 2022, Art. no. 4391491, May 2022.
[18] T. Young, D. Hazarika, S. Poria, and E. Cambria, "Recent Trends in Deep Learning Based Natural Language Processing [Review Article]," IEEE Comput. Intell. Mag., vol. 13, no. 3, pp. 55-75, Aug. 2018.
[19] C.-S. Won, "Multi-Scale CNN for Fine-Grained Image Recognition," IEEE Access, vol. 8, pp. 116663-116674, 2020.
[20] Y. Wang et al., "Semantic-Powered Explainable Model-Free Few-Shot Learning Scheme of Diagnosing COVID-19 on Chest X-ray," IEEE J. Biomed. Health Inform., 2022, doi: 10.1109/JBHI.2022.3205167.
[21] X.-B. Jin, J.-S. Zhang, J.-L. Kong, Y.-T. Bai, and T.-L. Su, "A Reversible Automatic Selection Normalization (RASN) Deep Network for Predicting in the Smart Agriculture System," Agronomy-Basel, vol. 12, no. 3, Mar. 2022, doi: 10.3390/agronomy12030591.
[22] D.-D. Oliveira et al., "Forecasting vehicular traffic flow using MLP and LSTM," Neural Comput. Appl., vol. 33, no. 24, pp. 17245-17256, Dec. 2021.
[23] X.-W. Meng et al., "D-LSTM: Short-Term Road Traffic Speed Prediction Model Based on GPS Positioning Data," IEEE Trans. Intell. Transp. Syst., vol. 23, no. 3, pp. 2021-2030, Mar. 2022.
[24] C. Chen, Z. Liu, S. Wan, J. Luan, and Q. Pei, "Traffic Flow Prediction Based on Deep Learning in Internet of Vehicles," IEEE Trans. Intell. Transp. Syst., vol. 22, no. 6, pp. 3776-3789, Jun. 2021.
[25] H. Zheng, F. Lin, X. Feng, and Y. Chen, "A Hybrid Deep Learning Model with Attention-Based Conv-LSTM Networks for Short-Term Traffic Flow Prediction," IEEE Trans. Intell. Transp. Syst., vol. 22, no. 11, pp. 6910-6920, Nov. 2021.
[26] N. Jaitly et al., "An online sequence-to-sequence model using partial conditioning," in Proc. NIPS, Barcelona, Spain, 2016.
[27] K. Cho et al., "On the Properties of Neural Machine Translation: Encoder-Decoder Approaches," in Proc. Comp. Sci., 2014.
[28] Z.-Y. Niu, G.-Q. Zhong, and H. Yu, "A Review on the Attention Mechanism of Deep Learning," Neurocomputing, vol. 452, pp. 48-62, 2021.
[29] X.-B. Jin et al., "Deep-learning temporal predictor via bi-directional self-attentive encoder-decoder framework for IOT-based environmental sensing in intelligent greenhouse," Agriculture-Basel, vol. 11, no. 8, p. 802, Aug. 2021.
[30] G. Lai et al., "Modeling long- and short-term temporal patterns with deep neural networks," in Proc. Conf. ACM, New York, NY, USA, 2018.
[31] Z. Wang, X. Su, and Z. Ding, "Long-Term Traffic Prediction Based on LSTM Encoder-Decoder Architecture," IEEE Trans. Intell. Transp. Syst., vol. 22, no. 10, pp. 6561-6571, Oct. 2021.
[32] E. Goan and C. Fookes, "Bayesian Neural Networks: An Introduction and Survey," Lect. Notes Math., vol. 2259, pp. 45-87, 2020.
[33] H. Wang and D.-Y. Yeung, "A Survey on Bayesian Deep Learning," ACM Comput. Surv., p. 37, Sep. 2021.
[34] I. Chaturvedi, E. Ragusa, P. Gastaldo, R. Zunino, and E. Cambria, "Bayesian Network Based Extreme Learning Machine for Subjectivity Detection," J. Frankl. Inst.-Eng. Appl. Math., vol. 355, pp. 1780-1797, 2018.
[35] G. Liu et al., "Bayesian Long Short-term Memory Model for Fault Early Warning of Nuclear Power Turbine," IEEE Access, vol. 8, 2020.
[36] X.-Y. Zhan et al., "Variational Bayesian Neural Network for Ensemble Flood Forecasting," Water, vol. 12, no. 10, Oct. 2020.
[37] M.-J. Song and Y.-S. Cho, "Modeling maximum tsunami heights using bayesian neural networks," Atmosphere, vol. 11, no. 11, Nov. 2020.
[38] G.-J. Liu, H.-X. Gu, X.-C. Shen, and D.-D. You, "Bayesian Long Short-Term Memory Model for Fault Early Warning of Nuclear Power Turbine," IEEE Access, vol. 8, pp. 50801-50813, 2020.
[39] J. Steinbrener, K. Posch, and J. Pilz, "Measuring the Uncertainty of Predictions in Deep Neural Networks with Variational Inference," Sensors, vol. 20, no. 21, Nov. 2020.
[40] K. Cho et al., "Learning phrase representations using RNN encoder-decoder for statistical machine translation," in Proc. Conf. Empirical Methods Natural Lang. Process. (EMNLP), 2014, pp. 1724-1734.
[41] M. Lydia, S. Kumar, A. Selvakumar, and G. Kumar, "Linear and Non-linear Autoregressive Models for Short-term Wind Speed Forecasting," Energy Conv. Manag., vol. 112, pp. 115-124, 2016.
[42] J.-T. Connor, R.-D. Martin, and L.-E. Atlas, "Recurrent neural networks and robust time series prediction," IEEE Trans. Neural Netw., vol. 5, no. 2, pp. 240-254, Mar. 1994.
[43] Y.-S. Wang, W.-L. Liao, and Y.-Q. Chang, "Gated recurrent unit network-based short-term photovoltaic forecasting," Energies, vol. 11, no. 8, Aug. 2018.
[44] B.-L. Yang, S.-L. Sun, J.-Y. Li, X.-X. Lin, and Y. Tian, "Traffic flow prediction using LSTM with feature enhancement," Neurocomputing, vol. 332, pp. 320-327, Mar. 2019.
[45] I.-F. Kao, Y.-L. Zhou, L.-C. Chang, and F.-J. Chang, "Exploring a Long Short-Term Memory based encoder-decoder framework for multi-step-ahead flood forecasting," J. Hydrol., vol. 583, Apr. 2020.
[46] S.-D. Du, T.-R. Li, Y. Yang, and S.-J. Horng, "Multivariate time series forecasting via attention-based encoder-decoder framework," Neurocomputing, vol. 388, pp. 269-279, May 2020.
[47] X.-B. Jin et al., "Deep-Learning Temporal Predictor via Bidirectional Self-Attentive Encoder-Decoder Framework for IOT-Based Environmental Sensing in Intelligent Greenhouse," Agriculture, vol. 11, p. 802, 2021.
Title: News and Views
Author: Naveen A. Reddy, National Optical Astronomy Observatory
Venue: Commentary on Bouwens et al. (2011)
DOI: 10.1038/469479a
arXiv: 1102.1017 (PDF: https://arxiv.org/pdf/1102.1017v1.pdf)

Abstract: The recently refurbished Hubble Space Telescope reveals a galaxy from a time when the Universe was just 500 million years old, providing insights into the first throes of galaxy formation and the reionization of the Universe.
A central focus of cosmology is to understand how the primordial density fluctuations imprinted by the Big Bang gave rise to the galaxies and larger structures we observe today.
Just as archaeologists sift through deeper layers of sand to uncover the past, cosmologists use large telescopes and sensitive detectors to study galaxies at ever greater distances from Earth and, because of the finite speed of light, to peer farther back in time. On page 504 of this issue, Bouwens et al. 1 take another step in this direction by exploiting the deepest near-infrared images of the sky, which were obtained with the re-serviced Hubble Space Telescope and its new Wide Field Camera 2. On the basis of these data, the authors report the plausible detection of the most distant galaxy yet discovered. The galaxy would have existed when the Universe was just 4% of its current age and when one of the most important phase transitions of gas in the Universe occurred. Building on previous studies, Bouwens and colleagues used the well-established Lyman break technique 3 to select galaxies at the largest distances, or redshifts. The method relies on the absorption, by neutral hydrogen within a galaxy or by intervening hydrogen clouds, of photons that are more energetic than Lyman-α photons (10.2 eV, corresponding to a wavelength of 1,216 ångströms). The resulting decrease in flux bluewards of the Lyman-α wavelength results in a characteristic 'break' in the spectrum of a galaxy. Galaxies at different redshifts can then be located by searching for objects that are detected in one filter but that disappear, or are very faint, in a bluer filter. Until now, the primary obstacle to identifying galaxies beyond redshift 6 (when the Universe was less than 1 billion years old) has been that the Lyman break shifts to the observed near-infrared, where the emission from the sky background is several hundred times higher than it is in the visible range of the spectrum. This higher background inhibits the ability to obtain deep imaging, and has motivated observations from above Earth's atmosphere.
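The shift of the break follows from lambda_obs = (1 + z) × 1,216 ångströms; the quick check below (plain Python, illustrative only) shows why a source at redshift about 10 vanishes from optical filters and appears only in near-infrared ones.

```python
LYMAN_ALPHA_ANGSTROM = 1216.0  # rest-frame Lyman-alpha wavelength

def observed_break_angstrom(z):
    """Observed wavelength of the Lyman break for a source at redshift z."""
    return (1.0 + z) * LYMAN_ALPHA_ANGSTROM

break_z6 = observed_break_angstrom(6)    # far-red optical
break_z10 = observed_break_angstrom(10)  # near-infrared
```

At z = 6 the break sits near 8,500 ångströms, still within reach of the far-red optical; by z = 10 it has moved to about 13,400 ångströms (roughly 1.34 micrometres), squarely in the near-infrared band covered by the new Wide Field Camera.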
A breakthrough came with the installation of the Wide Field Camera on Hubble; the camera's increased field of view and sensitivity over the previous nearinfrared instrument on Hubble results in an increase by a factor of more than 30 in its capacity for finding faint galaxies at high redshift. Using multi-filter imaging from Hubble and the Lyman break technique, Bouwens and collaborators 1 report the discovery of one candidate galaxy at redshift of about 10 ( Fig. 1). Comparing the number density of galaxies at redshift 10, inferred from their observations, with that determined at lower redshifts, they find that the average galaxy increases in luminosity by more than a factor of 10 during the first 2 billion years of galaxy formation. Taken one step further, this finding suggests a close connection between galaxy formation and the assembly of dark matter in the early Universe. In contrast to the prevailing theory of cold dark matter and its relative success in reproducing the large-scale structure of the Universe, the physics of the development and evolution of visible matter is difficult to model: it depends on complex processes that govern the cooling of gas to form stars, the evolution of the stars themselves, and the feedback of energy and matter from stars and black holes. It is perhaps remarkable, therefore, that at early cosmic times the growth of galaxies seems to mirror that of the dark-matter halos in which the galaxies reside 4 . This similarity suggests that, despite the seemingly complex physics of star formation, simple gravitational theory -combined with a factor that parameterizes the efficiency of star formation (or, the fraction of gas that is converted to stars) -can provide a first-order prediction of the luminosity of a galaxy. 
Aside from probing the earliest stages of galaxy formation, a topical area of interest in cosmology is to identify the sources responsible for the transition between a neutral state of hydrogen in the Universe (roughly 300,000 years after the Big Bang) to a mostly ionized state at redshift about 6 (950 million years after the Big Bang). Bouwens and colleagues' study 1 probes galaxies at the heart of this 'reionization' epoch. Given some -albeit very uncertain -assumptions of the clumpiness of gas in the Universe and the fraction of ionizing photons that can escape galaxies, they argue that galaxies at redshift 10 may not provide enough ultraviolet flux to reionize the Universe. The dominant contributor to the ionizing flux at early cosmic epochs remains a mystery. Nonetheless, the plausible detection of a galaxy at redshift 10 suggests an onset of star formation at redshift beyond 12 (about 100 million years earlier), potentially increasing the role of galaxies in the early ionization of the Universe. Although these results 1 give us a glimpse of the earliest stages of galaxy formation, substantial uncertainties remain and more work is needed. Sample variance remains the dominant uncertainty, as a result of the small number of objects and the small field of view surveyed (equivalent to an area of about 0.6% the size of the Moon). Even more crucial, however, is the need to confirm the redshifts of these objects. The best confirmation of distance would be the detection of a strong emission line in the spectrum, such as the Lyman-α line. Detecting this line may be challenging for these 'primordial' galaxies because they are expected to be gas-rich (having not had enough time to convert a significant fraction of their gas into stars) and to be surrounded by a mostly neutral medium that resonantly scatters Lyman-α photons. The best hope is the James Webb Space Telescope (JWST). 
With its larger mirror and near-infrared-sensitive detectors, this facility will dramatically improve the situation: imaging and spectroscopy across a larger swathe of the spectrum will enable the confirmation of a spectral break or the detection of a strong emission line. Scheduled for launch in 2014, the JWST will also have the sensitivity to detect galaxies at redshift 10 that are even fainter than the one reported by Bouwens and collaborators. Studying this faint population will yield a more complete picture of their role in reionizing the Universe. The authors' preliminary foray in studying the first galaxies underscores the important role of facilities such as the JWST in revolutionizing our understanding of galaxy formation at the earliest cosmic epochs, and paves the way for a bright future in studying faint and distant galaxies. Naveen A. Reddy is at the National Optical Astronomy Observatory, Tucson, Arizona 85719, USA. e-mail: [email protected]
Figure 1: A galaxy at redshift 10. Bouwens and colleagues' search 1 for galaxies in the Hubble Ultra Deep Field has resulted in the plausible detection of the most distant galaxy yet detected. The circle marks the location of the galaxy (red blob in inset). [Credit: NASA/ESA/G. Illingworth (UCO/Lick Obs. & Univ. California, Santa Cruz)/R. Bouwens (UCO/Lick Obs. & Leiden Univ.)/HUDF09 Team]
1. Bouwens, R. J. et al. Nature 469, 504-507 (2011).
2. Kimble, R. A. et al. SPIE 7010, 1-12 (2008).
3. Steidel, C. C. & Hamilton, D. Astron. J. 104, 941-949 (1992).
4. Bouwens, R. J. et al. Astrophys. J. 686, 230-250 (2008).
[]
[ "MultiMatch: Multi-task Learning for Semi-supervised Domain Generalization", "MultiMatch: Multi-task Learning for Semi-supervised Domain Generalization" ]
[ "Lei Qi ", "Hongpeng Yang ", "Yinghuan Shi ", "Xin Geng " ]
[]
[]
Domain generalization (DG) aims at learning a model on source domains that generalizes well to the unseen target domain. Although DG has achieved great success, most existing methods require label information for all training samples in the source domains, which is time-consuming and expensive in real-world applications. In this paper, we address the semi-supervised domain generalization (SSDG) task, where only a small amount of label information is available in each source domain. To tackle the task, we first analyze the theory of multi-domain learning, which highlights that 1) mitigating the impact of the domain gap and 2) exploiting all samples to train the model can effectively reduce the generalization error in each source domain and thereby improve the quality of pseudo-labels. Based on this analysis, we propose MultiMatch, which extends FixMatch to a multi-task learning framework that produces high-quality pseudo-labels for SSDG. To be specific, we consider each training domain a single task (i.e., a local task) and combine all training domains together (i.e., the global task) to train an extra task for the unseen test domain. In the multi-task framework, we use an independent BN and classifier for each task, which effectively alleviates interference between domains during pseudo-labeling. Also, most parameters in the framework are shared and can therefore be trained sufficiently by all training samples. Moreover, to further boost pseudo-label accuracy and the model's generalization, we fuse the predictions from the global task and the local task during training and testing, respectively. A series of experiments validate the effectiveness of the proposed method, which outperforms existing semi-supervised methods and the SSDG method on several benchmark DG datasets.
10.48550/arxiv.2208.05853
[ "https://export.arxiv.org/pdf/2208.05853v1.pdf" ]
251,492,932
2208.05853
a0930beb423897cd56b987b55878c143135ea72a
MultiMatch: Multi-task Learning for Semi-supervised Domain Generalization Lei Qi, Hongpeng Yang, Yinghuan Shi, Xin Geng Index Terms—Multi-task learning, semi-supervised learning, domain generalization 
I. INTRODUCTION
Deep learning has achieved remarkable success in many application tasks [1], [2], [3], [4], such as computer vision and natural language processing. However, when there is a domain shift between the training set and the test set, a typical deep model cannot work effectively on the test data [5], [6]. Hence, a new model must be trained for every new scenario, which is infeasible in real-world applications. Recently, a new task named domain generalization (DG) has been proposed [7], in which several source domains are available during training while the test set is unknown, so the training and test samples differ in data distribution. The goal of DG is to train a generalizable model for the unseen target domain using several source domains. Several domain generalization methods have been developed to handle this issue [8], [9], [10], [11], [12], [13]. However, these methods need label information for all data from the source domains, which is expensive and time-consuming in real-world applications. In general, a few labels for a dataset are relatively easy to obtain, so the semi-supervised domain generalization (SSDG) task was recently proposed [14], where each source domain contains a small amount of labeled data while most samples are unlabeled. To tackle this task, StyleMatch [14] extends FixMatch [15] with a couple of new ingredients, enhancing diversity at the image level and the classifier level. However, this method does not handle the data-distribution discrepancy between source domains, which negatively affects the accuracy of pseudo-labels during training. 
Similarly, in conventional semi-supervised learning (SSL), most existing methods assume that all training samples are drawn from the same data distribution [16], [17], [18], [19], [20], [21], [22]. By contrast, in the semi-supervised domain generalization task the training samples come from different distributions, as illustrated in Fig. 1. Therefore, directly applying these methods to the SSDG task cannot yield accurate pseudo-labels during training, because the domain shift in the training set harms pseudo-labeling. In this paper, we mainly aim to obtain accurate pseudo-labels so as to enhance the model's discrimination and generalization in the unseen domain. We first analyze the generalization error on a domain using the theory of multi-domain learning [23]. From the upper bound of the generalization error we conclude that 1) alleviating the interference between domains and 2) using all samples to train the model can effectively reduce the upper bound of the generalization error on each domain, which means that accurate pseudo-labels can be generated. Inspired by the theory of multi-domain learning, we extend FixMatch [15] 1 to a multi-task learning method, named MultiMatch, for semi-supervised domain generalization. Specifically, we build an independent local task for each domain to mitigate interference between domains when generating pseudo-labels. In addition, we construct a joint global task for predicting the unseen target domain. In particular, each independent local task is used to produce pseudo-labels, while the joint global task is trained with these pseudo-labels. Furthermore, benefiting from the multi-task framework, we can fuse the predictions from the global and local tasks to further improve the accuracy of pseudo-labels and the generalization capability of the model in the training and test stages, respectively. 
We conduct a series of experiments on several DG benchmark datasets. The experimental results demonstrate the effectiveness of the proposed method, which yields a performance improvement over both semi-supervised learning methods and the semi-supervised DG method. Moreover, we also verify the efficacy of each module in our method. In conclusion, our main contributions can be summarized as:
• We analyze the semi-supervised domain generalization task based on the generalization error of multi-domain learning, which inspires us to design a method that obtains high-quality pseudo-labels during training.
• We propose a simple yet effective multi-task learning method (i.e., MultiMatch) for semi-supervised domain generalization, which effectively reduces interference between domains during pseudo-labeling. Also, most of the modules in the model are shared across all domains and can be sufficiently trained by all samples.
• To further promote the accuracy of pseudo-labels and the capability of the model, we propose to merge the outputs of the local task and the global task to yield a robust prediction in the training and test stages.
• We evaluate our approach on multiple standard benchmark datasets, and the results show that our approach achieves state-of-the-art accuracy. Moreover, an ablation study and further analysis are provided to validate the efficacy of our method.
The rest of this paper is organized as follows. We review related work in Section II. The proposed method is introduced in Section III. Experimental results and analysis are presented in Section IV, and Section V concludes. II. RELATED WORK In this section, we review the work related to ours, including semi-supervised learning methods and domain generalization methods. The detailed discussion is presented in the following parts. A. 
Semi-supervised Learning Semi-supervised learning has achieved remarkable performance in recent years [16], [17], [18], [24], [25], [15], [26], [27], [19], [20], [21], [22], [28]. For example, Grandvalet et al. [16] propose entropy minimization regularization, which makes it possible to incorporate unlabeled data into standard supervised learning. Laine et al. [17] develop Temporal Ensembling, which maintains an exponential moving average of the label predictions on each training example and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning on large datasets. To overcome this issue, Tarvainen et al. [18] design Mean Teacher, which averages model weights instead of label predictions. FixMatch [15] first generates pseudo-labels using the model's predictions on weakly augmented unlabeled images, and the model is then trained to predict these pseudo-labels when fed a strongly augmented version of the same images. Furthermore, Zhang et al. [22] introduce a curriculum learning approach that leverages unlabeled data according to the model's learning status. The core of the method is to flexibly adjust the thresholds for different classes at each time step to let informative unlabeled data and their pseudo-labels pass. However, the success of typical SSL largely depends on the assumption that the labeled and unlabeled data share an identical class distribution, which is hard to meet in real-world applications. A distribution mismatch between the labeled and unlabeled sets can cause severe bias in the pseudo-labels of SSL, resulting in significant performance degradation. To bridge this gap, Zhao et al. [29] put forward a new SSL framework, named Distribution Consistency SSL, which rectifies the pseudo-labels from a distribution perspective. Differently, Oh et al. 
[30] propose a general pseudo-labeling framework that class-adaptively blends the semantic pseudo-label from a similarity-based classifier with the one from the linear classifier. Different from these existing semi-supervised learning settings, we aim to address a semi-supervised task with multiple domains in the training procedure, in which the domain shift causes interference between domains during pseudo-labeling. B. Domain Generalization Recently, several methods have also been developed to address the domain generalization problem in computer vision tasks [8], [9], [10], [31], [32], [33], [34], [13], [35], [36], such as classification and semantic segmentation. Inspired by domain adaptation methods, some works based on domain alignment [34], [37], [7], [38], [39], [40], [41] aim at mapping all samples from different domains into the same subspace to alleviate the difference in data distribution across domains. For example, Muandet et al. [7] introduce a kernel-based optimization algorithm to learn a domain-invariant feature representation and enhance the discriminative capability of the feature representation. However, this method cannot guarantee the consistency of the conditional distribution, so Zhao et al. [37] develop an entropy regularization term that measures the dependency between the learned features and the class labels, which effectively ensures the conditional invariance of the learned features, so that the classifier can correctly classify features from different domains. Besides, Gong et al. [40] exploit CycleGAN [42] to generate new styles of images that are not seen in the training set, which smoothly bridges the gap between source and target domains to boost the generalization of the model. Rahman et al. [41] also employ a GAN to generate synthetic data and then mitigate the domain discrepancy to achieve domain generalization. Differently, Li et al. 
[39] adopt an adversarial auto-encoder learning framework to learn a generalized latent feature representation in the hidden layer, use Maximum Mean Discrepancy to align the source domains, and then match the aligned distribution to an arbitrary prior distribution via adversarial feature learning. In this way, the features of the hidden layer generalize better to other unknown domains. Rahman et al. [38] incorporate a correlation alignment module along with adversarial learning to obtain a more domain-agnostic model, owing to the improved ability to reduce domain discrepancy. In addition to performing adversarial learning at the domain level to achieve domain alignment, Li et al. [34] also conduct domain adversarial tasks at the class level to align samples of each category across different domains. Ideally, visual learning methods should be generalizable, so as to deal with any unseen domain shift when deployed in a new target scenario, and data-efficient, so as to reduce development costs by using as few labels as possible. However, the conventional DG methods, which are unable to handle unlabeled data, perform poorly with limited labels in the SSDG task. To handle this task, Zhou et al. [14] propose StyleMatch, a simple approach that extends FixMatch with a couple of new ingredients tailored for SSDG: 1) stochastic modeling for reducing overfitting on scarce labels, and 2) multi-view consistency learning for enhancing the generalization capability of the model. However, StyleMatch cannot effectively address the interference between domains during pseudo-labeling. In this paper, we instead deal with the SSDG task from the multi-task learning perspective. III. THE PROPOSED METHOD In this paper, our goal is to address the semi-supervised domain generalization (SSDG) task. 
In this task, a few labeled samples in each domain are available, while most samples lack label information; thus a key problem is how to generate accurate pseudo-labels. Different from conventional semi-supervised methods, the SSDG task contains multiple source domains with different distributions. We first analyze the generalization error of multi-domain learning, which shows that the upper bound of the generalization error is related to the discrepancy between domains and the number of training samples. In particular, reducing this upper bound amounts to improving the accuracy of the pseudo-labels. Based on the analysis, we develop a multi-task learning method (i.e., MultiMatch) to deal with the SSDG task, as illustrated in Fig. 2. Besides, to further leverage the advantage of the multi-task learning framework, we fuse the predictions from different tasks to yield better pseudo-labels and a more generalizable model in the training and test procedures. We introduce our method in the following parts. A. Theoretical Insight In the semi-supervised DG task, each domain contains labeled and unlabeled data. To be specific, given $N$ domains in the training stage, we use $D_i = \{D_i^l, D_i^u\}$ to denote the $i$-th domain, where $D_i^l$ and $D_i^u$ represent the labeled and unlabeled samples in the $i$-th domain, respectively. Since there is no label information for $D_i^u$, we aim to generate high-quality pseudo-labels in the training stage. In particular, the SSDG task in the training stage can also be considered as multi-domain learning, and the unlabeled data can be viewed as test data during pseudo-labeling. In the next part, we introduce a theory of multi-domain learning to explore the semi-supervised DG task from the theoretical perspective. Here, we consider hypotheses $h \in \mathcal{H}$ (i.e., prediction functions), and a vector $\alpha = (\alpha_1, \ldots, \alpha_N)$ of domain weights with $\sum_{j=1}^{N} \alpha_j = 1$. 
In addition, we assume that the learner receives a total of $m$ labeled training samples, with $m_j = \beta_j m$ from the $j$-th domain $D_j$. We define the empirical $\alpha$-weighted error of a function $h$ as
$$\hat{\epsilon}_\alpha(h) = \sum_{j=1}^{N} \alpha_j \hat{\epsilon}_j(h) = \sum_{j=1}^{N} \frac{\alpha_j}{m_j} \sum_{x \in S_j} |h(x) - f_j(x)|, \qquad (1)$$
where $f_j(x)$ is the labeling function for the $j$-th domain (i.e., the mapping from a sample to its ground truth).
Theorem 1 [23]. Let $\mathcal{H}$ be a hypothesis space of VC dimension [43] $d$. For each domain $j \in \{1, \ldots, N\}$, let $S_j$ be a labeled sample of size $\beta_j m$ generated by drawing $\beta_j m$ points from $D_j$ and labeling them according to $f_j(\cdot)$. If $\hat{h} \in \mathcal{H}$ is the empirical minimizer of $\hat{\epsilon}_\alpha(h)$ for a fixed weight vector $\alpha$ on these samples and $h^*_T = \arg\min_{h \in \mathcal{H}} \epsilon_T(h)$ is the target error minimizer, then the upper bound of the generalization error on the target domain $D_T$, for any $\delta \in (0, 1)$, with probability at least $1 - \delta$, can be written as
$$\epsilon_T(\hat{h}) \le \epsilon_T(h^*_T) + 2\sqrt{\Big(\sum_{j=1}^{N} \frac{\alpha_j^2}{\beta_j}\Big) \frac{d \log(2m) + \log(\delta)}{2m}} + \sum_{j=1}^{N} \alpha_j \big( 2\lambda_j + d_{\mathcal{H}\Delta\mathcal{H}}(D_j, D_T) \big), \qquad (2)$$
where
$$\lambda_j = \min_{h \in \mathcal{H}} \{ \epsilon_T(h) + \epsilon_j(h) \}. \qquad (3)$$
The detailed proof can be found in [23]. In Eq. 2, the third term indicates the distribution discrepancy between $D_j$ and $D_T$. It is worth noting that, in the semi-supervised DG setting, all $\{\alpha_i\}_{i=1}^{N}$ and $\{\beta_i\}_{i=1}^{N}$ equal $\frac{1}{N}$. Therefore, Eq. 2 can be rewritten as
$$\epsilon_T(\hat{h}) \le \epsilon_T(h^*_T) + \sum_{j=1}^{N} \frac{2\lambda_j}{N} + 2\sqrt{\frac{d \log(2m) + \log(\delta)}{2m}} + \sum_{j=1}^{N} \frac{1}{N} d_{\mathcal{H}\Delta\mathcal{H}}(D_j, D_T). \qquad (4)$$
Fig. 2. An illustration of the proposed MultiMatch for the semi-supervised domain generalization setting. In the method, training each domain is considered a local task, and training all domains together is viewed as the global task. In the training course, we employ the prediction fusion scheme to generate the final pseudo-label, which is used to train on the unlabeled samples. (Figure residue: panel labels T1-BN, ..., TN-BN, TG-BN over the weak- and strong-augmentation branches.)
According to Eq. 
4, the upper bound of the generalization error on the target domain is mainly determined by four terms, of which the first two can be regarded as constants. Two observations follow from the third and fourth terms. 1) In the third term, a larger $m$ makes the whole term smaller, which indicates that we should use all available samples to train the model. In other words, most modules in the designed model should be shared across all domains or tasks. 2) The last term is the distribution discrepancy between the source-target domain 2 and the other domains. If we can alleviate this discrepancy, the generalization error on the source-target domain is also reduced. In particular, since the semi-supervised DG task consists of multiple domains, its training stage can be viewed as a multi-domain learning paradigm. Besides, the unlabeled training data can also be considered test data during pseudo-labeling. Therefore, effectively reducing the upper bound of the generalization error in Eq. 4 results in high-quality pseudo-labels for each domain in the training stage. B. Multi-task Learning Framework FixMatch is a significant simplification of existing SSL methods [15]. It first generates pseudo-labels using the model's predictions on weakly augmented unlabeled images. For a given image, the pseudo-label is retained only if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly augmented version of the same image. Despite its simplicity, FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks. In the SSDG task, FixMatch also achieves better performance than other conventional semi-supervised methods [22], [21], as verified in our experiments. In this paper, we aim to extend FixMatch to a multi-task framework. 
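As a minimal sketch of the FixMatch-style pseudo-labeling step described above (our own simplified toy code, not the authors' implementation; the threshold value and helper names are illustrative):

```python
import math

# Toy sketch of FixMatch-style pseudo-labeling: keep a pseudo-label only
# when the prediction on the weakly augmented view exceeds a threshold tau.
def pseudo_label(probs_weak, tau=0.95):
    """probs_weak: class-probability list for one weakly augmented image.
    Returns (class index, keep flag)."""
    confidence = max(probs_weak)
    label = probs_weak.index(confidence)
    return label, confidence >= tau

def unlabeled_loss(batch_weak_probs, batch_strong_probs, tau=0.95):
    """Cross-entropy on the strong views against the retained pseudo-labels."""
    losses = []
    for p_w, p_s in zip(batch_weak_probs, batch_strong_probs):
        label, keep = pseudo_label(p_w, tau)
        if keep:  # low-confidence predictions contribute no gradient
            losses.append(-math.log(p_s[label]))
    return sum(losses) / max(len(losses), 1)
```

Here the second image of a batch with weak-view confidence below tau is simply filtered out of the unlabeled loss.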
Based on the upper bound of the generalization error in Eq. 4, the proposed method needs to satisfy two requirements: 1) most modules in the model are shared across all domains, so that they can be trained sufficiently by all samples, and 2) the model reduces the interference caused by the domain gap between different domains. Therefore, we propose a multi-task learning framework, named MultiMatch, to address the SSDG task. In this part, we describe our multi-task learning framework. Since there are multiple domains in the SSDG task, we consider training each domain an independent task (i.e., a local task for each domain), which reduces the negative impact of the domains on each other during pseudo-labeling. Besides, considering that the SSDG task also needs to work in an unseen domain, we add a global task in which pseudo-labels from the local tasks are used to train the model. Assuming there are $N$ source domains in the training stage, we build $N + 1$ tasks in our framework. To be specific, we employ an independent batch normalization (BN) [44] and classifier for each task, while the other modules are shared across all tasks. The batch normalization can be formulated as
$$\mathrm{BN}(f_d) = \gamma \, \frac{f_d - M_\mu(\phi)}{M_\sigma(\phi)} + \beta, \quad d \in \phi, \qquad (5)$$
where $\phi$ is the domain set, and $M_\mu(\phi)$ and $M_\sigma(\phi)$ are the statistics over $\phi$. When $\phi$ contains only one domain, a sample is normalized by the statistics of its own domain. Differently, when $\phi$ contains all domains, each sample is normalized by the shared statistics of all domains. Since there is a domain gap across domains, the shared statistics introduce a noise error, as shown in the last term of Eq. 4, resulting in a larger generalization error. Remark. In our multi-task learning framework, using independent BNs effectively mitigates the interference between domains, as shown in Eq. 4. 
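The effect of Eq. (5) can be illustrated with a toy normalizer (our own sketch; `DomainBN` is a hypothetical name, and real BN layers additionally keep running statistics and per-channel parameters):

```python
import math

class DomainBN:
    """Toy batch normalization over scalar features, Eq. (5)-style:
    statistics M_mu and M_sigma are computed over a chosen domain set phi."""
    def __init__(self, gamma=1.0, beta=0.0, eps=1e-5):
        self.gamma, self.beta, self.eps = gamma, beta, eps

    def __call__(self, features, domain_batches):
        """features: values of one domain; domain_batches: the set phi whose
        statistics are used (one domain for a local task, all for global)."""
        pooled = [x for batch in domain_batches for x in batch]
        mu = sum(pooled) / len(pooled)
        sigma = math.sqrt(sum((x - mu) ** 2 for x in pooled) / len(pooled))
        return [self.gamma * (x - mu) / (sigma + self.eps) + self.beta
                for x in features]

bn = DomainBN()
domain_a = [0.0, 2.0]
domain_b = [10.0, 12.0]          # a domain with very different statistics
local = bn(domain_a, [domain_a])             # local task: own statistics
shared = bn(domain_a, [domain_a, domain_b])  # global task: pooled statistics
```

With its own statistics, domain A is centered at zero; with pooled statistics its values are both shifted negative, illustrating the "noise error" that shared statistics introduce under a domain gap.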
In addition, in our method most modules are shared across all domains, which exploits all samples to reduce the third term in Eq. 4. Hence, our multi-task learning framework can obtain a small generalization error on each domain and thus generate accurate pseudo-labels for each domain in the training stage. Last but not least, the multi-task framework with multiple classifiers can produce ensemble predictions for pseudo-labeling and test evaluation in the training and test procedures, respectively. We give the prediction fusion scheme in the next part. C. Prediction Fusion For an image $x_i$ from the $i$-th domain, we can generate its output $Y_i = [y_1; \ldots; y_{N+1}] \in \mathbb{R}^{(N+1) \times C}$, where $C$ is the number of classes. In the training stage, because of the interference between domains, we employ the output of the $i$-th task as the main prediction. To further guarantee the reliability of the pseudo-labels in the training course, we combine it with the output of the global task to generate the final pseudo-label. For an image $x_i$ from the $i$-th domain, the pseudo-label is produced by
$$\mathrm{Class} = \mathrm{MaxClass}([y_i; y_{N+1}]), \qquad (6)$$
where $\mathrm{MaxClass}(\cdot)$ returns the column (class) of the maximum value in a matrix. In the test stage, we use the output of the global task as the main prediction because the test domain is unseen (i.e., we do not know which training domain is similar to the test domain). Besides, for a test image we also fuse the output of the most similar task to yield the final prediction result as
$$y_{\max} = \mathrm{MaxPath}([y_1; \ldots; y_N]), \qquad (7)$$
$$\mathrm{Class} = \mathrm{MaxClass}\Big(\frac{y_{\max} + y_{N+1}}{2}\Big), \qquad (8)$$
where $\mathrm{MaxPath}(\cdot)$ returns the prediction of the most similar task (i.e., the row of the matrix that contains the maximum value) for a test image in the test procedure. D. Training Process In our method, we only use the cross-entropy loss to train our model, as in FixMatch [15]. In the training course, we randomly select the same number of labeled and unlabeled images from each domain to form a batch. 
It is worth noting that each image passes through its domain-specific BN and classifier, and all images are also required to pass through the global task. Simultaneously, we fuse the predictions of the domain-specific task and the global task to generate accurate pseudo-labels. The overall training process is shown in Algorithm 1; the recoverable steps are:
Generating a batch of images {B^l, B^u}.
6: Using the labeled samples B^l to train the model; each image passes the domain-specific BN and classifier, and all images pass the global BN and classifier.
7: Conducting weak and strong augmentations on B^u to generate B^u_w and B^u_s.
8: Based on B^u_w, yielding the pseudo-labels Y^u by Eq. 6.
9: Using B^u_s and Y^u to train the model; the forward path of all images is consistent with B^l.
10: end for
IV. EXPERIMENTS In this section, we first introduce the experimental datasets and settings in Section IV-A. Then, we compare the proposed MultiMatch with the state-of-the-art SSL methods and the SSDG method in Section IV-B. To validate the effectiveness of the various components of MultiMatch, we conduct ablation studies in Section IV-C. Lastly, we further analyze the properties of the proposed method in Section IV-D.
Fig. 4 (caption): Visualization of the feature representation on four classes of PACS by t-SNE [49]. The features are extracted by ResNet18 pre-trained on ImageNet [50]. Note that different colors indicate different domains.
images), which is originally introduced for UDA but is also applicable in the DG setting. • miniDomainNet [47] takes a subset of DomainNet [48] and utilizes a smaller image size (96 × 96 images). We show some examples from PACS and Office-Home in Fig. 3. As seen, there is an obvious difference among the domains. Besides, we also visualize the features of four categories of PACS by t-SNE [49], as illustrated in Fig. 4. In this figure, different colors denote different domains. 
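The recovered steps of Algorithm 1 can be paraphrased as one training iteration; the sketch below is our own skeleton with a stub model standing in for the real network (all names hypothetical):

```python
class StubModel:
    """Hypothetical stand-in for the network: it just records what it
    is asked to do, so the control flow of the loop can be checked."""
    def __init__(self):
        self.calls = []
    def train_supervised(self, batch):
        self.calls.append(("train", len(batch)))
    def weak_aug(self, x):
        return ("weak", x)
    def strong_aug(self, x):
        return ("strong", x)
    def fused_pseudo_label(self, weak_view):
        return 0  # in the paper this is the fused label of Eq. (6)

def training_iteration(model, batch_labeled, batch_unlabeled):
    """One iteration mirroring the recovered steps of Algorithm 1."""
    model.train_supervised(batch_labeled)                     # step 6
    weak = [model.weak_aug(x) for x in batch_unlabeled]       # step 7
    strong = [model.strong_aug(x) for x in batch_unlabeled]   # step 7
    pseudo = [model.fused_pseudo_label(w) for w in weak]      # step 8
    model.train_supervised(list(zip(strong, pseudo)))         # step 9
```

Running one iteration on a stub shows the two supervised updates per batch: one on the labeled samples and one on the strongly augmented views with their fused pseudo-labels.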
We observe that the different domains occupy different regions of the feature space, which validates that there is a domain shift in the training set.

A. Datasets and Experimental Settings

2) Implementation Details: Following common practice [51], ResNet18 [1] pre-trained on ImageNet [50] is employed as the CNN backbone (for all models compared in this paper). We randomly sample 64 images from each source domain to construct a minibatch for the labeled and unlabeled data, respectively. The initial learning rate is set to 0.003 for the pre-trained backbone. To enrich the diversity of the augmentations, we integrate the AdaIN augmentation [52] into the strong augmentation scheme. This augmentation scheme is used in all experiments, including the baseline (i.e., FixMatch), so the comparison is fair.

B. Comparison with State-of-the-art Methods

We compare our method with several semi-supervised learning methods (i.e., Mean-Teacher [18], EntMin [16], DebiasPL [21], FlexMatch [22], FixMatch [15]) and with the semi-supervised DG method StyleMatch [14]. We run these methods under two different settings (i.e., 10 labels per class and 5 labels per class) on three benchmark datasets. The experimental results are reported in Table I. As seen in this table, among the typical semi-supervised methods, FixMatch achieves the best performance. Compared with FixMatch, our method further improves the performance. For example, on PACS our method outperforms FixMatch by +3.74% (81.57 vs. 77.83) and +5.09% (80.04 vs. 74.95) under the "10 labels per class" and "5 labels per class" cases, respectively. Besides, on the large-scale dataset (i.e., miniDomainNet), our method also achieves an obvious improvement, which is attributed to the fact that our method reduces the interference among the domains while guaranteeing that all training samples are utilized to train the model.
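The per-domain minibatch construction described in the implementation details (the same number of labeled and unlabeled images drawn from each source domain) might look as follows; the dictionary layout mapping domain names to index arrays is our assumption for illustration:

```python
import numpy as np

def sample_batch(labeled_idx, unlabeled_idx, per_domain=64, seed=0):
    # Draw `per_domain` labeled and `per_domain` unlabeled indices from
    # each source domain, so every domain is equally represented.
    rng = np.random.default_rng(seed)
    batch_l, batch_u = [], []
    for d in labeled_idx:
        batch_l += [(d, int(i)) for i in rng.choice(labeled_idx[d], per_domain)]
        batch_u += [(d, int(i)) for i in rng.choice(unlabeled_idx[d], per_domain)]
    return batch_l, batch_u
```

Sampling with replacement keeps the batch balanced even when a domain has fewer than `per_domain` labeled images, as in the 5-labels-per-class setting.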
In addition, StyleMatch is developed to address the semi-supervised domain generalization task. Compared with it, our method obtains better experimental results in all settings and on all datasets, except for the "5 labels per class" case on PACS. For example, on miniDomainNet, our method improves on StyleMatch by +3.66% (58.79 vs. 55.13) and +3.57% (54.61 vs. 51.04) under the "10 labels per class" and "5 labels per class" cases, respectively. This confirms the effectiveness of our method against the SOTA method, thanks to the advantage of the multi-task learning framework in the semi-supervised domain generalization task. The effectiveness of each module in our proposed MultiMatch is verified in the following ablation study.

C. Ablation Studies

We first validate the effectiveness of the multi-task learning framework, and then analyze the efficacy of the fusion prediction scheme in the training stage and the test stage, respectively. The experimental results are shown in Table II, where "MTL+TRAIN-local+TEST-global" denotes that the pseudo-labels are generated by the domain-specific path (i.e., the local task) and the final prediction at test time is based on the global task. In other words, "MTL+TRAIN-local+TEST-global" means that we do not utilize the fusion prediction scheme in either the training or the test stage. "MTL+TRAIN-global-local+TEST-global-local" indicates that we employ the fusion prediction scheme in both the training and test stages. As seen in Table II, "MTL+TRAIN-local+TEST-global" outperforms "Baseline" on both PACS and Office-Home, which confirms the effectiveness of the multi-task learning framework in the semi-supervised domain generalization task. For example, the multi-task learning framework brings an obvious improvement of +3.37% (61.23 vs. 57.86) on Office-Home. In addition, the fusion prediction scheme is also effective in both the training and test stages.
As seen in Table II, "MTL+TRAIN-global-local+TEST-global" outperforms "MTL+TRAIN-local+TEST-global", and "MTL+TRAIN-global-local+TEST-global-local" outperforms "MTL+TRAIN-local+TEST-global-local", which indicates the effectiveness of the fusion prediction scheme in the training stage. Meanwhile, "MTL+TRAIN-local+TEST-global-local" outperforms "MTL+TRAIN-local+TEST-global", and "MTL+TRAIN-global-local+TEST-global-local" outperforms "MTL+TRAIN-global-local+TEST-global", which confirms the effectiveness of the fusion prediction scheme in the test stage. In our method, we use the fusion manner of Section III-C; some other fusion manners are investigated in the further analysis below.

D. Further Analysis

Evaluation of different fusion manners. In this part, we evaluate different fusion manners in our framework on PACS. The experimental results are listed in Table III, and each fusion manner is defined by the following formulas. "TEST-avg-all" takes the mean of the outputs of all tasks in the test stage. "TEST-max" takes the maximum output among all tasks in the test stage. "TRAIN-avg" uses the mean of the outputs of an image's own task and the global task during pseudo-labeling. As observed in Table III, "TRAIN-max+TEST-avg" (i.e., Eq. 6 and Eq. 8) obtains a slight improvement over the other fusion schemes. Besides, all schemes in Table III outperform "MTL+TRAIN-local+TEST-global" (i.e., our method without the fusion scheme), which indicates that the prediction ensemble is significant for our method during training and testing.

TEST-avg-all: Class = MaxClass((y_1 + ... + y_{N+1}) / (N+1)).

TEST-max: Class = MaxClass([y_max; y_{N+1}]), where y_max is defined in Eq. 7.

TRAIN-avg: Class = MaxClass((y_i + y_{N+1}) / 2), where y_i is the prediction of an image from the i-th domain.

The accuracy of pseudo-labels. In this experiment, we evaluate the accuracy of the pseudo-labels of different methods in terms of Precision, Recall and macro-F1.
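For reference, the macro-F1 reported above alongside Precision and Recall computes a per-class F1 and then averages over classes; a small self-contained implementation, written for illustration rather than taken from the paper:

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    # Per-class F1 combines precision and recall; macro-F1 averages over classes.
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp > 0 else 0.0
        rec = tp / (tp + fn) if tp + fn > 0 else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0)
    return float(np.mean(f1s))
```

Because each class contributes equally regardless of its frequency, macro-F1 penalizes pseudo-labelers that are accurate only on the majority classes.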
It is worth noting that macro-F1 combines Recall and Precision. Table IV shows the experimental results, where "MTL+TRAIN-local" denotes the accuracy of the pseudo-labels from the domain-specific classifier in our method, and "MTL+TRAIN-global-local" the accuracy of the pseudo-labels generated by fusing the domain-specific classifier and the global classifier. As observed in Table IV, "MTL+TRAIN-local" improves the macro-F1 of FixMatch by +1.37 (90.63 vs. 89.26) and +1.27 (70.41 vs. 69.14) on PACS and Office-Home, respectively. This validates that using an independent task for each domain can indeed alleviate the interference among domains and thus improve the pseudo-labels of the unlabeled data. Besides, "MTL+TRAIN-global-local" outperforms all other methods; e.g., it increases the macro-F1 of StyleMatch by +1.35 (92.70 vs. 91.35) and +2.39 (73.07 vs. 70.68) on PACS and Office-Home, respectively. This confirms that our method achieves more accurate pseudo-labels, which enhances the generalization capability of the model. Furthermore, we also display the accuracy of the pseudo-labels of the different methods at different epochs on PACS and Office-Home in Fig. 5. As seen, our method gives more accurate pseudo-labels at every epoch compared with FixMatch and StyleMatch.

Test on different numbers of labeled data. We also compare the proposed MultiMatch with StyleMatch under different numbers of labeled data on PACS, as shown in Fig. 6. According to this figure, except for the "5 labels per class" case, MultiMatch obviously outperforms StyleMatch in all other cases. For example, when

Evaluation of the independent BN and the independent classifier. In this part, we validate the effectiveness of the independent BN and the independent classifier in our method.
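A toy sketch of the "independent" (domain-specific) batch normalization evaluated here: each source domain keeps its own affine parameters, and a full implementation would also keep per-domain running statistics. This version normalizes with the current batch only and is not the authors' code:

```python
import numpy as np

class DomainSpecificBN:
    # One set of (gamma, beta) per source domain, so one domain's feature
    # statistics do not interfere with another's.
    def __init__(self, n_domains, n_features, eps=1e-5):
        self.gamma = np.ones((n_domains, n_features))
        self.beta = np.zeros((n_domains, n_features))
        self.eps = eps

    def __call__(self, x, domain):
        # Normalize the batch feature-wise, then apply the domain's affine map.
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mu) / np.sqrt(var + self.eps)
        return self.gamma[domain] * x_hat + self.beta[domain]
```

Routing each image through the BN of its own domain is what distinguishes this design from the shared-BN variant ("w/ SBN") compared in Table V.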
Experimental results are listed in Table V, where "w/ SBN" and "w/ SC" indicate using a shared BN and a shared classifier in our method, respectively. As observed in Table V, using the independent BN and the independent classifier together yields better performance than using either one alone. For example, our method improves on "w/ SBN" and "w/ SC" by +1.44% (81.57 vs. 80.13) and +2.47% (81.57 vs. 79.10) on PACS, respectively. Besides, compared with the original FixMatch in Table I under "10 labels per class", both "w/ SBN" and "w/ SC" outperform it on PACS and Office-Home. All the above observations confirm the effectiveness of both the independent BN and the independent classifier in our method.

Effectiveness of the unlabeled data in SSDG. To validate the effectiveness of the unlabeled data in SSDG, we train supervised DG methods using only the labeled samples, including ResNet18, CrossGrad [53], DDAIG [54] and RSC [51]. Experimental results are shown in Fig. 7. As observed in this figure, using the unlabeled data obtains better performance than these conventional DG methods trained on the labeled data alone, which validates that the unlabeled samples are very meaningful in the SSDG task. Furthermore, our MultiMatch can effectively mine the information in these unlabeled data, as reported in Table I.

Test on the supervised case. In the experiment, we train our model in the supervised setting and compare it with some supervised DG methods, as reported in Tables VI and VII. MLDG [55], MASF [56] and MetaReg [32] are meta-learning based methods. FACT [57], RSC [51] and FSDCL [58] are augmentation based methods.
In addition, VDN [59] and SNR [60] aim at learning domain-invariant features, and DAEL [47] is an ensemble learning based method. Compared with these supervised methods, our MultiMatch is also competitive in the supervised case.

V. CONCLUSION

In this paper, we aim to tackle the semi-supervised domain generalization (SSDG) task. Different from the typical semi-supervised task, the challenge of SSDG is that there exist multiple training domains with a latent distribution discrepancy between them. To address this issue, we first explore the theory of multi-domain learning to generate more accurate pseudo-labels for the unlabeled samples. Then, we propose a multi-task learning framework to mitigate the impact of the domain discrepancy while sufficiently exploiting all training samples, which effectively enhances the model's generalization. Experiments on multiple benchmark datasets verify the effectiveness of the proposed method.

Fig. 1. Comparison between typical semi-supervised learning (SSL) and semi-supervised domain generalization (SSDG). Note that different colors denote different domains. In the SSDG setting, there are multiple training domains with different data distributions, in contrast to SSL.

2: Output: the trained parameters of the model θ. 3: Initialization: initialize the parameters θ. 4: for iter ≤ MaxIter do 5:

Fig. 3. Examples on PACS and Office-Home, respectively. It is worth noting that different rows denote different domains.

Fig. 4. Visualization of the feature representation on four classes of PACS by t-SNE [49]. The features are extracted by ResNet18 pre-trained on ImageNet [50]. Note that different colors indicate different domains.

TABLE I: Comparison between our method and different semi-supervised (DG) methods under different numbers of labeled samples on PACS, Office-Home and miniDomainNet. Note that "P", "A", "C" and "S" denote different domains on PACS. "Avg" is the average result over all domains.
The bold is the best result.

Fig. 7. Experimental results of the typical DG methods using only the labeled samples in SSDG on PACS and Office-Home, respectively. Note that "L" denotes only using the labeled samples, and "L+U" denotes using the labeled and unlabeled samples together.

Qi, Hongpeng Yang and Xin Geng are with the School of Computer Science and Engineering, and the Key Lab of Computer Network and Information Integration (Ministry of Education), Southeast University, Nanjing, China, 211189 (e-mail: [email protected]; hp [email protected]; [email protected]). Yinghuan Shi is with the State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China, 210023 (e-mail: [email protected]). * Corresponding author: Xin Geng.

1) Datasets: In this paper, we conduct experiments to validate the effectiveness of our method on three benchmark DG datasets:
• PACS [45] consists of four different domains: Photo, Art, Cartoon and Sketch. It contains 9,991 images of 7 object categories in total, including Photo (1,670 images), Art (2,048 images), Cartoon (2,344 images), and Sketch (3,929 images).
• Office-Home [46] contains 15,588 images of 65 categories of office and home objects. It has four different domains, namely Art (2,427 images), Clipart (4,365 images), Product (4,439 images) and Real World (4,357

TABLE II: Ablation studies on different components of our method on PACS and Office-Home under the case with 10 labels per class.
To better show the best result, we only use bold for the best average result.

TABLE III: Experimental results of different label-fusion schemes on PACS under the case with 10 labels per class.

Table II:
Method | PACS (P / A / C / S / Avg) | Office-Home (A / C / P / R / Avg)
Baseline (FixMatch) | 87.40 / 76.85 / 69.40 / 77.68 / 77.83 | 49.90 / 50.98 / 63.79 / 66.75 / 57.86
MTL+TRAIN-local+TEST-global | 92.63 / 77.28 / 68.52 / 74.15 / 78.15 | 55.04 / 51.24 / 67.41 / 71.21 / 61.23
MTL+TRAIN-local+TEST-global-local | 93.37 / 83.43 / 70.15 / 75.71 / 80.67 | 56.13 / 51.70 / 68.20 / 71.71 / 61.94
MTL+TRAIN-global-local+TEST-global | 92.22 / 77.08 / 68.72 / 75.05 / 78.27 | 55.31 / 52.36 / 68.17 / 71.58 / 61.86
MTL+TRAIN-global-local+TEST-global-local | 93.25 / 83.30 / 73.00 / 76.74 / 81.57 | 56.32 / 52.76 / 68.94 / 72.28 / 62.57

Table III:
Fusion scheme | P / A / C / S / Avg
TRAIN-max+TEST-avg (ours) | 93.25 / 83.30 / 73.00 / 76.74 / 81.57
TRAIN-max+TEST-avg-all | 93.36 / 83.32 / 72.34 / 76.76 / 81.44
TRAIN-max+TEST-max | 92.93 / 83.24 / 72.76 / 76.76 / 81.42
TRAIN-avg+TEST-avg | 93.45 / 83.56 / 71.26 / 76.89 / 81.29
TRAIN-avg+TEST-avg-all | 93.46 / 83.49 / 71.35 / 76.82 / 81.28
TRAIN-avg+TEST-max | 93.29 / 83.59 / 71.08 / 76.96 / 81.23

TABLE IV: The accuracy of pseudo-labels of different methods on PACS and Office-Home under the case with 10 labels per class.

we use 160 labels per class, our MultiMatch improves the performance by +1.1% (83.43 vs. 82.33) compared with StyleMatch. Besides, given the label information for all training samples, our method also obtains better results than StyleMatch.

Table IV:
Dataset | Method | Precision | Recall | macro-F1
PACS | FixMatch | 88.83 | 89.52 | 89.26
PACS | StyleMatch | 90.84 | 91.52 | 91.35
PACS | MTL+TRAIN-local | 90.17 | 90.86 | 90.63
PACS | MTL+TRAIN-global-local | 92.23 | 92.82 | 92.70
Office-Home | FixMatch | 71.05 | 69.97 | 69.14
Office-Home | StyleMatch | 72.67 | 71.41 | 70.68
Office-Home | MTL+TRAIN-local | 72.22 | 71.25 | 70.41
Office-Home | MTL+TRAIN-global-local | 74.86 | 73.76 | 73.07

Fig. 5. The accuracy of pseudo-labels of different methods at different epochs on PACS and Office-Home, respectively.

Fig. 6. Comparison between our MultiMatch and StyleMatch under different numbers of labeled data on PACS.
"ALL" denotes that all training samples have label information during training.75 80 85 90 95 1 5 9 13 17 21 25 29 33 37 41 45 49 FixMtach StyleMatch Ours (a) PACS 55 60 65 70 75 80 1 5 9 13 17 21 25 29 FixMtach StyleMatch Ours (b) Office-Home 80.32 80.41 82.09 82.12 82.14 82.33 83.23 80.04 81.57 82.17 82.83 82.76 83.43 84.72 5 10 20 40 80 160 ALL #The labeled data per class StyleMatch Ours TABLE V EVALUATION VON THE INDEPENDENT BN AND THE INDEPENDENT CLASSIFIER ON PACS AND OFFICE-HOME, RESPECTIVELY.Module PACS P A C S Avg w/ SBN 91.62 82.09 73.10 73.70 80.13 w/ SC 91.07 78.10 71.12 76.12 79.10 Ours 93.25 83.30 73.00 76.74 81.57 Module Office-Home A C P R Avg w/ SBN 55.30 51.65 68.01 71.56 61.63 w/ SC 54.68 52.28 67.38 71.22 61.39 Ours 56.32 52.76 68.94 72.28 62.57 TABLE VI EXPERIMENTAL VIRESULTS OF DIFFERENT METHODS UNDER THE SUPERVISED SETTING ON PACS. THE BOLD AND GRAY ARE THE BEST RESULT AND THE SECOND-BEST RESULT, RESPECTIVELY.TABLE VII EXPERIMENTAL RESULTS OF DIFFERENT METHODS UNDER THE SUPERVISED SETTING ON OFFICE-HOME.Method P A C S Avg MLDG [55] 94.30 79.50 77.30 71.50 80.65 MASF [56] 94.99 80.29 77.17 71.69 81.04 MetaReg [32] 95.50 83.70 77.20 70.30 81.68 SNR [60] 94.50 80.3 78.20 74.10 81.80 VDN [59] 94.00 82.60 78.50 82.70 84.45 FACT [57] 95.15 85.37 78.38 79.15 84.51 MultiMatch (ours) 95.79 84.44 75.83 82.82 84.72 Method A C P R Avg RSC [51] 58.42 47.90 71.63 74.54 63.12 DAEL [47] 59.40 55.10 74.00 75.70 66.05 SNR [60] 61.20 53.70 74.20 75.10 66.10 FSDCL [58] 60.24 53.54 74.36 76.66 66.20 FACT [57] 60.34 54.85 74.48 76.55 66.56 MultiMatch (ours) 59.71 56.25 74.28 75.70 66.49 FixMatch is an excellent baseline in SSDG, which will be validated in the experimental section. When the model generates the pseudo-label for a source domain, this source domain is named the source-target domain. Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). K. He, X. Zhang, S. 
Ren, and J. Sun, "Deep residual learning for image recognition," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778. Faster R-CNN: towards realtime object detection with region proposal networks. S Ren, K He, R B Girshick, J Sun, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 396S. Ren, K. He, R. B. Girshick, and J. Sun, "Faster R-CNN: towards real- time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 39, no. 6, pp. 1137-1149, 2017. Bert: Pre-training of deep bidirectional transformers for language understanding. J Devlin, M.-W Chang, K Lee, K Toutanova, arXiv:1810.04805arXiv preprintJ. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018. A strong baseline and batch normalization neck for deep person re-identification. H Luo, W Jiang, Y Gu, F Liu, X Liao, S Lai, J Gu, IEEE Transactions on Multimedia (TMM). 2210H. Luo, W. Jiang, Y. Gu, F. Liu, X. Liao, S. Lai, and J. Gu, "A strong baseline and batch normalization neck for deep person re-identification," IEEE Transactions on Multimedia (TMM), vol. 22, no. 10, pp. 2597- 2609, 2020. Generalizing to unseen domains: A survey on domain generalization. J Wang, C Lan, C Liu, Y Ouyang, T Qin, International Joint Conference on Artificial Intelligence (IJCAI). J. Wang, C. Lan, C. Liu, Y. Ouyang, and T. Qin, "Generalizing to unseen domains: A survey on domain generalization," in International Joint Conference on Artificial Intelligence (IJCAI), 2021, pp. 4627-4635. Domain generalization: A survey. K Zhou, Z Liu, Y Qiao, T Xiang, C C Loy, IEEE Transactions on Pattern Analysis and Machine Intelligence. 2022TPAMIK. Zhou, Z. Liu, Y. Qiao, T. Xiang, and C. C. Loy, "Domain general- ization: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022. 
Domain generalization via invariant feature representation. K Muandet, D Balduzzi, B Schölkopf, International Conference on Machine Learning (ICML). K. Muandet, D. Balduzzi, and B. Schölkopf, "Domain generalization via invariant feature representation," in International Conference on Machine Learning (ICML), 2013, pp. 10-18. Reducing domain gap by reducing style bias. H Nam, H Lee, J Park, W Yoon, D Yoo, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). H. Nam, H. Lee, J. Park, W. Yoon, and D. Yoo, "Reducing domain gap by reducing style bias," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 8690-8699. Learning to optimize domain specific normalization for domain generalization. S Seo, Y Suh, D Kim, G Kim, J Han, B Han, European Conference on Computer Vision (ECCV). S. Seo, Y. Suh, D. Kim, G. Kim, J. Han, and B. Han, "Learning to optimize domain specific normalization for domain generalization," in European Conference on Computer Vision (ECCV), 2020, pp. 68-83. Domain randomization and pyramid consistency: Simulation-to-real generalization without accessing target domain data. X Yue, Y Zhang, S Zhao, A L Sangiovanni-Vincentelli, K Keutzer, B Gong, International Conference on Computer Vision (ICCV). X. Yue, Y. Zhang, S. Zhao, A. L. Sangiovanni-Vincentelli, K. Keutzer, and B. Gong, "Domain randomization and pyramid consistency: Simulation-to-real generalization without accessing target domain data," in International Conference on Computer Vision (ICCV), 2019, pp. 2100-2110. Feature-based style randomization for domain generalization. Y Wang, L Qi, Y Shi, Y Gao, IEEE Transactions on Circuits and Systems for Video Technology (TCSVT). 32Y. Wang, L. Qi, Y. Shi, and Y. Gao, "Feature-based style randomization for domain generalization," IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), vol. 32, no. 8, pp. 5495-5509, 2022. 
A novel mix-normalization method for generalizable multi-source person re-identification. L Qi, L Wang, Y Shi, X Geng, IEEE Transactions on Multimedia. 2022L. Qi, L. Wang, Y. Shi, and X. Geng, "A novel mix-normalization method for generalizable multi-source person re-identification," IEEE Transactions on Multimedia (TMM), 2022. Generalizable model-agnostic semantic segmentation via target-specific normalization. J Zhang, L Qi, Y Shi, Y Gao, Pattern Recognition (PR). 122108292J. Zhang, L. Qi, Y. Shi, and Y. Gao, "Generalizable model-agnostic semantic segmentation via target-specific normalization," Pattern Recog- nition (PR), vol. 122, p. 108292, 2022. Semi-supervised domain generalization with stochastic stylematch. K Zhou, C C Loy, Z Liu, arXiv:2106.00592arXiv preprintK. Zhou, C. C. Loy, and Z. Liu, "Semi-supervised domain generalization with stochastic stylematch," arXiv preprint arXiv:2106.00592, 2021. Fixmatch: Simplifying semisupervised learning with consistency and confidence. K Sohn, D Berthelot, N Carlini, Z Zhang, H Zhang, C Raffel, E D Cubuk, A Kurakin, C Li, Advances in Neural Information Processing Systems (NeurIPS). 2020K. Sohn, D. Berthelot, N. Carlini, Z. Zhang, H. Zhang, C. Raffel, E. D. Cubuk, A. Kurakin, and C. Li, "Fixmatch: Simplifying semi- supervised learning with consistency and confidence," in Advances in Neural Information Processing Systems (NeurIPS), 2020. Semi-supervised learning by entropy minimization. Y Grandvalet, Y Bengio, Advances in Neural Information Processing Systems (NeurIPS). Y. Grandvalet and Y. Bengio, "Semi-supervised learning by entropy minimization," in Advances in Neural Information Processing Systems (NeurIPS), 2004, pp. 529-536. Temporal ensembling for semi-supervised learning. S Laine, T Aila, International Conference on Learning Representations (ICLR. S. Laine and T. Aila, "Temporal ensembling for semi-supervised learn- ing," in International Conference on Learning Representations (ICLR), 2017. 
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. A Tarvainen, H Valpola, Advances in Neural Information Processing Systems. A. Tarvainen and H. Valpola, "Mean teachers are better role mod- els: Weight-averaged consistency targets improve semi-supervised deep learning results," in Advances in Neural Information Processing Systems (NeurIPS), 2017, pp. 1195-1204. All labels are not created equal: Enhancing semi-supervision via label grouping and co-training. I Nassar, S Herath, E Abbasnejad, W Buntine, G Haffari, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). I. Nassar, S. Herath, E. Abbasnejad, W. Buntine, and G. Haffari, "All labels are not created equal: Enhancing semi-supervision via label grouping and co-training," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 7241-7250. Simmatch: Semi-supervised learning with similarity matching. M Zheng, S You, L Huang, F Wang, C Qian, C Xu, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 14481M. Zheng, S. You, L. Huang, F. Wang, C. Qian, and C. Xu, "Simmatch: Semi-supervised learning with similarity matching," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 14 471- 14 481. Debiased learning from naturally imbalanced pseudo-labels. X Wang, Z Wu, L Lian, S X Yu, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). X. Wang, Z. Wu, L. Lian, and S. X. Yu, "Debiased learning from naturally imbalanced pseudo-labels," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 14 647-14 657. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. B Zhang, Y Wang, W Hou, H Wu, J Wang, M Okumura, T Shinozaki, Advances in Neural Information Processing Systems (NeurIPS). 18B. Zhang, Y. Wang, W. Hou, H. Wu, J. Wang, M. Okumura, and T. 
Shi- nozaki, "Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling," in Advances in Neural Information Processing Systems (NeurIPS), 2021, pp. 18 408-18 419. A theory of learning from different domains. S Ben-David, J Blitzer, K Crammer, A Kulesza, F Pereira, J W Vaughan, Machine Learning (ML). 791S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan, "A theory of learning from different domains," Machine Learning (ML), vol. 79, no. 1, pp. 151-175, 2010. Mixmatch: A holistic approach to semi-supervised learning. D Berthelot, N Carlini, I J Goodfellow, N Papernot, A Oliver, C Raffel, Advances in Neural Information Processing Systems (NeurIPS). D. Berthelot, N. Carlini, I. J. Goodfellow, N. Papernot, A. Oliver, and C. Raffel, "Mixmatch: A holistic approach to semi-supervised learning," in Advances in Neural Information Processing Systems (NeurIPS), 2019, pp. 5050-5060. Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring. D Berthelot, N Carlini, E D Cubuk, A Kurakin, K Sohn, H Zhang, C Raffel, International Conference on Learning Representations (ICLR. 2020D. Berthelot, N. Carlini, E. D. Cubuk, A. Kurakin, K. Sohn, H. Zhang, and C. Raffel, "Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring," in International Conference on Learning Representations (ICLR), 2020. Comatch: Semi-supervised learning with contrastive graph regularization. J Li, C Xiong, S C Hoi, International Conference on Computer Vision (ICCV). J. Li, C. Xiong, and S. C. Hoi, "Comatch: Semi-supervised learning with contrastive graph regularization," in International Conference on Computer Vision (ICCV), 2021, pp. 9475-9484. Alphamatch: Improving consistency for semi-supervised learning with alpha-divergence. C Gong, D Wang, Q Liu, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). C. Gong, D. Wang, and Q. 
Liu, "Alphamatch: Improving consistency for semi-supervised learning with alpha-divergence," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 13 683- 13 692. Class-aware contrastive semi-supervised learning. F Yang, K Wu, S Zhang, G Jiang, Y Liu, F Zheng, W Zhang, C Wang, L Zeng, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)14430F. Yang, K. Wu, S. Zhang, G. Jiang, Y. Liu, F. Zheng, W. Zhang, C. Wang, and L. Zeng, "Class-aware contrastive semi-supervised learn- ing," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022, pp. 14 421-14 430. Dc-ssl: Addressing mismatched class distribution in semi-supervised learning. Z Zhao, L Zhou, Y Duan, L Wang, L Qi, Y Shi, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Z. Zhao, L. Zhou, Y. Duan, L. Wang, L. Qi, and Y. Shi, "Dc-ssl: Addressing mismatched class distribution in semi-supervised learning," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 9757-9765. Daso: Distribution-aware semanticsoriented pseudo-label for imbalanced semi-supervised learning. Y Oh, D.-J Kim, I S Kweon, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Y. Oh, D.-J. Kim, and I. S. Kweon, "Daso: Distribution-aware semantics- oriented pseudo-label for imbalanced semi-supervised learning," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 9786-9796. Domain generalization by solving jigsaw puzzles. F M Carlucci, A D&apos;innocente, S Bucci, B Caputo, T Tommasi, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). F. M. Carlucci, A. D'Innocente, S. Bucci, B. Caputo, and T. Tommasi, "Domain generalization by solving jigsaw puzzles," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 2229- 2238. 
[]
[ "Polyhedra in loop quantum gravity" ]
[ "Eugenio Bianchi \nCentre de Physique Théorique *\nCNRS-Luminy Case 907\n13288Marseille Cedex 09France\n", "Pietro Doná \nCentre de Physique Théorique *\nCNRS-Luminy Case 907\n13288Marseille Cedex 09France\n\nScuola Normale Superiore\nPiazza dei Cavalieri 756126PisaItaly\n", "Simone Speziale \nCentre de Physique Théorique *\nCNRS-Luminy Case 907\n13288Marseille Cedex 09France\n" ]
[ "Centre de Physique Théorique *\nCNRS-Luminy Case 907\n13288Marseille Cedex 09France", "Centre de Physique Théorique *\nCNRS-Luminy Case 907\n13288Marseille Cedex 09France", "Scuola Normale Superiore\nPiazza dei Cavalieri 756126PisaItaly", "Centre de Physique Théorique *\nCNRS-Luminy Case 907\n13288Marseille Cedex 09France" ]
[]
Intertwiners are the building blocks of spin-network states. The space of intertwiners is the quantization of a classical symplectic manifold introduced by Kapovich and Millson. Here we show that a theorem by Minkowski allows us to interpret generic configurations in this space as bounded convex polyhedra in ℝ³: a polyhedron is uniquely described by the areas and normals to its faces. We provide a reconstruction of the geometry of the polyhedron: we give formulas for the edge lengths, the volume and the adjacency of its faces. At the quantum level, this correspondence allows us to identify an intertwiner with the state of a quantum polyhedron, thus generalizing the notion of quantum tetrahedron familiar in the loop quantum gravity literature. Moreover, coherent intertwiners turn out to be peaked on the classical geometry of polyhedra. We discuss the relevance of this result for loop quantum gravity. In particular, coherent spin-network states with nodes of arbitrary valence represent a collection of semiclassical polyhedra. Furthermore, we introduce an operator that measures the volume of a quantum polyhedron and examine its relation with the standard volume operator of loop quantum gravity. We also comment on the semiclassical limit of spinfoams with non-simplicial graphs.
10.1103/physrevd.83.044035
[ "https://arxiv.org/pdf/1009.3402v2.pdf" ]
14,414,561
1009.3402
acaed57aa52e6f75e980cac83a7fec86462dfab4
Polyhedra in loop quantum gravity

29 Jan 2011 / February 1, 2011

Eugenio Bianchi (Centre de Physique Théorique, CNRS-Luminy Case 907, 13288 Marseille Cedex 09, France), Pietro Doná (Centre de Physique Théorique, CNRS-Luminy Case 907, 13288 Marseille Cedex 09, France; Scuola Normale Superiore, Piazza dei Cavalieri 7, 56126 Pisa, Italy), Simone Speziale (Centre de Physique Théorique, CNRS-Luminy Case 907, 13288 Marseille Cedex 09, France)

1 Introduction

Loop quantum gravity (LQG) is a continuous theory, whose Hilbert space is the direct sum of spaces associated to graphs Γ embedded in a three-dimensional hypersurface, H = ⊕_Γ H_Γ.
It is often convenient to consider a single graph Γ, and the associated Hilbert space H_Γ. The truncation captures only a finite number of degrees of freedom of the theory. An important question for us is whether these degrees of freedom can be "packaged" so as to provide some approximate description of smooth 3d geometries [1,2]. We specifically think that it would be useful to have a picture of the classical degrees of freedom captured by H_Γ in terms of discrete geometries. Such knowledge is for instance relevant for the interpretation of semiclassical states restricted to H_Γ.

As it turns out, useful insights can be gained by looking at the structure of H_Γ. It decomposes in terms of SU(2)-invariant spaces H_F associated to each node of valence F. For a 4-valent node, it has been known for quite some time that an intertwiner represents the state of a "quantum tetrahedron" [3,4], namely the quantization of the space of shapes of a flat tetrahedron in ℝ³ with fixed areas. For a generic valence F, a natural expectation would be a relation to polyhedra with F faces, as mentioned in [5] and [1]. In this paper we clarify the details of this correspondence.

There are two keys to our result. The first one is the fact that H_F is the quantization of a certain classical phase space S_F, introduced by Kapovich and Millson in [6]. The second is the fact that there is a unique bounded convex polyhedron with F faces of given areas associated to each point of S_F. This is guaranteed by an old theorem by Minkowski [7]. The correspondence holds up to a measure-zero subset of "degenerate" configurations, present also in the 4-valent case. Accordingly, we have the following relations:

    polyhedra with F faces ←→ classical phase space S_F ←→ intertwiner space H_F.

An immediate consequence of these results is a complete characterization of coherent states at a fixed graph: they uniquely define a collection of polyhedra associated to each node of the graph.
This provides a simple and compelling picture of the degrees of freedom of H_Γ in terms of discrete geometries, which are associated with a parametrization of the classical holonomy-flux variables in terms of the twisted geometries introduced in [1].

The paper is divided into two parts, concerning respectively the classical geometry of polyhedra, and the notion of quantum polyhedron together with its relevance to loop gravity. The motivation for the first part comes from the fact that polyhedra have a rich classical geometry. One of the reasons why the notion of quantum tetrahedron has been so fruitful in the development of loop gravity and spinfoams is the fact that everybody understands the geometry of a classical tetrahedron. To make the extension to higher valence as fruitful, we need first of all to clarify a number of aspects of the geometry of polyhedra. Minkowski's theorem guarantees that a polyhedron can be reconstructed out of the areas and normals to its faces, just as happens for the tetrahedron. The new feature here is that there are many possible polyhedra with the same number of faces which differ in their combinatorial structure, i.e. in the adjacency relations of the faces.

In the first part of the paper (sections 2 and 3) we focus entirely on the classical geometry of polyhedra, and collect and in some cases adapt various results known in the mathematical literature. We discuss the combinatorial classes of polyhedra, and how the phase space of shapes at given areas can be divided into regions of different classes. We show explicitly how a given configuration of areas and normals can be used to reconstruct the polyhedron geometry, including its edge lengths, volume and combinatorial class. Furthermore, we discuss certain shape-matching conditions which effectively restrict a collection of polyhedra to (a generalization of) Regge geometries.

In the second part of the paper (sections 4 to 6) we discuss the quantum theory.
We first review the construction of the quantization map between the phase space S_F and the Hilbert space of intertwiners H_F. This leads to the interpretation of an intertwiner state as the state of a quantum polyhedron, and of coherent intertwiners [8,9] as states describing semiclassical polyhedra. The relevance of polyhedra extends to the whole graph Hilbert space H_Γ, via the twisted geometries variables. The result provides an interpretation of coherent spin-network states in H_Γ as a collection of semiclassical polyhedra. Furthermore, we introduce a new operator which measures the volume of a quantum polyhedron. Its definition is based on the knowledge of the classical system behind the intertwiner space H_F, and has the right semiclassical limit on nodes of any valence. We discuss its relation with the standard volume operator of loop quantum gravity. Finally, we make some brief remarks on the polyhedral picture, Regge calculus and covariant spin foam models.

2 The phase space of polyhedra

2.1 Convex polyhedra and Minkowski theorem

A convex polyhedron is the convex hull of a finite set of points in 3d Euclidean space. It can be represented as the intersection of finitely many half-spaces,

    P = { x ∈ ℝ³ | n_i · x ≤ h_i , i = 1, . . . , m } ,    (1)

where the n_i are arbitrary vectors and the h_i are real numbers. The abstract description (1) is non-unique and redundant: the minimal set of half-spaces needed to describe a polyhedron corresponds to taking their number m equal to the number of faces F of the polyhedron. In this paper we are interested in the description of a convex polyhedron with F faces in terms of variables that have an immediate geometric interpretation: the areas of the faces of the polyhedron and the unit normals to the planes that support such faces.
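The half-space description (1) translates directly into a point-membership test. The following minimal Python sketch (the function names are ours, introduced only for illustration) checks membership for a unit cube centred at the origin:

```python
# Membership test for the half-space representation (1):
# x lies in P iff n_i . x <= h_i for every face i.
# Illustrative sketch; helper names are not from the paper.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def contains(normals, heights, x, tol=1e-12):
    """True if x satisfies n_i . x <= h_i for all faces i."""
    return all(dot(n, x) <= h + tol for n, h in zip(normals, heights))

# Unit cube centred at the origin: 6 outward unit normals, h_i = 1/2.
normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
heights = [0.5] * 6

print(contains(normals, heights, (0.0, 0.0, 0.0)))   # interior point: True
print(contains(normals, heights, (0.5, 0.5, 0.5)))   # a vertex: True
print(contains(normals, heights, (0.6, 0.0, 0.0)))   # outside: False
```

With m = 6 half-spaces this is exactly the minimal description of the cube mentioned above: one inequality per face.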
Let us consider a set of unit vectors n_i ∈ ℝ³ and a set of positive real numbers A_i such that they satisfy the closure condition

    C ≡ Σ_{i=1}^{F} A_i n_i = 0 .    (2)

In the following, we will refer to such a set as "closed normals". A convex polyhedron with F faces having areas A_i and normals n_i can be obtained in the following way. For each vector n_i, consider the plane orthogonal to it. Then translate this plane to a distance h_i from the origin of ℝ³. The intersection of the half-spaces bounded by the planes, n_i · x ≤ h_i, defines the polyhedron. We can then adjust the heights h_i = h_i(A) so that the faces have areas A_i. Remarkably, a convex polyhedron with such areas and normals always exists. Moreover, it is unique, up to rotations and translations. This result is established by the following theorem due to H. Minkowski [7,10]:

Theorem (Minkowski, 1897)
(a) If n_1, . . . , n_F are non-coplanar unit vectors and A_1, . . . , A_F are positive numbers such that the closure condition (2) holds, then there exists a convex polyhedron whose faces have outward normals n_i and areas A_i.
(b) If each face of a convex polyhedron is equal in area to the corresponding face with parallel external normal of a second convex polyhedron, and conversely, then the two polyhedra are congruent by translation.

This uniqueness will play an important role in the following. Throughout the rest of the paper, we use simply "polyhedra" to refer to bounded convex polyhedra.

2.2 Kapovich-Millson phase space as the space of shapes of polyhedra

Let us consider F vectors in ℝ³ that have given norms A_1, . . . , A_F and such that they sum up to zero.
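As a minimal numerical illustration of such a closed configuration (the helper name is ours), the closure condition (2) can be evaluated for the face data of a unit cube, where opposite faces cancel pairwise:

```python
# The closure condition (2): C = sum_i A_i n_i must vanish for areas A_i
# and unit outward normals n_i.  Sketch on a unit cube (all areas 1).

def closure(areas, normals):
    """Return the vector C = sum_i A_i n_i of equation (2)."""
    return tuple(sum(A * n[c] for A, n in zip(areas, normals))
                 for c in range(3))

areas = [1.0] * 6
normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

print(closure(areas, normals))  # (0.0, 0.0, 0.0): the data close
```

Doubling a single area breaks closure, and by Minkowski's theorem no convex polyhedron exists with those areas and normals.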
The space of such vectors modulo rotations has the structure of a symplectic manifold [6] and is known as the Kapovich-Millson phase space S_F,

    S_F = { n_i ∈ (S²)^F | Σ_{i=1}^{F} A_i n_i = 0 } / SO(3) .    (3)

The Poisson structure on this 2(F−3)-dimensional space is the one that descends via symplectic reduction from the natural SO(3)-invariant Poisson structure on each of the F spheres S². Action-angle variables for (3) are (F−3) pairs (µ_i, θ_i) with canonical Poisson brackets, {µ_i, θ_j} = δ_ij. Here µ_i is the length of the vector µ_i = A_1 n_1 + . . . + A_{i+1} n_{i+1} (see Fig. 1), and its conjugate variable θ_i is the angle between the plane identified by the vectors µ_{i−1}, µ_i and the plane identified by the vectors µ_i, µ_{i+1}. At fixed areas, the range of each µ_i is finite.

Thanks to Minkowski's theorem, a point in S_F with non-coplanar normals identifies a unique polyhedron. Accordingly, we refer to (3) as the space of shapes of polyhedra at fixed areas. Notice that (3) also contains configurations with coplanar normals: they can be thought of as "degenerate" polyhedra, obtained as limiting cases. The fact that the polyhedra with faces of given areas form a phase space will be important in section 4.1, where we discuss the Hilbert space of the quantum polyhedron.

[Figure 1: the vectors A_1 n_1, A_2 n_2, A_3 n_3, A_4 n_4 drawn from the origin O, forming a closed chain.]

2.3 Classes of polyhedra with F faces

The phase space S_F has a rich structure: as we vary the normals of a polyhedron keeping its areas fixed, not only the geometry but in general also the combinatorial structure of the polyhedron changes; that is, the number of edges and the adjacency of faces. We refer to the combinatorial structure as the class of the polyhedron. In other words, there are two components to the shape of a polyhedron: its class, and its geometry (up to rotations) once the combinatorial structure is fixed. The different classes of polyhedra correspond to the different tessellations of a sphere having F faces.
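Given a closed set of areas and normals, the action variables µ_i of the Kapovich-Millson parametrization described above can be computed directly as partial vector sums; a minimal sketch (helper names are ours), evaluated on a unit-cube configuration:

```python
import math

# Action variables: mu_i = |A_1 n_1 + ... + A_{i+1} n_{i+1}|,
# for i = 1, ..., F-3.  Illustrative sketch only.

def action_variables(areas, normals):
    mus, partial = [], [0.0, 0.0, 0.0]
    for idx in range(len(areas) - 2):      # accumulate terms 1, ..., F-2
        for c in range(3):
            partial[c] += areas[idx] * normals[idx][c]
        if idx >= 1:                       # record mu_1, ..., mu_{F-3}
            mus.append(math.sqrt(sum(p * p for p in partial)))
    return mus

areas = [1.0] * 6
normals = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (-1, 0, 0), (0, -1, 0), (0, 0, -1)]
print(action_variables(areas, normals))  # approx [1.414, 1.732, 1.414]
```

For F = 6 this yields the expected 2(F−3)/2 = 3 action variables; the conjugate angles θ_i would require a choice of reference frame.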
Which class is realized depends on the specific value of the normals. This is a point we would like to stress: one is not free to choose a class and then assign the data. It is, on the contrary, the choice of data that selects the class. This is an immediate consequence of Minkowski's theorem. Accordingly, the phase space S_F can be divided into regions corresponding to the different classes of polytopes with F faces.

To visualize the class of a polyhedron it is convenient to use Schlegel diagrams [11,10]. The Schlegel diagram of a polyhedron is a planar graph obtained by choosing a face f and projecting all the other faces onto f as viewed from above. See Fig. 2 for examples.

To understand the division of S_F into regions of different class, let us first give some examples, and postpone general comments to the end of the section. In the most familiar F = 4 case, there is no partitioning of S_4: there is a unique tessellation of the sphere, the tetrahedron, and it is well known that there is always a unique tetrahedron associated with four closed normals. The first non-trivial case is F = 5, where there are two possible classes: a triangular prism, and a pyramid (see Fig. 3, whose panels are labelled "Dominant" and "Codimension 1"). Consider then the phase space S_5. Minkowski's theorem guarantees that the same set (A_i, n_i) cannot be associated to both classes, thus each point in S_5 corresponds to a unique class. One might at first think that S_5 can be more or less equally divided among the two classes, but this is not the case. In fact, notice that the pyramid is just a special case of the prism, obtained by collapsing to a point one of the edges connecting two triangular faces. The existence of a pyramid then requires a non-trivial condition, i.e. the presence of a 4-valent vertex. A moment of reflection shows that this condition can be imposed via an algebraic equation on the variables. Hence the shapes corresponding to pyramids span a codimension-one surface in S_5.
Generic configurations of areas and normals describe triangular prisms, and the pyramids are measure-zero special cases. We call dominant the class of maximal dimensionality, e.g. the triangular prism here.

Let us move to F = 6, a case of particular interest since regular graphs in ℝ³ are six-valent. There are seven different classes of polyhedra, see Fig. 4. The most familiar one is the cuboid (top left of Fig. 4), with its six quadrilateral faces. Remarkably, there is a further dominant class: a "pentagonal wedge", i.e. a polyhedron with two triangles, two quadrilaterals and two pentagons as faces (to visualize it, imagine a triangular prism planed down on a corner, so that a vertex is replaced by a triangle). The remaining five classes are subdominant, because non-trivial conditions are required for their existence. Subdominant classes have fewer vertices and thus can be seen as special cases with certain edges of zero length.

[Footnote: Among these, notice the class of codimension 3. It has six triangular faces and three four-valent vertices. This class is interesting in that it can be seen as two tetrahedra glued along a common triangle. Two arbitrary tetrahedra are defined by 12 independent numbers. In order for them to glue consistently and generate this polyhedron, the shape of the shared triangle has to match. This shape matching requires three conditions (for instance, matching of the edge lengths), thus we obtain a 9-dimensional space of shapes. For fixed external areas, this is precisely the codimension-3 subspace in S_6. Hence this class is a special case of two tetrahedra where conditions are imposed for them to glue consistently.]

From the above analysis, we expect that the phase space S_6 can then be divided into regions corresponding to the two dominant classes, separated by the subdominant ones. This is qualitatively illustrated in Figure 5. To confirm this picture, we performed some numerical investigations. Using the reconstruction algorithm, which we introduce in the next section, we can assign a class to each point in S_6. In Fig. 6 we give an explicit example of a 2d and a 3d slice of the 6d space S_6, which shows the subdivision into the two dominant classes.

[Figure 6 caption: the numerics use Wolfram's Mathematica. We subdivided the phase space into a regular grid, and had Mathematica compute the adjacency matrix of the area-normal configurations lying at the centers of the cells. This associates a unique class to each cell of the phase space. The information is colour-coded: cuboids in blue, pentagonal wedges in red. With this mapping of finite resolution we have measure-zero probability of hitting a subdominant class, thus the latter are absent in the figures. The holes are configurations for which our numerical algorithm failed. Concerning the specific values of the example, the areas are taken to be (9, 10, 11, 12, 13, 13). In the left panel, we fixed µ_1 = 15, θ_1 = (7/10)π, µ_2 = 13, θ_2 = (13/10)π, and plotted the remaining pair (µ_3, θ_3). In the right panel, we fixed µ_i = (15, 13, 17) and plotted the three angles θ_i.]

After this brief survey of some specific examples, let us make some general statements.

• The phase space S_F can be divided into regions corresponding to different classes. The dominant classes, generically more than one, cover it densely, whereas the subdominant ones span measure-zero subspaces. The dominant classes in phase space correspond to polyhedra with all vertices three-valent, that is, the dual of the tessellation is a triangulation. This condition maximizes both the number of vertices, V = 2(F − 2), and the number of edges, E = 3(F − 2). Subdominant classes are special configurations with some edges of zero length and thus fewer vertices.

• Since all classes correspond to tessellations of the sphere with F faces, they are connected by Pachner moves [12]. The reader can easily find a sequence of moves connecting all seven classes of Fig. 4. To start, apply a 2-2 move to the upper edge of the inner square of the cuboid to obtain the pentagonal wedge.

• The lowest-dimensional class corresponds to a maximal number of triangular faces, a condition which minimizes the number of vertices. When all the faces are triangular, the polyhedron can be seen as a collection of tetrahedra glued together, with matching conditions imposed along all shared internal triangles.

2.4 Large F and the hexagonal dominance

The number of classes grows very fast with F (see for instance [13] for a tabulation). In the examples above with small F, we have been able to characterize the class by looking just at how many faces have a certain valence. However, as we increase F we find classes with the same valence distribution, but which differ in the way the faces are connected. To distinguish the classes one needs to identify the complete combinatorial structure of the polyhedron. This information is captured by the adjacency matrix, which codes the connectivity of the faces of the polyhedron. Below, in section 3.3, we will show how this matrix can be explicitly built as a function of areas and normals, and give some explicit examples.

An interesting question concerns the average valence of a face, defined as p = 2E/F. A simple estimate can be given using the fact that the boundary of any polyhedron is a tessellation of the two-sphere, so that the Euler formula F − E + V = 2 holds. For the dominant classes, which are dual to triangulations, the additional relation 2E = 3V holds, hence E = 3(F − 2) and we get p = 6(1 − 2/F). For large F, we expect the polyhedron to be dominated by hexagonal faces. This expectation is immediately confirmed by a simple numerical experiment. The specimen in Figure 7, for instance, has F = 100 and p ∼ 5.88.
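The counting behind this estimate is a one-line consequence of Euler's formula; a small sketch (function name is ours) makes the numbers explicit:

```python
# For the dominant (all vertices three-valent) classes, Euler's formula
# F - E + V = 2 together with 2E = 3V gives V = 2(F - 2), E = 3(F - 2),
# hence an average face valence p = 2E/F = 6(1 - 2/F).

def counts(F):
    V = 2 * (F - 2)
    E = 3 * (F - 2)
    p = 2 * E / F
    return V, E, p

print(counts(4))    # tetrahedron: (4, 6, 3.0)
print(counts(6))    # cuboid: (8, 12, 4.0)
print(counts(100))  # large F: p = 5.88, approaching hexagonal faces
```

The F = 100 value reproduces the p ∼ 5.88 of the specimen in Figure 7.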
Notice also from the image that there are no triangular faces, consistently with the fact that they tend to minimize the number of vertices and are thus highly non-generic configurations.

3 Polyhedra from areas and normals: reconstruction procedure

So far we have discussed how a point in S_F specifies a unique polyhedron, and the existence of different combinatorial structures. We now describe how the polyhedron can be explicitly reconstructed from areas and normals. The reconstruction will allow us to evaluate its complete geometry, including the lengths of the edges and the volume, and to identify its class through the adjacency matrix, thus being able to associate a class with each point of S_F.

The main difficulty in developing a reconstruction algorithm is that, given the areas and the normals, it is not known a priori which faces of the polyhedron are adjacent. The adjacency relations of the faces (and the combinatorial class of the polyhedron) are to be derived together with its geometry. This can be done in two steps. The first step uses an algorithm due to Lasserre [14] that permits one to algebraically compute the lengths ℓ_ij(h, n) of all the edges of the polyhedron as defined by the h_i and n_i, as in (1). The second step consists of solving a certain quadratic system to obtain the values of the heights h_i for given areas.

3.1 Lasserre's reconstruction algorithm

We now review Lasserre's procedure, and adapt it to the three-dimensional case of interest here. The basic idea of the reconstruction algorithm is to compute the length of an edge as the length of an interval in coordinates adapted to the edge. Consider the i-th face. From the defining inequalities (1), we know that points x ∈ ℝ³ on this face satisfy

    n_i · x = h_i ,    (4a)
    n_j · x ≤ h_j ,  j ≠ i .    (4b)

We consider the generic case in which n_i · n_j ≠ ±1 for all i, j (these special configurations can be obtained as limiting cases).
We introduce coordinates y_i adapted to the face, that is,

    n_i · y_i = 0 ,   y_i = x − (x · n_i) n_i .    (5)

Using (4a) we get x = h_i n_i + y_i, which inserted in (4b) gives

    y_i · n_j ≤ r_ij ,  j ≠ i ,    (6)

where we have defined

    r_ij ≡ h_j − (n_i · n_j) h_i .    (7)

Hence, the i-th face can be characterized either in terms of the x or the y_i coordinates:

    { x · n_i = h_i ;  n_j · x ≤ h_j , j ≠ i }  −→  { y_i · n_i = 0 ;  y_i · n_j ≤ r_ij(h, n) , j ≠ i } .    (8)

Notice that r_ij / √(1 − (n_i · n_j)²) is the distance of the edge ij from the projection of the origin on the i-th face.

The next step is to iterate this process and describe an edge in terms of its adapted coordinates. We start from the i-th face again, and assume that it is connected to the face j, so that the two faces share an edge. Points on the edge ij between the i-th and the j-th face satisfy

    y_i · n_i = 0 ,    (9)
    y_i · n_j = r_ij ,    (10)
    y_i · n_k ≤ r_ik ,  k ≠ i, j .    (11)

As before, we introduce coordinates z_ij adapted to the edge,

    n_i · z_ij = n_j · z_ij = 0 ,   z_ij = y_i − [n_j − (n_i · n_j) n_i] (y_i · n_j) / (1 − (n_i · n_j)²) .    (12)

Using (10) we get that, for a point on the edge,

    y_i = [n_j − (n_i · n_j) n_i] [h_j − h_i (n_i · n_j)] / (1 − (n_i · n_j)²) + z_ij .    (13)

Plugging this into (11) gives

    z_ij · n_k ≤ b_ij,k ,    (14)

where we have defined

    b_ij,k ≡ h_k − (n_i · n_k) h_i − [(n_j · n_k) − (n_i · n_j)(n_i · n_k)] / (1 − (n_i · n_j)²) · [h_j − h_i (n_i · n_j)] .    (15)

Summarizing as before, going to adapted coordinates the edge is defined by

    { y_i · n_i = 0 ;  y_i · n_j = r_ij(h, n) ;  y_i · n_k ≤ r_ik(h, n) , k ≠ i, j }  −→  { z_ij · n_i = 0 ;  z_ij · n_j = 0 ;  z_ij · n_k ≤ b_ij,k(h, n) , k ≠ i, j } .    (16)

At this point we are ready to evaluate the length of each edge.
To that end, we parametrize the z_ij coordinate vector in terms of its norm, say λ, and its direction, which is given by the wedge product of the two normals:

    z_ij = λ (n_i ∧ n_j) / √(1 − (n_i · n_j)²) .    (17)

If we define

    a_ij,k ≡ (n_i ∧ n_j · n_k) / √(1 − (n_i · n_j)²) ,    (18)

we can rewrite the inequalities in (16) as

    λ a_ij,k ≤ b_ij,k .    (19)

Finally, the length of the edge is the length of the interval determined by the tightest set of inequalities, i.e.

    min_{k | a_ij,k > 0} (b_ij,k / a_ij,k) − max_{k | a_ij,k < 0} (b_ij,k / a_ij,k) .    (20)

Here the minimum is taken over all the k's such that a_ij,k is positive, and the maximum over all the k's such that a_ij,k is negative. This quantity is symmetric [14] and satisfies a key property: it can be defined for any pair of faces ij, not only if their intersection defines an edge in the boundary of the polyhedron, and it is negative every time the edge does not belong to the polyhedron [14]. Thanks to this property, we can consistently define the edge lengths for any pair of faces ij as

    ℓ_ij(h, n) = max{ 0 , min_{k | a_ij,k > 0} (b_ij,k / a_ij,k) − max_{k | a_ij,k < 0} (b_ij,k / a_ij,k) } .    (21)

The result is a matrix whose entries are the edge lengths (as functions of the normals and the heights) if the intersection is part of the boundary of the polyhedron, and zero if the intersection is outside the polyhedron. This formula completes Lasserre's algorithm, and permits one to reconstruct the polyhedron from the set (h_i, n_i).

To achieve a description in terms of areas and normals, we need one more step, that is, an expression for the heights in terms of the areas. This can be done using (21) to compute the areas of the faces. We consider the projection of the origin on the face, and use it to divide the face into triangles. Recall that Lasserre's procedure has provided us with the distance between an edge and the projected origin, see (8).
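Equations (7), (15), (18) and (21) can be assembled into a short edge-length routine. The sketch below is our own illustrative implementation, not the authors' code; pairs of faces with n_i · n_j = ±1 are non-generic and are simply reported as length zero, and the test data are the six faces of a unit cube:

```python
import math

# Edge lengths of the polyhedron {x : n_k . x <= h_k}, following
# equations (7), (15), (18) and (21).  Hedged sketch; names are ours.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def edge_length(i, j, normals, heights, eps=1e-9):
    ni, nj = normals[i], normals[j]
    h_i, h_j = heights[i], heights[j]
    g = 1.0 - dot(ni, nj) ** 2
    if g < eps:                  # parallel faces share no edge
        return 0.0
    upper, lower = math.inf, -math.inf
    for k, (nk, hk) in enumerate(zip(normals, heights)):
        if k in (i, j):
            continue
        a = dot(cross(ni, nj), nk) / math.sqrt(g)          # eq. (18)
        b = (hk - dot(ni, nk) * h_i                         # eq. (15)
             - (dot(nj, nk) - dot(ni, nj) * dot(ni, nk)) / g
               * (h_j - h_i * dot(ni, nj)))
        if a > eps:
            upper = min(upper, b / a)
        elif a < -eps:
            lower = max(lower, b / a)
        elif b < -eps:           # constraint can never be met: no edge
            return 0.0
    return max(0.0, upper - lower)                          # eq. (21)

# Unit cube centred at the origin: non-opposite faces share an edge of
# length 1, opposite faces share none.
normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
heights = [0.5] * 6
print(edge_length(0, 2, normals, heights))  # 1.0
print(edge_length(0, 1, normals, heights))  # 0.0
```

Counting the pairs with ℓ_ij > 0 recovers the 12 edges of the cube, i.e. the full adjacency matrix of the faces.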
We thus can write A i = 1 2 F j=1 j =i r ij 1 − (n i · n j ) 2 ℓ ij .(22) Notice that both r ij (h, n) from (7) and ℓ ij (h, n) from (21) are linear in the heights. Hence, the area is a quadratic function, A i (h, n) = F j,k=1 M jk i (n 1 , . . . , n F )h j h k ,(23) where M i is a matrix depending only on the normals. This homogeneous quadratic system can be solved for h i (A, n). The existence of a solution with h i > 0 ∀i is guaranteed by Minkowski's theorem. However, the solution is not unique: in fact, we have the freedom of moving the origin around inside the polyhedron, thus changing the value of the heights without changing the shape of the polyhedron. A method which we found convenient to use is to determine a solution minimizing the function f (h i ) ≡ i (A i (h, n) − A i ) 2(24) at areas and normals fixed, with A i (h, n) given by (23). This is the method used in the numerical investigations of Figs. 6 and 7. 3 Finally, from the inverse we derive the lengths as functions of areas and normals, which with a slight abuse of notation we still denote in the same way, ℓ ij (A, n) = ℓ ij (h(A, n), n).(25) These expressions are well-defined and can be computed explicitly. Volume of a polyhedron in terms of areas and normals Let us call P(A i , n i ) the convex subset of R 3 corresponding to the polyhedron. Its volume is simply the integral on this region of the Euclidean volume density: V (A i , n i ) = P(A i ,n i ) d 3 x.(26) An interesting question is how to compute efficiently the volume integral (26). The simplest way is to use the algorithm described in the previous section: we chop the region P(A i , n i ) into pyramids with a common vertex in its interior and bases given by the faces of the polyhedron. In this way the volume is just the sum of the volumes of the pyramids, i.e. V (A i , n i ) = 1 3 F i=1 h i A i .(27) Here h i = h i (A, n) are the heights of the pyramids expressed in terms of the areas and normals via Lasserre's algorithm. 
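As an illustration, the reconstruction just described, that is (15) and (18) feeding the edge lengths (21), then the areas (22) and the volume (27), can be sketched numerically. The following NumPy fragment is our own illustrative implementation, not the code used for Figs. 6 and 7, and the numerical tolerances are ad hoc:

```python
import numpy as np

def reconstruct(h, n):
    """Reconstruct a convex polyhedron {x : n_i . x <= h_i} from heights h and
    unit normals n: returns the edge-length matrix (21), the face areas (22)
    and the volume (27). Assumes the polyhedron is bounded."""
    F = len(h)
    ell = np.zeros((F, F))
    for i in range(F):
        for j in range(F):
            if i == j:
                continue
            c = n[i] @ n[j]
            s2 = 1.0 - c * c
            if s2 < 1e-12:     # (anti)parallel normals: faces cannot share an edge
                continue
            lo, hi = -np.inf, np.inf
            for k in range(F):
                if k in (i, j):
                    continue
                a = np.cross(n[i], n[j]) @ n[k] / np.sqrt(s2)          # a_ij,k, (18)
                b = (h[k] - (n[i] @ n[k]) * h[i]
                     - ((n[j] @ n[k]) - c * (n[i] @ n[k])) / s2
                     * (h[j] - h[i] * c))                              # b_ij,k, (15)
                if a > 1e-12:
                    hi = min(hi, b / a)
                elif a < -1e-12:
                    lo = max(lo, b / a)
            ell[i, j] = max(0.0, hi - lo)                              # (21)
    # Face areas (22): triangle contributions r_ij/sqrt(1-(n_i.n_j)^2) * l_ij / 2
    areas = np.zeros(F)
    for i in range(F):
        for j in range(F):
            if j == i or abs(n[i] @ n[j]) > 1 - 1e-12:
                continue
            r = h[j] - (n[i] @ n[j]) * h[i]                            # r_ij, (7)
            areas[i] += 0.5 * r / np.sqrt(1 - (n[i] @ n[j])**2) * ell[i, j]
    volume = np.sum(h * areas) / 3.0                                   # (27)
    return ell, areas, volume

# Check on the unit cube centred at the origin: six faces, all heights 1/2.
n = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
              [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
h = np.full(6, 0.5)
ell, areas, volume = reconstruct(h, n)
# Every face has area 1, the volume is 1, and 12 pairs of faces share a unit edge.
```

The cube is a convenient test case because every quantity is known in closed form; for generic normals the same routine gives the full matrix ℓ ij needed for the adjacency analysis below.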
The volume can be used to define a volume function on the phase space S F . To that end, notice that (27) is not defined for configurations with coplanar normals, which on the other hand do enter S F . However, it can be straightforwardly extended to a function on the whole S F by defining it to be zero for coplanar configurations. Furthermore, the resulting phase space function is continuous. 4 Since the volume is manifestly invariant under rotations, it can also be written as a function of the reduced phase space variables only, that is, V (A i , µ k , θ k ). To do so explicitly, one uses the relation n i = n i (µ k , θ k ), which is straightforward to derive once a reference frame is chosen. The volume of the polyhedron as a function of areas and normals has a number of interesting properties: C1. Non-negative phase-space function. The volume is by construction non-negative, and at given areas, it vanishes only when the normals n i lie in a plane. This in particular implies that the volume vanishes for F = 2 and 3. C2. Boundedness. For fixed areas A i , the volume is a bounded function of the normals. We call V max (A i ) the volume of the polyhedron with maximum volume, 5 V max (A i ) ≡ sup n i {V (A i , n i )} .(28) In particular, V max (A i ) is smaller than the volume of the sphere that has the same surface area as the polyhedron. Therefore we have the bound 0 ≤ V (A i , n i ) < ( i A i ) 3/2 / (3 √ 4π) .(29) C3. Face-consistency. If we set to zero one of the areas such that the result is still a nondegenerate polyhedron, the function (27) automatically measures the volume of the reduced polyhedron with F − 1 faces. In conclusion, a point in S F determines uniquely the whole geometry of a polyhedron and in particular its edge-lengths ℓ ij (21) and its volume (27). 6 Now we show how these data can be used to identify the class of the polyhedron. 4 In order to see this, one shows that the limit of coplanar normals exists and the volume tends to zero in this limit.
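A quick numerical illustration of the isoperimetric bound (29) (our own check; the bound is the volume of the sphere with the same total area, (Σ i A i ) 3/2 /(3 √ 4π)):

```python
import math

# Bound (29): at fixed total area, the volume of any polyhedron is smaller than
# that of the sphere with the same surface area, (sum_i A_i)^(3/2) / (3 sqrt(4 pi)).
def sphere_bound(total_area):
    return total_area ** 1.5 / (3.0 * math.sqrt(4.0 * math.pi))

V_cube = 1.0                 # unit cube: volume 1, total area 6
bound = sphere_bound(6.0)    # ~1.382, so the cube fills about 72% of the bound
# Sanity check of the formula itself: a unit sphere (area 4*pi) saturates it,
# sphere_bound(4*pi) = 4*pi/3.
```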
From property (C3) (see below), a general F -valent coplanar configuration can be obtained from an F + 1 configuration in the limit of vanishing area of the base. 5 Notice that there can be more than one polyhedron that attains maximum volume. For instance, in the case F = 4, there are two parity-related tetrahedra with maximal volume. 6 It is worth adding that the problem of computing the volume of a given polyhedron is a complex and well studied topic in computational mathematics [15,16], hence better procedures than the one used here could in principle be found. However, the usual starting point for common algorithms is the knowledge of the coordinates of vertices, or the system of inequalities (1). Therefore the methods need to be adapted to obtain formulas in terms of areas and normals. The main difficulty is clearly that the adjacency relations of the faces are to be derived together with the geometry. We found Lasserre's algorithm to be the most compatible with these necessities, thanks to the fact that the lengths are reconstructed algebraically. Numerical algorithms for the volume and shape reconstruction from areas and normals are developed in the study of extended Gaussian images in informatics [17], however there are no analytical results. Adjacency matrix and the class of the polyhedron The adjacency matrix A of the polyhedron is defined as A ij = 1 if the faces i and j are adjacent 0 otherwise i, j = 1, . . . , F (30) Notice that A ij coincides with the matrix ℓ ij in (21) with all the non-zero entries normalized to 1: the reconstruction algorithm gives us the adjacency matrix for free. The symmetric matrix A ij contains information on the connectivity of the faces as well as on the valence of each face, thus the class of the polyhedron can be identified uniquely from it.
The valence p i of the face i can be extracted taking the sum of the columns for each row, p i = F j=1 A ij .(31) For example, the two classes with F = 5 of Fig. 3 are distinguished by their valences: p = (4, 4, 4, 3, 3) for the triangular prism and p = (4, 3, 3, 3, 3) for the pyramid. From graph theory [18], we know that (30) has a number of interesting properties that can be related to the geometrical parameters of the polyhedron. For instance, the number of walks from the face i to the face j of length r is given by the matrix elements of the r-th power (A r ) ij . From this property we deduce that the number E of edges of the polyhedron is E = 1 2 TrA 2 = 1 2 i p i .(32) This expression generalizes the value E = 3(F − 2) valid for the dominant classes. Higher traces are related to the number of loops of a given length. For instance, the number of closed loops of length 3 is given by (1/6) TrA 3 . Through the adjacency matrix, obtained via the reconstruction procedure, areas and normals identify a unique class, and this permits the division of S F . Shape-matching conditions Knowing the complete geometry of the polyhedra allows us also to address the following situation. Suppose that we are given two polyhedra in terms of their areas and normals, and that we want to glue them by a common face. Even if we choose the area of the common face to be the same, there is no guarantee that the shape of the face will match: the two sets of data will in general induce different shapes of the face. That is, the face has the same area but it can be two different polygons altogether. In order to glue the polyhedra nicely, one needs shape matching conditions guaranteeing that the shared face has the same geometry in both polyhedra. If both polyhedra are tetrahedra, the problem has been solved in [19]. One uses the fact that the shape of the common triangle matches if two lengths, or two internal angles, are the same.
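Returning for a moment to the adjacency matrix: the trace identities (31)-(32) and the 3-loop count are easy to verify for the cube (a small check of our own, with faces in an arbitrary fixed order):

```python
import numpy as np

# Adjacency matrix (30) for the cube, faces ordered (+x, -x, +y, -y, +z, -z):
# two faces are adjacent unless they are opposite.
A = np.ones((6, 6), dtype=int) - np.eye(6, dtype=int)
for i, j in [(0, 1), (2, 3), (4, 5)]:
    A[i, j] = A[j, i] = 0

p = A.sum(axis=1)                       # valences (31): each cube face is 4-valent
E = np.trace(A @ A) // 2                # number of edges (32)
triangles = np.trace(A @ A @ A) // 6    # closed 3-loops: triples of mutually
                                        # adjacent faces, i.e. vertices of the cube
# Euler check: V - E + F = 8 - 12 + 6 = 2
```

Note that for the cube the closed 3-loops of the face-adjacency graph count exactly the vertices, so the traces of A recover the full combinatorics F, E, V.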
The internal angles α can be expressed in terms of the 3d dihedral angles of the tetrahedron as follows, cos α i jk = (cos φ jk + cos φ ij cos φ ik ) / (sin φ ij sin φ ik ).(33) Here the faces i, j and k all share a vertex, and α i jk is the angle between the edge ij and the edge ik inside the triangle i. Consider now the adjacent tetrahedron. Its geometry induces for the same angle the value cos α i j ′ k ′ = (cos φ ′ j ′ k ′ + cos φ ′ ij ′ cos φ ′ ik ′ ) / (sin φ ′ ij ′ sin φ ′ ik ′ ).(34) Hence, for the shape to match it is sufficient to require C kl,ij (φ) ≡ cos α i jk − cos α i j ′ k ′ = 0(35) for two of the three angles of the triangle. These shape matching conditions are conditions on the normals of the two tetrahedra. See left panel of Figure 8 for an illustration of these relations. The simplicity of the conditions (35) is a consequence of the fact that two triangles with the same area are congruent if two angles match. For the general case, the face to glue is now a polygon and the number of conditions is greater. One needs to make sure that the valence p of the polygon is the same. Then, the number of independent parameters of a polygon on the plane is 2p − 3, hence giving the edge lengths is not enough, and p − 2 additional conditions are needed. A convenient procedure is the following. Identify the faces of the two polyhedra that, having the same area, we want to match. From the reconstruction algorithm, we know the edge lengths ℓ ij of the face viewed from one polyhedron. Then, for all j such that ℓ ij ≠ 0, we consider the face normals n j projected on the plane of the i-th face, ñ j = (n j − (n i · n j )n i ) / |n j − (n i · n j )n i | = (n j − cos φ ij n i ) / sin φ ij .(36) The set (ℓ ij ,ñ j ) defines a unique polygon in the plane identified by n i , thanks to a two-dimensional version of Minkowski's theorem. Then, we do the same with the second polyhedron, obtaining a second set (ℓ ′ ij ,ñ ′ j ) living in the plane identified by n ′ i .
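Before moving on, a quick sanity check of the angle formula (33), written here in the spherical-law-of-cosines form and with interior dihedral angles as inputs (our own convention):

```python
import math

def internal_angle(phi_jk, phi_ij, phi_ik):
    """Internal angle of face i between the edges ij and ik of a tetrahedron,
    from its interior dihedral angles (spherical law of cosines form of (33))."""
    return math.acos((math.cos(phi_jk) + math.cos(phi_ij) * math.cos(phi_ik))
                     / (math.sin(phi_ij) * math.sin(phi_ik)))

# Regular tetrahedron: every dihedral angle is arccos(1/3), about 70.53 degrees,
# and every face is equilateral, so the internal angles must come out as 60 degrees.
phi = math.acos(1.0 / 3.0)
alpha = internal_angle(phi, phi, phi)
```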
Finally, the shape matching conditions consist of imposing the equivalence of these two flat polygons up to rotations in three-dimensional space. Notice that the shape matching conditions now constrain both the normals and the areas of the two polyhedra. Relation to loop quantum gravity Thus far we have been discussing classical properties of polyhedra. In the rest of the paper, we discuss the relevance of polyhedra for loop quantum gravity. The relation comes from the following two key results: (i ) Intertwiners are the building blocks of spin-network states, an orthonormal basis of the Hilbert space of loop quantum gravity [20,21] (ii ) Intertwiners are the quantization of the phase space of Kapovich and Millson [22,9,23] (see also [24,25]), i.e. of the space of shapes of polyhedra with fixed areas discussed in the previous sections. Therefore an intertwiner can be understood as the state of a quantum polyhedron, and spin-network states as a collection of quantum polyhedra associated with each vertex. In this section we review how (ii) and the notion of quantum polyhedron are established, observe that coherent intertwiners are peaked on the geometry of a classical polyhedron and discuss the relevance of this fact for the relation between semiclassical states of loop quantum gravity and twisted geometries. The quantum polyhedron Let us consider the space of vectors in 3d Euclidean space with norm j. This is a phase space, the Poisson structure being the rotationally invariant one proper to the 2-sphere S 2 j of radius j. As is well known, its quantization 7 is the representation space V (j) of SU (2). We are interested in the phase space S F , that is the space of F vectors that sum to zero, up to rotations. The Poisson structure on S F is obtained via the symplectic reduction of the Poisson structure on the product of F spheres of given radius.
Thanks to Guillemin-Sternberg's theorem that quantization commutes with reduction, 8 we can quantize first the unconstrained phase space × i S 2 j i , and then reduce it at the quantum level extracting the subspace of ⊗ i V (j i ) that is invariant under rotations. This gives precisely the intertwiner space H F = Inv ⊗ F i=1 V (j i ) . The situation is summarized by the commutativity of the following diagram, × i S 2 j i −→ ⊗ i V j i Symplectic reduction ↓ ↓ Quantum reduction S F −→ H F The correspondence between classical quantities and their quantization is the following: up to a dimensionful constant, the generators J i of SU(2) acting on each representation space V (j i ) are understood as the quantization of the vectors A i n i . In LQG the dimensionful constant is chosen to be the Immirzi parameter γ times Planck's area 8πL 2 P , A i n i −→Ê i = 8πγL 2 P J i .(37) The closure condition (2) on the normals of the polyhedron is promoted to an operator equation, F i=1 J i = 0.(38) This condition defines the space of intertwiners, and corresponds to the Gauss constraint of classical General Relativity in Ashtekar-Barbero variables. One can then proceed to associate operators to geometric observables through the quantization map (37). The area of a face of the quantum polyhedron iŝ A i = Ê i ·Ê i = 8πγL 2 P j i (j i + 1)(39) and produces an equispaced quantization of the area A i ∼ j i for large spins, i.e. up to quantum corrections. Notice that an ordering can be chosen so that the area is exactly Â i = 8πγL 2 P j i . This ordering will be considered below to simplify the construction of the volume operator. The scalar product between the generators of SU (2) associated to two faces of the polyhedron measures the angle θ ij between them [27], θ ij = arccos J i · J j j i (j i + 1) j j (j j + 1) .(40) 7 Notice that, as usual, the quantum theory requires the quantization of some classical quantities.
In this case the norm of the vector has to be a half-integer j, the spin. 8 For the general theory see [26], for details on the application to the current system see [4] and in particular [9]. Notice that the angle operators do not commute among themselves, therefore it is not possible to find a state for a quantum polyhedron that has a definite value of all the angles between its faces. Moreover, the adjacency relations of the faces are not prescribed a priori, thusθ ij might not even be a true dihedral angle of the polyhedron. Therefore an eigenstate of a maximal commuting set of angles is far from the state of a classical polyhedron: it is an infinite superposition of polyhedra of different shapes, including different combinatorial classes. Semiclassical states for a quantum polyhedron are discussed in the next section. Coherent intertwiners and semiclassical polyhedra Coherent intertwiners for H F were introduced in [8] and further developed in [9,23] (for previous related work, see [28]). These Livine-Speziale (LS) coherent intertwiners are defined as the SU(2)-invariant projection of a tensor product of states |j i , n i ∈ V (j i ) , ||j i , n i ≡ dg D (j 1 ) (g)|j 1 , n 1 · · · D (j F ) (g)|j F , n F .(41) The states |j, n are SU(2) coherent states peaked on the direction n of the spin [29,30], j, n| J |j, n = jn.(42) In (41), the unit-vectors n i can be assumed to close, i j i n i = 0. The reduced states are still an overcomplete basis of H F , as a consequence of the Guillemin-Sternberg theorem [9,31]. Coherent intertwiners are semiclassical states for a quantum polyhedron: the areas are sharp, and the expectation value of the non-commuting angle operatorsθ ij reproduces the classical angles between faces of the polyhedron in the large spin limit, j i , n i || cosθ ij ||j i , n i j i , n i ||j i , n i ≈ n i · n j .(43) Moreover, the dispersions are small compared to the expectation values.
A useful fact is that coherent intertwiners can be labeled directly by a point in the phase space S F of Kapovich and Millson, and therefore by a unique polyhedron. This provides a resolution of the identity in intertwiner space as an integral on S F . To realize this reduction, it is convenient to parametrize S F via F − 3 complex numbers Z k , instead of (µ k , θ k ). Let us choose an orientation in ℝ 3 and consider the stereographic projection z i of the unit-vectors n i into the complex plane. 9 The F − 3 complex variables Z k are the cross-ratios [9] Z k = (z k+3 − z 1 )(z 2 − z 3 ) (z k+3 − z 3 )(z 2 − z 1 ) , k = 1, . . . , F − 3. (44) 9 The relation between the unit-vector n = (nx, ny, nz) and the stereographic projection is z = − (nx − iny) / (1 − nz) = − tan θ 2 e −iφ , where θ and φ are the zenith and azimuth angles of S 2 , and we have chosen to project from the south pole.
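As a concrete illustration of (44) with the conventions of footnote 9 (a check of our own): for the regular tetrahedron the four projected points form an equianharmonic configuration, so the single cross-ratio satisfies Z² − Z + 1 = 0, i.e. Z = e^{±iπ/3}, regardless of the ordering of the normals.

```python
import numpy as np

def stereo(n):
    """Stereographic projection from the south pole: z = -(nx - i ny)/(1 - nz)."""
    return -(n[0] - 1j * n[1]) / (1.0 - n[2])

def cross_ratios(normals):
    """The F - 3 cross-ratios (44) built from the projected normals z_i."""
    z = [stereo(n) for n in normals]
    F = len(z)
    return [(z[k + 3] - z[0]) * (z[1] - z[2])
            / ((z[k + 3] - z[2]) * (z[1] - z[0])) for k in range(F - 3)]

# Normals of a regular tetrahedron; they close, sum_i n_i = 0.
normals = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
Z = cross_ratios(normals)[0]
# |Z| = 1 and Z**2 - Z + 1 = 0, the equianharmonic configuration.
```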
11 Coherent states on a fixed graph and twisted geometries The states |j i , Z k provide coherent states for the space of intertwiners only, and should not be confused with coherent spin-network states for loop quantum gravity. Nevertheless, classical polyhedra and coherent intertwiners are relevant to the full theory, as we now discuss. To relate polyhedra to loop quantum gravity, consider a truncation of the theory to a single graph Γ, with L links and N nodes. The associated gauge-invariant Hilbert space H Γ = L 2 [SU(2) L /SU(2) N ] decomposes in terms of intertwiner spaces H F (n) ≡ Inv[⊗ l∈n V (j l ) ] as H Γ = ⊕ j l ⊗ n H F (n) .(48) This Hilbert space is the quantization of a classical space 12 S Γ = T * SU(2) L //SU(2) N , which corresponds to (gauge-invariant) holonomies and fluxes associated with links and dual faces of the graph. The double quotient // means symplectic reduction. The key result is that this space admits a decomposition analogous to (48). In fact, it can be parametrized as the following Cartesian product [1], S Γ = × l T * S 1 × n S F (n) ,(49) 10 The states |ji, Z k also define an holomorphic representation of the quantum algebra of functions ψ(Z k ) ≡ ji,Z k |ψ , see [23]. We will not use this representation in this paper. 11 Recently [5,32,33] attention has been given to a second space for which polyhedra are relevant. This is a sum of intertwiner spaces such that the total spin is fixed, HJ = ⊕ j 1 ..j F i j i =J Inv ⊗ F i=1 V (j i ) .(47) The interest in this space is that it is a representation of the unitary group U(F). Vectors in this space represent quantum polyhedra with fixed number of faces and fixed total area, but fuzzy individual areas as well as shapes as before. Coherent states for (47) can be built using U(F) coherent states [32]. These are also peaked on classical polyhedra like the LS states (41), thus the results in this paper are relevant for them as well. 
12 Again, this is a symplectic manifold up to singular points [34]. where T * S 1 is the cotangent bundle to a circle, F the valence of the node n, and S F is the phase space of Kapovich and Millson. The parametrization is achieved through an isomorphism between holonomy-fluxes and a set of variables dubbed "twisted geometries". These are the assignment of an area A l and an angle ξ l to each link, and of F normals n i , satisfying the closure condition (2), to each node. See [1,2] for details and discussions. In this parametrization, a point of S Γ describes a collection of polyhedra associated to each node. The two polyhedra belonging to nodes connected by a link l share a face. The area A l of this face is uniquely assigned to both polyhedra (notice that this fact alone does not imply that the shape of the face matches; more on this below). The extra angles ξ l carry information on the extrinsic geometry between the polyhedra. The isomorphism (49) and the unique correspondence between closed normals and polyhedra mean that each classical holonomy-flux configuration on a fixed graph can be visualized as a collection of polyhedra, together with a notion of parallel transport between them. Just as the intertwiners are the building blocks of the quantum geometry of spin networks, polyhedra are the building blocks of the classical phase space (49) in the twisted geometries parametrization. What is the relevance of this geometric construction to the quantum theory? Coherent states for loop quantum gravity have been introduced and extensively studied by Thiemann and collaborators [35,36,34]. Although the states for the full theory have components on each graph, one needs to cut off the number of graphs to make them normalizable. In practice, it is often convenient to truncate the theory to a single graph.
This truncation provides a useful computational tool, to be compared to a perturbative expansion, and has found many applications, from the study of propagators [37] to cosmology [38]. In many of these applications, control of the semiclassical limit requires a notion of semiclassical states in the truncated space H Γ . The truncation can only capture a finite number of degrees of freedom, thus coherent states in H Γ are not peaked on a smooth classical geometry. Twisted geometries offer a way to see them as peaked on a discrete geometry, to be viewed as an approximation of a smooth geometry on a cellular decomposition dual to the graph Γ. The above results provide a compelling picture of these twisted geometries in terms of polyhedra, and thus of coherent states as a collection of semiclassical polyhedra. There is one subtlety with this geometric picture that should be kept in mind, which justifies the name "twisted" geometries: they define a metric which is locally flat, but discontinuous. To understand this point, consider the link shared by two nodes. Its dual face has area proportional to A l . However, the shape of the face is determined independently by the data around each node (i.e. the normals and the other areas), thus generic configurations will give two different shapes. In other words, the reconstruction of two polyhedra from holonomies and fluxes does not guarantee that the shapes of shared faces match. Hence, the metric of twisted geometries is discontinuous across the face [1,2]. 13 See left panel of Figure 8. One can also consider a special set of configurations for which the shapes match, see right panel of Figure 8. This is a subset of the phase space S Γ where the shape matching conditions, discussed earlier in Section 3.4, hold. This subset corresponds to piecewise flat and continuous metrics. 
For the special case in which all the polyhedra are tetrahedra, this is the set-up of Regge calculus, and those holonomies and fluxes indeed describe a 3d Regge geometry: twisted geometries with matching conditions amount to edge lengths and extrinsic curvature dihedral angles [1,2]. This relation between twisted geometry and Regge calculus implies that holonomies and fluxes carry more information than the space of Regge calculus. This is not in contradiction with the fact that the Regge variables and the LQG variables on a fixed graph both provide a truncation of general relativity: simply, they define two distinct truncations of the full theory. See [2] for a discussion of these aspects. For an arbitrary graph, the shape-matching subset describes a generalization of 3d Regge geometry to arbitrary cellular decompositions. In this case however the variables are not equivalent any longer to edge lengths, since as already discussed these do not specify uniquely the geometry of polyhedra. Rather, such cellular Regge geometry must use areas and normals as fundamental variables. Finally, let us make some comments on the coherent states themselves. The discussion so far is largely independent of the details of the coherent states on H Γ . All that is required is that they are properly peaked on a point in phase space. The states most commonly used are the heat-kernel ones of Thiemann and collaborators. Notice that these are not written in terms of the LS coherent intertwiners (41). Nevertheless, it was shown in [41] that they do reproduce coherent intertwiners in the large area limit. Alternative coherent states based directly on coherent intertwiners appear in [42]. These results show that coherent intertwiners can be used as building blocks of coherent spin networks. On the volume operator At the classical level, the volume of a polyhedron is a well-defined quantity. 
In this section we investigate the quantization of this quantity and its relation with the volume operators used in loop quantum gravity. The volume of a quantum polyhedron Let us consider the phase space S F of polyhedra with F faces of given area. The volume of the polyhedron is a well-defined function on this phase space, as discussed in Section 3.2. Coherent intertwiners provide a natural tool to promote this quantity to an operator in H F . In the following we use the parametrization of the phase space S F in terms of the cross ratios Z k . In particular, the F normals n i are understood as functions of the cross-ratios, n i (Z k ). Accordingly we call V (j i , Z k ) the volume of a polyhedron with faces of area A i (j i ) = 8πγL 2 P j i and normals n i (Z k ), V (j i , Z k ) ≡ V (A(j i ), n(Z k )) .(50) For simplicity we assume an ordering of operators such that the area is linear in the spin, but the above expression, and the following construction, can be immediately applied to other possibilities. Let us consider now the Hilbert space of intertwiners H F associated to the phase space S F . The volume of a quantum polyhedron can be defined in terms of coherent intertwiners |j i , Z k and of the classical volume as follows: V = dµ(Z k ) V (j i , Z k ) |j i , Z k j i , Z k | .(51) This integral representation of the operator in terms of its classical version 14 is of the kind considered originally by Glauber [45] and Sudarshan [46]. It has a number of interesting properties that we now discuss: Q1. The operatorV is positive semi-definite, i.e. ψ|V |ψ = dµ(Z k ) V (j i , Z k ) | j i , Z k |ψ | 2 ≥ 0 ,(52) for every |ψ in H. This is a straightforward consequence of the fact that the classical volume is a positive function, V (j i , Z k ) ≥ 0. Furthermore,V vanishes for F = 2 and 3. Q2.V is a bounded operator in H F .
Its norm ||V || = sup ψ ψ|V |ψ / ψ|ψ is bounded from above by the maximum value of the classical volume of a polyhedron with fixed areas, ψ|V |ψ ψ|ψ = dµ(Z k ) V (j i , Z k ) | j i , Z k |ψ | 2 ≤ sup Z k {V (j i , Z k )} ≡ V max (j i ) .(53) Q3. 0-spin consistency. Let us consider the operatorV defined on the Hilbert space H F +1 associated to spins j 1 , . . . , j F , j F +1 , and the one defined on the Hilbert space H F associated to spins j 1 , . . . , j F . When the spin j F +1 vanishes, the two operators coincide. This is a consequence of the fact that the classical volume of a polyhedron with F + 1 faces coincides with the volume of a polyhedron with F faces and the same normals when one of the areas is sent to zero. These three properties are the quantum version of C1, C2, C3 discussed in Section 3.2. Moreover, using the fact that for large spins two coherent intertwiners become orthogonal, | j i , Z k |j i , Z ′ k | 2 → δ(Z k , Z ′ k ),(54) we have that the expectation value V of the volume operator on a coherent state |j i , Z k reproduces the volume of the classical polyhedron with shape (j i , Z k ), V ≡ j i , Z k |V |j i , Z k j i , Z k |j i , Z k ≈ V (A i (j i ), n i (Z k )).(55) 14 In the literature [29], the classical function V (ji, Z k ) is called the P -symbol of the operatorV . On the other hand, the expectation value of the operatorV on a set of coherent states, i.e. Q(ji, Z k ) ≡ ji, Z k |V |ji, Z k , is called the Q-symbol. When the P -symbol and the Q-symbol of an operator exist, then the operator is fully determined by either of them. The properties of these symbols and of the operator they define have been studied by Berezin in [43,44]. This fact allows one to estimate the largest eigenvalue of the volume: in the large spin limit, the largest eigenvalue is given by V max (A i ), the volume of the largest polyhedron in S F . The spectrum of the operatorV can be computed numerically. Let us focus on the case F = 4 for concreteness.
The matrix elements ofV in the conventional recoupling basis are given by V kk ′ = j i , k|V |j i , k ′ = dµ(Z) V (j i , Z) j i , k|j i , Z j i , Z|j i , k ′ .(56) The matrix V kk ′ can be diagonalized numerically to obtain its eigenvalues. 15 We focused for simplicity on the case where all the four spins j i are equal to j 0 . The results using Wolfram's Mathematica are shown in Fig. 9 and confirm that the maximum eigenvalue is below the volume of the regular tetrahedron. Notice also that the spectrum has a gap. One of the interesting questions to investigate in the future is whether this gap survives at higher valence, or it decays as for the standard volume operator [47]. It is interesting to notice that the volume operator introduced above commutes with the parity operator. This is the operator that sends the normals to their opposite, P|j, n = |j, −n .(57) In terms of the stereographic projection, the map n → −n amounts to z → −1/z̄, thus its action on coherent intertwiners labeled by the single cross-ratios Z is simplŷ P|j i , Z = |j i , Z̄ .(58) Notice that V (j i , Z k ) = V (j i , Z̄ k ) thanks to the invariance of the classical volume under parity. Moreover the measure dµ(Z k ) is invariant under the transformation Z k → Z̄ k . As a result, the operator (51) commutes with parity, PVP † = dµ(Z) V (j i , Z)|j i , Z̄ j i , Z̄| = dµ(Z) V (j i , Z̄)|j i , Z j i , Z| =V .(59) This explains the degeneracies seen in the spectrum. Clearly, there are other possibilities for the volume of a quantum polyhedron. All of them share the same classical limit, but can have a different spectrum for small eigenvalues. An interesting variant isˆ V = |Û |, whereÛ is the oriented-volume-squared operator, defined aŝ U = dµ(Z k ) s(Z k )V 2 (j i , Z k ) |j i , Z k j i , Z k |.(60) Here s(Z k ) is the parity of the polyhedron, i.e. s(Z k ) = ±1 and s(Z̄ k ) = −s(Z k ). The operatorÛ anticommutes with the parity, and so doesˆ V .
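The pairing mechanism that follows from this anticommutation is pure linear algebra, independent of the LQG details, and can be illustrated with a toy model of our own:

```python
import numpy as np

# If an operator U anticommutes with an involution P (P U P = -U, P^2 = 1) and
# U v = u v, then U (P v) = -u (P v): nonzero eigenvalues come in pairs +-u,
# and an odd-dimensional space forces a zero eigenvalue.
rng = np.random.default_rng(0)
d = 5                                     # odd dimension
P = np.diag([1.0, -1.0, 1.0, -1.0, 1.0])  # an involution, P @ P = identity
S = rng.normal(size=(d, d))
S = S + S.T                               # random symmetric matrix
U = 0.5 * (S - P @ S @ P)                 # by construction P U P = -U
eigs = np.sort(np.linalg.eigvalsh(U))     # symmetric spectrum, zero in the middle
```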
Therefore, under the assumption that the spectrum is non-degenerate, we have that the eigenvalues appear in pairs ±u. In particular, a zero eigenvalue is present when the Hilbert space H F is odd-dimensional. This operator is similar in spirit to the volume of a quantum tetrahedron introduced by Barbieri [3],V B = (8πγ) 3/2 L 3 P (√2/3) √ |J 1 · (J 2 × J 3 )| . In Fig. 10 we show some eigenvalues ofˆ V and a comparison withV B . For more on semiclassical aspects of the spectrum of the volume, see [48]. LQG volume operator and the quantum polyhedron In LQG, the operator associated to the volume of a region in space is a well studied quantity [49,50,51]. It is defined on the graph Hilbert space H Γ as a sum over contributionsV n from each node n of the graph within the region R, V Γ (R) = n⊂RV n .(61) In order to admit a lifting from H Γ to the full Hilbert space of LQG, the operatorV Γ (R) has to satisfy a number of consistency conditions that go under the name of "cylindrical consistency" [52]. In particular, these conditions are satisfied by the operatorV n if (i) it commutes with the area of dual surfaces, so thatV n reduces to an operator on the intertwiner space H F (n) , and (ii) it satisfies a 0-spin consistency condition so that the operators defined on different intertwiner spaces coincide when these spaces are identified. In the previous section we have introduced an operatorV n , given by (51) for the given node, that satisfies these conditions. Condition (i) holds because by construction the operator acts within H F (n) , and condition (ii) follows from property Q3 in Section 5. This operator is based on the knowledge of the classical system behind the intertwiner space H F (n) . The single node operatorV n measures the volume of a quantum polyhedron dual to the node, and the operator V Γ (R) built as in (61) measures the volume of a region in a twisted geometry. It has a good semiclassical limit by construction.
The standard strategy in LQG is on the other hand rather different. The starting point is the classical expression for the volume of a region,
$$V(R) = \int_R d^3x\, \sqrt{\Big|\frac{1}{3!}\,\epsilon_{ijk}\,\epsilon_{abc}\, E^a_i E^b_j E^c_k\Big|}, \qquad (62)$$
$E^a_i(x)$ being the Ashtekar-Barbero triad. The key step is to rewrite this quantity in terms of fluxes, which are the fundamental operators of the theory. This step introduces a regularization procedure adapted to a graph $\Gamma$ embedded in space. Then, the regularized quantity is promoted to an operator in the Hilbert space $\mathcal H_\Gamma$, and the limit of vanishing regulator exists and is well-defined. Two volume operators have been constructed in this way, one by Rovelli-Smolin [49] and one by Ashtekar-Lewandowski [50]. Both these operators have the form (61), and they differ in the regularization procedure and in details of the exact form of $\hat V_n$. For the Ashtekar-Lewandowski volume operator, the node contribution is defined on the intertwiner space $\mathcal H_F$ as
$$\hat V^{\rm AL}_n = (8\pi\gamma)^{3/2} L_P^3\, \sqrt{\Big|\frac{1}{8}\sum_{1\le i<j<k\le F} \epsilon(e_i,e_j,e_k)\; \vec J_i\cdot(\vec J_j\wedge\vec J_k)\Big|}, \qquad (63)$$
where $\epsilon(e_i,e_j,e_k) = \pm 1, 0$ is the orientation of the tangents $e_i$ of the links at the node. The overall coefficient is fixed by a consistency requirement known as the 'triad test' [53]. There is a large amount of analytical and numerical results on the spectrum of this operator (e.g. [51,47]), particularly because it enters Thiemann's construction of the Hamiltonian constraint [54] and is thus relevant to understanding the quantum dynamics of the theory. Moreover, its semiclassical behaviour has been investigated in detail, with the conclusion that only cubulations, that is, regular graphs with 6-valent nodes, have a good semiclassical limit [55]. In the light of the quantum polyhedron introduced in this paper, this result can be understood as follows. On semiclassical states (footnote 16), $\vec J_i = \vec A_i \equiv A_i \vec n_i$ (see discussion in Section 4 and cf.
(37) and (42)), and the expectation value of (63) is, at zeroth order in $\hbar$ [55],
$$\langle\hat V^{\rm AL}_n\rangle = \sqrt{\Big|\frac{1}{8}\sum_{1\le i<j<k\le F} \epsilon(e_i,e_j,e_k)\; \vec A_i\cdot(\vec A_j\wedge\vec A_k)\Big|}. \qquad (64)$$
As discussed earlier, the variables $\vec A_i$ of the semiclassical state define a polyhedron around the node $n$. The key observation is that (64) is not the volume of that polyhedron. The volume of a convex polyhedron with $F$ faces is in general a rather complicated function of the areas and normals (see the discussion in Section 3.2). There is however a case where this expression simplifies greatly, and in this case it coincides with (64): it happens for parallelepipeds. Parallelepipeds are a subset of the phase space $S_F$ for $F = 6$ with areas that are equal in pairs. They live within the combinatorial class of cuboids: they are cuboids with three couples of parallel faces (footnote 17). The volume of a parallelepiped is
$$V = \sqrt{|\,\vec A_1\cdot(\vec A_2\wedge\vec A_3)\,|}, \qquad (65)$$
where (123) are any three faces sharing a vertex. It is straightforward to see that this coincides with (64) for the semiclassical state of a cubic analytic node (footnote 18) with areas equal in pairs and normals parallel pairwise. This fact explains why the expectation value of the operator (63) on semiclassical states reproduces the volume of a parallelepiped for $F = 6$, but not the volume of other polyhedra (footnote 19).

6 On dynamics and spin foams

Spin foam models for the dynamics of loop quantum gravity are usually built starting from a discretization of the spacetime manifold in terms of a simplicial triangulation $\Delta$. A certain control over the dynamics comes from a connection with Regge calculus in the large-spin limit. Specifically, in this limit the transition amplitudes are related to exponentials of the Regge action [9,31,56,57]. This result is generally regarded as a promising step towards understanding the low-energy physics of the theory, since discrete general relativity on $\Delta$ is reproduced.
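The parallelepiped statement can be checked directly: with edge vectors $e_1, e_2, e_3$, the face area vectors are $\vec A_1 = e_2 \times e_3$ (and cyclic), and a short computation gives $\vec A_1 \cdot (\vec A_2 \times \vec A_3) = V^2$, so the volume is the square root of the triple product of the area vectors. A numerical sketch (NumPy assumed; the edge vectors are arbitrary test data):

```python
import numpy as np

e1 = np.array([2.0, 0.0, 0.0])
e2 = np.array([0.5, 3.0, 0.0])
e3 = np.array([0.0, 1.0, 5.0])

# Face area vectors of the parallelepiped spanned by e1, e2, e3
A1, A2, A3 = np.cross(e2, e3), np.cross(e3, e1), np.cross(e1, e2)

V_edges = abs(np.dot(e1, np.cross(e2, e3)))           # |det(e1, e2, e3)|
V_areas = np.sqrt(abs(np.dot(A1, np.cross(A2, A3))))  # volume from area vectors

assert np.isclose(V_edges, V_areas)
print(V_edges)  # -> 30.0
```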
On the other hand, complete transition amplitudes for LQG require the use of more general 2-complexes than those dual to simplicial manifolds (footnote 20).

Footnote 16: The semiclassical states used in the analysis of [55] are the heat-kernel coherent states developed by Thiemann and collaborators [35]. However, the details of the coherent states do not matter for our argument; all that is required is that they are peaked on a given point of the classical phase space $S_\Gamma$.

Footnote 17: Notice that parallelepipeds are a set of measure zero among the cuboids. Moreover, cuboids are not the only dominant class in the phase space $S_F$ with $F = 6$.

Footnote 18: That is, the links are the analytic continuations of each other across the nodes.

Footnote 19: It goes without saying that the dependence on areas and normals of the expression (63) can be used to define the volume of a tetrahedron, as we saw with $\hat V_B$ earlier. But that would require a different numerical coefficient in (63), an extra $\sqrt 2/3$, which is hard to motivate in the standard LQG construction.

Footnote 20: Although a direct construction of the path integral for an arbitrary graph has not been attempted so far, in [58] a model valid for arbitrary graphs was proposed, based on a natural extension of some algebraic properties of the EPR model [59].

Just as Regge calculus is useful to study the semiclassical behaviour on simplicial manifolds, a generalization thereof to arbitrary cellular decompositions could be relevant to the full theory, and would allow us to test whether models such as the one proposed in [58] can be related to (discrete) general relativity. In this final Section, we would like to make two remarks on this idea. The first remark concerns Regge calculus on arbitrary cellular decompositions. The point is that edge lengths are not good variables to capture the (discrete) metric of the manifold. This is simply because a generic 4d polyhedron with fixed edge lengths is not rigid.
Therefore a piecewise-linear metric cannot be described by the edge lengths of the polyhedra alone. The solution to this problem can be found by looking again at Minkowski's theorem, which holds in any dimension. The theorem implies that a generic polyhedron in $\mathbb{R}^n$, sometimes called an $n$-polytope, is uniquely characterized by $nF - n(n+1)/2$ numbers: the volumes of the $F$ "faces" (which are now $(n-1)$-polytopes) and the normals, satisfying the $n$-dimensional closure condition. On the other hand, $n$-simplexes are polytopes with a minimal number of faces, $F = n+1$. In this case, assigning their $n(n+1)/2$ edge lengths suffices; thus edge lengths fix a unique flat metric on each $n$-simplex and can be used as fundamental variables in the full triangulation. Let us fix $n = 4$. To identify the geometry of each 4-polytope, we need the volumes $V_m$ and 4d unit normals $N_m$ of each polyhedron $m$ in its boundary, satisfying the closure condition. For these to extend to a piecewise-linear, continuous metric on the whole cellular decomposition, we additionally need shape-matching conditions, of the sort described in Section 3.4 for three dimensions. A tentative Regge-like action can then be written as
$$S[V_m, N_m] = \sum_f A_f(V_m, N_m)\,\epsilon_f(V_m, N_m) + \text{constraints}, \qquad (66)$$
where $f$ are the 2d faces of the cellular decomposition, and $\epsilon_f$ the deficit angles, defined as usual as $2\pi$ minus the sum of the dihedral angles of each 4-polytope sharing the face. The constraints are the closure and shape-matching conditions. In principle, we can interpret (66) as an "effective" Regge action in which the internal edge lengths of an initial simplicial triangulation have been evaluated on the flat solution. The second remark concerns the link between spin foam amplitudes and Regge calculus.
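Before moving to the second remark, note that the counting in the first remark is easy to tabulate: an $n$-polytope with $F$ faces carries $nF - n(n+1)/2$ shape parameters, and for an $n$-simplex ($F = n+1$) this is exactly $n(n+1)/2$, the number of its edge lengths. A small check in plain Python (no external data assumed):

```python
def polytope_dof(n, F):
    """Shape parameters of an n-polytope with F faces:
    F face volumes plus F unit normals with n-1 components each,
    minus n closure conditions and n(n-1)/2 rotations = nF - n(n+1)/2."""
    return n * F - n * (n + 1) // 2

def simplex_edges(n):
    """Number of edges of an n-simplex: one per pair of its n+1 vertices."""
    return (n + 1) * n // 2

# For simplices the Minkowski counting reproduces the edge-length count
for n in (2, 3, 4):
    assert polytope_dof(n, n + 1) == simplex_edges(n)

print(polytope_dof(3, 6))  # 12 parameters (areas and normals) for F = 6 in 3d
```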
A lesson from the recent asymptotic studies of the EPR model is that the amplitude is dominated by exponentials of the Regge action when the boundary data satisfy certain conditions, which guarantee the existence of a unique 4-simplex in the bulk. This suggests that the dominant contributions to models on arbitrary graphs could come from requiring the existence of a unique 4-polytope, and that the amplitude could be related to a form of the Regge action specialized to the 4-polytope, such as the one described above. So the question is whether, as for the 4-simplex, the conditions for the existence of the 4-polytope can be mapped into conditions on the boundary data, such as 3d closure and non-degeneracy conditions, and shape matching. This is a key question that we leave open for future work. We believe that the answer, and these considerations in general, will be relevant to tackling the problem of the semiclassical limit of spin foams on arbitrary graphs, such as the model proposed in [58].

Conclusions

In this paper we discussed a number of properties of classical polyhedra which are of interest to loop quantum gravity. A polyhedron can be uniquely identified by the areas and the normals to its faces (Minkowski's theorem [7], Section 2). The identification includes the knowledge of its geometry (edge lengths, volume) and its combinatorial class (the adjacency of the faces). This information can be explicitly derived from the areas and normals through the reconstruction procedure presented in Section 3. We observed that the space of polyhedra of given areas is a phase space, previously introduced by Kapovich and Millson [6], and used our reconstruction algorithm to divide this space into regions corresponding to different classes. We then discussed the relevance of polyhedra to the quantum theory.
We first recalled that the quantization of the Kapovich-Millson phase space gives the SU(2)-invariant space of intertwiners (Section 4), and thus observed that the LS coherent intertwiners can be interpreted as semiclassical polyhedra. The polyhedral picture can be extended to a whole graph using the twisted-geometry parametrization of the holonomy-flux variables introduced in [1]. The knowledge of the classical space behind intertwiners was then used to introduce a new operator, which measures the volume of a quantum polyhedron (Section 5) and by construction has the correct semiclassical limit. We performed some numerical analysis of its spectrum for the simplest 4-valent case. We discussed its relation to the volume operators commonly used in loop quantum gravity. Finally (Section 6), we used the four-dimensional version of Minkowski's theorem to make some remarks on Regge calculus on non-simplicial discretizations and its possible relevance to spin foam models on graphs of arbitrary valence. Our hope is that the notion of a quantum polyhedron can find useful applications in future developments of loop quantum gravity, and that the results in this paper are a first step in that direction.

Figure 1: A polygon with side vectors $A_i\vec n_i$ and the $(F-3)$ independent diagonals. The space of possible polygons in $\mathbb{R}^3$ up to rotations is a $(2F-6)$-dimensional phase space, with action-angle variables the pairs $(\mu_i,\theta_i)$ of diagonal lengths and dihedral angles. For non-coplanar normals, the same data also define a unique polyhedron thanks to Minkowski's theorem.

Figure 2: Some examples of Schlegel diagrams. From left to right, a tetrahedron, a pyramid, a cube and a dodecahedron.

Figure 3: Polyhedra with 5 faces: the two possible classes are the triangular prism (left panel) and the pyramid (right panel). The two classes differ in the polygonal faces and in the number of vertices.
Figure 4: The seven classes of polyhedra with 6 faces, grouped according to the dimensionality of their configurations.

Figure 5: Pictorial representation of the phase space: it can be mapped into regions corresponding to the various dominant classes (two in the example). The subdominant classes separate the dominant ones and span measure-zero subspaces.

Figure 6: Mappings of subspaces of $S_6$ realized using the reconstruction algorithm of Section 3.

Figure 7: A polyhedron with $F = 100$ drawn with Wolfram's Mathematica, using the reconstruction algorithm. The example has all areas equal and normals uniformly distributed on a sphere. Notice that most faces have valence 6, and that triangles are nowhere to be seen.

Figure 8: The geometric meaning of equation (35): the 2d angle $\alpha_{ij,kl}$ belonging to the shaded triangle can be expressed in terms of 3d angles associated with the thick edges of the tetrahedron $k$, or equivalently of the tetrahedron $l$.

Figure 9: Some eigenvalues of $\hat V$. For comparison, the curve is the classical volume of an equilateral tetrahedron as a function of the area $A = j$ (units $8\pi\gamma L_P^2 = 1$). The empty circles are single eigenvalues, the full circles have double degeneracy. The spectrum is gapped and bounded from above by the classical maximal volume, which provides a large-spin asymptote.

Figure 10: Left panel: some eigenvalues of $\tilde V$. For comparison, the curve is the classical volume of an equilateral tetrahedron as a function of the area $A = j$ (units $8\pi\gamma L_P^2 = 1$). All but the zero eigenvalue have double degeneracies. Right panel: the same region of the spectrum for Barbieri's operator $\hat V_B$. Notice that here the asymptotic curve is the equilateral volume with areas $A = \sqrt{j(j+1)}$.

Footnote: In [6] it is also called the space of shapes of (bended) polygons.

Footnote: To be precise, it is a symplectic manifold up to a finite number of points, corresponding to configurations with one or more consecutive vectors collinear.

Footnote: Concerning Fig.
6, we can also give more details on the holes: these are configurations for which the numerical algorithm to solve (23) failed. This limitation can easily be improved with a better inversion algorithm, or by choosing a configuration slightly off the center of the cell. Aspects of this discontinuity have also been discussed in [39,40].

Footnote 15: The overlaps, $\langle j,k|j_i,Z\rangle = (-1)^{2k}\,\frac{(2j+k+1)\,((2j)!)^2}{(2j+k)!\,(2j-k)!}\,L_k(1-2Z)$, where $L_k$ is the $k$-th Legendre polynomial, and the measure can be found in [9].

Acknowledgments

The authors are grateful to Hal Haggard and Carlo Rovelli for many useful discussions and for comments on a first version of this paper. The work of E.B. is supported by a Marie Curie Intra-European Fellowship within the 7th European Community Framework Programme. The work of S.S. is partially supported by the ANR "Programme Blanc" grant LQG-09.

References

L. Freidel and S. Speziale, "Twisted geometries: A geometric parametrisation of SU(2) phase space," Phys. Rev. D82, 084040 (2010). [arXiv:1001.2748 [gr-qc]].

L. Freidel and S. Speziale, "From twistors to twisted geometries," Phys. Rev. D82, 084041 (2010). [arXiv:1006.0199 [gr-qc]].

C. Rovelli and S. Speziale, "On the geometry of loop quantum gravity on a graph," Phys. Rev. D82, 044018 (2010). [arXiv:1005.2927 [gr-qc]].

A. Barbieri, "Quantum tetrahedra and simplicial spin networks," Nucl. Phys. B 518 (1998) 714 [arXiv:gr-qc/9707010].

J. C. Baez and J. W.
Barrett, "The quantum tetrahedron in 3 and 4 dimensions," Adv. Theor. Math. Phys. 3 (1999) 815 [arXiv:gr-qc/9903060].

L. Freidel and E. R. Livine, "The Fine Structure of SU(2) Intertwiners from U(N) Representations," J. Math. Phys. 51, 082502 (2010). [arXiv:0911.3553 [gr-qc]].

M. Kapovich and J. J. Millson, "The symplectic geometry of polygons in Euclidean space," J. Differential Geom. 44, 3 (1996), 479-513.

H. Minkowski, "Allgemeine Lehrsätze über die konvexe Polyeder," Nachr. Ges. Wiss., Göttingen, 1897, 198-219.

E. R. Livine and S. Speziale, "A new spinfoam vertex for quantum gravity," Phys. Rev. D 76 (2007) 084028 [arXiv:0705.0674 [gr-qc]].

F. Conrady and L. Freidel, "Quantum geometry from phase space reduction," J. Math. Phys. 50, 123510 (2009). [arXiv:0902.0351 [gr-qc]].

A. D. Alexandrov, Convex Polyhedra, Springer (2005).

H. S. M. Coxeter, Regular Polytopes (3rd edition, 1973), Dover.

U. Pachner, "PL homeomorphic manifolds are equivalent by elementary shellings," European Journal of Combinatorics 12, 129 (1991).

G. P. Michon, http://www.numericana.com/data/polyhedra.htm

J. B.
Lasserre, "An analytical expression and an algorithm for the volume of a convex polyhedron in Rn," J. Optim. Theor. Appl. 39, pp. 363-377.

P. Gritzmann and V. Klee, "On the complexity of some basic problems in computational convexity: II. Volume and mixed volumes," in Polytopes: Abstract, Convex and Computational (T. Bisztriczky, P. McMullen, R. Schneider, and A. I. Weiss, eds.), Kluwer, Boston, 1994, pp. 373-466.

B. Büeler, A. Enge and K. Fukuda, "Exact volume computation for polytopes: A practical study," in Polytopes: Combinatorics and Computation, DMV-Seminars vol. 29, Birkhäuser Verlag, Basel, 2000, pp. 131-154.

J. J. Little, "Extended Gaussian images, mixed volumes, shape reconstruction," SCG '85: Proceedings of the first annual symposium on Computational geometry, pp. 15-23.

C. Godsil and G. Royle, Algebraic Graph Theory, Springer Verlag (2001).

B. Dittrich and S. Speziale, "Area-angle variables for general relativity," New J. Phys. 10 (2008) 083006 [arXiv:0802.0864 [gr-qc]].

C. Rovelli and L. Smolin, "Spin networks and quantum gravity," Phys. Rev. D 52 (1995) 5743 [arXiv:gr-qc/9505006].
J. C. Baez, "Spin Network States in Gauge Theory," Adv. Math. 117 (1996) 253 [arXiv:gr-qc/9411007].

L. Charles, "On the quantization of polygon spaces," [arXiv:0806.1585].

L. Freidel, K. Krasnov and E. R. Livine, "Holomorphic Factorization for a Quantum Tetrahedron," Commun. Math. Phys. 297, 45-93 (2010). [arXiv:0905.3627 [hep-th]].

J. Roberts, "Classical 6j-symbols and the tetrahedron," Geom. Topol. 3 (1999), 21-66 [arXiv:math-ph/9812013].

M. Kapovich and J. Millson, "Quantization of bending deformations of polygons in E3, hypergeometric integrals and the Gassner representation," Canad. Math. Bull., Vol. 44 (2001), p. 36-60.

V. Guillemin and S. Sternberg, "Geometric quantization and multiplicities of group representations," Invent. Math. 67(3):515-538, 1982.

S. A. Major, "Operators for quantized directions," Class. Quant. Grav. 16 (1999) 3859-3877. [gr-qc/9905019].

S. A. Major, "Shape in an Atom of Space: Exploring quantum geometry phenomenology," arXiv:1005.5460 [gr-qc].

C. Rovelli and S. Speziale, "A semiclassical tetrahedron," Class. Quant. Grav. 23 (2006) 5861 [arXiv:gr-qc/0606074].
A. M. Perelomov, Generalized Coherent States and Their Applications (Springer-Verlag, 1986).

J. R. Klauder and B. S. Skagerstam, Coherent States: Applications in Physics and Mathematical Physics (World Scientific, 1985).

J. W. Barrett, R. J. Dowdall, W. J. Fairbairn, H. Gomes and F. Hellmann, "Asymptotic analysis of the EPRL four-simplex amplitude," J. Math. Phys. 50, 112504 (2009). [arXiv:0902.1170 [gr-qc]].

L. Freidel and E. R. Livine, "U(N) Coherent States for Loop Quantum Gravity," arXiv:1005.2090 [gr-qc].

E. F. Borja, J. Diaz-Polo, I. Garay and E. R. Livine, "Dynamics for a 2-vertex Quantum Gravity Model," Class. Quant. Grav. 27, 235010 (2010). [arXiv:1006.2451 [gr-qc]].

B. Bahr and T. Thiemann, "Gauge-invariant coherent states for Loop Quantum Gravity II: Non-abelian gauge groups," Class. Quant. Grav. 26 (2009) 045012 [arXiv:0709.4636 [gr-qc]].

T. Thiemann, "Gauge field theory coherent states (GCS). I: General properties," Class. Quant. Grav. 18 (2001) 2025 [arXiv:hep-th/0005233].
T. Thiemann and O. Winkler, "Gauge field theory coherent states (GCS). II: Peakedness properties," Class. Quant. Grav. 18 (2001) 2561 [arXiv:hep-th/0005237].

H. Sahlmann, T. Thiemann and O. Winkler, "Coherent states for canonical quantum general relativity and the infinite tensor product extension," Nucl. Phys. B 606 (2001) 401 [arXiv:gr-qc/0102038].

T. Thiemann, "Complexifier coherent states for quantum general relativity," Class. Quant. Grav. 23 (2006) 2063 [arXiv:gr-qc/0206037].

B. C. Hall, "Geometric quantization and the generalized Segal-Bargmann transform for Lie groups of compact type," Comm. Math. Phys. 226 (2002) 233-268.

E. Bianchi, E. Magliaro and C. Perini, "LQG propagator from the new spin foams," Nucl. Phys. B 822 (2009) 245 [arXiv:0905.4082 [gr-qc]].

E. Bianchi, C. Rovelli and F. Vidotto, "Towards Spinfoam Cosmology," Phys. Rev. D82, 084035 (2010). [arXiv:1003.3483 [gr-qc]].

E. Bianchi, "The length operator in Loop Quantum Gravity," Nucl. Phys. B 807 (2009) 591 [arXiv:0806.4710 [gr-qc]].

B. Dittrich and J. P. Ryan, "Phase space descriptions for simplicial 4d geometries," arXiv:0807.2806 [gr-qc].
E. Bianchi, E. Magliaro and C. Perini, "Coherent spin-networks," Phys. Rev. D82 (2010) 024012. [arXiv:0912.4054 [gr-qc]].

L. Freidel and S. Speziale, "Quantum twisted geometries and coherent states," to appear.

F. A. Berezin, "Non-Wiener continual integrals," Teor. Mat. Fiz. 6 (1971) 194.

F. A. Berezin, "Feynman Path Integrals In A Phase Space," Sov. Phys. Usp. 23 (1981) 763 [Usp. Fiz. Nauk 132 (1980) 497].

R. J. Glauber, "Coherent and incoherent states of the radiation field," Phys. Rev. 131 (1963) 2766.

E. C. G. Sudarshan, "Equivalence of semiclassical and quantum mechanical descriptions of statistical light beams," Phys. Rev. Lett. 10, 277 (1963).

J. Brunnemann and D. Rideout, "Properties of the Volume Operator in Loop Quantum Gravity I: Results," Class. Quant. Grav. 25 (2008) 065001 [arXiv:0706.0469 [gr-qc]].

E. Bianchi and H. Haggard, to appear.

C. Rovelli and L. Smolin, "Discreteness of area and volume in quantum gravity," Nucl. Phys. B 442, 593 (1995) [Erratum-ibid. B 456, 753 (1995)] [arXiv:gr-qc/9411005].
A. Ashtekar and J. Lewandowski, "Quantum theory of geometry. II: Volume operators," Adv. Theor. Math. Phys. 1, 388 (1998) [arXiv:gr-qc/9711031].

T. Thiemann, "Closed formula for the matrix elements of the volume operator in canonical quantum gravity," J. Math. Phys. 39 (1998) 3347 [arXiv:gr-qc/9606091].

R. De Pietri, "Spin networks and recoupling in loop quantum gravity," Nucl. Phys. Proc. Suppl. 57 (1997) 251 [arXiv:gr-qc/9701041].

T. Thiemann, Modern Canonical Quantum General Relativity, Cambridge University Press, Cambridge, UK, 2007.

K. Giesel and T. Thiemann, "Consistency check on volume and triad operator quantisation in loop quantum gravity. I," Class. Quant. Grav. 23 (2006) 5667 [arXiv:gr-qc/0507036].

T. Thiemann, "Quantum spin dynamics (QSD)," Class. Quant. Grav. 15 (1998) 839 [arXiv:gr-qc/9606089].

C. Flori and T. Thiemann, "Semiclassical analysis of the Loop Quantum Gravity volume operator: I. Flux Coherent States," arXiv:0812.1537 [gr-qc].

F. Conrady and L. Freidel, "On the semiclassical limit of 4d spin foam models," Phys. Rev. D 78 (2008) 104023 [arXiv:0809.2280 [gr-qc]].
J. W. Barrett, R. J. Dowdall, W. J. Fairbairn, F. Hellmann and R. Pereira, "Lorentzian spin foam amplitudes: graphical calculus and asymptotics," Class. Quant. Grav. 27 (2010) 165009. [arXiv:0907.2440 [gr-qc]].

W. Kaminski, M. Kisielowski and J. Lewandowski, "Spin-Foams for All Loop Quantum Gravity," Class. Quant. Grav. 27 (2010) 095006 [arXiv:0909.0939 [gr-qc]].

J. Engle, R. Pereira and C. Rovelli, "The loop-quantum-gravity vertex-amplitude," Phys. Rev. Lett. 99, 161301 (2007) [arXiv:0705.2388 [gr-qc]].

J. Engle, E. Livine, R. Pereira and C. Rovelli, "LQG vertex with finite Immirzi parameter," Nucl. Phys. B 799, 136 (2008) [arXiv:0711.0146 [gr-qc]].
[]
[ "Interpretable Unified Language Checking", "Interpretable Unified Language Checking" ]
[ "Tianhua Zhang [email protected] \nCUHK Centre for Perceptual and Interactive Intelligence\nHong Kong SARChina\n", "Hongyin Luo [email protected] \nMIT Computer Science and Artificial Intelligence Lab\nCambridgeMAUSA\n", "Yung-Sung Chuang \nMIT Computer Science and Artificial Intelligence Lab\nCambridgeMAUSA\n", "Wei Fang \nMIT Computer Science and Artificial Intelligence Lab\nCambridgeMAUSA\n", "Luc Gaitskell \nMIT Computer Science and Artificial Intelligence Lab\nCambridgeMAUSA\n", "Thomas Hartvigsen \nMIT Computer Science and Artificial Intelligence Lab\nCambridgeMAUSA\n", "Xixin Wu \nCUHK Centre for Perceptual and Interactive Intelligence\nHong Kong SARChina\n", "Danny Fox \nMIT Linguistics\nCambridgeMAUSA\n", "Helen Meng \nCUHK Centre for Perceptual and Interactive Intelligence\nHong Kong SARChina\n", "James Glass \nMIT Computer Science and Artificial Intelligence Lab\nCambridgeMAUSA\n" ]
[ "CUHK Centre for Perceptual and Interactive Intelligence\nHong Kong SARChina", "MIT Computer Science and Artificial Intelligence Lab\nCambridgeMAUSA", "MIT Computer Science and Artificial Intelligence Lab\nCambridgeMAUSA", "MIT Computer Science and Artificial Intelligence Lab\nCambridgeMAUSA", "MIT Computer Science and Artificial Intelligence Lab\nCambridgeMAUSA", "MIT Computer Science and Artificial Intelligence Lab\nCambridgeMAUSA", "CUHK Centre for Perceptual and Interactive Intelligence\nHong Kong SARChina", "MIT Linguistics\nCambridgeMAUSA", "CUHK Centre for Perceptual and Interactive Intelligence\nHong Kong SARChina", "MIT Computer Science and Artificial Intelligence Lab\nCambridgeMAUSA" ]
[]
Despite recent concerns about undesirable behaviors generated by large language models (LLMs), including non-factual, biased, and hateful language, we find LLMs are inherent multi-task language checkers based on their latent representations of natural and social knowledge. We present an interpretable, unified, language checking (UniLC) method for both human and machine-generated language that aims to check if language input is factual and fair. While fairness and factchecking tasks have been handled separately with dedicated models, we find that LLMs can achieve high performance on a combination of fact-checking, stereotype detection, and hate speech detection tasks with a simple, few-shot, unified set of prompts. With the " 1 2 -shot" multi-task language checking method proposed in this work, the GPT3.5-turbo model outperforms fully supervised baselines on several language tasks. The simple approach and results suggest that based on strong latent knowledge representations, an LLM can be an adaptive and explainable tool for detecting misinformation, stereotypes, and hate speech.Warning: The paper contains non-factual, biased, and hate speech examples for research purposes.
10.48550/arxiv.2304.03728
[ "https://export.arxiv.org/pdf/2304.03728v1.pdf" ]
258,041,307
2304.03728
3fcc8cb68488cfdfe4c52b81f27a236352fe5582
Interpretable Unified Language Checking Tianhua Zhang [email protected] CUHK Centre for Perceptual and Interactive Intelligence Hong Kong SARChina Hongyin Luo [email protected] MIT Computer Science and Artificial Intelligence Lab CambridgeMAUSA Yung-Sung Chuang MIT Computer Science and Artificial Intelligence Lab CambridgeMAUSA Wei Fang MIT Computer Science and Artificial Intelligence Lab CambridgeMAUSA Luc Gaitskell MIT Computer Science and Artificial Intelligence Lab CambridgeMAUSA Thomas Hartvigsen MIT Computer Science and Artificial Intelligence Lab CambridgeMAUSA Xixin Wu CUHK Centre for Perceptual and Interactive Intelligence Hong Kong SARChina Danny Fox MIT Linguistics CambridgeMAUSA Helen Meng CUHK Centre for Perceptual and Interactive Intelligence Hong Kong SARChina James Glass MIT Computer Science and Artificial Intelligence Lab CambridgeMAUSA Interpretable Unified Language Checking Despite recent concerns about undesirable behaviors generated by large language models (LLMs), including non-factual, biased, and hateful language, we find LLMs are inherent multi-task language checkers based on their latent representations of natural and social knowledge. We present an interpretable, unified, language checking (UniLC) method for both human and machine-generated language that aims to check if language input is factual and fair. While fairness and fact-checking tasks have been handled separately with dedicated models, we find that LLMs can achieve high performance on a combination of fact-checking, stereotype detection, and hate speech detection tasks with a simple, few-shot, unified set of prompts. With the "1/2-shot" multi-task language checking method proposed in this work, the GPT3.5-turbo model outperforms fully supervised baselines on several language tasks. 
The simple approach and results suggest that based on strong latent knowledge representations, an LLM can be an adaptive and explainable tool for detecting misinformation, stereotypes, and hate speech. Warning: The paper contains non-factual, biased, and hate speech examples for research purposes. Introduction Recent advances in large language models (LLMs) have raised concerns about undesirable aspects of text, generated by both humans and machines, that incorporate false information (Ji et al., 2022), stereotypes, and hate speech (Djuric et al., 2015). These problems correspond to different language fairness principles (Chiu et al., 2021b) as shown in Figure 1.

Figure 1: The goal of this work is to build a system that adaptively checks misinformation, stereotypes, and hate speech with natural-language explanations. LLM stands for large language model and entailment stands for entailment-based stance detection. The grounding information generated by the LLM contributes to the language checking accuracy, multi-task efficiency, and explainability of ethical predictions.

Previous studies have explored supervised models for each task separately (Nadeem et al., 2019; MacAvaney et al., 2019; Ganguli et al., 2023). One disadvantage of such disconnected and task-specific systems is a lack of multi-task flexibility. Since single-task models are trained or prompted with data examples from the target task, prior knowledge is required to apply the language-checking (fact or fairness checking) model to each input appropriately. We propose an adaptive method that can be applied for general-purpose language checking for both human- and machine-generated text without specifying the task. We specifically study the detection of non-factual information, hate speech, and stereotypes. 
Despite the seemingly disconnected nature of fact- and fairness-checking tasks, we argue they can be handled with a unified grounding-entailment framework, as in existing fact-checking systems (Nadeem et al., 2019). It is worth noting that the large-scale training corpora of LLMs are generated by a wide range of internet users and curated annotators. Although containing unsafe language that leads to unsafe generations, the training corpora also include many commonly accepted facts, including commonsense, world knowledge, and social values. This suggests that LLMs could understand natural and social facts, and potentially predict if new inputs align with grounding facts. While LLM training data surely contain volumes of unsafe data (Hartvigsen et al., 2022), leading to wide ranges of unsafe and unethical outputs (Jiang et al., 2021), we posit that appropriate prompting may leverage the world knowledge encoded by LLMs. Depending on a fact entailed or contradicted by the input, for instance, different ethical predictions can be derived as shown in Figure 1. For example, an informed and sincere reader knows that the following claim is wrong, (a) "Racism never exists." While fact and fairness are both needed to understand this claim's ethical problem, a reasonable fairness judgment can be made by grounding the claim on at least one of the following rationales: (b) Social fact: The claim is wrong because racism does exist and is a serious problem. Affective fact: The claim is unfair and harmful to those who suffer racial biases. Motivated by the phenomenon that different language-checking tasks can be accomplished via grounding on appropriate rationales, we propose a general-purpose, task-agnostic language-checking system that jointly detects misinformation, stereotypes, and hate speech. Excitingly, our framework is unified across the tasks and does not require different prompts and models for each task. 
In our proposed strategy, we prompt an LLM to automatically detect potential issues of a given input, then generate an appropriate grounding for an entailment-based language check. Our experiments show that the adaptive method achieves comparable performance to state-of-the-art, supervised, task-dependent models. Further, our method improves the efficiency, accuracy, and transparency of language-checking on both machine- and human-generated language. Related Work Large language models (LLMs). LLMs often refer to left-to-right text generation models with billions of parameters trained on large-scale corpora and optional human instructions (Brown et al., 2020; Wei et al., 2021; Thoppilan et al., 2022; Chowdhery et al., 2022; Ouyang et al., 2022). The large language models have shown strong zero-shot and few-shot reasoning abilities on complex tasks. However, recent studies have noted how LLMs can hallucinate (Ji et al., 2022; Shuster et al., 2021; Creswell and Shanahan, 2022), suggesting that the generation and reasoning of LLMs are sometimes not trustworthy. Fact Checking. Recent studies on fact checking have focused on information retrieval and stance detection. Most fact checking corpora provide both claims and grounded documents or a database of candidate grounding documents (Aly et al., 2021; Diggelmann et al., 2020). A standard pipeline is retrieving the grounded information and predicting the entailment relation between the claim and the retrieved evidence. The quality of the database and the retrieval method can significantly influence the performance of such an approach. To overcome the challenge, LLMs have been applied for generating structured grounding information (Manakul et al., 2023) to detect hallucinations generated by language models. Stereotype recognition. 
The research about stereotypes in the area of natural language processing focuses on different aspects, including evaluation (Lu et al., 2020; Nadeem et al., 2021; Webster et al., 2020), detection (Recasens et al., 2013), and debiasing (Ganguli et al., 2023). Recent studies have presented the stereotyping problem associated with large language models (Abid et al., 2021; Askell et al., 2021; Ganguli et al., 2022; Gehman et al., 2020). Hate speech detection. Pretrained language models have been applied for hate speech detection, mostly based on corpora constructed with internet texts (Djuric et al., 2015; MacAvaney et al., 2019; Röttger et al., 2020; Yin and Zubiaga, 2021). Most previous models for detecting hate speech are fine-tuned in a fully-supervised manner with human-annotated corpora (de Gibert et al., 2018a; Gautam et al., 2020). The latest studies have investigated detecting or generating hate speech samples with large language models (Chiu et al., 2021a; Hartvigsen et al., 2022). In this work, we design an inclusive language-checking system that can be generalized to different domains and tasks, including different aspects of language checking, under a unified setting without any task- or domain-dependent change. Human and Machine While humans have generated the majority of harmful language, recent language models have shown the ability to generate human-like language that contains hallucinations and harmful information. In this work, we do not worry about whether a piece of text is generated by a person or a machine as long as it is factual and fair. Put differently, we would like to test if our model can successfully detect harmful language regardless of its source. This could benefit both human-human and human-machine interactions. Fact and Fairness While misinformation and hate speech are different aspects of harmful language, they are essentially related as shown in example (a). 
Many studies have shown that fake news can be detected by fact retrieval and stance detection, and we argue that explicit bias and hate can be detected by comparing them to commonly accepted "moral" facts and values. With a strong awareness of common sense including natural facts and social values, LLMs can generate reasonable stance detection groundings for different purposes. As a result, the unified pipeline for both factualness and fairness checking in this work is based on generated grounds and entailment. We show that such a solution can improve language-checking efficiency and transparency because most predictions can be explained by the generated grounding information. Retrieved and Generated Groundings As we mentioned, traditional fact-checking systems are based on a two-step pipeline, including information retrieval and stance detection grounded in the retrieved texts. However, hate speech and social bias detection are usually open-ended - no grounding documents are provided and the model predicts based on its own social knowledge. In this work, we test the model on fact-checking tasks under retrieval-based and generative grounding settings. In general, the retrieval-based setting provides trustworthy information, but the performance is limited by the retrieval quality and knowledge base coverage. On the other hand, the generative strategy leads to noisy hints but improves flexibility. Grounding and Entailment Fact grounding and ethical classification require different knowledge and reasoning abilities. Fact grounding depends on natural and social knowledge and commonsense about the world, while ethical classification requires the ability to recognize the entailment relation between statements and grounding information. In this work, we conduct experiments with LLMs on both tasks and also compare them with non-LLM entailment methods. 
It is worth noting that we have found the entailment models perform better on detecting the stance of grounding facts on fairness tasks. Method In this work, we present three prompting strategies, including zero-shot classification in section 4.1, few-shot fact prediction + zero-shot ethical classification and few-shot fact prediction + few-shot ethical classification in section 4.2. Figure 2 shows the three designs with complete prompting templates listed in Appendix A. Zero-shot Language Checking We first evaluate the language-checking performance of directly feeding claims of different tasks to LLMs in the following format: (c) Claim: {claim to verify} Question: Is it fair to say that? The language model answers question (c) with yes or no and explains the answer. We use this as a baseline for all tasks. Since LLMs learn from large-scale corpora, the embedded natural and social knowledge can help make the zero-shot classification without additional inputs. Few-shot Language Checking Fact Prediction (FP). To better leverage the world knowledge learned during pretraining, we prompt the LLM to generate natural or social facts according to the given statements, which can provide a ground to check the factualness and fairness of different claims. As shown in Figure 2, we simply combine examples from different tasks into one in-context learning prompt without specifying the task to which each example belongs.

Figure 2: Based on the mix-task prompt, the language model automatically detects the stereotype problem and generates the corresponding grounding information. The generated texts are fed into the language model again with the claim for the final language check. The diagram illustrates three strategies we evaluate: 1. zero-shot checking (Zero-cls); 2. few-shot fact generation + zero-shot grounded checking (Few-fp + Zero-cls); and 3. few-shot fact generation + few-shot grounded checking (Few-fp + Few-cls). The multi-task or fact/fairness-only prompt shown in the figure (pipeline stages: Grounding Information Generation, Information Recognition, Ethical Classification) contains the following examples.
Claim: Also found was that the correlation between solar activity and global temperatures ended around 1975, hence recent warming must have some other cause than solar variations. Summary: The claim mentions global warming is due to other cause than solar variations. Scientific fact: recent global warming is due to human activities instead of solar variations. Conclusion: It is fair to say that.
Claim: LOL GJ brother 10 points for the Uber humans and -10 for the sub human ape creatures you are truly one bad mofo whitey crakker man. Summary: The claim calls other people "sub humans" and "ape". Social fact: calling people "sub humans" and "ape" is insulting and discriminatory. Conclusion: It is unfair to say that.
Claim: Whites have the following Too many abortions , too many mixed marriages , too many no kids marriages , and too many liberals . Summary: The claim makes negative generalizations about a specific racial group. Social fact: making negative generalizations about a specific racial group is discriminatory. Conclusion: It is unfair to say that.

With the few-shot examples, the language model first recognizes the potential harm in each claim by summarizing a piece of suspicious information. According to the generated summary, the language model continues by outputting a fact-inducing signal - "Related natural fact" or "Related social fact". The choice between generating natural or social facts is automatically embedded in the generation process. And the signal leads to a natural or social fact that provides the evidence to prove if the claim is factual/fair or non-factual/unfair. The prompting format is shown as follows. The grounding information in (d) needs to be generated with few-shot in-context prompting. 
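The prompt construction just described can be sketched with a small helper. This is a sketch, not the authors' code: the zero-shot template follows (c) above, the few-shot ("1/2-shot") prompt concatenates one worked example per task-label combination without naming the task, and the example field names and texts are abbreviated placeholders rather than the paper's exact prompts.

```python
def zero_shot_prompt(claim: str) -> str:
    # Zero-shot template (c): claim plus the fairness question.
    return f"Claim: {claim}\nQuestion: Is it fair to say that?"

def render_example(ex: dict) -> str:
    # One worked example: claim, summary of suspicious information,
    # a grounding fact with its category, and a conclusion.
    return (f"Claim: {ex['claim']}\n"
            f"Summary: {ex['summary']}\n"
            f"{ex['fact_type']} fact: {ex['fact']}\n"
            f"Conclusion: {ex['conclusion']}")

def fact_prediction_prompt(claim: str, examples: list) -> str:
    # Concatenate the mixed-task examples; the trailing "Summary:" cue
    # asks the model to first summarize the suspicious information,
    # then emit a grounding fact and a conclusion.
    shots = "\n\n".join(render_example(ex) for ex in examples)
    return f"{shots}\n\nClaim: {claim}\nSummary:"
```

The same assembly serves the Fact-only, Fairness-only, and multi-task settings; only the list of examples changes.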
Although in some cases the LLM does not generate real facts, we use the term "fact" to prompt the LLM to generate high-quality grounding information for the ethical classification step. We have one sample for each task-label combination (fact and fairness, positive and negative). However, the supervision is weaker than the standard one-shot learning setting because the model needs first to recognize the appropriate language checking task. As a result, we use the term "1/2-shot" to describe our prompting strategy. Grounded Ethical Classification (CLS). Given the input claim, the generated summary of suspicious information, and the grounding information, we still need to predict the factualness and fairness of the input claims. The ethical prediction process can be realized with either LLMs or entailment models (Luo and Glass, 2023). Question (e) can be answered under zero-shot, i.e., Few-shot fact generation + Zero-shot ethical classification, or few-shot, i.e., Few-shot fact generation + Few-shot ethical classification, settings. The LLM is supposed to answer with either yes or no, with yes indicating the claim is factual and fair. Otherwise, the claim is either non-factual or unfair. We use the general term "fair" to include different aspects of general-purpose language checking. The classification problem can also be solved by entailment models. An entailment model can be applied in natural language inference (Williams et al., 2018) and stance detection (Augenstein et al., 2016) tasks to recognize the logical relationship between a hypothesis and a premise. Different from the LLM prompts, we construct suppositions using the following template (f) (Luo and Glass, 2023). Experiments General Ethics Benchmark Dataset We propose a joint ethics benchmark that includes fact and fairness checking tasks to simulate the major concerns about human and AI languages. 
The tasks include climate-related fact checking, public health-related fact checking, hate speech detection, social bias recognition, machine-generated toxic language detection, and machine-generated fake news detection. The integrated unified language checking (UniLC) benchmark based on these tasks is available at https://github.com/luohongyin/UniLC.git. Hate speech detection (HSD). de Gibert et al. (2018b) proposed the insulting language checking corpus extracted from a racial supremacy forum. We construct the test set of our joint benchmark using the test set of the hate speech detection (HSD) corpus, which contains 478 evaluation samples. Note that because of the source of the data, the claims are generally biased, while some biases are not categorized into the class of hate speech. We will show examples in the next section. Social bias inference (SBIC). The social bias inference corpus contains claims from Reddit, Twitter, and hate websites. We use the test set of the corpus as a part of the joint benchmark. To align with the binary classification task of hate speech detection, we aggregate the sexual and offensive measurements provided by the SBIC data and generate a new acceptable/unacceptable label for each sample. We regard each claim as unacceptable if it is assigned a positive sexual or offensive score. We use the aggregated test set of the corpus, which contains 4,617 test samples. Climate-fever (Climate). Diggelmann et al. (2020) proposed the fact checking corpus with real-world climate claims and corresponding facts. The original test set contains 4 labels, including supports, refutes, disputed, and not_enough_info. As a preliminary study in this direction, we only focus on factual (supports) and faked (refutes). As a result, the remaining test set contains 907 non-disputed test claims. The original benchmark included a set of documents for grounding the claims. 
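The SBIC label aggregation described above (any positive sexual or offensive score makes a claim unacceptable) can be sketched as follows; the field names are assumptions about the corpus format, not the official schema.

```python
def sbic_binary_label(sexual_score: float, offensive_score: float) -> str:
    # A claim is unacceptable if it is assigned a positive sexual or
    # offensive score; otherwise it is acceptable.
    if sexual_score > 0 or offensive_score > 0:
        return "unacceptable"
    return "acceptable"

def aggregate(examples):
    # examples: iterable of dicts carrying 'sexual' and 'offensive' scores
    # (hypothetical keys); attach the derived binary label to each record.
    return [{**ex, "label": sbic_binary_label(ex["sexual"], ex["offensive"])}
            for ex in examples]
```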
However, for generalized fact-checking, we attempt not to use the given document set but rely on the commonsense reasoning ability of LLMs. Health fact checking (Health). The corpus contains claims related to public health topics (Kotonya and Toni, 2020). The original corpus contains four labels, true, false, mixed, and unknown. Similar to the Climate-fever task, we keep 987 non-disputed factual and faked claims for evaluating the fact-checking performance. Similarly, we do not use the given knowledge base for fact retrieval. GPT toxicity (ToxiGen). The corpus (Hartvigsen et al., 2022) contains a set of toxic and benign statements about 13 minority groups generated by GPT3 (Brown et al., 2020). We evaluate our method on the human-validated test set of the corpus, which contains 940 test samples. We follow the official instructions to convert toxicity scores into binary classification labels: toxic and benign. Machine-Generated Fake News (MGFN). Schuster et al. (2020) proposed the first benchmark for the detection of LM-produced fake news. We use their QA-extension corpus (Schuster et al., 2020) which extends CNN articles in the NewsQA (Trischler et al., 2017) dataset with NewsQA-provided questions and machine (Grover (Zellers et al., 2020), a Transformer-based LM) generated answers. The goal is to predict whether the machine-generated answer is fake or real according to its veracity. Since only training and validation splits are provided, we use the validation split which contains 209 evaluation samples. The example claims and data statistics of different tasks are shown in Table 1. Implementation Details We use two models for fact prompting and ethical classification, including a large language model GPT-3.5-turbo, and a medium-sized entailment model ESP-deberta-large (Luo and Glass, 2023), which is a sequence classifier containing ∼350M parameters. The LLM is deployed for fact prompting and generative ethical classification. 
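Querying GPT-3.5-turbo for fact prompting and generative classification can be sketched along these lines. This is a sketch only: it assumes the OpenAI Python SDK v1 client interface, and the helper names are illustrative; the temperature default follows the value stated in the text.

```python
def build_messages(prompt: str) -> list:
    # Single-turn chat message carrying one language-checking prompt.
    return [{"role": "user", "content": prompt}]

def run_language_check(client, prompt: str,
                       model: str = "gpt-3.5-turbo",
                       temperature: float = 0.1) -> str:
    # client is assumed to be an openai.OpenAI() instance (v1 SDK);
    # one sequence is sampled per inference, as in the paper.
    resp = client.chat.completions.create(
        model=model,
        temperature=temperature,
        messages=build_messages(prompt),
    )
    return resp.choices[0].message.content
```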
For each inference, we only sample one sequence with a temperature of 0.1. In our main few-shot experiments, we use 4 example prompts. As shown in Figure 2, the 4 examples cover different task-label combinations: fair, unfair, factual, and non-factual. In generative ethical classification, the LLM does not always answer "yes" or "no" clearly. We only assign the negative label to the samples that receive an explicit "no" answer. With the entailment model, we force the model to conduct a binary classification although the model is trained to recognize three classes: entailment, neutral, and contradictory (model available at https://huggingface.co/luohy/ESP-deberta-large). For each claim, we construct a supposition as (f) and only compare the entailment and contradictory scores. If the entailment score is higher than the contradictory score, the claim is unfair according to the supposition, even if the actual prediction is neutral. Results Human-generated Language In this section, we present our main results with the proposed LLM-based general-purpose language ethics modeling approach as shown in Table 2. Fact checking. The fact-checking performance in Table 2 shows that the Few-fp+Zero-cls setting significantly improves the performance of the LLM, especially in terms of the F1 score for recognizing inaccurate claims. We notice that even with only examples from the fairness tasks, the notion of prompting fairness-checking examples leads to a significant improvement of 13% F1 on natural science-related claims over zero-shot LLMs. On the other hand, only providing examples from the fact-checking tasks leads to the best performance, which is an intuitive outcome since the model does not need to distinguish between fact and fairness checking. In other words, the Fact-only setting represents the upper-bound performance of the specific task. 
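The two decision rules stated above (an explicit leading "no" for the generative LLM answer, and an entailment-vs-contradiction comparison for the entailment model) can be sketched as follows; function names and the score dictionary format are illustrative assumptions.

```python
from typing import Mapping

def llm_label(answer: str) -> str:
    # Only an explicit leading "no" yields the negative label, as described
    # in the text; any other answer is treated as positive (factual/fair).
    words = answer.strip().lower().split()
    first = words[0].strip(".,!?") if words else ""
    return "negative" if first == "no" else "positive"

def entailment_label(scores: Mapping[str, float]) -> str:
    # Compare only the entailment and contradiction scores of the
    # supposition ("it is unfair to say <claim>"); the neutral score is
    # ignored. Higher entailment means the supposition holds, i.e. the
    # claim is judged unfair.
    return "unfair" if scores["entailment"] > scores["contradiction"] else "fair"
```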
It is worth noting that the baseline models based on Wikipedia retrieval following the standard fact-checking pipeline (Nadeem et al., 2019) do not lead to better performance than LLM-based Few-fp + Few/Zero-cls without a retriever. This indicates that Wikipedia is not a good knowledge base for some fact-checking tasks, which suggests another flexibility of the proposed LLM-based prompting strategies - it is not necessary to construct a task-specific knowledge base for fact retrieval as in most popular fact-checking benchmarks (Guo et al., 2022). In addition, we found that the Few-fp+Few-cls method does not outperform the Few-fp+Zero-cls strategy. This indicates that a reasonable fact is enough for an LLM to make predictions as accurately as providing examples. It is worth noting that the entailment model achieves constant improvements over all few-shot settings except Few-fp+Zero-cls (zero-shot prediction). This fact shows the difficulty of recognizing the relation between three sentences: <label description, claim summary, fact> for the entailment model. Fairness checking. While the Few-fp+Zero-cls method still outperforms the zero-cls, we notice that the conclusion of the results is different from the fact-checking experiments. While the in-domain prompt (Fairness-only) still outperforms the task transfer setting (Fact-only), the performance gap is not as significant as in the fact-checking task (3% vs 9% F1 score). The phenomenon that fact-related prompts receive stronger transferring performance indicates that natural facts have a strong ability to ground moral decisions for large language models. The conclusion is also supported by the results obtained with joint fact and fairness prompts. The best accuracy and F1 scores are achieved by Few-fp+Zero-cls with entailment and Few-fp+Few-cls methods respectively. This indicates that fact-checking examples benefit moral decisions of language and entailment models. 
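The accuracy and F1 numbers these comparisons refer to (F1 computed on the inappropriate class: non-factual, hate, or biased) can be reproduced with a short helper; the 0/1 label encoding below is an assumption for illustration, with 0 marking the inappropriate class.

```python
def accuracy_and_negative_f1(y_true, y_pred):
    # Accuracy plus F1 on the negative (inappropriate) class.
    # Assumed encoding: 1 = acceptable/factual, 0 = inappropriate.
    n = len(y_true)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / n
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, f1
```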
We find that in the fairness task, the entailment classification model benefits Few-fp+Zero-cls, but slightly decreases the Few-fp+Few-cls accuracy and F1 scores. This result shows that for fairness-checking tasks, the fact-grounded reasoning ability of the LLM is similar to the entailment model. In particular, the LLM achieves significant improvement on F1 scores compared to entailment models. Unified performance. On average, the Fairness-only accuracy is similar to the Fact-only with entailment strategy, while the Fact-only with entailment method achieves significantly better average F1 score on inappropriate claims, including non-factual, hate, and biased. This shows that the fact-related prompt generally leads to better grounding for inappropriate statements.

Figure 3: The number statistics of grounding information in the format of "Related X fact: ..." and the resulting ethical classification performance. We present at most 10 categories, and the SBIC results contain much more than that. "None" stands for the case that the LLM does not specify a grounding info category explicitly.

Among all strategies, the Few-fp + Zero-cls with entailment method achieves the highest average accuracy and F1 scores simultaneously, although the performance is close to other joint-prompt strategies. The results indicate that with the grounding information, the LLM checking accuracy does not significantly change by adding classification examples. These results also show that the LLM has a strong generalization ability and that different multi-task prompting strategies do not lead to very different overall performance. However, the average improvement over single-task settings is significant. This proves the hypothesis that the language models can jointly handle the fact and fairness tasks without a loss of overall performance. Machine-generated Language In this section, we present the checking results of machine-generated language. 
Two settings are evaluated: 1) toxic statement detection (ToxiGen); and 2) grounded misinformation detection (MGFN). ToxiGen. We utilized the same prompts as those used in section 5.3.1 for checking machine-generated language. The results presented in the table are discussed below.

(a) Claim: Electricity rates are 40 percent higher in states that have required utility companies to use a certain amount of renewable energy such as solar power. Summary: The claim mentions that electricity rates are higher in states that require renewable energy use. Social fact: renewable energy sources require initial investment and may lead to higher rates in the short term, but in the long term, they can lead to cost savings and environmental benefits. Prediction: It is fair to say that.

(b) Claim: Smoking in cars with children is illegal starting January 30, 2018. Summary: The claim states that smoking in cars with children is illegal starting January 30, 2018. Legal fact: in some countries or states, smoking in cars with children is indeed illegal. Prediction: It is fair to say that.

(c) Claim: Wasn't there something like a big bang in which a burrito exploded and all the beans flew everywhere and started reproducing exponentially ? Summary: The claim mentions a ridiculous and untrue scenario about the origin of the universe. Scientific fact: the Big Bang theory is a widely accepted scientific explanation for the origin of the universe. Prediction: It is unfair to say that.

(d) Claim: the most official report put out on the number of deaths in those camps was the report by the Red Cross . Summary: The claim mentions the Red Cross report being the most official report on the number of deaths in the camps. Historical fact: there were multiple reports on the number of deaths in the camps, including reports by the International Committee of the Red Cross, the United Nations, and various governments. Prediction: It is unfair to say that only the Red Cross report is the most official. 
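Generated checking results like the examples above follow a fixed Claim / Summary / "X fact" / Prediction (or Conclusion) layout, so they can be split into fields with a small parser; the regex below is an illustrative sketch, not the authors' post-processing code.

```python
import re

FIELD_RE = re.compile(
    r"Summary:\s*(?P<summary>.*?)\s*"
    r"(?P<fact_type>[A-Za-z]+)\s+fact:\s*(?P<fact>.*?)\s*"
    r"(?:Prediction|Conclusion):\s*(?P<verdict>.*)",
    re.DOTALL,
)

def parse_generation(text: str) -> dict:
    # Split one generated checking result into the summary of suspicious
    # information, the grounding-fact category (e.g. Social, Scientific,
    # Legal, Historical), the fact text, and the final verdict.
    # Returns an empty dict when the layout is not found.
    m = FIELD_RE.search(text)
    return {k: v.strip() for k, v in m.groupdict().items()} if m else {}
```

The extracted fact_type field is what a category tally over all generations would be built from.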
We found that the Fact-only setting performs slightly better than Few-fp + Zero-cls with multi-task examples. This is due to the different language styles of human and machine-generated hate speech. Human-generated hate speech is usually more noisy and random, while machine-generated examples are more formal and clear. In other words, the language style of machine-generated hate speech in ToxiGen is closer to fake news. The data distribution shift limits the improvement of multi-task prompts. But the notable results on this machine-generated dataset simply with human-generated examples confirm the steady performance of our unified prompting strategy. The entailment classification model exhibits further consistent improvements in all cases. MGFN. As a document-grounded fact-checking task, the grounded information for each claim is provided. As a result, we obtain the groundings by extracting information from the documents instead of open-ended generation. We thus adapt our approach to MGFN with details provided in Appendix B. Results in Table 4 illustrate that few-shot LLMs show superior performance over the baselines. Similar to other datasets, using multi-task prompting may distract the model. But the Few-fp + Zero-cls* method also shows significant improvement over the baselines. Task recognition In this section, we investigate, in the multi-task settings, whether the LLM successfully recognizes the task (fact or fairness) and whether the misclassification of the target task contributes to failed ethical predictions. The task recognition results and the accuracy of different grounding facts are shown in Figure 3. Note that except for the Climate-fever task, the most common grounding fact category in other tasks is "social", although we use the social notion mainly for fairness tasks in our prompts. According to the accuracy of each fact category, it is difficult to summarize an explicit concept of "correct" fact groundings for each task. 
For example, although climate-fever and PubHealth are categorized into fact-checking tasks, the majority grounding fact of climate is scientific while it is social for PubHealth, as in the fairness tasks. It is also shown that the hate speech detection accuracy is 100% when no explicit grounding fact is specified. In the climate-fever task, the samples grounded with mathematical, political, and economic facts are also perfectly verified. As a result, we argue that the proposed general language ethics modeling approach shows the potential for a wide range of language-checking tasks. Case Study In this section, we present examples with mismatched grounding facts in different tasks. Example (a) shows that the fact-checking example sampled from the Climate-fever corpus is verified through the social fact about electricity price increases with renewable energy. The reasoning process successfully generates the fact about the renewable energy price and its long-term benefits. In example (b) sampled from PubHealth, although smoking in cars with children is illegal and the fact covers this information, the claim is non-factual since the law was passed earlier than 2018. The model failed to recognize the most suspicious information, the year, in this example. Example (c) is an example of hate speech. However, the model recognizes a contradiction in terms of scientific facts and decides that the claim is not fair. Although the reasoning process does not capture the reason that explains why the claim is biased, it still successfully recognizes that the claim is inappropriate. This is an example of how morally incorrect statements can be contradicted by natural and scientific facts. Example (d) shows a case where the system makes a wrong prediction because of the lack of complicated knowledge and reasoning ability about real-world organizations. If the model understood that the Red Cross is a member of the United Nations, the prediction would be correct. 
This example suggests that complex ethics modeling needs to be grounded on rich context and knowledge.

Conclusion

In this work, we propose a fact-grounded general language ethics modeling system that conducts fact, hate speech, and social bias checking with the same set of prompts and pipelines. We show that, beyond the fact-checking task, the moral predictions made by large language models can also be grounded on different categories of facts. With the strong results presented in this work, we argue that although language models suffer from generating hallucinations and dubious language, they are also powerful tools for vetting the appropriateness of both human- and machine-generated language under both open- and closed-book scenarios. We further show that fact- and fairness-checking tasks can be grounded on diverse and overlapping facts, and that applying entailment classification improves the stance detection performance between claims and grounding facts.

Limitations

While our unified language-checking method has demonstrated that LLMs can automatically detect potential problems with given statements and achieve good performance on different tasks with "1/2-shot" prompts, our approach in its current form has some limitations. Firstly, we found that LLMs are sensitive to the exact wording and in-context exemplars. We did not engage in extensive prompt engineering but instead focused on verifying the factualness and harmfulness of statements by constructing a unified prompt across tasks. Secondly, although we formulate the language ethics modeling problem as grounding-fact generation and ethical classification tasks to improve the transparency and interpretability of the LLM's decisions, we evaluate performance mainly on the binary classification results because of the prohibitive cost of manually verifying LLM-generated natural or social facts.
Additionally, our evaluation of fact, hate speech, and social bias checking in this paper was conducted on six datasets, which may not encompass all possible scenarios or provide a comprehensive picture of misinformation and disinformation.

A Prompting Templates for Different Strategies

We list the complete prompting templates for the three proposed strategies in Figure 5 and for the single-task setting in Figure 6.

B Prompting Details of Machine-Generated Fake News (MGFN)

For the MGFN dataset, we decompose the few-shot fact generation process into two steps: 1) few-shot verification question generation, and 2) zero-shot question answering. The former is document-agnostic and can include examples from different tasks, while the latter uses the given document for answer generation. Accordingly, Fact-only* and Few-fp + Zero-cls* in Table 4 stand for few-shot verification question generation → zero-shot question answering → zero-shot ethical prediction with examples from the single corresponding task and from different tasks, respectively. Since the grounding document is available during question answering and ethical prediction, we focus on the harder, zero-shot setting. The prompts are listed in Figure 7, with in-context examples shown in Table 5. The Fact-only* setting uses a fact-checking example only, while the Few-fp + Zero-cls* setting combines two examples into a single prompt without specifying the task name. It is worth noting that the in-domain example contains both a question and a candidate answer, i.e., "How many federal police officers were slayed? + Since 2006, 1,820 federal police officers have been killed in Mexico."

[Figure 5(a), Zero-shot Classification template body: the test claim followed by "Question: Is it fair to say that?"]
[Figure 5(b)-(c) and Figure 6(a)-(b), prompt-template bodies: each template pairs in-context examples from the Climate-fever fact-checking task and the hate speech fairness task (a claim, the question "Is it fair to say that?", and, in the few-shot classification settings, an answer giving a summary of the claim, a related natural or social fact, and a yes/no verdict) with the test claim to be checked.]

Table 5: Example prompts with claims from different tasks for MGFN.
The fact-checking example comes from the MGFN dataset and the hate speech detection example is sourced from the human-generated hate speech detection dataset.

The example from the hate speech task, in contrast, contains only a statement rather than a question-answer pair. This discrepancy between tasks and datasets may also lead to the slight performance decrease in Table 4.

The entailment-based prediction uses templates (d) and (f):

(d) Claim: {claim to verify} The claim mentions that {summary of the suspicious information}. {Natural or Social} Fact: {Generated fact}

(f) "The claim does not align with the fact" is_entailed_by "The claim mentions that {summary of the suspicious information}. {Natural or Social} Fact: {Generated fact}"

If the prediction for (f) is False, the claim is factual and fair; otherwise, the claim is either non-factual or unfair. Since the medium-sized entailment model lacks in-context learning ability, it only supports the zero-shot ethical prediction setting.

Summary of methods. In this work, we propose the following methods: (1) Zero-shot classification (Zero-cls): checking the soundness of claims with a zero-shot yes/no question; (2) Few-shot fact prediction + Zero-shot ethical classification (Few-fp + Zero-cls): generating natural or social facts with few-shot examples and making the ethical prediction in the zero-shot setting with the LLM; (3) Few-shot fact prediction + Few-shot ethical classification (Few-fp + Few-cls): generating both facts and ethical classifications in the few-shot setting; (4) Entailment: conducting the ethical prediction on the Few-fp-generated facts with pretrained, supposition-based entailment models.

Figure 4: Example inference generated by the LLM. The green check mark stands for correct predictions and the red cross mark stands for wrong predictions.
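The decision rule built from templates (d) and (f) above can be made concrete with a small sketch: the premise is assembled from the LLM-generated summary and grounding fact, the hypothesis is the fixed misalignment statement, and an off-the-shelf NLI classifier scores the pair. The helper names (`build_pair`, `verdict`) are illustrative assumptions, not from the paper's released code.

```python
# Sketch of the supposition-based entailment check: template (d) supplies
# the premise from the generated summary and fact; template (f)'s fixed
# hypothesis asks whether the claim conflicts with that fact.

def build_pair(summary: str, fact_type: str, fact: str):
    """Return a (premise, hypothesis) pair for an NLI-style classifier."""
    premise = (f"The claim mentions that {summary}. "
               f"{fact_type} fact: {fact}")
    hypothesis = "The claim does not align with the fact."
    return premise, hypothesis

def verdict(hypothesis_is_entailed: bool) -> str:
    # If the misalignment hypothesis is entailed, reject the claim;
    # otherwise accept it as factual and fair.
    return ("nonfactual-or-unfair" if hypothesis_is_entailed
            else "factual-and-fair")

# Example using a generated summary/fact pair from the prompts above:
premise, hypothesis = build_pair(
    summary="atmospheric CO2 increase is caused by temperature increase",
    fact_type="Natural",
    fact="the temperature increase is actually because of CO2",
)
```

An NLI model that scores this (premise, hypothesis) pair as entailed would then map, via `verdict(True)`, to rejecting the claim as non-factual or unfair.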
Figure 5: The complete prompting templates for (a) Zero-shot Classification, (b) Few-shot Fact Prediction + Zero-shot Ethical Classification, and (c) Few-shot Fact Prediction + Few-shot Ethical Classification. Texts in black indicate the test instance to be checked by the LLM. Texts in the dotted block indicate the in-context examples for the few-shot setting. Compared to (b), the additional ethical classification examples are highlighted in red in (c). Among the four input examples, two are related to fact-checking tasks and the other two to fairness-checking tasks. We use the same prompt (dotted line) for all datasets besides MGFN.

Figure 6: The prompting templates for Few-shot fact prediction + Few-shot ethical classification in the single-task setting: (a) Fact-only contains the two fact-checking examples from Figure 5; and (b) Fairness-only contains the two fairness-checking examples from Figure 5. Texts in black indicate the test instance to be checked by the LLM. Texts in the dotted block indicate the in-context examples for the few-shot setting.

Figure 7: The prompting templates for the prompting strategies in Table 4: (a) Multi-task: Few-fp + Zero-cls*; and (b) Single-task: Fact-only*. Texts in black indicate the test instance to be checked by the LLM. Texts in the dotted block indicate the in-context examples for the few-shot setting.

Dataset | Task | Example claim | Acceptable | Num. sample
HSD | Fairness | Whether or not one chooses to eat meat is a very personal decision. | No | 239
HSD | Fairness | When I watch things like this I pray God will have vengeance on these sub humans. | Yes | 239
SBIC | Fairness | Your desperate attempts to "explain-away" people that don't believe/accept your cult's bullshit. | No | 3368
SBIC | Fairness | Almost called you to ask if you were okay... u were eating A LOT of food on your IG story yday. | Yes | 1323
ToxiGen | Fairness | You can tell if woman is a good person or not by how much she covers her skin. | No | 534
ToxiGen | Fairness | Child abuse is wrong, racism is wrong, sexism is wrong. | Yes | 406
Climate | Fact | If CO2 was so terrible for the planet, then a CO2 generator in a greenhouse would kill the plants. | No | 253
Climate | Fact | Global warming is driving polar bears toward extinction. | Yes | 654
Health | Fact | Treating First Time Shoulder Dislocations with Surgery Can Benefit Young Athletes, Study Shows. | No | 388
Health | Fact | Study says too many Americans still drink too much. | Yes | 599
MGFN | Fact | [CNN article] + We attempt to answer: How many years old was the businessman? Answer: 33 | No | 107
MGFN | Fact | [CNN article] + We attempt to answer: Who is the Red Bull team boss? Answer: Christian Horner, 44, is British-born and currently the team principal of the race-winning Formula One team. | Yes | 102

Table 1: Example data and statistics of fairness and fact checking tasks. The ToxiGen dataset is generated by GPT-3, the MGFN dataset is generated by Grover, and the others are generated by humans.

Table 2: Accuracy and F1 score of general-purpose language ethics checking based on LLM and entailment models. † stands for fact-checking tasks and ‡ stands for fairness-checking tasks. The baselines for fact-checking tasks are retrieval + stance detection performance, while the baseline results for fairness tasks are cited from de Gibert et al. (2018b);
The F1 score reported for fact-checking is the fake-F1 and for fairness checking the unfair-F1.

Model | Acc | Toxic-F1 | Benign-F1 | Macro-F1
Finetuned HateBERT | 80.96‡ | 79.26‡ | 82.40‡ | 80.82‡
Finetuned RoBERTa | 80.96‡ | 74.32‡ | 84.87‡ | 79.59‡
Zero-cls | 77.13 | 78.17 | 75.98 | 77.08
Few-fp + Few-cls with single-task (fact or fairness) prompts:
Fact-only | 80.11 | 80.00 | 80.21 | 80.11
+ Entailment | 81.17 | 81.03 | 81.31 | 81.17
Fairness-only | 82.45 | 82.20 | 82.69 | 82.44
+ Entailment | 83.30 | 82.73 | 83.83 | 83.28
Few-fp + Few/Zero-cls with multi-task prompts:
Few-fp + Zero-cls | 79.57 | 80.21 | 78.90 | 79.55
+ Entailment | 80.11 | 80.66 | 79.52 | 80.09
Few-fp + Few-cls | 81.70 | 81.62 | 81.78 | 81.70
+ Entailment | 82.23 | 81.87 | 82.59 | 82.23

Table 3: Accuracy and F1 scores of general-purpose language ethics checking on the ToxiGen dataset. ‡ indicates reproduced results from the toxigen_hatebert and toxigen_hateroberta checkpoints from Hartvigsen et al. (2022).

Table 4: Accuracy and F1 scores of general-purpose language ethics checking on the MGFN dataset. † indicates results from the source paper. 71.50† is computed from the reported precision (0.72) and recall (0.71) scores of the fake class.

[Figure 7(a), Multi-task template body: few-shot verification question generation combining a hate speech example ("What is the intent to call other people sub humans?") and the MGFN fact-checking example ("Have 1,820 federal police officers been slayed in Mexico since 2006?"), followed by zero-shot question answering and zero-shot ethical prediction over the given CNN article.]
[Figure 7(b), Single-task template body: the same few-shot verification question generation → zero-shot question answering → zero-shot ethical prediction pipeline, with the fact-checking example only.]

940 examples are included in the "annotated test" split of the official Hugging Face Dataset: https://huggingface.co/datasets/skg/toxigen-data.
2 https://github.com/microsoft/ToxiGen

References

Abubakar Abid, Maheen Farooqi, and James Zou. 2021. Large language models associate muslims with violence. Nature Machine Intelligence, 3(6):461-463.
Rami Aly, Zhijiang Guo, Michael Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. Feverous: Fact extraction and verification over unstructured and structured information. In 35th Conference on Neural Information Processing Systems, NeurIPS 2021. Neural Information Processing Systems foundation.
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. 2021. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861.
Isabelle Augenstein, Tim Rocktäschel, Andreas Vlachos, and Kalina Bontcheva. 2016. Stance detection with bidirectional conditional encoding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 876-885.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901.
Ke-Li Chiu, Annie Collins, and Rohan Alexander. 2021a. Detecting hate speech with GPT-3. arXiv preprint arXiv:2103.12407.
Thomas KF Chiu, Helen Meng, Ching-Sing Chai, Irwin King, Savio Wong, and Yeung Yam. 2021b. Creation and evaluation of a pretertiary artificial intelligence (AI) curriculum. IEEE Transactions on Education, 65(1):30-39.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Antonia Creswell and Murray Shanahan. 2022. Faithful reasoning using large language models. arXiv preprint arXiv:2208.14271.
Ona de Gibert, Naiara Perez, Aitor García-Pablos, and Montse Cuadros. 2018a. Hate speech dataset from a white supremacy forum. EMNLP 2018, page 11.
Ona de Gibert, Naiara Perez, Aitor García-Pablos, and Montse Cuadros. 2018b. Hate Speech Dataset from a White Supremacy Forum. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 11-20, Brussels, Belgium. Association for Computational Linguistics.
Thomas Diggelmann, Jordan Boyd-Graber, Jannis Bulian, Massimiliano Ciaramita, and Markus Leippold. 2020. Climate-fever: A dataset for verification of real-world climate claims. arXiv preprint arXiv:2012.00614.
Nemanja Djuric, Jing Zhou, Robin Morris, Mihajlo Grbovic, Vladan Radosavljevic, and Narayan Bhamidipati. 2015. Hate speech detection with comment embeddings. In Proceedings of the 24th International Conference on World Wide Web, pages 29-30.
Deep Ganguli, Amanda Askell, Nicholas Schiefer, Thomas Liao, Kamilė Lukošiūtė, Anna Chen, Anna Goldie, Azalia Mirhoseini, Catherine Olsson, Danny Hernandez, et al. 2023. The capacity for moral self-correction in large language models. arXiv preprint arXiv:2302.07459.
Deep Ganguli, Danny Hernandez, Liane Lovitt, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, et al. 2022. Predictability and surprise in large generative models. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 1747-1764.
Akash Gautam, Puneet Mathur, Rakesh Gosangi, Debanjan Mahata, Ramit Sawhney, and Rajiv Ratn Shah. 2020. #metooma: Multi-aspect annotations of tweets related to the metoo movement. In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 209-216.
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3356-3369.
Zhijiang Guo, Michael Sejr Schlichtkrull, and Andreas Vlachos. 2022. A survey on automated fact-checking. Transactions of the Association for Computational Linguistics, 10:178-206.
Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3309-3326.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung. 2022. Survey of hallucination in natural language generation. ACM Computing Surveys.
Liwei Jiang, Jena D Hwang, Chandra Bhagavatula, Ronan Le Bras, Maxwell Forbes, Jon Borchardt, Jenny Liang, Oren Etzioni, Maarten Sap, and Yejin Choi. 2021. Delphi: Towards machine ethics and norms. arXiv preprint arXiv:2110.07574.
Neema Kotonya and Francesca Toni. 2020. Explainable automated fact-checking for public health claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7740-7754, Online. Association for Computational Linguistics.
Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2020. Gender bias in neural natural language processing. Logic, Language, and Security: Essays Dedicated to Andre Scedrov on the Occasion of His 65th Birthday, pages 189-202.
Hongyin Luo and James Glass. 2023. Logic against bias: Textual entailment mitigates stereotypical sentence reasoning. arXiv preprint arXiv:2303.05670.
Sean MacAvaney, Hao-Ren Yao, Eugene Yang, Katina Russell, Nazli Goharian, and Ophir Frieder. 2019. Hate speech detection: Challenges and solutions. PloS One, 14(8):e0221152.
Potsawee Manakul, Adian Liusie, and Mark JF Gales. 2023. Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896.
Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. Stereoset: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356-5371.
Moin Nadeem, Wei Fang, Brian Xu, Mitra Mohtarami, and James Glass. 2019. Fakta: An automatic end-to-end fact checking system. arXiv preprint arXiv:1906.04164.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155.
Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic models for analyzing and detecting biased language. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1650-1659.
Paul Röttger, Bertram Vidgen, Dong Nguyen, Zeerak Waseem, Helen Margetts, and Janet B Pierrehumbert. 2020. Hatecheck: Functional tests for hate speech detection models. arXiv preprint arXiv:2012.15606.
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In ACL.
Tal Schuster, Roei Schuster, Darsh J. Shah, and Regina Barzilay. 2020. The limitations of stylometry for detecting machine-generated fake news. Computational Linguistics, 46(2):499-510.
Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3784-3803.
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191-200, Vancouver, Canada. Association for Computational Linguistics.
William Yang Wang. 2017. "Liar, liar pants on fire": A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 422-426.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.
Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, and Slav Petrov. 2020. Measuring and reducing gendered correlations in pre-trained models. arXiv preprint arXiv:2010.06032.
Finetuned language models are zero-shot learners.
Jason Wei, Maarten Bosma, Y Vincent, Kelvin Zhao, Adams Wei Guu, Brian Yu, Nan Lester, Du, M Andrew, Quoc V Dai, Le, arXiv:2109.01652arXiv preprintJason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, An- drew M Dai, and Quoc V Le. 2021. Finetuned lan- guage models are zero-shot learners. arXiv preprint arXiv:2109.01652. Chain-of-thought prompting elicits reasoning in large language models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, H Ed, Chi, V Quoc, Denny Le, Zhou, Advances in Neural Information Processing Systems. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed H Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits rea- soning in large language models. In Advances in Neural Information Processing Systems. A broad-coverage challenge corpus for sentence understanding through inference. Adina Williams, Nikita Nangia, Samuel Bowman, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational Linguistics1Long PapersAdina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics. Towards generalisable hate speech detection: a review on obstacles and solutions. Wenjie Yin, Arkaitz Zubiaga, PeerJ Computer Science. 7598Wenjie Yin and Arkaitz Zubiaga. 2021. Towards gener- alisable hate speech detection: a review on obstacles and solutions. PeerJ Computer Science, 7:e598. 
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2020. Defending against neural fake news. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2020. Defending against neural fake news.
[ "VRKITCHEN2.0-INDOORKIT: A TUTORIAL FOR AUGMENTED INDOOR SCENE BUILDING IN OMNIVERSE" ]
[ "Yizhou Zhao [email protected]", "Steven Gong", "Xiaofeng Gao", "Wensi Ai", "Song-Chun Zhu [email protected]" ]
[ "Department of Statistics, University of California, Los Angeles", "Department of Computer Science, University of California, Los Angeles", "University of California, Los Angeles" ]
[]
With the recent progress of simulation in 3D modeling software and game engines, many researchers have focused on Embodied AI tasks in virtual environments. However, the research community lacks a platform that can easily serve both indoor scene synthesis and model benchmarking with various algorithms. Meanwhile, computer graphics tasks need a toolkit for implementing advanced synthesis techniques. To facilitate the study of indoor scene building methods and their potential robotics applications, we introduce INDOORKIT: a built-in toolkit for NVIDIA OMNIVERSE that provides flexible pipelines for indoor scene building, scene randomization, and animation control. Moreover, by combining Python scripting with the animation software, INDOORKIT helps researchers create real-time training and control for avatars and robots. The source code for this toolkit is available at https://github.com/realvcla/VRKitchen2.0-Tutorial, and the tutorial along with the toolkit is available at https://vrkitchen20-tutorial.readthedocs.io/en/latest/.
10.48550/arxiv.2206.11887
[ "https://export.arxiv.org/pdf/2206.11887v1.pdf" ]
249,953,695
2206.11887
306a21e2ac8b2e13cb7d2126a8216d7ce3e791f3
VRKITCHEN2.0-INDOORKIT: A TUTORIAL FOR AUGMENTED INDOOR SCENE BUILDING IN OMNIVERSE

A PREPRINT, June 24, 2022

Yizhou Zhao, Steven Gong, Xiaofeng Gao, Wensi Ai, Song-Chun Zhu
Department of Statistics and Department of Computer Science, University of California, Los Angeles

Keywords: Simulation environment · Embodied AI · Animation · Indoor scene synthesis

Introduction

Simulation engines are tools used by designers to code and plan out a simulation environment quickly and easily without building one from the ground up.
With the development of these simulation engines, researchers and game designers are deploying recent advances in artificial intelligence (AI) to train autonomous, intelligent agents that are moving from experimental laboratories into deployed products [1]. However, even though learning-based algorithms have steadily gained influence in training agents in virtual environments, there is still no toolkit that connects the simulation environment with state-of-the-art developments in the AI community, including innovative tasks, comprehensive datasets, and powerful algorithms [2].

* These authors contributed equally to this work.

Figure 1: Toolkit overview. We present this tutorial for our new toolkit INDOORKIT. (1) INDOORKIT supports a wide range of indoor scene datasets, whether synthesized or designed manually, and allows users to set up custom datasets. (2) We provide data processing modules that store scenes from various source formats in the USD format, making them transferable to other 3D software and game engines. (3) INDOORKIT can be connected with machine learning models for downstream tasks (including, but not limited to, character animation, physical simulation, and robotics).

With the recent release of NVIDIA OMNIVERSE 1, a scalable development platform for simulation and design collaboration, researchers can deploy recent advances in AI in OMNIVERSE thanks to its indispensable features: • Python has become the leading programming language for AI and neural networks, and OMNIVERSE exposes much of its functionality through Python bindings; • in OMNIVERSE, both the physics simulation and the neural-network policy training reside on the GPU and communicate directly [3]; • the universal scene description (USD) format used by OMNIVERSE captures 3D scene elements in detail and is supported by a wide range of 3D modeling software (e.g.
Blender and Autodesk Maya) and game engines (e.g. Unreal Engine and Unity). We present INDOORKIT: a toolkit built in NVIDIA OMNIVERSE that provides flexible pipelines for scene building, character animation, and robotic control. The innovative features of INDOORKIT include, but are not limited to: • photo-realistic scene rendering that draws on a wide range of popular 3D assets (e.g. 3D-FRONT [4], AI2-THOR [5], SAPIEN [6], AKB-48 [7], etc.); • real-time character and robot control that leverages animatable 3D assets (e.g. AMASS [8], Adobe Mixamo [9]); • a comprehensive and flexible pipeline for data labeling, model training, and testing. INDOORKIT is also an early work built on OMNIVERSE, and we hope it paves the way for influential representation-learning research in the future. We provide extensive documentation, including many demos and tutorials, to encourage related research.

Library overview

INDOORKIT is developed by the Center for Vision, Cognition, Learning, and Autonomy at the University of California, Los Angeles. It enables easy data loading and experiment sharing for scene and animation synthesis through the Python API in OMNIVERSE. This section briefly describes several features of INDOORKIT.

Working with 3D scene assets

We integrate multiple indoor scene datasets in INDOORKIT. For datasets with different body parameterizations, we include documents with meta-data descriptions and visualization tools to illustrate the characteristics of each dataset. Datasets covered include, but are not limited to, 3D-FRONT [4], scenes from AI2-THOR [5], and other indoor scene building pipelines [8]. In addition, to encourage contributions from the community, INDOORKIT provides detailed instructions for users to upload and pre-process their own custom datasets.

Why we choose Omniverse

There are several reasons for choosing this new platform, OMNIVERSE.
First, for its powerful simulation support: rigid bodies, soft bodies, articulated bodies, and fluids are the main types of simulation supported in OMNIVERSE. • Rigid body: this type simulates physics in a static environment. Users can set the mass and gravity of each object and apply forces such as friction or buoyancy. • Soft body: based on Soft Body Dynamics (SBD), a soft body is governed by mathematical equations describing how physical objects behave in the real world when they collide with other objects or with themselves. The problem with traditional computer graphics is that such physics simulation had to be done by hand, taking a long time to get an accurate result; with OMNIVERSE soft bodies, this task is handled automatically. • Articulated body: working with OMNIVERSE, INDOORKIT provides tools for building hierarchically organized physics articulations such as robotic arms, kinematic chains, and avatars. This helps obtain realistic physics behavior in simulations for industrial applications. • Fluid: OMNIVERSE allows you to simulate the behavior of liquids and gases. It is based on the Navier-Stokes equations, which describe how fluids flow in an environment. INDOORKIT connects the fluid setup with indoor scene assets, providing better flexibility for custom tasks. Second, for its Python scripting environment: it is easy to bring open-source and third-party Python libraries into OMNIVERSE to support research. • Python is a general-purpose programming language that can be used for many different tasks. It has a simple syntax and readable code, and it is easy to learn. Python is also very popular in industry because of its performance, robustness, and versatility. • Deep learning can be applied to many different fields, from computer vision, speech recognition, and natural language processing (NLP) to translation, robotics, and much more.
Deep learning with PyTorch [10] and TensorFlow [11] is readily supported in OMNIVERSE. Third, OMNIVERSE has powerful rendering capabilities thanks to ray-tracing technology. • Ray tracing is a technique that improves lighting in the simulation. It can be used for everything from reflections to shadows, both in the environment and in scene elements, including atmospheric effects, surface reflections, and even diffuse lighting. • In addition, users can store rendered scenes in the universal scene description (USD) format. This compact format stores the entire scene in a single file and is simple to use and understand. The basic idea behind USD is that it records all of the information needed to render the scene, including models, textures, lights, cameras, and animation.

Potential use cases

Another essential goal of INDOORKIT is to bring state-of-the-art embodied AI research together with photo-realistic and physically realistic rendering. Here we list a few common use cases of our library. The full demos and tutorials can be found at https://github.com/realvcla/VRKitchen2.0-Tutorial

Indoor scene labeling

Indoor scene labeling is the process of identifying and labeling indoor scenes. The goal of this project is to improve the quality, quantity, and consistency of indoor scene labels, in an effort to facilitate more efficient research use. The field of embodied AI (EAI) still lacks meaningful, near-realistic indoor scenes. We use the OMNIVERSE platform to perform the labeling steps for indoor scenes. Each sampled data record contains the scene information and allows users to perform downstream tasks. INDOORKIT also offers a clean interface for labeling scene and robot information.

Indoor character animation

With the recent progress in 3D character animation, the popularity of animation generation, as well as its applications, keeps growing.
Meanwhile, with the improvement of virtual-environment simulation capabilities, researchers have started to study agent behavior in the context of Embodied AI. However, the production process of adapting generated animation to a photo-realistic and physically reliable environment is laborious. The key idea is to strike a balance among the original animation clip, the physics of the scene, and the social-interaction meaning, through reinforcement learning on the kinematics.

Robotics-oriented simulation

Robotics is a field of study that involves the design, construction, and operation of robots. The goal of robotics in OMNIVERSE is to build machines capable of performing tasks that are difficult or impossible for humans to perform. Robotics is used in many different fields, including medicine, manufacturing, research, and education.

Development and maintenance

The project is developed by the Center for Vision, Cognition, Learning, and Autonomy at the University of California, Los Angeles. INDOORKIT is developed publicly on GitHub, with an issue tracker for reporting bugs and asking questions. Documentation consists of tutorials, examples, and API documentation. Third-party packages include PyTorch [12] as the deep learning framework and Jupyter notebooks [13]. For the demos and tutorials, please visit https://vrkitchen20-tutorial.readthedocs.io/en/latest/.

Conclusion

We presented INDOORKIT, a Python library that helps researchers easily develop indoor scene synthesis methods and apply the resulting scenes to EAI tasks.

Figure 2: We also provide different randomization strategies for indoor scenes. (1) Scene/background randomization: the same simulation task can share different backgrounds. (2) Material randomization: scene items can be rendered with randomized materials. (3) Decoration randomization: the colors and materials of the walls and floor can vary. (4) Light randomization: simulation tasks can be performed under different lighting conditions.
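The four randomization strategies summarized in Figure 2 can be sketched, independently of any Omniverse API, as a seeded sampler over scene attributes. All names and value pools below are illustrative placeholders, not part of the INDOORKIT API:

```python
import random

# Illustrative attribute pools; a real pipeline would draw these from
# the loaded USD assets rather than from hard-coded lists.
BACKGROUNDS = ["bedroom", "kitchen", "living_room"]
MATERIALS = ["wood", "metal", "marble", "fabric"]
WALL_COLORS = ["white", "beige", "light_gray"]

def randomize_scene(seed):
    """Return one randomized scene configuration (Fig. 2, strategies 1-4)."""
    rng = random.Random(seed)
    return {
        "background": rng.choice(BACKGROUNDS),          # (1) scene/background
        "item_material": rng.choice(MATERIALS),         # (2) material
        "wall_color": rng.choice(WALL_COLORS),          # (3) decoration
        "light_intensity": rng.uniform(500.0, 5000.0),  # (4) lighting
    }

if __name__ == "__main__":
    print(randomize_scene(seed=0))
```

Because the sampler is seeded, the same simulation task can be replayed under many configurations, and any particular configuration can be reproduced exactly by reusing its seed.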
1 https://developer.nvidia.com/nvidia-omniverse

[1] Masahiko Onosato and Kazuaki Iwata. Development of a virtual manufacturing system by integrating product models and factory models. CIRP Annals, 42(1):475-478, 1993.
[2] Haechan Park and Nakhoon Baek. Developing an open-source lightweight game engine with DNN support. Electronics, 9(9):1421, 2020.
[3] Viktor Makoviychuk, Lukasz Wawrzyniak, Yunrong Guo, Michelle Lu, Kier Storey, Miles Macklin, David Hoeller, Nikita Rudin, Arthur Allshire, Ankur Handa, et al. Isaac Gym: High performance GPU-based physics simulation for robot learning. arXiv preprint arXiv:2108.10470, 2021.
[4] Huan Fu, Bowen Cai, Lin Gao, Ling-Xiao Zhang, Jiaming Wang, Cao Li, Qixun Zeng, Chengyue Sun, Rongfei Jia, Binqiang Zhao, et al. 3D-FRONT: 3D furnished rooms with layouts and semantics. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10933-10942, 2021.
[5] Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. AI2-THOR: An interactive 3D environment for visual AI. arXiv preprint arXiv:1712.05474, 2017.
[6] Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, et al. SAPIEN: A simulated part-based interactive environment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11097-11107, 2020.
[7] Liu Liu, Wenqiang Xu, Haoyuan Fu, Sucheng Qian, Qiaojun Yu, Yang Han, and Cewu Lu. AKB-48: A real-world articulated object knowledge base. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14809-14818, 2022.
[8] Naureen Mahmood, Nima Ghorbani, Nikolaus F. Troje, Gerard Pons-Moll, and Michael J. Black. AMASS: Archive of motion capture as surface shapes. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5442-5451, 2019.
[9] Adobe. Mixamo. 2020.
[10] Eli Stevens, Luca Antiga, and Thomas Viehmann. Deep Learning with PyTorch. Manning Publications, 2020.
[11] Joshua V. Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, and Rif A. Saurous. TensorFlow Distributions. arXiv preprint arXiv:1711.10604, 2017.
[12] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32:8026-8037, 2019.
[13] Bernadette M. Randles, Irene V. Pasquetto, Milena S. Golshan, and Christine L. Borgman. Using the Jupyter notebook as a tool for open science: An empirical study. In 2017 ACM/IEEE Joint Conference on Digital Libraries (JCDL), pages 1-2. IEEE, 2017.
[ "https://github.com/realvcla/VRKitchen2.0-Tutorial" ]
[ "FCNC and Rare B Decays in 3-3-1 Models" ]
[ "J.-Alexis Rodriguez", "Marc Sher" ]
[ "Departamento de Fisica, Universidad Nacional de Colombia, Bogota, Colombia", "Particle Theory Group, Department of Physics, College of William and Mary, Williamsburg, VA 23187, USA" ]
[]
An interesting extension of the Standard Model is based on the electroweak gauge group SU(3)_L × U(1). It requires three generations to cancel anomalies, treats the third generation differently than the first two, and has a rich phenomenology. There are several models, distinguished by the embedding of the charge operator into the SU(3)_L group and by the choice of fermion representations. In this Brief Report, we consider flavor-changing neutral currents in these models, concentrating on the P-P̄ mass difference, where P = (K, D, B, B_s), as well as the B → K l⁺l⁻, B → μ⁺μ⁻ and B_s → μ⁺μ⁻ decays. Although the P-P̄ mass difference has been considered previously in some models, the rare B decays are new. We find that the strongest bounds come from the B-B̄ and B_s-B̄_s mass differences.
10.1103/physrevd.70.117702
[ "https://export.arxiv.org/pdf/hep-ph/0407248v1.pdf" ]
16,283,631
hep-ph/0407248
ea890b6001813768c4e750a2c2db74408efb1563
FCNC and Rare B Decays in 3-3-1 Models

21 Jul 2004

J.-Alexis Rodriguez
Departamento de Fisica, Universidad Nacional de Colombia, Bogota, Colombia

Marc Sher
Particle Theory Group, Department of Physics, College of William and Mary, Williamsburg, VA 23187, USA

1 Introduction

One of the more intriguing extensions of the standard model is based on the gauge group SU(3)_c × SU(3)_L × U(1). In the original, minimal version of the model [1,2], the charged leptons and neutrinos are put into antitriplets of SU(3)_L, while two generations of left-handed quarks are put into triplets and the other generation into an antitriplet. This structure automatically cancels all anomalies and, when combined with the requirement of asymptotic freedom, necessitates that the number of generations be equal to three. The model has an automatic Peccei-Quinn symmetry [3,4].
The fact that one of the quark families is treated differently than the other two could lead to an explanation of the heavy top quark mass [5]. This minimal model contains doubly charged bilepton gauge fields, as well as isosinglet quarks with exotic charges, leading to a rich phenomenology [6]. A particularly exciting feature of this model is that there is an upper bound on the scale of SU(3)_L breaking which is within range of the LHC. In another version of the model, with a different embedding of the charge operator into SU(3)_L × U(1), the charged lepton in the antitriplet is replaced by a right-handed neutrino [7,8]. In this version, the bileptons are singly charged or neutral. Another model can be found in which there are no lepton-number violating gauge bosons and no exotic quark charges (at the price of adding an isosinglet charged lepton for each generation). Nonetheless, in all of these models, one still treats one of the quark generations differently than the other two. It is most natural to have the third generation be the "different" generation, since this might explain the heavy top quark and since some of the constraints to be discussed below are substantially weakened. With generations treated differently, one expects tree-level flavor-changing neutral currents (FCNC). Thus, it is expected that FCNC involving the third generation will be dominant. Given the success of BELLE and BABAR, an analysis (and update of previous analyses) of rare B decays and FCNC in these models seems warranted. In the next section, we discuss the three models mentioned above, as well as two other models in which all of the generations are treated identically. In section III, we analyze current bounds from FCNC processes in these models. Section IV contains our conclusions.

2 Models

A comprehensive review of the gauge, fermion and scalar sectors of the various SU(3)_L × U(1) models can be found in Refs. [9] and [10]. In this section, we briefly summarize this review, and then turn to a discussion of FCNC and rare B decays in these models. Different models can be distinguished by the embedding of the electric charge operator. In general, the charge operator is given by

Q = a T_{3L} + (2/√3) b T_{8L} + x I_3,   (1)

where we have used conventional normalization (T_i = λ_i/2 and Tr(λ_i λ_j) = 2δ_{ij}), I_3 is the 3×3 unit matrix, and a and b are arbitrary. The value of x can be absorbed into the hypercharge definition, and will not be relevant. The fact that weak isospin is contained within the SU(3)_L fixes a = 1; the choice of b, together with the fermion representations, then distinguishes the models. In the minimal model one has

L_i = (ν_i, e_i, e_i^c)^T   (2)

for the leptons, and

Q_i = (u, d, D)^T, (c, s, S)^T, (b, t, T)^T   (3)

for the left-handed quarks. The conjugates of these nine fields are all SU(3)_L singlets. D and S are new isosinglet quarks with charge −4/3, and T is an isosinglet quark with charge 5/3. Note how the third generation is treated very differently than the first two. This is necessary to cancel anomalies. In principle, any of the three generations could be chosen to be different; however, as will be seen shortly, the strong bounds on FCNC in the kaon sector make it more likely that the third generation is singled out. This is the original, minimal model, and it will be referred to as Model A. A simple alternative to this model [12] is to change the lepton structure by replacing the e_i^c with a heavy lepton E_i^+ and adding e_i^c and E_i^- singlets. This will be referred to as Model A′. Although one can, of course, add a right-handed neutrino singlet to the above structure, the model of Montero et al. [7,8] modifies the lepton sector and has, with b = −1/2,

L_i = (ν_i, e_i, ν_i^c)^T   (4)

with the e_i^c being an SU(3)_L singlet. The quarks are given by

Q_i = (d, u, D)^T, (s, c, S)^T, (t, b, T)^T   (5)

The new weak isosinglet quarks now have the same charges as their standard model counterparts, and the bileptons are either neutral or singly charged. This model will be referred to as Model B.
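As a quick consistency check of Eq. (1), the diagonal generators T_3 = diag(1/2, −1/2, 0) and (2/√3) T_8 = diag(1/3, 1/3, −2/3) give the component charges of a triplet directly. The sketch below verifies this with exact arithmetic; the parameter values b = 3/2 for the minimal model and the U(1) charges x are our illustrative choices, picked to reproduce the charges quoted in the text (the text itself notes that x can be absorbed into the hypercharge):

```python
from fractions import Fraction as F

def triplet_charges(a, b, x):
    """Diagonal entries of Q = a*T3 + (2/sqrt(3))*b*T8 + x*I3 on a triplet,
    with T3 = diag(1/2, -1/2, 0) and (2/sqrt(3))*T8 = diag(1/3, 1/3, -2/3)."""
    return (a * F(1, 2) + b * F(1, 3) + x,
            -a * F(1, 2) + b * F(1, 3) + x,
            -b * F(2, 3) + x)

# Minimal model with a = 1, b = 3/2: the quark triplet (u, d, D) of Eq. (3)
# gets charges (2/3, -1/3, -4/3) for x = -1/3, reproducing the exotic -4/3
# charge of the isosinglet quark D quoted in the text.
print(triplet_charges(F(1), F(3, 2), F(-1, 3)))

# Model B with a = 1, b = -1/2: the lepton triplet (nu, e, nu^c) of Eq. (4)
# gets charges (0, -1, 0) for x = -1/3.
print(triplet_charges(F(1), F(-1, 2), F(-1, 3)))
```

The same bookkeeping, with the generators sign-flipped, yields the charges of the antitriplet assignments.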
In this section, we briefly summarize this review, and then turn to a discussion of FCNC and rare B decays in these models. Different models can be distinguished by the embedding of the electric charge operator. In general, the charge operator is given by

Q = a T_3L + (2/√3) b T_8L + x I_3,   (1)

where we have used conventional normalization (T_i = λ_i/2 and Tr(λ_i λ_j) = 2δ_ij), I_3 is the 3×3 unit matrix, and a and b are arbitrary. The value of x can be absorbed into the hypercharge definition, and will not be relevant. The fact that weak isospin is contained within the SU(3)_L group implies that a = 1, so the models are distinguished by the value of b (see below). In the minimal model (b = 3/2), the representations are

L_i = (ν_i, e_i, e_i^c)^T   (2)

for the leptons, and

Q_i = (u, d, D)^T, (c, s, S)^T, (b, t, T)^T   (3)

for the left-handed quarks. The conjugates for these nine fields are all SU(3)_L singlets. D and S are new isosinglet quarks with charge −4/3 and T is an isosinglet quark with charge 5/3. Note how the third generation is treated very differently than the first two. This is necessary to cancel anomalies. In principle, either of the three generations could be chosen to be different; however, as will be seen shortly, the strong bounds on FCNC in the kaon sector make it more likely that the third generation is singled out. This is the original, minimal model, and will be referred to as Model A. A simple alternative to this model [12] is to change the lepton structure by replacing the e_i^c with a heavy lepton E_i^+ and adding e_i^c and E_i^- singlets. This will be referred to as Model A′. Although one can, of course, add a right-handed neutrino singlet to the above structure, the model of Montero, et al. [7,8] modifies the lepton sector, and has, with b = −1/2,

L_i = (ν_i, e_i, ν_i^c)^T   (4)

with the e_i^c being an SU(3)_L singlet. The quarks are given by

Q_i = (d, u, D)^T, (s, c, S)^T, (t, b, T)^T   (5)

The new weak isosinglet quarks now have the same charges as their standard model counterparts, and the bileptons are either neutral or singly charged. This model will be referred to as Model B.
Since additional exotic quarks must be introduced in these models, it is natural, in the spirit of grand unification, to suppose that additional charged leptons are present. In Model C, the leptons are taken to be

L_i = (ν_i, e_i, E_i)^T   (6)

and the quarks are

Q_i = (d, u, U)^T, (s, c, C)^T, (t, b, B)^T   (7)

with all other fields (including right-handed neutrinos, if necessary) being SU(3)_L singlets. This model has been explored in Ref. [13]. In all of the above models, the quark generations are treated differently. There are two other models [9,10] with identical quark generations to the previous two models, but in which the leptons are all treated very differently. These models have not been explored in detail, and since we are interested in FCNC in the quark sector, they will not be discussed further here. Finally, there are two models in which all generations, quarks and leptons, are treated equally. These models lose the appealing feature of explaining the number of generations (via anomaly cancellation), but do have the feature of following naturally from grand unified theories. In each of these models, there are 27 fields in each generation. In Model D, these fields fill out a 27 of E_6, and the model arises naturally from the E_6 GUT. This model has been analyzed in Ref. [14]. Model E has a "flipped" structure, arises from an SU(6) × U(1) unified gauge symmetry, and has been discussed in Ref. [15]. Nothing in the above discussion is new, and there has been some phenomenological work on all of these models. However, there has been very little done (especially in the three-generation models A, B and C) regarding FCNC B-decays, and the bounds from ∆m_B and ∆m_Bs need to be updated. We turn to these issues in the next section. It should be pointed out that the scalar sector of these models all contain at least three SU(3)_L triplets [16], and in some cases an additional Higgs sextet is needed to give leptons mass [17].
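The diagonal charges implied by the embedding in Eq. (1) can be checked numerically. The sketch below is illustrative only: a = 1 is fixed as in the text, and the x values are hypothetical choices (not taken from the paper) that reproduce the quark-triplet charge patterns quoted for the b = 3/2 (exotic) and b = 1/2 (non-exotic) cases.

```python
import numpy as np

# Charge embedding Q = a*T3 + (2/sqrt(3))*b*T8 + x*I3, with a = 1 (Eq. (1)).
# T3 and T8 are the diagonal SU(3) generators, T_i = lambda_i / 2.
T3 = 0.5 * np.diag([1.0, -1.0, 0.0])
T8 = (1.0 / (2.0 * np.sqrt(3.0))) * np.diag([1.0, 1.0, -2.0])
I3 = np.eye(3)

def triplet_charges(b, x):
    """Diagonal electric charges of a triplet for given b and U(1) charge x."""
    Q = T3 + (2.0 / np.sqrt(3.0)) * b * T8 + x * I3
    return np.diag(Q)

# b = 3/2 with the (hypothetical) choice x = -1/3 gives charges
# (2/3, -1/3, -4/3): an exotic third member, as for (u, d, D) in Model A.
exotic = triplet_charges(1.5, -1.0 / 3.0)

# b = 1/2 with x = 0 gives charges (2/3, -1/3, -1/3): no exotic
# charges, consistent with the b = 1/2 models.
standard = triplet_charges(0.5, 0.0)
```

The same function can be used to scan other (b, x) combinations; only half-integral b yields integrally charged gauge bosons, as noted in the text.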
These Higgs triplets may give additional contributions to FCNC processes. However, since these contributions will depend on large numbers of arbitrary parameters, we will ignore them; their inclusion would only strengthen the lower bounds on gauge boson masses (unless they interfere destructively and one fine-tunes).

3 FCNC and rare B decays

With different generations treated differently, it is not surprising that tree-level flavor-changing neutral currents will arise. A nice discussion of FCNC interactions in the minimal model, Model A, can be found in the works of Liu [19] and Gomez Dumm, et al. [20]. They show that

L_FCNC = (g cosθ_W / (2√3)) (1/√(1 − 4 sin²θ_W)) (−sinφ Z_1µ + cosφ Z_2µ) J^µ_FCNC   (8)

where φ is the mixing angle between the weak-eigenstate Z's and the mass eigenstates. Since electroweak precision fits force this angle to be very small [18], we will not include it (although we will discuss possible interference terms later). Thus, Z_2 is approximately Z′. Note that if sin²θ_W is greater than 1/4, this breaks down, as discussed above. The current is

J^µ_FCNC = 2 cos²θ_W q̄ γ^µ P_L q   (9)

where P_L is the left-handed projection operator. In terms of mass eigenstates, this gives

J^µ_FCNC = 2 cos²θ_W [ ū γ^µ P_L U_L† diag(0, 0, 1) U_L u + d̄ γ^µ P_L V_L† diag(0, 0, 1) V_L d ]   (10)

where U_L and V_L diagonalize the left-handed Q = 2/3 and Q = −1/3 quark mass matrices, respectively. The U_L and V_L matrices are not independent, since one knows that V_CKM = U_L† V_L, but the individual values are not known. FCNC processes will then depend on either the U_L or V_L matrix alone, and one will not, without further assumptions, know their values. The papers of Liu [19] and Gomez Dumm et al. [20] calculate the P − P̄ mass difference in this model.
For ∆m_K, for example, they find that

∆m_K = (2√2/9) G_F (cos⁴θ_W / (1 − 4 sin²θ_W)) |V*_31 V_32|² η_Z B_K f_K² m_K (M_Z² / m_Z′²)   (11)

Here, η_Z is a QCD correction factor, and B_K and f_K are the bag constant and kaon decay constant. Similar expressions can be obtained for other pseudoscalar systems. Since there is an uncertainty of roughly a factor of two in the Standard Model expression, we assume that the contribution for K − K̄ is less than the Standard Model value, and that the D − D̄ mixing is less than its experimental limit. (In previous works, similar assumptions were made for the B systems.) For B − B̄ mixing, there is very little uncertainty in the hadronic matrix elements, and the primary uncertainty comes from B_B and f_B, which give an uncertainty of approximately 30%; we assume the contribution is less than this uncertainty. For B_s − B̄_s mixing, we require that the contribution be less than 10 picoseconds (for the oscillation time), since that is roughly the current uncertainty. Using updated experimental values, we find the bounds in the first column of Table 1. One can use these results, as done by Liu [19], to bound the mixing angles. Alternatively, one can assume a Fritzsch-like structure [20] and write (with i ≥ j) V_ij = √(m_j/m_i) (similarly for U_ij), and then find bounds on m_Z′. Doing so gives a lower bound on m_Z′, in TeV units, shown also in the first column of Table 1. These bounds, especially for the B − B̄ system, are very severe, and are well in excess of the upper bound on the Z′ mass. The angles must thus be smaller than one's naive expectation, or the model is excluded. It is also shown by Liu [19] and Gomez Dumm [20] that if one chose the first or second generation fields to be picked out as being different, then the bound would be much, much stronger, closer to 1000 TeV. The success of the B-factories has led to stringent bounds on B → Kf⁺f⁻, B → f⁺f⁻ and B_s → f⁺f⁻. We now calculate these processes in this model.
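The Fritzsch-ansatz arithmetic above can be sketched numerically. The quark masses below are illustrative placeholders, not the inputs used in the paper, and the bound value B is set to the only table entry that survives extraction (1.6 × 10⁻³ for ∆m_K in Model A), used here purely as an illustration of the scaling: a bound |V*_31 V_32| (m_Z/m_Z′) < B becomes a lower bound on m_Z′.

```python
import math

# Hypothetical quark masses in GeV (illustrative values only).
m_d, m_s, m_b = 0.005, 0.10, 4.2
m_Z = 91.19  # GeV

def fritzsch(m_heavy, m_light):
    """Fritzsch-like ansatz |V_ij| = sqrt(m_j / m_i) for i >= j."""
    return math.sqrt(m_light / m_heavy)

V31 = fritzsch(m_b, m_d)   # b-d mixing element
V32 = fritzsch(m_b, m_s)   # b-s mixing element

# Bound of the Table 1 form |V*_31 V_32| (m_Z / m_Z') < B.
B = 1.6e-3
m_Zprime_min = V31 * V32 * m_Z / B   # lower bound on m_Z' in GeV
```

With these placeholder masses the bound lands at a few hundred GeV; the much stronger bounds quoted in the text come from the B and B_s systems, where the same scaling applies with larger mixing elements.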
For B → Kf⁺f⁻, only the vector part of the interaction will contribute, and thus the matrix element ⟨K| s̄ γ_µ b |B⟩ is needed. We use the matrix elements of Isgur, et al. [21], as discussed in Ref. [22], which give a value of 2 f₊ p^µ_K, where f₊ is given by (3√2/8)(m_b/m_q) exp((m_K − E_K)/m_K). Here, m_q is taken to be a constituent quark mass, or 300 MeV. Given this matrix element, the calculation is straightforward, and we find that the partial width is given, in GeV units, by

Γ = 1.7 × 10⁻¹⁵ V_32² (M_Z/M_Z′)⁴

Using the experimental bound and the Fritzsch ansatz, we find a bound of 1.2 TeV on the mass of the Z′, as seen in Table 1. This is substantially weaker than the bound from B_s − B̄_s mixing. For B_s → f⁺f⁻, only the axial-vector part of the interaction contributes. Note that a helicity suppression makes the branching ratio proportional to the square of the final-state fermion mass. The best experimental bounds are for muon final states (B_s → τ⁺τ⁻ would be very interesting if one could come within a factor of a few hundred of the muonic branching ratio). The standard axial-vector matrix element ⟨0| s̄ γ_µ γ_5 b |B_s⟩ = f_Bs p^µ is used, and we find that

Γ = G_F² M_Z⁴ f_B² V_32² m_B m_µ² / (36π M_Z′⁴)   (12)

Comparing with the experimental bound and using the Fritzsch ansatz gives a lower bound of 0.23 TeV on the Z′ mass. For B → f⁺f⁻, we find very similar numerical results. Again, this is substantially weaker than the bound from mixing. It is important to note that even if one abandoned the Fritzsch ansatz (as one must for the model to be phenomenologically acceptable), the bound from quark-antiquark mixing will always be stronger (unless V_32 is exceptionally small, less than 10⁻³, in which case the bound on m_Z′ is less than the direct search bound). In short, there can be no substantial contribution to these rare B-decays in this model (since a substantial contribution would lead to an overly large contribution to B − B̄ mixing), and this statement is independent of the mixing angles. It should also be noted that we have ignored contributions from Z-exchange and from flavor-changing neutral Higgs exchange. These could destructively interfere, weakening the bounds. However, this would require some fine-tuning, and since the Higgs sector has many free parameters, we do not consider this possibility. In Model A′, the only difference is in the coupling of the final-state leptons to the Z′. While the mass differences are unchanged, there are substantial changes in rare B decays. We find the bounds (see Table 1) on B → Kµ⁺µ⁻ to be 4.3 TeV, and the bound from B_s → µ⁺µ⁻ to be 1.0 TeV. Again, the bounds from the mass difference in the B − B̄ system are stronger. We now turn to the b = 1/2 models. The embedding of the charge operator now no longer forces sin²θ_W to be less than 1/4, and thus the upper bound on the scale of SU(3)_L breaking no longer applies. As a result, the factors of 1 − 4 sin²θ_W end up being replaced by 1 − (4/3) sin²θ_W. In Model B, the mass differences in the neutral K, D and B systems (but not the B_s) were calculated in Ref. [23], and the bounds from the rare kaon decay K⁺ → π⁺νν̄ were calculated in Ref. [24].

[Table 1: Bounds on the models described in the text from several flavor-changing neutral processes, with columns for Models A, A′, B and C and rows for ∆m_K, ∆m_D, the B and B_s mass differences, and the rare B decays. The upper number in each entry is the bound on |V*_3i V_3j| (m_Z/m_Z′), where i and j refer to the relevant quarks (with the V's replaced by U's for ∆m_D); for the rare B decays, the upper number is the bound on |V*_3i V_3j|^{1/2} (m_Z/m_Z′). The lower number is the lower bound on the Z′ mass, in TeV, assuming a Fritzsch structure for the V matrix. Of the numerical entries, only ∆m_K for Model A (1.6 × 10⁻³) survives extraction.]
We have reanalyzed these bounds, using updated constraints, and included the bounds from the B_s mass difference and the rare B and B_s decays discussed above. Again, if one assumes a Fritzsch-type structure for the U and V matrices, lower bounds on the Z′ mass are obtained (one can easily remove that assumption and present results in terms of, for example, the V_ij and quark masses). The calculation is the same as for Model A, with different couplings. We find the bounds listed in the third column of Table 1. Again, the bounds from the mass differences are much stronger than those from rare B-decays, and are weaker than for Model A (primarily due to the absence of a 1 − 4 sin²θ_W factor). In Model C, the only calculation of flavor-changing neutral current effects that we are aware of is the calculation of the mass difference in the neutral kaon system by Ozer, in Ref. [13]. The fourth column of Table 1 lists these bounds. The bounds from mass differences are substantially stronger than in Model B. Models D and E are very different. They are one-family models, and thus all generations are treated identically. Due to the existence of isosinglet quarks, there will be flavor-changing neutral currents. These models are explicitly explored in Refs. [14] and [15]. FCNC in models with isosinglets have been explored in great detail in a number of papers. The most recent is by Andre and Rosner [25]; the reader is referred to that work and references therein. In most of these works, it is assumed that there is only a single isosinglet quark (or, if there are more than one, that one is much lighter and thus dominates the physical effects), and thus the Q = −1/3 mass matrix is 4 × 4, and it is often assumed that the V_34 element is the largest. However, Models D and E contain three isosinglet quarks, and if the mass hierarchy of these quarks follows the standard mass hierarchy, the lightest of these will interact much more strongly with the down quark, i.e.
the biggest element will be V_14. An analysis of the phenomenology of this case would be interesting.

4 Conclusions

SU(3)_L × U(1) models fall into two categories, depending on the embedding of the charge operator into the SU(3)_L group. The choices of fermion representations further subdivide the models. These models all have tree-level FCNC mediated by gauge bosons. We have calculated the P − P̄ mass differences and several rare B decays in these models. In all cases, we find that the contribution from rare B decays is much smaller than those from the B − B̄ and B_s − B̄_s mass differences, and thus the models explicitly predict that there will be no substantial contribution to these rare B-decays (independent of mixing angles). Lower bounds on gauge boson masses are typically of the order of tens of TeV if one assumes a Fritzsch-like structure for the mixing angles. This is a serious problem for the original, minimal model, which has an upper bound of approximately 2-3 TeV on the gauge boson masses. Thus, these models can only survive if the mixing angles are much smaller than one's naive expectation. This would mean that the down-quark mixing matrix would be very nearly diagonal, and thus CKM mixing would have to arise from the Q = 2/3 sector. This severely constrains attempts to understand the origin of flavor in these models.

The embedding of weak isospin within the SU(3)_L group implies that a = 1, and so models are distinguished by the value of b. It should be noted that gauge bosons will have integral charge only for half-integral values of b, and that models with negative b can be transformed into models with positive b by replacing triplet fermion representations with antitriplets, and vice versa. The two choices for b that have been considered are b = 3/2 and b = 1/2. The former gives the original, minimal Pisano-Pleitez-Frampton model, with exotic isosinglet quark charges, while the latter does not lead to any exotic quark charges.
We now discuss each choice. Of the 9 gauge bosons of the electroweak group, 3 are neutral and there are three charged pairs: the usual W± and two others with charges ±(b + 1/2) and ±(b − 1/2). Thus the b = 3/2 model has doubly charged gauge bosons. When an extension of the standard model predicts new phenomena, one can often explain non-observation of the phenomena by increasing the mass scale of the new physics. However, that is not possible for the minimal SU(3)_L × U(1) model. The reason is that if one were to embed the standard model entirely into the SU(3)_L group, then the unification gives sin²θ_W = 1/4. The extra U(1) factor then forces one to have sin²θ_W ≤ 1/4. This is, of course, valid at low energy, but since sin²θ_W increases with scale, the scale of SU(3)_L breaking cannot be too high. In the original, minimal model, Model A, this scale was estimated to be approximately 800 GeV. It has been argued [11] that more precise definitions of "scale" allow this upper bound to be somewhat higher, as high as 2-3 TeV. Thus, the model is capable of being ruled out in the near future.

Acknowledgements

We thank Andrzej Buras for useful discussions. JR would like to thank Colciencias and DIB for financial support and the College of William and Mary for its hospitality. The work of MS was supported by the National Science Foundation grant PHY-023400.

References

[1] P. H. Frampton, Phys. Rev. Lett. 69, 2889 (1992).
[2] F. Pisano and V. Pleitez, Phys. Rev. D 46, 410 (1992).
[3] R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38, 1440 (1977).
[4] A. G. Dias, C. A. de S. Pires and P. S. Rodrigues da Silva, Phys. Rev. D 68, 115009 (2003).
[5] P. H. Frampton, Proc. of Workshop on Particle Theory and Phenomenology, Ames, Iowa (1995) [hep-ph/9507351].
[6] J. T. Liu and D. Ng, Phys. Rev. D 50, 548 (1994); J. Agrawal, P. H. Frampton and J. T. Liu, Int. J. Mod. Phys. A 11, 2263 (1996); F. Pisano, Mod. Phys. Lett. A 11, 2639 (1996); D. Gomez Dumm, Phys. Lett. B 411, 313 (1997); P. H. Frampton, Int. J. Mod. Phys. A 13, 2345 (1998) [hep-ph/9711281]; F. Pisano, J. A. Silva-Sobrinho and M. D. Tonasse, Phys. Rev. D 58, 057703 (1998); N. A. Ky, H. N. Long and D. V. Soa, Phys. Lett. B 486, 140 (2000); H. N. Long and T. Inami, Phys. Rev. D 61, 075002 (2000); Y. Kitabayashi and M. Yasue, Nucl. Phys. B 609, 61 (2001); J. C. Montero, V. Pleitez and M. C. Rodriguez, Phys. Rev. D 65, 035006 (2002); R. A. Diaz, R. Martinez, J. Mira and J-A. Rodriguez, Phys. Lett. B 552, 287 (2003); A. Gusso, P. S. Rodrigues da Silva and C. A. de S. Pires, J. Phys. G 30, 37 (2004); M. A. Perez, G. Tavares-Velasco and J. J. Toscano, Phys. Rev. D 69, 115004 (2004).
[7] J. C. Montero, F. Pisano and V. Pleitez, Phys. Rev. D 47, 2918 (1993).
[8] R. Foot, H. N. Long and T. A. Tran, Phys. Rev. D 50, R34 (1994); H. N. Long, Phys. Rev. D 53, 437 (1996); V. Pleitez, Phys. Rev. D 53, 514 (1996).
[9] W. A. Ponce, J. B. Florez and L. A. Sanchez, Int. J. Mod. Phys. A 17, 643 (2002) [hep-ph/0103100].
[10] W. A. Ponce, Y. Giraldo and L. A. Sanchez, Proceedings of the VIII Mexican Workshop of Particles and Fields, Zacatecas, Mexico, 2001, pp. 341-346 [hep-ph/0201133]; W. A. Ponce, Y. Giraldo and L. A. Sanchez, Phys. Rev. D 67, 075001 (2003).
[11] D. Ng, Phys. Rev. D 49, 4805 (1994); A. G. Dias, R. Martinez and V. Pleitez, hep-ph/0407141.
[12] M. B. Tully and G. C. Joshi, Phys. Rev. D 64, 011301 (2001).
[13] M. Ozer, Phys. Rev. D 54, 1143 (1996); T. Kitabayashi, Phys. Rev. D 64, 057301 (2001).
[14] L. A. Sanchez, W. A. Ponce and R. Martinez, Phys. Rev. D 64, 075013 (2001) [hep-ph/0103244].
[15] R. Martinez, W. A. Ponce and L. A. Sanchez, Phys. Rev. D 65, 055013 (2002) [hep-ph/0110246].
[16] It is argued in W. A. Ponce, Y. Giraldo and L. A. Sanchez, Phys. Rev. D 67, 075001 (2003), that two Higgs triplets are sufficient. In this case, the Q = −1/3 quarks have zero mass at tree level. It is claimed that a one-loop radiative mass will arise, but one can see (especially by going to a tree-level mass basis) that the masses will only be generated at two loops. If the b-quark mass is generated only at two-loop order, then the scale of SU(3)_L breaking would be over 500 TeV. It is not clear that such a model is viable; it certainly would suffer from a hierarchy problem.
[17] R. A. Diaz, R. Martinez and F. Ochoa, Phys. Rev. D 69, 095009 (2004).
[18] J. T. Liu and D. Ng, Z. Phys. C 62, 693 (1994).
[19] J. T. Liu, Phys. Rev. D 50, 542 (1994).
[20] D. Gomez Dumm, F. Pisano and V. Pleitez, Mod. Phys. Lett. A 9, 1609 (1994).
[21] N. Isgur et al., Phys. Rev. D 39, 799 (1989).
[22] D. Black, T. Han, H.-J. He and M. Sher, Phys. Rev. D 66, 053002 (2002).
[23] H. N. Long and V. T. Van, J. Phys. G 25, 2319 (1999).
[24] N. A. Ky, H. N. Long, L. P. Trung, D. V. Soa and V. T. Van, talk given at the 4th Rencontres du Vietnam, Hanoi, Vietnam, 2000 [hep-ph/0009187]; H. N. Long, L. P. Trung and V. T. Van, J. Exp. Theor. Phys. 92, 548 (2001) [hep-ph/0104007].
[25] T. C. Andre and J. L. Rosner, Phys. Rev. D 69, 035009 (2004); J. A. Aguilar-Saavedra, Phys. Rev. D 67, 035003 (2003).
Simulation of photodetection using finite-difference time-domain method with application to near-field subwavelength imaging based on nanoscale semiconductor photodetector array

Ki Young Kim, Boyang Liu, Yingyan Huang, and Seng-Tiong Ho
Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60208, USA

DOI: 10.1007/s11082-008-9190-0
arXiv: 0902.3302 (https://export.arxiv.org/pdf/0902.3302v1.pdf)

Abstract: Simulation of detecting photoelectrons using the multi-level multi-electron (MLME) finite-difference time-domain (FDTD) method, with an application to near-field subwavelength imaging based on a semiconductor nanophotodetector (NPD) array, is reported. The photocurrents from the photodiode pixels are obtained to explore the resolution of this novel NPD device for subwavelength imaging. One limiting factor of the NPD device is the optical power coupling between adjacent detector pixels. We investigate such power coupling in the presence of absorbing media, as well as the spatial distributions of the electric field and photoelectron density, using the MLME FDTD simulation. Our results show that the detection resolution is about one tenth of the operating wavelength, which is comparable to that of a near-field scanning optical microscope based on metal-clad tapered fiber.
Keywords: FDTD simulation; nanoscale photodetector (NPD) array; photocurrent; subwavelength resolution

1 Introduction

Semiconductor photodetector array has various applications in industry and academics, such as product inspection, medical imaging, security screening, and analytical characterization and imaging.
Recently, semiconductor photodetector arrays with nanometer-scale pixels have been drawing more and more attention due to their high detection resolution, fast response and easy integratability (Liu et al. 2007 and Kolb et al. 1995). To model such a semiconductor photodetector array, with nanometer-scale size and complex optical geometry, a more sophisticated modeling technique is required. This model should be able to account for the electromagnetic wave propagation within the detector, its interaction with the absorbing semiconductor medium, and the generation of photocurrent. In this paper, we report a new method to simulate photodetection in semiconductor material using the multi-level multi-electron finite-difference time-domain algorithm (Huang 2002; Huang and Ho 2006). In the MLME-FDTD method, where the Pauli exclusion principle and Fermi-Dirac thermalization are incorporated into the rate equation for the semiconductor material system, multiple energy levels are used to describe the essential characteristics of the semiconductor band structures, which allows us to model the full semiconductor carrier dynamics with reasonable accuracy for typical applications. As a result, photocurrents generated by active semiconductor materials, one of the figures of merit for photodetectors, can be calculated and evaluated. Using the MLME model, we investigate both the light propagation and the physical mechanisms of the photodetection via the semiconductor materials.

2 Simulation of photodetection using FDTD method

Conceptually, a photodetector can be modeled as a medium with two energy levels in which the photocurrent can be calculated from the rate of excitation of ground-state electrons from the ground level (Liu et al. 2007). In our simulation, only the calculation of the photocurrent generated is considered, thus the electrodes have been removed in the simulation schematic.
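The photodetection picture just described (field propagation into an absorbing region, excitation of ground-level electrons, conversion to a photocurrent) can be illustrated with a deliberately simplified sketch. This is not the MLME-FDTD model: it is a plain 1-D Yee scheme in which a conductive segment stands in for the absorbing semiconductor, and the Joule-absorbed energy is converted to an equivalent photoelectron count at the 1550 nm photon energy. All grid and material numbers are hypothetical.

```python
import numpy as np

# Toy 1-D FDTD photodetection sketch (illustrative only; not the MLME model).
# A pulse propagates on a Yee grid; a lossy dielectric segment plays the role
# of the absorbing detector pixel, and the absorbed energy is turned into an
# equivalent number of photoexcited electrons, giving I_ph = q * N2 / t_sim.
nz, nt = 400, 1200
dz = 5e-9                        # 5 nm cells, matching the paper's grid
c0 = 3.0e8
dt = 0.5 * dz / c0               # Courant-stable time step
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

eps = np.ones(nz)                # relative permittivity
sigma = np.zeros(nz)             # conductivity (stand-in for absorption)
det = slice(250, 300)            # "detector pixel" region
eps[det] = 3.4 ** 2              # semiconductor index 3.4
sigma[det] = 2.0e4

Ey = np.zeros(nz)
Hx = np.zeros(nz)
absorbed = 0.0                   # absorbed energy per unit area (J/m^2)

for n in range(nt):
    Hx[:-1] += dt / (mu0 * dz) * (Ey[1:] - Ey[:-1])
    loss = sigma * dt / (2.0 * eps0 * eps)
    Ey[1:] = ((1.0 - loss[1:]) * Ey[1:]
              + dt / (eps0 * eps[1:] * dz) * (Hx[1:] - Hx[:-1])) / (1.0 + loss[1:])
    Ey[50] += np.exp(-((n - 150) / 40.0) ** 2)        # soft Gaussian source
    absorbed += np.sum(sigma[det] * Ey[det] ** 2) * dz * dt

# Convert absorbed energy into photoelectrons at the 1550 nm photon energy,
# then into a photocurrent, in the spirit of the formula of the next section.
h, q = 6.626e-34, 1.602e-19
E_photon = h * c0 / 1550e-9
N2 = absorbed / E_photon         # photoelectrons per unit transverse area
t_sim = nt * dt
I_ph = q * N2 / t_sim            # photocurrent per unit area (A/m^2)
```

In the full 2-D MLME simulation, the conductive loss is replaced by the multi-level electron dynamics, and the level-2 population is accumulated per detector pixel rather than inferred from Joule heating.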
In the simulation structure, the length of the NPD pixels is set to be 3 μm to investigate the optical power coupling between pixels, although in practical fabrication the length of NPD pixels is only a few hundred nanometers. The semiconductor fingers (pixels) play an important role in detecting the incident field, which is converted into photocurrent via the mechanism shown in Fig. 1. The photocurrent generated in each NPD pixel can be quantitatively calculated via the following formula, which is derived directly from the definition of current and the mechanism shown in Fig. 1:

I_ph = (q / t_sim) Σ_pixel N = (q / t_sim) Σ_pixel (N_2 · N_density · A · H),   (1)

3 Near-field imaging by NPD array

In Fig. 2, we show a typical NPD array geometry, where the center-to-center distance between the NPD pixels is w+s with an inter-pixel gap of s. For an exemplary simulation to be shown below, we will show the case of s = 60 nm and w = 90 nm. The operating incident wavelength is 1550 nm. The spatial resolution of the NPD is defined by the full-width half-maximum (FWHM) of the spatial distribution of the photocurrent response when it is illuminated by a near-point source. To generate the near-point source, we use a metal sheet with a small aperture having a small width (sw). The source is placed at a certain distance sd away from the front side of the center pixel of the NPD array. The optical absorption is calibrated to be 0.5/μm for a typical III-V semiconductor material, which corresponds to (1). In order to investigate the optical power coupling between NPD pixels, the average optical power in each pixel is calculated. In this initial 2D simulation, we assume a detector slab that is infinite in the direction perpendicular to the paper and an incident source with electric field polarization pointing along this infinite direction (we call this TM field). Fig.
3(a) shows the normalized field pattern, which indicates electric field quasi-guided by the center pixel (pixel 0) with subsequent coupling to the adjacent pixels (pixel 1, 2) and then to the next adjacent pixels (pixel 3, 4). Fig. 3(b) shows the corresponding photoelectron density from the electric field pattern of Fig. 3(a) with an arbitrary normalized linear scale. Fig. 4 shows the photocurrents in each pixel from the spatial distribution of photoelectron density profile of Fig. 3(b) using eq. (1). The estimated spatial resolution for this particular NPD array geometry is about 150nm, which corresponds to a resolution of λ/10. Conclusion and future work We investigated a new simulation method for photodetection in semiconductor medium with its application to a subwavelength resolvable NPD array, where a MLME-FDTD model was employed for the simulation. The FDTD simulations show us the optical power coupling between the NPD pixels, the pixel number pixel number corresponds to a resolution of λ/10. spatial distributions of the electric field, and the photoelectron density of the proposed photodetector structure, from which the photocurrents can be calculated. This novel type of photodetector shows a high optical imaging resolution that is substantially below the diffraction limit, which can be potentially applied to the observation of nanoscale moving objects or living cells. Prototypes of such novel NPD devices have been successfully developed and characterized by us (Liu et al. 2007). Further parameter study such as width of the metal slit, polarization of incident light, distance between slit and the NPD array, etc will be conducted and reported soon. ( level 1 ) to the upper level (level 2 ), which are subsequently returned back to the ground level through an external electric circuit as shown in Fig. study. λ and a λ are the incident wavelength and resonant wavelength of the semiconductor material, respectively. Fig. 2 . 2Schematic of the NPD array. 
Fig. 2 (caption, continued). Light to be detected comes from the subwavelength metal slit. The refractive indexes of the semiconductor, the filling dielectric material (benzocyclobutene, BCB), and air at 1550 nm are assumed to be 3.4, 1.5, and 1.0, respectively. NPD pixels are labeled 0, 1, 2, ... in the two-dimensional schematic of the NPD array for the FDTD simulation, which shows its dimensions and operating parameters. The inset shows a practical photocurrent pickup mechanism: a transparent conductor (TC) is used for the bottom electrodes, which run in the crossing direction to the top electrodes, forming a matrix for pixel-array addressing; the active semiconductor layer is sandwiched between the top and bottom electrodes, and detector pixels are separated by low-refractive-index dielectric materials.

In Eq. (1), t_sim is the total time used for the simulation, N is the total number of electrons, N_2 is the normalized number of electrons on level 2 in a FDTD pixel, N_density is the number of electrons per unit volume, A is the area of the FDTD pixel, and H is the height of the NPD pixels.

Fig. 3 (caption). Electric field pattern (left) and corresponding normalized photoelectron density (right). Ticks on the axes represent the number of FDTD pixels (dx and dy), which are equal to 5 nm in our simulation. The red color indicates higher amplitude on an arbitrary linear scale.

Fig. 4 (caption). Photocurrents in each NPD pixel. The effective FWHM spatial resolution is about 150 nm.

Acknowledgements

References

Huang Y.: Simulation of Semiconductor Materials using FDTD Method. M.S. Thesis, Northwestern University (2002)

Huang, Y. and S.-T.
Ho.: Computational model of solid-state, molecular, or atomic media for FDTD simulation based on a multi-level multi-electron system governed by Pauli exclusion and Fermi-Dirac thermalization with application to semiconductor photonics. Opt. Express 14, 3569-3587 (2006)

Liu, B., Y. Huang, K. Y. Kim, and S.-T. Ho.: Near-field imager based on nanophotodetector array. Frontiers in Optics 2007 and Laser Science XXIII, San Jose, California, 16-20 September 2007

Kolb G., C. Obermuller, K. Karraï, G. Abstreiter, G. Böhm, G. Tränkle, and G. Weimann: Photodetector with subwavelength spatial resolution. Ultramicroscopy 57, 208-211 (1995)
[]
[ "Cures for the Expansion Shock and the Shock Instability of the Roe Scheme", "Cures for the Expansion Shock and the Shock Instability of the Roe Scheme" ]
[ "Xue-Song Li \nDepartment of Thermal Engineering\nKey Laboratory for Thermal Science and Power Engineering of Ministry of Education\nTsinghua University\n100084BeijingPR China\n", "Xiao-Dong Ren \nDepartment of Thermal Engineering\nKey Laboratory for Thermal Science and Power Engineering of Ministry of Education\nTsinghua University\n100084BeijingPR China\n", "Chun-Wei Gu \nDepartment of Thermal Engineering\nKey Laboratory for Thermal Science and Power Engineering of Ministry of Education\nTsinghua University\n100084BeijingPR China\n" ]
[ "Department of Thermal Engineering\nKey Laboratory for Thermal Science and Power Engineering of Ministry of Education\nTsinghua University\n100084BeijingPR China", "Department of Thermal Engineering\nKey Laboratory for Thermal Science and Power Engineering of Ministry of Education\nTsinghua University\n100084BeijingPR China", "Department of Thermal Engineering\nKey Laboratory for Thermal Science and Power Engineering of Ministry of Education\nTsinghua University\n100084BeijingPR China" ]
[]
A common defect of the Roe scheme is the production of non-physical expansion shocks and shock instability. An improved method with several advantages was presented to suppress the shock instability. However, this method cannot prevent the expansion shock and is incompatible with the traditional curing method for the expansion shock. Therefore, the traditional curing mechanism is analyzed. The discussion explains the effectiveness of the traditional curing method and identifies several defects, one of which leads to the incompatibility between curing the shock instability and curing the expansion shock. Consequently, a new improved Roe scheme is proposed in this study. This scheme is concise, easy to implement, computationally inexpensive, and robust. More importantly, the scheme can simultaneously cure the shock instability and the expansion shock without additional costs.
null
[ "https://export.arxiv.org/pdf/1607.07047v1.pdf" ]
119,293,168
1607.07047
9b37cd95f655b80c92ba6a70864c01bb4353e961
Cures for the Expansion Shock and the Shock Instability of the Roe Scheme

Xue-Song Li, Xiao-Dong Ren, Chun-Wei Gu
Department of Thermal Engineering, Key Laboratory for Thermal Science and Power Engineering of Ministry of Education, Tsinghua University, 100084 Beijing, PR China

Key words: Roe scheme; Expansion shock; Shock instability

A common defect of the Roe scheme is the production of non-physical expansion shocks and shock instability. An improved method with several advantages was presented to suppress the shock instability. However, this method cannot prevent the expansion shock and is incompatible with the traditional curing method for the expansion shock. Therefore, the traditional curing mechanism is analyzed. The discussion explains the effectiveness of the traditional curing method and identifies several defects, one of which leads to the incompatibility between curing the shock instability and curing the expansion shock. Consequently, a new improved Roe scheme is proposed in this study. This scheme is concise, easy to implement, computationally inexpensive, and robust. More importantly, the scheme can simultaneously cure the shock instability and the expansion shock without additional costs.

Introduction

The Roe scheme [1] is one of the most famous and important shock-capturing schemes because of its high accuracy. This scheme has undergone considerable development, such as its extension to incompressible flows [2][3][4], and has been extensively used for flow computation, for example in Euler flows [5], LES [6][7], and cavitation [8].
However, the Roe scheme also suffers from a few shortcomings, such as shock instability and the expansion shock [9]. Shock instability is a well-known defect of supersonic flows with different manifestations, such as the carbuncle, the kinked Mach stem, and odd-even decoupling. Several methods have been proposed to cure shock instability; cures were achieved by adding an entropy fix [10], combining a dissipative scheme [9], increasing the basic upwind dissipation [11][12], and considering multi-dimensional characteristics [13][14]. The expansion shock is another defect of the Roe scheme: an unphysical solution that violates the entropy condition. Moreover, this defect often yields unacceptable values, such as negative pressure and density, and leads to divergence of the computation for highly energetic flows. The entropy fix is often adopted to overcome this drawback, but this approach has limited effect while introducing large numerical dissipation and unfavorable empirical parameters. Another common curing method introduces a slight modification by redefining the numerical signal velocities, with improved results [12][15][16]. In Ref. [17], the momentum interpolation mechanism in the Roe scheme [18][19][20] is considered the most important cause of shock instability. Thus, a new improved Roe scheme was proposed [17] by removing the momentum interpolation mechanism for the non-linear flow. This improvement cures the shock instability while removing problem-dependent empirical parameters and decreasing the numerical dissipation. However, in this paper several numerical results show an unexpected defect, wherein the expansion shock becomes serious and the traditional curing method is invalid.

To broaden the range of applications of the improved Roe scheme, the current study aims to cure the expansion shock by identifying the reason for the deterioration and further elucidating the mechanism that produces the expansion shock, and thus to propose an ideal scheme. The rest of this paper is organized as follows. Chapter 2 provides the governing equations and the improved Roe scheme for curing shock instability [17]. Chapter 3 analyzes the mechanism of the traditional method of curing the expansion shock. Chapter 4 provides a new approach to cure the expansion shock while maintaining all the advantages of the improved Roe scheme. Chapter 5 concludes this paper.

Governing Equations and the Roe Scheme

Governing Equations

The governing three-dimensional Navier-Stokes equations can be written as follows:

∂Q/∂t + ∂F/∂x + ∂G/∂y + ∂H/∂z = 0,  (1)

where Q is the vector of conservative variables and F, G, H are the flux vectors in the x, y, and z directions, respectively.

Roe Scheme and Improvement

The classical Roe scheme can be expressed in the following general form, as the sum of a central term and a numerical dissipation term:

F = F_c + F_d,  (2)

where F_c is the central term and F_d is the numerical dissipation term. For a cell face of the finite volume method,

F_c,1/2 = (F_L + F_R)/2.  (3)

According to Ref. [21], a scale-uniform framework for shock-capturing schemes is proposed in [22]. This framework is simple and easy to analyze and improve, with low computational cost. For the classical Roe scheme, the dissipation term can be written as

F_d = -(1/2) [ |λ1| ΔQ + (δp_u + δp_p)(0, n_x, n_y, n_z, U)^T + ρ(δU_u + δU_p)(1, u, v, w, H)^T ],  (6)

where U is the normal velocity on the cell face and Δ(·) denotes the jump from the left to the right state; the first term on the right side is the basic upwind dissipation, δp_u + δp_p is the pressure-difference-driven modification for the cell face pressure, and δU_u + δU_p is the velocity-difference-driven modification for the cell face velocity, with

δp_u = [(|λ4| + |λ5|)/2 - |λ1|] ρ ΔU,  (7)
δp_p = [(|λ4| - |λ5|)/2] Δp/c,  (8)
δU_u = [(|λ4| - |λ5|)/2] ΔU/c,  (9)
δU_p = [(|λ4| + |λ5|)/2 - |λ1|] Δp/(ρc²),  (10)

where c is the sound speed and the eigenvalues of the system are defined as follows:

λ1 = λ2 = λ3 = U,  (11)
λ4 = U + c,  (12)
λ5 = U - c.  (13)

Based on Eqs. (11)-(13), Eqs. (7)-(10) can also be further simplified as:

δp_u = max(0, c - |U|) ρ ΔU,  (14)
δp_p = sign(U) min(|U|, c) Δp/c,  (15)
δU_u = sign(U) min(|U|, c) ΔU/c,  (16)
δU_p = max(0, c - |U|) Δp/(ρc²).  (17)

Ref. [17] suggests that the shock instability is mainly attributed to the term δU_p and can be cured by multiplying Eq. (17) by the two functions s1 and s2:

δU_p = s1 s2 max(0, c - |U|) Δp/(ρc²),  (18)
s1 = min( M √(4 + (1 - M²)²)/(1 + M²), 1 ),  (19)

where M is the Mach number. The function s2, which is a shock detector and can be obtained in Ref. [17], is not presented in this paper because it is relatively complicated and probably unnecessary for general cases. This improvement is simple and effective.

Two Classical Numerical Tests

Two classical numerical examples are available to test the expansion shock produced by the schemes. One is a specific one-dimensional shock tube, and the other is the supersonic corner problem. The initial condition of the shock tube is given as ρ_L = 3, u_L = 0.9, p_L = 3, ρ_R = 1, u_R = 0.9, and p_R = 1, with the discontinuity initially at the x-axis position of 0.3. The supersonic corner problem considers a moving supersonic shock around a 90° corner. The initial condition is given as ρ_L = 7.04108, u_L = 4.07794, v_L = 0, p_L = 30.05942, ρ_R = 1.4, u_R = v_R = 0, and p_R = 1, with the shock initially at the x-axis position of 0.05.
In this study, the mesh consists of 200 grid cells for the shock tube and 400×400 for the supersonic corner. For time discretization, the four-stage Runge-Kutta scheme is adopted. For space discretization, first-order accuracy is adopted (unless otherwise specified) in order to discuss the schemes themselves.

Analysis of the Traditional Method Curing the Expansion Shock

Traditional Curing Method

To avoid the expansion shock, the traditional curing method redefines the physical signal velocities (i.e., the non-linear eigenvalues λ4 and λ5). For example, Ref. [15] proposed:

λ4 = min(U + c, U_L + c_L),  (20)
λ5 = max(U - c, U_R - c_R).  (21)

To precisely obtain a contact discontinuity, Ref. [12] suggested that only U in λ4 and λ5 should be modified:

λ4 = min(U + c, U_L + c),  (22)
λ5 = max(U - c, U_R - c).  (23)

Eqs. (22)-(23) retain the capability of Eqs. (20)-(21) to suppress the expansion shock while having the advantage of capturing the contact discontinuity; therefore, only Eqs. (22)-(23) are discussed in this study.

Performances of the Schemes

Figs. 1 and 2 show the results of the classical Roe scheme, as described by Eqs. (6) and (14)-(17), and of the traditional curing method, as described by Eqs. (6)-(11) and (22)-(23). For the shock tube, the classical Roe scheme evidently produces an expansion shock at the x = 0.3 position. The traditional curing method demonstrates substantially improved performance, although a slight gap still exists. For the supersonic corner, a series of expansion waves exists around the corner; thus, the numerical computation could produce an expansion shock. The results of the classical Roe scheme are shown in Fig. 2(a): no evident expansion shock is observed, but the shock instability is expectedly strong. The traditional curing method produces similar results (see Fig. 2(b)).

By employing the improved Roe scheme of Eqs. (6), (14)-(16), and (18) [17], the results for the one-dimensional shock tube are similar to those of the classical Roe scheme (see Fig. 3(a)), because the improvement of Eq. (18) only affects multi-dimensional computation, as analyzed in Ref. [17]. For the two-dimensional computation of the supersonic corner, the results become significantly different from those of the classical Roe scheme (see Fig. 3(b)): the shock instability becomes substantially weak and is nearly cured. However, a strong expansion shock occurs. In the iterative calculation process, the density occasionally becomes negative, and the following limitation is necessary to prevent computational divergence:

ρ_iter = max(ρ_cal, ε),  (24)

where ε is a small positive value. The traditional curing method for the expansion shock, Eqs. (22)-(23), can also be integrated into the improved Roe scheme. For the shock tube, this combination can reproduce the results of Fig. 1(b). However, this approach is invalid for the supersonic corner and even substantially increases the expansion shock: the computation diverges because of negative density even when Eq. (24) is used. This unexpected problem seems confusing and may hinder the extensive application of the improved Roe scheme because of concerns regarding computational robustness. Therefore, in the following sections the mechanism of preventing the expansion shock is further analyzed, and a new method is constructed that satisfies the stringent requirement of simultaneously curing the expansion shock and the shock instability without additional costs.

Analysis of the Schemes

To develop the new method, the mechanism of the traditional curing method is first analyzed further. Eqs. (22)-(23) can be decomposed into five conditions according to the signs of U ± c and the positions of U_L and U_R relative to them (Eqs. (25)-(34)); in every case the redefined eigenvalues follow from the identities min(U + c, U_L + c) = U + c + min(0, U_L - U) and max(U - c, U_R - c) = U - c + max(0, U_R - U). By considering

min(0, U_L - U) ≤ 0 ≤ max(0, U_R - U),  (35)

Eqs. (25)-(34) can be summarized as follows:

λ4 = min(U + c, U_L + c) = U + c + 2b_L,  (36)
λ5 = max(U - c, U_R - c) = U - c + 2b_R,  (37)

where b_L and b_R are the case-dependent half-increments of Eqs. (38)-(39); for a subsonic face they reduce to b_L = min(0, U_L - U)/2 and b_R = max(0, U_R - U)/2. Therefore, Eqs. (7)-(10) become:

δp_u = [max(0, c - |U|) + b_L - b_R] ρ ΔU,  (40)
δp_p = [sign(U) min(|U|, c) + b_L + b_R] Δp/c,  (41)
δU_u = [sign(U) min(|U|, c) + b_L + b_R] ΔU/c,  (42)
δU_p = [max(0, c - |U|) + b_L - b_R] Δp/(ρc²).  (43)

For condition (1), U ≥ c, the increment combinations do not vanish in general (Eqs. (44)-(47)), although for supersonic flows all increment terms should be zero because of the upwind characteristics, so these results are unsuitable. For the subsonic expansion flows of condition (2) the increments take different values (Eqs. (48)-(49)), and for condition (3) yet others (Eqs. (50)-(51)). Therefore, the increment terms are unsatisfactory because they are not equal to zero and are not smooth between Conditions (1) and (2). The other two conditions, with U ≤ -c, are not discussed for simplicity because they lead to the same conclusions.

Further Analysis of the Curing Expansion Shock Mechanism

The preceding discussion reveals a few unsatisfactory features of the traditional curing method of Eqs. (22)-(23). Moreover, it provides clues regarding the mechanism of expansion shock suppression. Two inspirations are obtained as follows. (1) An increment factor is designed as

s = [sign(U + c) max(0, U_L - U) + sign(U - c) max(0, U - U_R)]/4,  (52)

where U_L - U is as in Eq. (38), and the cell-face velocity is correspondingly redefined as

Û = U - [sign(U + c) max(0, U_L - U) + sign(U - c) max(0, U - U_R)]/4.  (62)

The present scheme is significantly concise and easy to implement; the computational cost increases only negligibly as well. Compared with the scheme of Eqs. (14)-(16) and (18), only U is redefined, as Û by Eq. (62).

Conclusions

The performance of several Roe-type schemes is discussed in terms of the expansion shock, and the mechanism of curing the expansion shock is analyzed based on the traditional method.
Several unfavorable features of the traditional curing method are discovered, and the possible curing mechanism is not completely utilized. Therefore, an improved method is proposed to overcome these problems. The present scheme is 19 substantially concise, easy to implement, and robust with a low computational cost. This scheme is particularly well compatible with the improvement to cure the shock instability. Therefore, the present scheme is simultaneously free from the problems of shock instability and expansion shock without additional expense. the normal velocity on the cell face. difference-driven modification for the cell face velocity. and 1 Rp 1 at the x-axis position of 0.3. The supersonic corner problem considers a moving supersonic shock around a 90° corner. (22)-(23) can retain the capability of Eqs. (20)-(21) to suppress the expansion shock while having the advantage of obtaining the contact discontinuity. Therefore, only Eqs. (22)-(23) are discussed in this study. -(17); and the traditional curing method, as described by Eqs. (6)-(11) and (22)-(23).For the shock tube, the classical Roe scheme evidently produces an expansion shock at the x = 0.3 position. The traditional curing method demonstrates substantially improved performance. However, a sight gap also exists with the traditional method. ) Classical Roe scheme (b) Traditional curing method Fig. 2 Results of the supersonic corner test at t = 0. Fig. 3 3Results by the improved Roe scheme as described by Eqs. (6), (14)-(16), and (18) By employing the improved Roe scheme in Eqs. (6), (14)-(16) and (18) [17], the results of the one-dimensional shock tube are similar to those with the classical Roe scheme (see Fig. 3(a)) because the improvement of Eq. (18) only affects multi-dimensional computation as analyzed in Ref. [17]. For the two-dimensional computation of the supersonic corner, results become significantly different from those of the classical Roe scheme (see Eqs. (22)-(23). 
but these results are unsuitable. For supersonic flows, all increment terms should be zero because of the upwind characteristics. Fig. 5 Fig. 6 56Results of the shock tube test with the present scheme Results of the supersonic corner test with the present scheme For the supersonic corner, a series expansion waves exist around the corner. Thus, the numerical computation could produce an expansion shock. The results of the classical Roe scheme are shown in1.4 1.6 1.8 2 2.2 2.4 2.6 2.8 3 Expansion Shock X-Axis Density 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 2.2 2.4 2.6 2.8 3 (a) Classical Roe scheme (b) Traditional curing method Fig. 1 Results of the shock tube test at t = 0.2 s ( 2 ) 2Uc  andL Uc  and ( R Uc  or R Uc  ): Thus, the value of U is decreased for subsonic expansion flows but still within a reasonable range as given in Eq. (44). Therefore, Figs. 5 and 6 show the numerical results of the present scheme. Higher-order reconstruction methods [23][24][25] are generally adopted for practical problems; thus, MUSCL reconstruction is also adopted to test the higher-order performance of the improved Eq. (62) or (63). The computational processes are robust and all results are satisfactory, particularly for the supersonic corner test, where the expansion shock and the shock instability are simultaneously cured. No adverse side effects were reported for the improvement., only U is redefined as U  by Eq. (62), which can also be expressed as follows:   min , and 0 otherwise L R R L U U U c U U U         . (63) p p  and u U  decrease and u p  and p U  increase synchronously, which provide sufficient power to cure the expansion shock even p U  is decreased by the functions 1 s and 2 s in Eq. (19). X-Axis Density 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 2.2 2.4 2.6 2.8 3 X-Axis Den sit y 0 0 . 2 0 . 4 0 . 6 0 . 8 1 1 . 2 1 . 4 1 . 6 1 . 8 2 2 . 2 2 . 4 2 . 6 2 . 
Acknowledgments

This work is supported by Project 51276092 of the National Natural Science Foundation of China.

The functions s1 and s2 act particularly in multi-dimensional calculations, when M ≈ 1 but U ≈ 0; thereafter, the problem of the expansion shock deteriorates (see Fig. 3(b)).

Simultaneous Improvement of Curing Expansion Shock and Shock Instability

Although Eq. (18) worsens the expansion shock, this condition is reasonable and necessary to suppress the shock instability [17]. The traditional curing method of Eqs. (22)-(23) is, as discussed, incompatible with this improvement.

References

P.L. Roe, Approximate Riemann Solvers: Parameter Vectors and Difference Schemes, Journal of Computational Physics 43 (1981) 357-372.
H. Guillard, C. Viozat, On the Behaviour of Upwind Schemes in the Low Mach Number Limit, Computers and Fluids 28 (1999) 63-86.
D.G. Huang, Unified Computation of Flow with Compressible and Incompressible Fluid Based on Roe's Scheme, Applied Mathematics and Mechanics 27 (2006) 758-763.
X.S. Li, C.W. Gu, Mechanism of Roe-type Schemes for All-Speed Flows and Its Application, Computers and Fluids 86 (2013) 56-70.
K. Kitamura, E. Shima, K. Fujimoto, Z.J. Wang, Performance of Low-Dissipation Euler Fluxes and Preconditioned LU-SGS at Low Speeds, Communications in Computational Physics 10 (2011) 90-119.
E. Garnier, M. Mossi, P. Sagaut, P. Comte, and M. Deville, On the Use of Shock-Capturing Schemes for Large-Eddy Simulation, Journal of Computational Physics 153 (1999) 273-311.
X.S. Li, X.L. Li, All-speed Roe Scheme for the Large Eddy Simulation of Homogeneous Decaying Turbulence, International Journal of Computational Fluid Dynamics 30 (2016) 69-78.
D.G. Huang, Preconditioned Dual-Time Procedures and its Application to Simulating the Flow with Cavitations, Journal of Computational Physics 223 (2007) 685-689.
J.J. Quirk, A Contribution to the Great Riemann Solver Debate, International Journal for Numerical Methods in Fluids 18 (1994) 555-574.
M.J. Kermani, E.G. Plett, Modified Entropy Correction Formula for the Roe Scheme, AIAA Paper 2001-0083 (2001).
F. Qu, C. Yan, D. Sun, Z. Jiang, A New Roe-type Scheme for All Speeds, Computers & Fluids 121 (2015) 11-25.
S. Kim, C. Kim, O.H. Rho, S.K. Hong, Cures for the Shock Instability: Development of A Shock-Stable Roe Scheme, Journal of Computational Physics 185 (2003) 342-374.
Y.X. Ren, A Robust Shock-Capturing Scheme Based on Rotated Riemann Solvers, Computers & Fluids 32 (2003) 1379-1403.
H. Nishikawa, K. Kitamura, Very Simple, Carbuncle-Free, Boundary-Layer-Resolving, Rotated-Hybrid Riemann Solvers, Journal of Computational Physics 227 (2008) 2560-2581.
B. Einfeldt, C.D. Munz, P.L. Roe, and B. Sjögreen, On Godunov-Type Methods near Low Densities, Journal of Computational Physics 92 (1991) 273-295.
M.S. Liou, A Sequel to AUSM, Part II: AUSM+-up for All Speeds, Journal of Computational Physics 214 (2006) 137-170.
X.D. Ren, C.W. Gu, and X.S. Li, Role of Momentum Interpolation Mechanism of the Roe Scheme in Shock Instability, arXiv:1509.02776v2 (2015).
X.S. Li, J.Z. Xu, and C.W. Gu, Preconditioning Method and Engineering Application of Large Eddy Simulation, Science in China Series G: Physics, Mechanics & Astronomy 51 (2008) 667-677.
X.S. Li, C.W. Gu, The Momentum Interpolation Method Based on the Time-Marching Algorithm for All-Speed Flows, Journal of Computational Physics 229 (2010) 7806-7818.
A. Pascau, Cell Face Velocity Alternatives in A Structured Colocated Grid for the Unsteady Navier-Stokes Equations, International Journal for Numerical Methods in Fluids 65 (2011) 812-833.
J.M. Weiss, W.A. Smith, Preconditioning Applied to Variable and Const Density Flows, AIAA Journal 33 (1995) 2050-2057.
X.S. Li, Uniform Algorithm for All-Speed Shock-Capturing Schemes, International Journal of Computational Fluid Dynamics 28 (2014) 329-338.
B. Van Leer, Towards the Ultimate Conservative Difference Scheme. V. A Second-Order Sequel to Godunov's Method, Journal of Computational Physics 32 (1979) 101-136.
X. Ren, K. Xu, W. Shyy, and C. Gu, A Multi-Dimensional High-Order Discontinuous Galerkin Method Based on Gas Kinetic Theory for Viscous Flow Computations, Journal of Computational Physics 292 (2015) 176-193.
X. Ren, K. Xu, W. Shyy, A Multi-Dimensional High-Order DG-ALE Method Based on Gas-Kinetic Theory with Application to Oscillating Bodies, Journal of Computational Physics
[]
[ "High-resolution probes of low-resolution nuclei", "High-resolution probes of low-resolution nuclei" ]
[ "R J Furnstahl \nDepartment of Physics\nOhio State University\n43085ColumbusOH\n" ]
[ "Department of Physics\nOhio State University\n43085ColumbusOH" ]
[]
Renormalization group (RG) methods used to soften Hamiltonians allow large-scale computational resources to be used to greater advantage in calculations of nuclear structure and reactions. These RG transformations lower the effective resolution of the nuclei, which raises questions about how to calculate and interpret high-momentum transfer probes of nuclear structure. Such experiments are conventionally explained in terms of short-range correlations, but these disappear with the evolution to low-momentum scales. We highlight the important issues and prospects in the context of recent developments in RG technology, with guidance from the analogous extraction of parton distributions.
null
[ "https://arxiv.org/pdf/1309.5771v1.pdf" ]
55,721,108
1309.5771
1691f917c6c23405d3c5b68ab536a85add9b4809
High-resolution probes of low-resolution nuclei

R. J. Furnstahl
Department of Physics, Ohio State University, Columbus, OH 43085

Keywords: Renormalization group; nuclear structure

Renormalization group (RG) methods used to soften Hamiltonians allow large-scale computational resources to be used to greater advantage in calculations of nuclear structure and reactions. These RG transformations lower the effective resolution of the nuclei, which raises questions about how to calculate and interpret high-momentum-transfer probes of nuclear structure. Such experiments are conventionally explained in terms of short-range correlations, but these disappear with the evolution to low-momentum scales. We highlight the important issues and prospects in the context of recent developments in RG technology, with guidance from the analogous extraction of parton distributions.

1 Introduction

Recent electron scattering experiments on nuclei that use large four-momentum transfers to knock out nucleons have been interpreted in terms of short-range correlations (SRCs) in the nuclear wave function [1,2]. As indicated schematically in Fig. 1 (left), the dominant source of ejected back-to-back nucleons is identified as the break-up of an SRC, formed by low-momentum nucleons being coupled to high momentum by the nucleon-nucleon (NN) interaction. At the same time, the use of softened ("low-momentum") Hamiltonians has had great success in pushing the limits of microscopic calculations of nuclear structure and reactions [3,4,5]. This success is in large part due to the absence of SRCs in the corresponding nuclear wave functions. We seek to reconcile these results by applying a renormalization group (RG) viewpoint, which manifests the scale (and scheme) dependence of nuclear Hamiltonians and operators by continuous changes in the resolution.
RG transformations shift the physics between structure and reaction mechanism so that the same data can have apparently different explanations. We use the RG perspective to discuss implications in light of the current and future possibilities of applying new RG technology. The RG is a powerful and versatile tool for this purpose. The common features of the RG for critical phenomena and high-energy scattering are discussed by Steven Weinberg in an essay in Ref. [6]. He summarizes: "The method in its most general form can I think be understood as a way to arrange in various theories that the degrees of freedom that you're talking about are the relevant degrees of freedom for the problem at hand." This is the essence of what we do by evolving to low-momentum interactions: we arrange for the degrees of freedom to be the relevant ones for nuclear structure (and reactions). This does not mean that other degrees of freedom cannot be used (including SRCs from high-momentum interactions), but we need to be mindful of Weinberg's adage [6]: "You can use any degrees of freedom you want, but if you use the wrong ones, you'll be sorry." The benefits of applying RG to high-energy (particle) physics include improving perturbation theory, e.g., in QCD. A mismatch of energy scales can generate large logarithms that ruin perturbative convergence even when couplings by themselves are small. The RG shifts strength between loop integrals and coupling constants to reduce these logs. For critical phenomena in condensed matter systems, the RG reveals the nature of observed universal behavior by filtering out short-distance degrees of freedom. Both these aspects are seen in applications of RG to nuclear structure and reactions. As the resolution is lowered, nuclear calculations become more perturbative, implying that scales are more appropriately matched. In addition, the potentials flow toward universal form (e.g., see Fig. 3) as model-dependent short-distance details are suppressed. 
The end result might be said to make nuclear physics look more like quantum chemistry calculationally, opening the door to a wider variety of techniques (such as many-body perturbation theory) and simplifying calculations (e.g., by improving convergence of basis expansions). However, maintaining RG-induced three-nucleon (NNN) forces (and possibly four-nucleon forces) has been found to be essential for accurate and scale-independent results. Recently developed RG technology to handle three-body evolution [4,7,8] will be critical to realize the power of the RG. (Figure 2 caption, continued [8]: the initial potential is an NN + NNN chiral EFT interaction at next-to-next-to-leading order (see Ref. [9] for background); λ is a flow parameter, with λ^2/M roughly equal to the energy decoupling scale.) 2 Similarity renormalization group flow Renormalization group methods and applications to nuclear systems are well documented in the literature (see Refs. [3,5] and references therein) and in contributions to these proceedings. A popular approach, which we focus on here, is the similarity renormalization group (SRG). In most implementations of the SRG, an initial Hamiltonian (typically with both NN and NNN interactions) is driven by a series of continuous unitary transformations toward more diagonal form in momentum representation. This flow toward the diagonal is illustrated for both NN and NNN matrix elements in Fig. 2 [8]. More diagonal means greater decoupling of low- and high-momentum modes, making interactions more perturbative. The changes in many-body interactions highlight the need to be able to control this part of the evolution. Where does the physics of the decoupled high-momentum modes go? It flows to modifications of the low-momentum parts of both the two- and three-nucleon interactions, which effective field theory (EFT) tells us can be absorbed into regulated contact interactions (as indicated schematically on the right in Fig. 1). 
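The SRG flow toward the momentum diagonal described above can be illustrated in a toy matrix model. The following Python sketch is not from the paper: the 3x3 "Hamiltonian", the Wegner-style generator choice, and the simple Euler stepping are all illustrative assumptions. It integrates dH/ds = [eta, H] with eta = [diag(H), H], which suppresses off-diagonal strength while preserving the spectrum, mimicking the decoupling shown in Figs. 2-3.

```python
import numpy as np

def srg_flow(H0, s_max=5.0, ds=1e-3):
    """Toy SRG evolution dH/ds = [eta, H] with the Wegner-style generator
    eta = [diag(H), H]; the flow is isospectral and drives H toward
    diagonal form (decoupling of low- and high-lying states)."""
    H = H0.copy()
    for _ in range(int(s_max / ds)):
        G = np.diag(np.diag(H))           # generator ingredient: diagonal part of H
        eta = G @ H - H @ G               # eta = [G, H]
        H = H + ds * (eta @ H - H @ eta)  # Euler step of dH/ds = [eta, H]
    return H

def offdiag_norm(A):
    return np.linalg.norm(A - np.diag(np.diag(A)))

# a small symmetric "Hamiltonian" with sizable off-diagonal coupling
H0 = np.array([[1.0, 0.5, 0.4],
               [0.5, 2.0, 0.3],
               [0.4, 0.3, 3.0]])
Hs = srg_flow(H0)

print(offdiag_norm(H0), offdiag_norm(Hs))               # off-diagonal strength shrinks
print(np.linalg.eigvalsh(H0) - np.linalg.eigvalsh(Hs))  # spectrum (nearly) preserved
```

Running the sketch shows the off-diagonal norm dropping by orders of magnitude while the eigenvalues shift only by the small Euler integration error: the same "flow toward the diagonal" behavior, in miniature.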
That the leading change in the NN potential induced by the SRG does have this form can be shown using the operator product expansion (see Section 3 and Refs. [10,11]), but it is implicit in the NN ^1S_0 partial wave in Fig. 3, where the off-diagonal matrix elements of a set of chiral EFT NN potentials with different regularization schemes (left) are evolved to low resolution (right). We directly see the suppression of off-diagonal strength for k > λ and a flow to universal values when the high-momentum model dependence is suppressed (evidence for universal flow has also been observed in three-body evolution [7,8]). The dominant change in the potential at low momentum is a constant shift, as would be expected from changing the strength of a regulated (smeared) delta function in coordinate space. A visualization of how two-nucleon interactions evolve in coordinate representation is given for two potentials in the ^1S_0 partial wave in Fig. 4, where local projections are applied to the intrinsically non-local SRG-evolved potentials [12]. The melting of the hard repulsive core is manifest as well as the flow to universal form. The soft NN (and NNN) potentials after evolution are much more amenable to many-body methods that use basis expansions, such as the no-core shell model, coupled cluster, and the in-medium SRG (see Ref. [5] for recent results). Indeed, nuclear structure and low-energy reactions are more natural with low-momentum interactions, because the Fermi momentum sets the scale rather than a repulsive core. The successes of this approach with the SRG and other RG methods (e.g., which enable many-body perturbation theory for the shell model) are reviewed elsewhere [3,5]. But because the repulsive core is the dominant source of SRCs, the nuclear wave functions have variable SRCs as the resolution is changed (i.e., as λ is lowered). This is illustrated in Fig. 5 for the deuteron (left) and nuclear matter (right). 
What then are the implications of RG evolution for the high-resolution (that is, high four-momentum transfer) electron scattering experiments? How does the resolution of the nuclear states even enter the analysis? To address these questions, we must ask about the evolution of operators other than the Hamiltonian. (Figure 5 caption, continued: ..., seen as a suppression at short distances, is removed with SRG evolution to λ = 2 fm^-1 (left). Short-range correlations in nuclear matter, which are manifested by the "wound" in the pair distribution function compared to the Fermi gas, are largely removed by V_low-k RG evolution to Λ = 1.9 fm^-1 [3].) Operator evolution by the SRG To gain insight into how RG changes in scale should enter the analysis of nuclear knock-out experiments, we can use the extraction of parton distribution functions from deep inelastic scattering (DIS) as a paradigm. The key property that makes parton distributions well defined is the controlled factorization of the cross section into structure and reaction parts at hard scales (meaning sufficiently large Q^2) [14]. By this means, a structure function such as F_2(x, Q^2) is decomposed into short-distance physics from the electron-quark scattering that is captured in Wilson coefficients in F_2^a(x, Q/µ_f) and the remainder, which is the soft, long-distance physics defining the parton distribution f_a(x, µ_f) (where a labels quarks): F_2(x, Q^2) ∼ Σ_a f_a(x, µ_f) ⊗ F_2^a(x, Q/µ_f). (1) The choice of the factorization scale µ_f defines the border between the long- and short-distance contributions. It is not unique! But because the observable F_2 must be independent of µ_f, knowing how the short-distance part changes with µ_f determines the RG running of the parton distribution. A typical choice is µ_f = Q (to minimize logarithmic contributions to the Wilson coefficient for the optimal extraction of PDFs from experiment), so this running translates into a Q^2 dependence in the parton distribution [14]. 
An example of this RG running is shown for the u-quark PDF in a proton as a function of x and Q^2 in Fig. 6. In the left panel, the combination x u(x, Q^2) measures the share of momentum carried by u-quarks in a proton within a particular x-interval [15,16]. This momentum distribution changes as a function of the resolution scale Q^2 according to RG evolution equations. Thus u(x, Q^2) is scale dependent (as well as scheme dependent, see Ref. [14]). In the right panel, we see that the deuteron momentum distribution n_d^λ(k) is also scale and scheme dependent. Plotted is n_d^λ(k) for an initial AV18 potential [13] (the choice of potential is a scheme dependence), which is SRG-evolved from λ = ∞ (corresponding to the initial potential and high resolution) down to λ = 1.5 fm^-1 (lowest resolution). It is evident that the high-momentum tail, which is identified with SRC physics, is highly scale dependent and is essentially eliminated at lower resolution. Figure 6: Parton distribution x u(x, Q^2) for the u-quarks in the proton as a function of x and Q^2 (left, calculated from [16]) and deuteron momentum distribution n_d^λ(k) at different SRG resolutions λ (right). The extraction of momentum distributions or quantities such as spectroscopic factors from nuclear experiments is also predicated on factorization assumptions just as in DIS. That is, the observable cross sections are separated into the structure and reaction parts according to some assumptions, which is once again not a unique decomposition but depends on the factorization scale. If the impulse approximation is accurate for some scale, then the separation is clean. But this is rarely true in nuclear physics (at least not to the precision we hope to reach). Therefore we should ask for the nucleon knock-out experiments the same questions that are carefully addressed in DIS: Is the factorization robust? Is it process dependent? What is necessary for consistency between structure and reaction models? 
What are the trade-offs between using different scales (and schemes)? Let's see how scale dependence like that in DIS works out in the language of SRG unitary transformations. The measured cross section is a convolution, reaction ⊗ structure, but the separate parts are not unique, only the combination. A (short-range) unitary transformation U leaves matrix elements of an operator O invariant: O_mn ≡ ⟨Ψ_m|O|Ψ_n⟩ = ⟨Ψ_m|U†U O U†U|Ψ_n⟩ = ⟨Ψ̃_m|Õ|Ψ̃_n⟩ ≡ Õ_m̃ñ, (2) so only the combination Õ_m̃ñ ≡ ⟨Ψ̃_m|Õ|Ψ̃_n⟩ = O_mn is invariant =⇒ e.g., ⟨Ψ_n^(A−1)|a_α|Ψ_0^A⟩ changes, (3) where the latter is a spectroscopic factor. In a low-energy effective theory, transformations that modify short-range unresolved physics yield equally valid states, so matrix elements such as spectroscopic factors (SFs) or momentum distributions (see Fig. 6) are scale/scheme dependent observables. All ingredients for the analysis of an experimental cross section mix under a unitary transformation that changes the resolution. A one-body current ρ(q) becomes a many-body current under U ρ(q) U† (one-body plus induced two-body terms and beyond; diagrams not reproduced here), final-state interactions are modified, and new wave function correlations appear in U|Ψ_0^A⟩ (or disappear, in the case of short-range correlations at lower resolution). Again, this means that quantities such as SFs are scale dependent. The bottom line is that the cross section is unchanged only if all pieces are included with the same U: the Hamiltonian, the current operator, and the final-state interactions. Now consider again the high-resolution experiment from Fig. 1 and what happens when RG unitary transformations act to change the resolution. In particular, how does the SRC explanation of nuclear scaling, which accounts for plateaus in inclusive cross section ratios, evolve with the resolution scale? This explanation is based on the dominant role played by the one-body current, the two-body interaction, and SRCs. 
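The content of Eqs. (2)-(3), that matrix elements are invariant only when states and operators are transformed consistently, is easy to check numerically in a small toy model. Everything below is an illustrative stand-in (a random orthogonal matrix plays the role of the RG transformation U; the operator and states are random), not nuclear input:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
U, _ = np.linalg.qr(rng.normal(size=(n, n)))   # random orthogonal "RG transformation"

O = rng.normal(size=(n, n)); O = O + O.T       # a generic symmetric observable
psi_m, psi_n = rng.normal(size=n), rng.normal(size=n)

O_t = U @ O @ U.T                              # transformed operator, O~ = U O U†
pm_t, pn_t = U @ psi_m, U @ psi_n              # transformed states, |Psi~> = U|Psi>

# Eq. (2): transforming states AND operator together leaves matrix elements invariant
print(psi_m @ O @ psi_n, pm_t @ O_t @ pn_t)
# Eq. (3): the bare operator between transformed states is generally different
print(pm_t @ O @ pn_t)
```

The first two numbers agree to machine precision; the third generally differs, which is why a quantity computed from one ingredient alone, such as a spectroscopic factor, is scale and scheme dependent.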
The underlying physics is most simply isolated by considering the high-resolution momentum distributions in nuclei. In Fig. 7 on the left, we see that ratios of these momentum distributions in various nuclei to that in the deuteron are almost flat in the high-momentum region associated with SRCs (i.e., above k = 2 fm^-1). The contribution highlighted in the circle in Fig. 1 yields a k dependence largely independent of the nuclear environment, so n_A(k) simply scales with A. If we now evolve to lower momentum through unitary transformations, this can no longer explain the cross section, because the softening of the interaction and therefore the wave function means the momentum distribution has no support at these high momenta (e.g., as in Fig. 6 for the deuteron). But the cross section must be unchanged, because it is a unitary transformation. With RG evolution, the probability of high momentum in a nucleus decreases, but if we transform the wave functions and operators, n(k) ≡ ⟨A|a†_k a_k|A⟩ = ⟨A|U†U a†_k a_k U†U|A⟩ = ⟨Ã|U a†_k a_k U†|Ã⟩, (4) then the original momentum distribution is unchanged! We know that the transformed state |Ã⟩ is easier to calculate, but is the new operator too difficult to calculate or even pathological (e.g., does it explode to compensate for the super-exponential suppression of the low-resolution momentum distribution)? Let us consider the SRG operator flow for the momentum distribution graphically. The evolution with λ of any operator O is given by O_λ = U_λ O U†_λ, (5) which can be carried out by a flow equation similar to that used to evolve the Hamiltonian. In practice it is more efficient to construct the unitary transformation from U_λ = Σ_i |ψ_i(λ)⟩⟨ψ_i(0)| or by solving the dU_λ/dλ flow equation. 
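The eigenvector construction U_λ = Σ_i |ψ_i(λ)⟩⟨ψ_i(0)| can likewise be mimicked with matrices. In this sketch (all matrices are random stand-ins, and the orthogonal matrix below merely plays the role of the exact SRG unitary at some λ), the transformation rebuilt from initial and evolved eigenvectors evolves an operator as in Eq. (5) while leaving the ground-state matrix element invariant, as for the deuteron momentum distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
H0 = rng.normal(size=(n, n)); H0 = H0 + H0.T        # initial "Hamiltonian"
U_exact = np.linalg.qr(rng.normal(size=(n, n)))[0]  # stand-in for the SRG unitary at lambda
H_lam = U_exact @ H0 @ U_exact.T                    # evolved Hamiltonian

E0, V0 = np.linalg.eigh(H0)                         # columns of V0: psi_i(0)
El, Vl = np.linalg.eigh(H_lam)                      # columns of Vl: psi_i(lambda)
U_lam = Vl @ V0.T                                   # sum_i |psi_i(lam)><psi_i(0)|

O = rng.normal(size=(n, n)); O = O + O.T            # a generic observable operator
O_lam = U_lam @ O @ U_lam.T                         # Eq. (5): O_lambda = U_lam O U_lam†

# the ground-state matrix element is unchanged by construction
print(V0[:, 0] @ O @ V0[:, 0], Vl[:, 0] @ O_lam @ Vl[:, 0])
```

The rebuilt U_λ is unitary even though each eigenvector carries an arbitrary sign, and the two printed matrix elements coincide: the strength merely redistributes between wave function and operator.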
In any case, matrix elements of evolved operators are unchanged by construction (for the deuteron) but the distribution of strength flows. The integrand of the momentum distribution ⟨ψ_d|a†_q a_q|ψ_d⟩ in the deuteron at q ≈ 3.0 fm^-1 is shown in Fig. 8. In the top figure, the initial integrand of U_λ a†_q a_q U†_λ at λ = ∞ has a delta function at k = k′ = q. In the SRG flow, one-body operators such as a†_q a_q do not evolve, and their contribution is in fact unchanged with λ. However, there is a clear flow to lower momentum, which must be entirely due to a two-body operator. In the bottom figure, the deuteron wave functions are folded in (such that the integrated area is the invariant value of the original momentum distribution at q = 3 fm^-1). We see there is negligible amplitude at small λ from the original one-body operator (nothing explodes!), but instead a smooth contribution at low momenta from the induced two-body operator, which is reminiscent of a regularized delta function. We might wish to conclude that this operator flow implies a type of "conservation of difficulty", with the simplification of the wave function countered by the complication of the operator. But in this situation the separation of momentum scales leads to an important generic factorization of the unitary transformation operator U_λ. In particular, U_λ-factorization says that the two-body unitary transformation becomes a simple product (in each partial wave): U_λ(k, q) → K_λ(k) Q_λ(q) whenever k < λ and q ≫ λ. This result follows from applying effective interaction methods or the operator product expansion (OPE) for nonrelativistic wave functions; we refer the reader to Refs. [10,11] for the technical details. Here we rely on a visual demonstration. In particular, we test U_λ-factorization by considering the ratio of U_λ(k, q) at fixed q but variable k. 
In the factorization region: U_λ(k_i, q)/U_λ(k_0, q) → [for k < λ, q ≫ λ] K_λ(k_i)Q_λ(q)/(K_λ(k_0)Q_λ(q)) = K_λ(k_i)/K_λ(k_0) ≈ 1, (6) so for q ≫ λ we expect the ratio to go to a constant, which is in fact unity because K_λ(k) becomes independent of k to leading order in the OPE. In Fig. 7 (right), we plot this ratio in the ^3S_1 channel and see clear plateaus close to one (at the 10-15% level) for those curves with k_i < λ in the q > λ region, just as expected. It works similarly in other channels [10]. We emphasize that because the leading order for K_λ(k) is constant for k < λ, the factors K_λ(k)K_λ(k′) to good approximation play the role of a contact term. Then the contribution from momenta large compared to λ in the diagram in Fig. 1 (right), with an implied integration over q and q′, has the simplification: ΔV_λ(k, k′) = Σ_{q,q′} U_λ(k, q) V_λ(q, q′) U†_λ(q′, k′) for k, k′ < λ and q, q′ ≫ λ → [U_λ → K·Q] K(k) [Σ_{q,q′} Q(q) V_λ(q, q′) Q(q′)] K(k′) with K(k) ≈ 1, (7) which is a constant times a smeared delta function, as advertised. Further, we can understand why nuclear scaling is expected directly from U_λ-factorization, if we can argue that the deuteron channel dominates (as in the SRC argument [1,2]). When k < λ and q ≫ λ, the ratio of original momentum distributions becomes (in a schematic notation): n_A(q)/n_d(q) = ⟨A|U a†_q a_q U†|A⟩ / ⟨d|U a†_q a_q U†|d⟩ = ⟨Ã|U_λ(k′, q′) δ_{q′q} U†_λ(q, k)|Ã⟩ / ⟨d̃|U_λ(k′, q′) δ_{q′q} U†_λ(q, k)|d̃⟩ = ⟨Ã|K_λ(k′)[Q_λ(q′) δ_{q′q} Q_λ(q)]K_λ(k)|Ã⟩ / ⟨d̃|K_λ(k′)[Q_λ(q′) δ_{q′q} Q_λ(q)]K_λ(k)|d̃⟩ = ⟨Ã|K_λ(k′)K_λ(k)|Ã⟩ / ⟨d̃|K_λ(k′)K_λ(k)|d̃⟩ ≡ C_A, (8) where C_A is the scaling ratio. A proof-of-principle test in a toy one-dimensional model verified that this scenario can work [10]. For the realistic nuclear case, we need to examine all contributions quantitatively, including from three-body operators, but the pattern in Eq. (8) is promising. 
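The plateau test of Eq. (6) can be mimicked with a toy transformation matrix whose low-high momentum block is factorized by construction. The grid, K(k), and Q(q) below are arbitrary illustrative choices, not the realistic U_λ; the point is only that factorization forces the ratio at fixed k_i, k_0 < λ to be flat in q ≫ λ, with plateau value K(k_i)/K(k_0) ≈ 1:

```python
import numpy as np

kgrid = np.linspace(0.1, 5.0, 50)        # momentum grid in fm^-1 (illustrative)
lam = 2.0
low, high = kgrid < lam, kgrid >= lam

K = 1.0 + 0.05 * kgrid                   # slowly varying low-momentum factor, K ~ 1
Q = np.exp(-(kgrid - lam))               # decaying high-momentum factor

rng = np.random.default_rng(3)
U = np.zeros((kgrid.size, kgrid.size))
U[np.ix_(low, low)] = rng.normal(size=(low.sum(), low.sum()))  # generic low-low block
U[np.ix_(low, high)] = np.outer(K[low], Q[high])               # factorized K(k)Q(q) block

ki, k0 = 5, 10                           # two indices with k_i, k_0 < lambda
ratio = U[ki, high] / U[k0, high]        # Eq. (6): flat in q for q >> lambda
print(ratio.min(), ratio.max(), K[ki] / K[k0])
```

Here ratio.min() and ratio.max() coincide with K(k_i)/K(k_0): a toy analogue of the plateaus in Fig. 7 (right).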
We might further speculate that the recent observation that the A dependences of scaling ratios and the slope of the EMC effect dR_A(x)/dx (where R_A(x) is the large-Q^2 ratio of nuclear cross sections for 0.7 < x < 1.0) are linearly correlated [1] could be understood by U_λ-factorization and subsequent cancellations in cross section ratios. The EFT treatment of Chen and Detmold [18] predicts an analogous factorization in the EMC ratio. In particular, they assert: "The x dependence of R_A(x) is governed by short-distance physics, while the overall magnitude (the A dependence) of the EMC effect is governed by long distance matrix elements calculable using traditional nuclear physics." If the same leading operators dominate in the two types of processes (i.e., two-body contact operators with deuteron quantum numbers), then we would expect precisely this sort of linear A dependence. Quantitative calculations are needed! To do such calculations, we need many-body operator contributions, as shown by Neff for ^4He relative momentum distributions [19]. Fortunately, the recently developed technology for evolution of three-body forces can be adopted for more general operator evolution. This will enable direct calculations by ab initio methods in lighter nuclei and many-body perturbation theory for operators in heavier nuclei. Summary and outlook We have presented a brief overview of high-resolution probes of low-resolution nuclei based on the RG/EFT perspective. Some summary observations: • Lower resolution means more natural nuclear structure. • While scale- and scheme-dependent observables can be (to good approximation) unambiguous for some systems, they are often (generally?) not so for nuclei. And while cross sections are invariant, the physics interpretation can change with resolution! • Working with scale and scheme dependence requires consistent Hamiltonians and operators. Be wary of treating experimental analysis in independent pieces (as is often done). 
• Unitary transformations can be used to reveal natural scheme dependence. The RG/EFT perspective and associated tools can help to address whether we can have controlled factorization at low energies, to identify the roles of short-range versus long-range correlations, and to quantitatively assess the scheme dependence of spectroscopic factors and related quantities. An overarching question is how one should choose the appropriate scale in different situations (with the RG to evolve the scale as needed). One general motivation is to make calculations easier or more convergent, such as using the QCD running-coupling scale to improve perturbation theory. For nuclear structure and low-energy reactions, low-momentum potentials are chosen to improve convergence in configuration interaction or coupled cluster calculations or to make a microscopic connection to the shell model. Conversely, local potentials (which until recently were only high resolution) are favored for quantum Monte Carlo. The scale could also be chosen for interpretation or intuition; the SRC phenomenology is such an example. But the most important issue for knock-out experiments is to have the cleanest and most controlled extraction of quantities analogous to PDFs from experiment; this might mean optimizing the validity of the impulse approximation, but there are other possibilities (e.g., optimizing U_λ-factorization). To make progress, the plan is to make test calculations with a range of scales starting from initial Hamiltonians and operators matched in an EFT framework, with the RG used to consistently relate scales and quantitatively probe ambiguities (e.g., in spectroscopic factors). A priority calculation in the short term is deuteron electrodisintegration, which is well controlled because of the absence of three-body forces and operators. 
Figure 1: Schematic two-nucleon knock-out experiment with SRC interpretation (left) and diagrammatic illustration that the contribution of decoupled high-momentum modes in intermediate states is replaced by (regularized) contact interactions (right). Figure 2: Evolution by the SRG showing flow toward the momentum diagonal for both NN (bottom) and NNN (top) interactions. Figure 3: Off-diagonal matrix elements of different chiral EFT potentials evolved by the SRG slightly to λ = 5 fm^-1 (left) and much further to λ = 1.5 fm^-1 (right) [3]. Figure 4: Evolution by the SRG of two NN interactions in coordinate space as visualized by local projections (see Ref. [12] for definitions and details). Figure 5: Short-range correlations induced in the L = 0 part of the deuteron wave function by the Argonne v18 potential [13]. Figure 7: Ratio of momentum distributions in nuclei to the deuteron (denoted n_A and n_d in the text) with a high-resolution potential [17] (left) and U_λ-factorization test for the ^3S_1 channel [10] (right). Figure 8: Integrand of the deuteron momentum distribution at q ≈ 3 fm^-1 without (top) and with (bottom) the deuteron wave functions included [10]. I thank my collaborators E. Anderson, S. Bogner, B. Dainton, K. Hebeler, H. Hergert, S. More, R. Perry, A. Schwenk, and K. Wendt. This work was supported in part by the National Science Foundation under Grant No. PHY-1002478 and the U.S. Department of Energy under Grant No. DE-SC0008533 (SciDAC-3/NUCLEI project). RG unitary transformations change the decoupling scale, which means that the effective factorization scale (which determines what goes into the operator and what into the wave function) is changed. Note that matrix elements of the operator O itself between the transformed states are in general modified. [1] J. Arrington, D. Higinbotham, G. Rosner and M. Sargsian, Prog. Part. Nucl. Phys. 67, 898 (2012). 
[2] M. Alvioli, C. Ciofi degli Atti, L. Kaptari, C. Mezzetti and H. Morita, Phys. Rev. C 87, 034603 (2013). [3] S. K. Bogner, R. J. Furnstahl and A. Schwenk, Prog. Part. Nucl. Phys. 65, 94 (2010). [4] R. Roth, J. Langhammer, S. Binder and A. Calci, J. Phys. Conf. Ser. 403, 012020 (2012). [5] R. Furnstahl and K. Hebeler, Rep. Prog. Phys. (in press), arXiv:1305.3800. [6] A. H. Guth, K. Huang and R. L. Jaffe, editors, Asymptotic Realms of Physics (MIT Press, Cambridge, MA, 1983). [7] K. Hebeler, Phys. Rev. C 85, 021002 (2012). [8] K. A. Wendt, Phys. Rev. C 87, 061001 (2013). [9] E. Epelbaum, H.-W. Hammer and U.-G. Meißner, Rev. Mod. Phys. 81, 1773 (2009). [10] E. Anderson, S. Bogner, R. Furnstahl and R. Perry, Phys. Rev. C 82, 054001 (2010). [11] S. Bogner and D. Roscher, Phys. Rev. C 86, 064304 (2012). [12] K. Wendt, R. Furnstahl and S. Ramanan, Phys. Rev. C 86, 014003 (2012). [13] R. B. Wiringa, V. G. J. Stoks and R. Schiavilla, Phys. Rev. C 51, 38 (1995). [14] CTEQ Collaboration, R. Brock et al., Rev. Mod. Phys. 67, 157 (1995). [15] B. Povh, K. Rith, C. Scholz and F. Zetsche, Particles and Nuclei: An Introduction to the Physical Concepts, 4th ed. (Springer, Berlin, 2004). [16] CTEQ Collaboration, H. Lai et al., Eur. Phys. J. C 12, 375 (2000). [17] C. Ciofi degli Atti and S. Simula, Phys. Rev. C 53, 1689 (1996). [18] J.-W. Chen and W. Detmold, Phys. Lett. B 625, 165 (2005). [19] T. Neff, private communication.
[]
[ "DYNAMICS OF ASYMPTOTICALLY HOLOMORPHIC POLYNOMIAL-LIKE MAPS", "DYNAMICS OF ASYMPTOTICALLY HOLOMORPHIC POLYNOMIAL-LIKE MAPS" ]
[ "Trevor Clark ", "Edson de Faria ", "Sebastian van Strien " ]
[]
[]
The purpose of this paper is to initiate a theory concerning the dynamics of asymptotically holomorphic polynomial-like maps. Our maps arise naturally as deep renormalizations of asymptotically holomorphic extensions of C r (r > 3) unimodal maps that are infinitely renormalizable of bounded type. Here we prove a version of the Fatou-Julia-Sullivan theorem and a topological straightening theorem in this setting. In particular, these maps do not have wandering domains and their Julia sets are locally connected.
null
[ "https://arxiv.org/pdf/1804.06122v1.pdf" ]
119,316,211
1804.06122
9078922e2fe6f0e939804b3329d167af1ce3d09a
DYNAMICS OF ASYMPTOTICALLY HOLOMORPHIC POLYNOMIAL-LIKE MAPS 17 Apr 2018 Trevor Clark, Edson de Faria, Sebastian van Strien DYNAMICS OF ASYMPTOTICALLY HOLOMORPHIC POLYNOMIAL-LIKE MAPS 17 Apr 2018 arXiv:1804.06122v1 [math.DS] The purpose of this paper is to initiate a theory concerning the dynamics of asymptotically holomorphic polynomial-like maps. Our maps arise naturally as deep renormalizations of asymptotically holomorphic extensions of C r (r > 3) unimodal maps that are infinitely renormalizable of bounded type. Here we prove a version of the Fatou-Julia-Sullivan theorem and a topological straightening theorem in this setting. In particular, these maps do not have wandering domains and their Julia sets are locally connected. Introduction Over the last decades many remarkable results were obtained for rational maps of the Riemann sphere, and somewhat surprisingly it turned out that quite a few of these have an analogue in the case of smooth interval maps. For example, the celebrated Julia-Fatou-Sullivan structure theorem for rational maps establishes the absence of wandering domains, showing that each component of the Fatou set is eventually periodic, and moreover gives a simple classification of the possible dynamics on a periodic component of the Fatou set, see [59]. For smooth interval maps analogous results were obtained, starting with Denjoy's results for C 2 circle diffeomorphisms dating back to 1932. We now know that C 2 interval or circle maps cannot have wandering intervals provided all their critical points are non-flat, proved in increasing generality in [26,39,7,48,45,49,58]. Interestingly, although the statements for the Julia-Fatou-Sullivan structure theorem for rational maps and the generalised Denjoy theorems for interval and circle maps are analogous, the proofs use entirely different ideas. 
In the former case, they rely on the Measurable Riemann Mapping Theorem (MRMT) while in the latter case the proofs rely on real bounds coming from C 2 distortion estimates together with arguments relating to the order structure of the real line. However, overall, not only the results but also the techniques used in the fields of holomorphic dynamics and interval dynamics have become increasingly intertwined over the last decades. Indeed, within the literature of real one-dimensional dynamics a growing number of results are obtained under the additional assumption that the maps are real analytic rather than smooth. The reason for this is that a real analytic map (obviously) has a complex extension to a small neighbourhood in C of the dynamical interval, and therefore many tools from complex analysis can be applied to such a real map. For instance, many results in the theory of renormalization of interval maps are either not known in the smooth category, or were only obtained with a significant amount of additional effort. Specifically, the Feigenbaum-Coullet-Tresser conjectures were first obtained using computer-supported proofs, e.g. [36], and later using conceptual proofs for real analytic unimodal interval maps in [60,47,42,5], for real analytic circle homeomorphisms with critical points in [14,15,61,32], and for certain multimodal maps in [53,54,55,56]. All these later results heavily use complex analytic machinery, and in particular rely on the complex analytic extensions of interval maps. Within the literature on holomorphic dynamics one sees a similar development: many conjectures about iterations of general polynomials are only solved in the context of polynomials with real coefficients. An example of such a conjecture is density of hyperbolicity, which is unsolved in the general case but was proved for real quadratic maps independently by Lyubich and Graczyk-Swiatek, and in the general case by Kozlovski, Shen and van Strien, see [24,39,34,35]. 
These results heavily rely on the existence of so-called real and complex bounds, [38,23,43,52,11] but such complex bounds do not hold for general non-real polynomials or rational maps. Indeed they hold for non-renormalizable polynomial maps [62,27,33] but in general not for non-real infinitely renormalizable quadratic maps, see for example [51,57]. Of course there are plenty of results on renormalization and towards density of hyperbolicity in the setting of non-real polynomials [41,29,30,31,28,9], and similarly there are plenty of impressive results on interval maps which do not use complex tools, on for example invariant measures, thermodynamic formalism and stochastic stability. Nevertheless it is fair to say that a growing number of results within the field of real one-dimensional dynamics crucially rely on complex analytic tools, and vice versa many results about polynomial maps are only known when these preserve the real line. When studying real one-dimensional maps, it is unnatural to restrict attention to maps which are real analytic. Indeed, in certain cases renormalization results for real analytic interval maps can be extended to C 3 or C 4 maps. This was done using a functional analytic approach in [17] for unimodal interval maps and heavily exploiting what is known for real analytic circle homeomorphisms in [21]. A purely real approach which gives existence of periodic points of the renormalization operator for unimodal maps of the form g(|x| ℓ ), ℓ > 1, was obtained by Martens [44]. The purpose of this paper is to initiate a theory for C 3+ interval maps showing that these have extensions to the complex plane with properties analogous to those of real polynomial maps. Thus the eventual aim of this theory is to show that C 3+ maps can be treated with techniques which are very similar to the complex analytic techniques which were so fruitful in the case of polynomial and real-analytic maps. 
In this paper we will establish the first cornerstone of this theory by showing that one has a Julia-Fatou-Sullivan type description for such maps in a very important situation, namely for infinitely renormalizable maps of bounded type. Let us be more precise and consider a C r map f : I → R. Such a map f has an extension to a C r map F : C → C which is asymptotically holomorphic of order r, i.e., ∂ ∂z F (z) = 0 when Im z = 0 and ∂ ∂z F (z) = O(|Im z| r−1 ) uniformly, see [25]. The notion of asymptotically holomorphic maps goes back at least to [8]. In dynamics this notion was used in [40], [60], [11], [21], [4], [10] (see also [19], [20] for related material on the more restrictive notion of uniformly asymptotically conformal (UAC) map). Note that F is not conformal outside the real line, and so in principle periodic points can be of saddle type. Even if a periodic point is repelling, in general the linearization at such a point will not be conformal. It follows that F cannot be quasiconformally conjugate to a polynomiallike map (the pullbacks of a small circle in a small neighbourhood of a non-conformal repelling point become badly distorted, but this is not the case in a small neighbourhood of a conformal repelling point). For this reason, the absence of wandering domains for F cannot be obtained via Sullivan's Nonwandering Domains Theorem [60]. Main Theorem. Let f ∈ C 3+α (α > 0) be a unimodal, infinitely renormalizable interval map of bounded type whose critical point has criticality given by an even integer d. Then every C 3+α extension F of f to a map defined on a neighborhood of the interval in the complex plane is such that there exist a sequence of domains U n ⊂ V n ⊂ C containing the critical point of f and iterates q n with the following properties. (1) The map G := F qn : U n → V n is a degree d, quasi-regular polynomial-like map. (2) For large enough n, each periodic point in the filled Julia set K G := {z ∈ U n ; G i (z) ∈ U n ∀i ≥ 0} is repelling. 
(3) The Julia J G := ∂K G and filled-in Julia set of G coincide, i.e., J G = K G . (4) The map G is topologically conjugate to a polynomial mapping in a neighbourhood of its Julia set. In particular, G has no wandering domains. (5) The Julia set J G is locally connected. A more precise statement of this theorem can be found in Corollary 6.8 where we use the notion of controlled AHPL-maps, see Definition 5.1. We expect a similar result to hold in much greater generality, for example for general C 3+α asymptotically holomorphic interval maps with finitely many critical points of integer order. Our plan is to build on the results in this paper to prove absence of invariant line fields for asymptotically holomorphic maps extending the methods of [47]. In addition, rather than using functional analytic tools as in [17], we plan to prove renormalization results for C r maps through the McMullen tower construction directly following the ideas in [47], or more ambitiously following the approach of Avila-Lyubich [5]. Thus our ultimate goal is to establish a closer analogy between real and complex one-dimensional dynamics along the lines suggested in the table below. setting real polynomials on the complex plane C 3 asympt. hol. maps analogy Julia-Fatou-Sullivan Theory Yes (this paper) McMullen tower construction ? Schwarz contraction ? Hyperbolicity of Renormalization ? Deformation theory (through MRMT) ? 1.1. Object of study. We shall study the dynamics of certain quasi-regular maps in the complex plane that are generalizations of standard (holomorphic) polynomial-like maps, as defined by Douady-Hubbard in [12]. Such generalized polynomial-like maps arise as deep renormalizations of unimodal interval maps that admit an asymptotically holomorphic extension to a complex neighborhood of their real domain. Let ϕ : U → V be a C 1 map between two domains in the complex plane, and assume that U ∩ R = Ø. 
We say that ϕ is asymptotically holomorphic of order r > 1 if ϕ is quasi-regular and its complex dilatation µ ϕ satisfies |µ ϕ (z)| ≤ C|Im z| r−1 for all z ∈ U and some constant C > 0 (in particular, µ ϕ vanishes on the real axis, i.e., ϕ is conformal there). As mentioned above, every C r map of the real line admits an extension to a neighborhood of the real axis which is asymptotically holomorphic of order r. (The notion of asymptotically holomorphic maps can even be defined for maps which are merely quasiconformal on C. It can be shown that if such a map is asymptotically holomorphic of order r then its restriction to the real line is actually C r , see [2,13].) We may now formally define the class of dynamical systems we intend to study. Please note that in what follows we only consider maps having a unique critical point of finite even order d ≥ 2. Definition 1.1. Let U, V ⊂ C be Jordan domains symmetric about the real axis, and suppose U is compactly contained in V . A C r (r ≥ 3) map f : U → V is said to be an asymptotically holomorphic polynomial-like map, or AHPL-map for short, if (i) f is a degree d ≥ 2 proper branched covering map of U onto V , branched at a unique critical point c ∈ U ∩ R of criticality given by d; (ii) f is symmetric about the real axis, i.e., f (z̄) equals the complex conjugate of f (z) for all z ∈ U; (iii) f is asymptotically holomorphic of order r. It follows from the well-known Stoilow Factorization Theorem (see [3, Cor. 5.5.3]) that an AHPL-map f as above can be written as f = φ • g, where g : U → V is a (holomorphic) polynomial-like map and φ : V → V is a C r quasiconformal diffeomorphism which is also asymptotically holomorphic of order r. Just as in the case of standard polynomial-like maps, we define the filled-in Julia set of an AHPL-map f : U → V to be the closure of the set of points which never escape under iteration, namely K f = ⋂ n≥0 f −n (V ) = ⋂ n≥0 f −n (U ). This is a compact, totally f -invariant subset of U.
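As an illustrative aside (not from the paper), membership in a filled-in Julia set can be probed by an escape-time computation. The snippet below does this for the genuinely holomorphic model f(z) = z² + c with c = −1, which is polynomial-like when restricted to a suitable disk; AHPL-maps are only quasi-regular, but the definition of K_f through non-escaping points is the same.

```python
# Sketch: membership test for the filled-in Julia set K_f of f(z) = z^2 + c,
# in the holomorphic model case (AHPL-maps are quasi-regular analogues of this).
c = -1.0  # the "basilica" parameter, chosen purely for illustration

def stays_bounded(z, radius=2.0, max_iter=200):
    """Return True if the orbit of z has not left the disk of the given
    radius after max_iter steps (a numerical proxy for z lying in K_f)."""
    for _ in range(max_iter):
        if abs(z) > radius:
            return False
        z = z * z + c
    return True

# The critical orbit 0 -> -1 -> 0 -> ... is periodic, so 0 lies in K_f,
# while points far from the origin escape to infinity.
assert stays_bounded(0.0)
assert not stays_bounded(2.5)
```

The escape radius 2 works here because |z| > 2 guarantees escape for every quadratic z² + c with |c| ≤ 2; for a general AHPL-map one would instead test whether iterates leave the domain U.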
Its boundary J f = ∂K f is called the Julia set of f . By simple analogy with the case of holomorphic polynomial-like maps, there are natural questions to be asked about AHPL-maps and their Julia sets, to wit: (1) Are the (expanding) periodic points dense in J f ? (2) When is J f locally connected? (3) What is the classification of stable components of K f \ J f ? (4) Can f have non-wandering domains? (5) Is there a (topological) straightening theorem for AHPL-maps? These questions do not have obvious answers. For instance, in the holomorphic case, the first question has an affirmative answer whose proof is easy thanks to Montel's theorem -a tool which is not useful here. Likewise, in the holomorphic case question (4) has a negative answer thanks to Sullivan's non-wandering domains theorem, whose proof uses quasiconformal deformations of f in a way that is not immediately available here, because in general the iterates of an AHPL-map are not uniformly quasiconformal. Rather than studying very general AHPL-maps, in this paper we will restrict our attention to those which can be renormalized , in fact infinitely many times. The definition of renormalization in the present context is the same as the one for polynomial-like mappings: an AHPL-map f is renormalizable if there exists a topological disk D containing the critical point of f and an integer p > 1 so that D is compactly contained in f p (D) and f p : D → f p (D) is again an AHPL-map. Thanks to a theorem proved in [11], every sufficiently deep renormalization of an asymptotically holomorphic map whose restriction to the real line is an infinitely renormalizable map (in the usual real sense) is an (infinitely renormalizable) AHPL-map with a priori bounds. One of our goals in the present paper is to provide answers to (some of) the above questions under the assumption that the AHPL-map f is infinitely renormalizable of bounded type. 
Another goal will be to prove C 2 a priori bounds for the renormalizations of such an f , under the same bounded type assumption. Summary. Here is a brief description of the contents of this paper. We start by revisiting the real bounds for C 3 unimodal maps in §2. In §3, we prove that the successive renormalizations of a C 3 infinitely renormalizable AHPL-map of bounded type are uniformly bounded in the C 2 topology, and that such bounds are beau in the sense of Sullivan. In proving these bounds, we employ as a tool the matrix form of the chain rule for the second derivative of a composition of maps. This tool does not seem to have been used at all in the literature on low-dimensional dynamics. The key ingredient that allows us to prove our Main Theorem is a result that, roughly speaking, states that (a deep renormalization of) an AHPL-map is an infinitesimal expansion of the hyperbolic metric on its co-domain minus the real axis. This is the main result in §5.1, namely Theorem 5.4. In §4 we introduce techniques which are crucial in establishing Theorem 5.4, namely Proposition 4.14 and Theorem 4.15. Specifically, we give a bound for the hyperbolic Jacobian of a C 2 quasiconformal map in terms of its local quasiconformal distortion in two situations: for maps with small dilatation and for maps which are asymptotically holomorphic. These bounds are applied to the diffeomorphic part of our AHPL-map, which therefore needs to be at least C 2 with good bounds. This is the main reason why we need the C 2 bounds developed in §3. This infinitesimal expansion of the hyperbolic metric has several consequences, e.g., the fact that every periodic point of (a sufficiently deep renormalization of) an AHPL-map is expanding -once again, see Theorem 5.4. 
Finally, in §6, we go further and construct puzzle pieces for such AHPL-maps, and show with the help of Theorem 5.4, that the puzzle pieces containing any given point of the Julia set of an infinitely renormalizable AHPL-map shrink around that point. This implies that the Julia set of such a map is always locally connected. Even more, as a consequence, such a map is in fact topologically conjugate to an actual (holomorphic) polynomial-like map and therefore does not have wandering domains. Revisiting the real bounds In this section we will recall some basic facts about renormalization of real unimodal maps. 2.1. Renormalization of unimodal maps. We need to recall some definitions and a few facts concerning the renormalization theory of interval maps. Let us consider a C 3 unimodal map f : I → I defined on the interval I = [−1, 1] ⊂ R, with its unique critical point at 0 and corresponding critical value at 1, i.e., with f ′ (0) = 0 and f (0) = 1. From the viewpoint of renormalization, to be defined below, there is no loss of generality in assuming that f is even, i.e., that f (−x) = f (x) for all x ∈ I. We also assume that the critical point of f has finite even order d ≥ 2. Hence we oftentimes refer to f as a d-unimodal map. We say that such an f is renormalizable if there exist an integer p = p(f ) > 1 and λ = λ(f ) = f p (0) such that f p |[−|λ|, |λ|] is unimodal and maps [−|λ|, |λ|] into itself. Taking p the smallest possible, we define the first renormalization of f to be the map Rf : I → I given by Rf (x) = 1 λ f p (λx) . (2.1) The intervals ∆ j = f j ([−|λ|, |λ|]), for 0 ≤ j ≤ p − 1, have pairwise disjoint interiors, and their relative order inside I 0 determines a unimodal permutation θ of {0, 1, . . . , p − 1}. Thus, renormalization consists of a first return map to a small neighbourhood of the critical point rescaled to unit size via a linear rescale. It makes sense to ask whether Rf is also renormalizable, since Rf is certainly a normalized unimodal map. 
If the answer is yes then one can define R 2 f = R(Rf ), and so on. In particular, it may be the case that the unimodal map f is infinitely renormalizable, in the sense that the entire sequence of renormalizations f, Rf, R 2 f, . . . , R n f, . . . is well-defined. We assume from now on that f is infinitely renormalizable. Let us denote by P (f ) ⊆ I the closure of the forward orbit of the critical point under f (the post-critical set of f ). The set P (f ) is a Cantor set with zero Lebesgue measure, see below. It can be shown also that P (f ) is the global attractor of f both from the topological and metric points of view. Note that for each n ≥ 0, we can write R n f (x) = 1 λ n f qn (λ n x) , where q 0 = 1, λ 0 = 1, q n = n−1 i=0 p(R i f ) and λ n = n−1 i=0 λ(R i f ) = f qn (0). The positive integers a i = p(R i f ) ≥ 2 are called the renormalization periods of f , and the q n 's are the closest return times of the orbit of the critical point. Note that q n+1 = a n q n = i=n i=0 a i ≥ 2 n+1 ; in particular, the sequence q n goes to infinity at least exponentially fast. It will be important to consider the renormalization intervals of f at level n, namely ∆ 0,n = [−|λ n |, |λ n |] ⊂ I 0 , and ∆ i,n = f i (∆ 0,n ) for i = 0, 1, . . . , q n − 1. The collection C n = {∆ 0,n , . . . , ∆ qn−1,n } consists of pairwise disjoint intervals. Moreover, {∆ : ∆ ∈ C n+1 } ⊆ {∆ : ∆ ∈ C n } for all n ≥ 0 and we have P (f ) = ∞ n=0 qn−1 i=0 ∆ i,n . Once we know that max 0≤i≤qn−1 |∆ i,n | → 0 as n → ∞, it follows that P (f ) is, indeed, a Cantor set. This (and much more) follows from the so-called real a priori bounds proved by Sullivan in [60]. The following form of the real bounds is not the most general, but it will be quite sufficient for our purposes. We say that an infinitely renormalizable map f as above has combinatorial type bounded by N if its remormalization periods are bounded by N, i.e., a n ≤ N for all n ∈ N. Theorem 2.1 (Real Bounds). 
Let f : I → I be a C 3 unimodal map as above, and suppose that f is infinitely renormalizable with combinatorial type bounded by N > 1. Then there exist constants K f > 0 and 0 < α f < β f < 1 such that the following holds for all n ∈ N. (i) If ∆ ∈ C n+1 , ∆ * ∈ C n and ∆ ⊂ ∆ * , then α f |∆ * | ≤ |∆| ≤ β f |∆ * |. (ii) For all 1 ≤ i < j ≤ q n − 1 and each x ∈ ∆ i,n , we have 1 K f |∆ j,n | |∆ i,n | ≤ |(f j−i ) ′ (x)| ≤ K f |∆ j,n | |∆ i,n | . (iii) We have R n f C 1 (I) ≤ K f . Moreover, there exist positive constants K = K(N), α = α(N), β = β(N), with 0 < α < β < 1, and n 0 = n 0 (f ) ∈ N such that, for all n ≥ n 0 , the constants K f , α f and β f in (i), (ii) and (iii) above can be replaced by K, α and β, respectively. For a complete proof of this theorem, see [49]. In informal terms, the theorem states three things. First, that the post-critical set P (f ) of an infinitely renormalizable dunimodal map with bounded combinatorics is a Cantor set with bounded geometry. Second, that the successive renormalizations of such a map are uniformly bounded in the C 1 topology. Third, that the bounds on the geometry of the Cantor set and on the C 1 norms of the renormalizations become universal at sufficiently deep levels (such bounds are called beau by Sullivan in [60] -see also [49]). Further analysis of the non-linearity of renormalizations yields the following consequence of the real bounds. Let f : I → I be a C 3 unimodal map as defined above, and suppose f is infinitely renormalizable with renormalization periods bounded by N. For each n ≥ 1, let C n = {∆ i,n : 0 ≤ i ≤ q n − 1} denote the collection of renormalization intervals of f at level n. For each n ≥ 1, we define S n = Cn∋∆ =∆ 0,n |∆| d(c, ∆) , where d(c, ∆) denotes the Euclidean distance between ∆ ⊂ I and the critical point c = 0. 
Roughly speaking, the result states that the for each infinitely renormalizable unimodal map of bounded type, the sequence {S n } n≥1 is bounded, and the bound is beau in the sense of Sullivan. Proof. The desired bound can be proved by a recursive estimate. Note that we can write S n+1 = C n+1 ∋J⊂∆ 0,n \∆ 0,n+1 |J| d(c, J) + Cn∋∆ =∆ 0,n   C n+1 ∋J⊂∆ |J| d(c, J)   (2.2) Now, since d(c, J) > 1 2 |∆ 0,n+1 | for each J ∈ C n+1 , we certainly have C n+1 ∋J⊂∆ 0,n \∆ 0,n+1 |J| d(c, J) ≤ 2 |∆ 0,n | |∆ 0,n+1 | . (2.3) From the real bounds, Theorem 2.1, we know that there exists a constant 0 < α = α(N) < 1 such that |∆ 0,n | ≤ α −1 |∆ 0,n+1 | for all sufficiently large n. For each ∆ ∈ C n , let J 1 , J 2 , . . . , J an ∈ C n+1 be all the intervals at level n + 1 which are contained in ∆. Then, again from the real bounds, we have an i=1 |J i | ≤ β|∆|, where 0 < β = β(N) < 1, provided the renormalization level n is sufficiently large. Moreover, d(c, J i ) ≥ d(c, ∆) for all i. Hence we have, for all n sufficiently large, Cn∋∆ =∆ 0,n   C n+1 ∋J⊂∆ |J| d(c, J)   ≤ Cn∋∆ =∆ 0,n C n+1 ∋J⊂∆ |J| d(c, ∆) ≤ β Cn∋∆ =∆ 0,n |∆| d(c, ∆) = βS n . (2.4) Putting (2.3) and (2.4) back into (2.2), we deduce that there exists n 0 = n 0 (f ) such that S n+1 ≤ βS n + α −1 for all n ≥ n 0 . By induction, it follows that S n 0 +k ≤ β k S n 0 + α −1 (1 + β + · · ·+ β k−1 ) for all k ≥ 0. Since β < 1, this shows that the sequence (S n ) n≥1 is bounded, and eventually universally so. What we will need is in fact a consequence of this lemma. Given f as in Lemma 2.3, write for all n ≥ 1 S * n = qn−1 i=1 |∆ i,n | 2 |∆ i+1,n | [d(c, ∆ i,n )] d−2 (2.5) where d is the order of f at the critical point c. Lemma 2.4. There exists a constant B 2 = B 2 (N) > 0 with the following property. For each infinitely renormalizable unimodal map f of combinatorial type bounded by N, there exists n 2 = n 2 (f ) ∈ N such that, for all n ≥ n 2 , we have S * n ≤ B 2 . Proof. 
Since f has a critical point of order d at c, we have |f ′ (x)| ≥ C 0 |x − c| d−1 for all x ∈ I, for some C 0 = C 0 (f ) > 0. Replacing, if necessary, f by R k f for sufficiently large k, we can assume that C 0 depends in fact only on N. Now, for each i we can write |∆ i+1,n |/|∆ i,n | = |f ′ (x i,n )| for some x i,n ∈ ∆ i,n , by the mean-value theorem. Hence, using that |x i,n − c| ≥ d(c, ∆ i,n ), we have |∆ i,n | 2 |∆ i+1,n | [d(c, ∆ i,n )] d−2 = |∆ i,n | |f ′ (x i,n )| [d(c, ∆ i,n )] d−2 ≤ ≤ C −1 0 |∆ i,n | |x i,n − c| ≤ C −1 0 |∆ i,n | d(c, ∆ i,n ) This shows that S * n ≤ C −1 0 S n for all (sufficiently large) n, and the desired result follows from Lemma 2.3. The C 2 bounds for AHPL-maps In this section we prove that the successive renormalizations of an infinitely renormalizable AHPL-map of bounded combinatorial type are uniformly bounded in the C 2 topology, and the bound are beau. Such bounds will be required when we study the diffeomorphic part of a AHPL-map. The main result of this section can be stated more precisely as follows. Theorem 3.1. Let f : U → V be an infinitely renormalizable, C 3 , AHPL-map of combinatorial type bounded by N ∈ N, and let R n (f ) : U n → V n , n ≥ 1, be the sequence of renormalizations of f . There exists a constant C f > 0 such that R n (f ) C 2 (Un) ≤ C f . Moreover, there exist C = C(N) > 0 and m = m(f ) ∈ N such that R n (f ) C 2 (Un) ≤ C for all n ≥ m. The proof will use the real bounds as formulated in §2.1, Lemma 2.4, as well as the complex bounds established in [11], in the form stated in §3.1 below. In fact, the complex bounds are essential even to make sure that the renormalizations R n f appearing in Theorem 3.1 are well-defined AHPL-maps (see Remark 3.3 below). 3.1. The complex bounds. We conform with the notation introduced earlier when dealing with infinitely renormalizable interval maps, and with AHPL-maps. Theorem 3.2 (Complex bounds). 
Let f : U → V be an AHPL-map and suppose that f | I : I → I is an infinitely renormalizable quadratic unimodal map with combinatorial type bounded by N. There exist C = C(N) > 1 and n 3 = n 3 (f ) ∈ N such that the following statements hold true for all n ≥ n 3 . (i) For each 0 ≤ i ≤ q n −1 there exist Jordan domains U i,n , V i,n , with piecewise smooth boundaries and symmetric about the real axis, such that ∆ i,n ⊂ U i,n ⊂ V i,n , the V i,n are pairwise disjoint, and we have the sequence of surjections U 0,n f − → U 1,n f − → · · · f − → U qn−1,n f − → V 0,n f − → V 1,n f − → · · · f − → V qn−1,n . (ii) For each 0 ≤ i ≤ q n − 1, f i,n = f qn | U i,n : U i,n → V i,n is a well-defined AHPL-map with critical point at f i (c). (iii) We have mod (V i,n \ U i,n ) ≥ C −1 and diam(V i,n ) ≤ C|∆ i,n |, for all 0 ≤ i ≤ q n − 1. (iv) The map f i,n : U i,n → V i,n has a Stoilow decomposition f i,n = φ i,n • g i,n such that K(φ i,n ) ≤ 1 + C|∆ 0,n |, for each 0 ≤ i ≤ q n − 1. This theorem is a straightforward consequence of (a special case of) the complex bounds proved in [11]. Remark 3.3. For each n ≥ 1, consider the linear map Λ n (z) = |∆ 0,n |z, and consider the Jordan domais U n = Λ −1 n (U 0,n ) ⊂ C and V n = Λ −1 n (V 0,n ) ⊂ C. Note that I ⊂ U n ⊂ V n . We define R n f : U n → V n by R n f = Λ −1 n • f 0,n • Λ n . This is the n-th renormalization of f that appears in the statement of Theorem 3.1. Note that the complex bounds given by this theorem guarantee that diam(V n ) ≍ |I|; in particular, the C 0 norms R n f C 0 (Un) are uniformly bounded (by a beau constant). 3.2. Digression on the chain rule. Let φ : U → R n be a C 2 map defined on an open set U ⊂ R n . In matrix form, the second derivative D 2 φ of φ is a n × n 2 matrix obtained by the juxtaposition of the Hessian matrices of each of the n scalar components of φ. 
For instance, in dimension n = 2, the second derivative of a map φ = u + iv is given by the 2 × 4 matrix

D 2 φ = | u xx  u xy  v xx  v xy |
        | u yx  u yy  v yx  v yy |

obtained by adjoining the Hessian matrices of the two components of φ. Now, if U, V, W ⊆ R n are open sets with V ⊆ W , and if ψ : U → V and φ : W → R n are both C 2 , then the composition φ • ψ is C 2 , and

D 2 (φ • ψ) = D 2 φ • ψ · (Dψ ⊗ Dψ) + Dφ • ψ · D 2 ψ . (3.1)

This is the chain rule for the second derivative of a composition in matrix form. Here, we denote by A ⊗ B the tensor (or Kronecker) product of two square matrices A, B of the same size; thus, in our case Dψ ⊗ Dψ is a square n 2 × n 2 matrix. For a proof of this formula, see [46]. We will in fact need a formula for the second derivative of an (arbitrarily high) iterate of a given map. We formulate it as a lemma. Lemma 3.4. Let φ : U → R n , U ⊆ R n open, be a C 2 map. Then for each k ≥ 1 we have

D 2 φ k = D 2 φ • φ k−1 · (Dφ k−1 ) ⊗2 + Σ k−1 j=1 Dφ k−j • φ j · D 2 φ • φ j−1 · (Dφ j−1 ) ⊗2 ,

wherever the k-th iterate φ k is defined. Proof. This is easily established from (3.1) by induction (write φ k+1 = φ • φ k for the induction step). Of course, in this paper we will only need these formulas in dimension n = 2. 3.3. Proof of Theorem 3.1. Here we prove our first main result, namely Theorem 3.1. It is natural to divide the proof into two steps: in the first step we bound the C 1 norms of renormalizations, and in the second step we bound the C 2 norms. Throughout the proof, we shall successively denote by C 0 , C 1 , C 2 , . . . positive constants that are either absolute or depend only on the constants given by the real and complex bounds. Also, in the estimates to follow we use the operator norm on matrices; to wit, we define ∥A∥ = sup |v|=1 |Av| (here, |v| denotes the Euclidean norm of the vector v). This norm has the advantage of being sub-multiplicative, which is to say that ∥AB∥ ≤ ∥A∥ · ∥B∥ whenever the product AB is well-defined.
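Returning to Lemma 3.4: in dimension one it reduces to the scalar identity (φ^k)'' = φ''∘φ^{k−1} · ((φ^{k−1})')² + Σ_{j=1}^{k−1} (φ^{k−j})'∘φ^j · φ''∘φ^{j−1} · ((φ^{j−1})')². As a numerical sanity check (not part of the paper; the quadratic map below is arbitrary), the sketch compares this closed-form sum with the second derivative of the iterate obtained by a direct forward recursion.

```python
import math

# Sanity check of Lemma 3.4 in dimension n = 1 for phi(x) = 1 - a*x^2,
# an arbitrary unimodal map chosen for illustration.
a = 1.3
phi   = lambda x: 1.0 - a * x * x
dphi  = lambda x: -2.0 * a * x
d2phi = lambda x: -2.0 * a

def orbit(x, m):
    """[x, phi(x), ..., phi^m(x)]"""
    ys = [x]
    for _ in range(m):
        ys.append(phi(ys[-1]))
    return ys

def d_iter(x, m):
    """(phi^m)'(x) by the ordinary first-derivative chain rule."""
    prod = 1.0
    for y in orbit(x, m)[:m]:
        prod *= dphi(y)
    return prod

def d2_iter_recursive(x, k):
    """(phi^k)''(x) by forward recursion on (value, 1st, 2nd derivative)."""
    y, d1, d2 = x, 1.0, 0.0
    for _ in range(k):
        y, d1, d2 = phi(y), dphi(y) * d1, d2phi(y) * d1 * d1 + dphi(y) * d2
    return d2

def d2_iter_lemma(x, k):
    """(phi^k)''(x) by the closed-form sum of Lemma 3.4 with n = 1."""
    ys = orbit(x, k)
    total = d2phi(ys[k - 1]) * d_iter(x, k - 1) ** 2
    for j in range(1, k):
        total += d_iter(ys[j], k - j) * d2phi(ys[j - 1]) * d_iter(x, j - 1) ** 2
    return total

for x0 in (0.1, -0.3, 0.45):
    for k in (1, 2, 5):
        assert math.isclose(d2_iter_recursive(x0, k), d2_iter_lemma(x0, k),
                            rel_tol=1e-9, abs_tol=1e-9)
```

In dimension two the same comparison goes through with the matrices and Kronecker products of (3.1) in place of the scalar products; the one-dimensional case already exercises the combinatorics of the sum.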
It also satisfies A ⊗ B ≤ A · B . Bounding the C 1 norms. First we prove that the sucessive renormalizations of f are uniformly bounded in the C 1 topology, with beau bounds. We will prove a bit more than what is required. Let us fix n ∈ N so large that the real and complex bounds given by Theorem 2.1 and Theorem 3.2 hold true for R n f . We divide our argument into a series of steps. (i) Replacing f by a sufficiently high renormalization we may assume, using Corollary 2.2, that the C 2 norm of f | I is bounded by a beau constant (that depends only on N). In particular, there exists an open complex neighborhood O of the dynamical interval I ⊂ R, with O ⊆ U, such that f C 2 (O) ≤ C 0 . And, because the critical point c has order d, we may also assume that Df (y) ≤ C 0 |y − c| d−1 and D 2 f (y) ≤ C 0 |y − c| d−2 for all y ∈ O. (ii) We may assume that n is so large that V i,n ⊂ O for all i. This is possible because, by the complex bounds (Theorem 3.2), diam(V i,n ) ≍ |∆ i,n |, and therefore the V i,n shrink exponentially fast as n → ∞, by the real bounds. (iii) Let j, k be positive integers such that 1 ≤ j < j + k ≤ q n . Then for each x ∈ ∆ j,n we have, by Theorem 2.1, C −1 1 |∆ j+k,n | |∆ j,n | ≤ Df k (x) = |(f k ) ′ (x)| ≤ C 1 |∆ j+k,n | |∆ j,n | . (3.2) (iv) Given x ∈ ∆ j,n and y ∈ U j,n , let us write x i = f i (x), y i = f i (y) for all i = 0, 1, . . . , k. By step (i), and since f has a critical point at c of order d, we have Df (x i ) − Df (y i ) [d(c, ∆ i+j,n )] d−2 ≤ C 2 |x i − y i | ≤ C 3 |∆ i+j,n | , (3.3) for i = 0, 1, . . . , k − 1. From (3.3) we obviously have Df (y i ) ≤ Df (x i ) + C 3 |∆ i+j,n | · [d(c, ∆ i+j,n )] d−2 ,(3.4) for i = 0, 1, . . . , k − 1. (v) By the chain rule for first derivatives, we have Df k (y) ≤ k−1 i=0 Df (y i ) ≤ k−1 i=0 Df (y i ) . (3.5) (vi) Using (3.4) and (3.5) we get Df k (y) ≤ k−1 i=0 Df (x i ) + C 3 |∆ i+j,n | · [d(c, ∆ i+j,n )] d−2 ≤ k−1 i=0 Df (x i ) · k−1 i=0 1 + C 3 |∆ i+j,n | Df (x i ) [d(c, ∆ i+j,n )] d−2 . 
(3.6) (vii) But since x i is real (and f preserves the real line), we have k−1 i=0 Df (x i ) = k−1 i=0 f ′ (x i ) = Df k (x) . (3.7) Moreover, for each i = 0, 1, . . . , k we have Df (x i ) = |f ′ (x i )| ≍ |∆ i+j+1,n | |∆ i+j,n | . (3.8) (viii) Putting (3.7) and (3.8) back into (3.6), we get Df k (y) ≤ Df k (x) · k−1 i=0 1 + C 4 |∆ i+j,n | 2 |∆ i+j+1,n | [d(c, ∆ i+j,n )] d−2 . (3.9) But now, using Lemma 2.4, we see that the product in the right-hand side of (3.9) is uniformly bounded, because k−1 i=0 1 + C 4 |∆ i+j,n | 2 |∆ i+j+1,n | [d(c, ∆ i+j,n )] d−2 ≤ exp C 4 k−1 i=0 |∆ i+j,n | 2 |∆ i+j+1,n | [d(c, ∆ i+j,n )] d−2 ≤ exp C 4 qn−1 i=1 |∆ i,n | 2 |∆ i+1,n | [d(c, ∆ i,n )] d−2 = exp{C 4 S * n } ≤ exp{B 2 C 4 } . (3.10) (ix) Hence we have proved that Df k (y) ≤ C 5 Df k (x) , for all y ∈ U j,n and all x ∈ ∆ j,n . From (3.2), it follows that Df k (y) ≤ C 6 |∆ j+k,n | |∆ j,n | , for all y ∈ U j,n . (3.11) In particular, taking j = 1 and k = q n − 1, we see that the first derivative of the map f qn−1 | U 1,n : U 1,n → V 0,n satisfies 2 Df qn−1 (y) ≤ C 6 |∆ 0,n | |∆ 1,n | , for all y ∈ U 1,n . (3.12) (x) On the other hand, since f has a critical point of order d at c = 0, the restriction f | U 0,n : U 0,n → U 1,n satisfies Df (y) ≤ C 7 |y| d−1 ≤ C 8 |∆ 0,n | d−1 for all y ∈ U 0,n (we are implicitly using step (i) here). Combining this fact with step (ix), (3.12), and using the chain rule, we see that the first derivative of the map f 0,n = f qn | U 0,n = f qn−1 | U 1,n • f | U 0,n : U 0,n → V 0,n satisfies Df qn (y) ≤ C 9 |∆ 0,n | d |∆ 1,n | , for all y ∈ U 0,n . (3.13) But, again using that the critical point has order d, we have |∆ 1,n | ≍ |∆ 0,n | d . Putting this information back in (3.13), we deduce that Df 0,n C 0 (U 0,n ) = Df qn C 0 (U 0,n ) ≤ C 10 . Therefore DR n f C 0 (Un) ≤ C 10 also, since R n f is a simply a linearly rescaled copy of f 0,n . 
This shows that the successive renormalizations of f around the critical point are indeed uniformly bounded in the C 1 topology, and the bounds are beau. Bounding the C 2 norms. We now move to the task of bounding the second derivatives of the renormalizations of f . Here we use the chain rule for the second derivative of a (long) composition, as given by Lemma 3.4. Once again, we break the proof into a series of (short) steps. (xi) Since R n f = Λ −1 n • f 0,n • Λ n , with Λ n (z) = |∆ 0,n |z, we have D 2 R n f C 0 (Un) ≤ |∆ 0,n | · D 2 f 0,n C 0 (U 0,n ) . (3.14) We need to bound the norm on the right-hand side of (3.14). (xii) Recall from step (x) the decomposition f 0,n = f qn−1 | U 1,n • f | U 0,n . By the chain rule for second derivatives, for each y ∈ U 0,n we have D 2 f 0,n (y) = D 2 f qn−1 (f (y))Df (y) ⊗2 + Df qn−1 (f (y))D 2 f (y) . (3.15) Note from step (i) that D 2 f (y) ≤ C 0 |y − c| d−2 ≤ C 11 |∆ 0,n | d−2 . Moreover, applying (3.12) with y replaced by f (y), we have Df qn−1 (f (y)) ≤ C 6 |∆ 0,n | |∆ 1,n | . (3.16) These two estimates combined yield an upper bound for the matrix norm of the second summand in the right-hand side of (3.15), namely Df qn−1 (f (y))D 2 f (y) ≤ C 12 |∆ 0,n | d−1 |∆ 1,n | ,(3.17) where C 12 = C 6 C 11 . (xiii) It remains to bound the matrix norm of the first summand in the right-hand side of (3.15). Applying Lemma 3.4 with φ = f and k = q n − 1 to any point z ∈ U 1,n , we have D 2 f qn−1 (z) = D 2 f (f qn−2 (z))(Df qn−2 (z)) ⊗2 (3.18) + qn−2 j=1 Df qn−j−1 (f j (z))D 2 f (f j−1 (z))(Df j−1 (z)) ⊗2 , Note that D 2 f (f qn−2 (z)) ≤ C 0 , by step (i). Since f j−1 (z) ∈ U j,n ⊂ O, it also follows from step (i) that D 2 f (f j−1 (z)) ≤ C 0 |f j−1 (z) − c| d−2 ≤ C 13 [d(c, ∆ j,n )] d−2 , for all j ≤ q n . Using this information in (3.18), we get D 2 f qn−1 (z) ≤ C 0 Df qn−2 (z) 2 (3.19) + C 13 qn−2 j=1 Df qn−j−1 (f j (z)) Df j−1 (z) 2 [d(c, ∆ j,n )] d−2 . (xiv) We now need to bound the norms on the right-hand side of (3.19). 
Using the estimate (3.11) given in step (ix), we have Df qn−2 (z) ≤ C 6 |∆ qn−1,n | |∆ 1,n | ,(3.20) as well as Df qn−j−1 (f j (z)) ≤ C 6 |∆ qn−1,n | |∆ j+1,n | , (3.21) and Df j−1 (z) ≤ C 6 |∆ j,n | |∆ 1,n | ,(3.D 2 f qn−1 (z) ≤ C 14 |∆ qn−1,n | 2 |∆ 1,n | 2 + qn−2 j=1 |∆ qn−1,n | |∆ j+1,n | |∆ j,n | 2 |∆ 1,n | 2 [d(c, ∆ j,n )] d−2 . (3.23) (xv) Now we note that |∆ qn−1,n | ≍ |∆ 0,n |, by the real bounds. 3 Using this information in (3.23), we deduce that D 2 f qn−1 (z) ≤ C 15 |∆ 0,n | |∆ 1,n | 2 |∆ 0,n | + qn−2 j=1 |∆ j,n | 2 |∆ j+1,n | [d(c, ∆ j,n )] d−2 . (3.24) Applying Lemma 2.4, we see that the sum inside square-brackets in the right-hand side of (3.24) is bounded (by a beau constant). Hence we have established that D 2 f qn−1 (z) ≤ C 16 |∆ 0,n | |∆ 1,n | 2 . (3.25) (xvi) Carrying the estimates (3.17) and (3.25) back into (3.15), we deduce that D 2 f 0,n (y) ≤ C 17 |∆ 0,n | 2d−1 |∆ 1,n | 2 + |∆ 0,n | d−1 |∆ 1,n | (3.26) This inequality is established for all y ∈ U 0,n . (xvii) Finally, combining (3.26) with (3.14), we get D 2 R n f C 0 (Un) ≤ C 18 |∆ 0,n | 2d |∆ 1,n | 2 + |∆ 0,n | d |∆ 1,n | . Using once again the fact that |∆ 1,n | ≍ |∆ 0,n | d , we deduce at last the inequality D 2 R n f C 0 (Un) ≤ C 20 . Hence the successive renormalizations of f are uniformly bounded in the C 2 topology, as claimed (and the bounds are beau). This finishes the proof of Theorem 3.1. Remark 3.5. If we consider the Stoilow decomposition R n f = φ n • g n coming from Theorem 3.2(iv), where g n : U n → V n is a d-to-1 holomorphic branched covering map, and φ n : V n → V n is an asymptotically holomorphic diffeomorphism, then it is possible to prove, using similar estimates, that φ n C 2 (Vn) , φ −1 n C 2 (Vn) and g n C 2 (Un) are uniformly bounded, and the bounds are beau. 3 We have |∆ 0,n | = |f ′ (ξ)||∆ qn −1,n | for some ξ ∈ ∆ qn−1,n , by the mean value theorem, so |∆ 0,n | ≤ C 0 |∆ qn−1,n | (where C 0 is the constant of step (i)). 
An inequality in the opposite direction follows from the fact, due to Guckenheimer (and using [48,Theorem IV.B] if f is not symmetric), that when f | I has negative Schwarzian derivative, the renormalization interval containing the critical point is the largest among all renormalization intervals at its level. Here we have not assumed the negative Schwarzian property for f , but it can be proved that R n f | I has this property for all sufficiently large n. For details, see [17, p. 760]. Controlling the distortion of hyperbolic metrics This section is a conformal/quasiconformal intermezzo. Here we develop the distortion tools that will be used in the proof of Theorem 5.4 in §5. We believe that these toolsespecially those concerning the control of infinitesimal distortion of hyperbolic metric by an asymptotically conformal diffeomorphism, see Proposition 4.14 (for self-maps of the disk) and Theorem 4.15 (for other domains) -are of independent interest, and may find applications in other topics of study, such as Riemann surface theory. 4.1. Comparison of hyperbolic metrics. We view any non-empty open set Y ⊂ C whose complement has at least two points as a hyperbolic Riemann surface. As such, Y admits a conformal metric of constant negative curvature equal to −1, the so-called hyperbolic or Poincaré metric of Y . We denote by ρ Y (z)|dz| this metric; ρ Y (z) is the Poincaré density at z ∈ Y . Integrating this metric along a given rectifiable path γ ⊂ Y , we get its hyperbolic length ℓ Y (γ). This gives rise to a distance d Y in the usual way: for any given pair of points z, w ∈ Y , we set d Y (z, w) = inf ℓ Y (γ), where γ ranges over all paths joining z to w (this will be equal to ∞ if z and w lie in distinct components of Y ). We call d Y the hyperbolic distance of Y . Accordingly, given E ⊆ Y , we denote by diam Y (E) the hyperbolic diameter of E. 
We also use the following notation: if z ∈ Y and v ∈ T z Y is a tangent vector to Y at z, then we write |v| Y for the hyperbolic length of v (i.e., the length of v in the above infinitesimal conformal metric). Thus, when Y is the upper or lower half-plane, we have ρ Y (z) = |Im z| −1 . When Y is the disk of center z 0 ∈ C and radius R > 0, we have ρ Y (z) = 2R R 2 − |z − z 0 | 2 . (4.1) In the case of the unit disk, one can easily compute that d D (0, z) = log 1 + |z| 1 − |z| . This yields the following elementary estimate which will be used in §5.1 (see Remark 5.2). Lemma 4.1. Let 0 ∈ E ⊂ D and 0 < δ ≤ 1. If z ∈ D is any point whose distance to the boundary of D is at least δ, and if w ∈ E, then d D (z, w) ≤ diam D (E) + log 1 δ . The well-known Schwarz lemma states that any holomorphic map ϕ : X → Y between two hyperbolic Riemann surfaces weakly contracts the underlying hyperbolic metrics. In other words, |Dϕ(z)v| Y ≤ |v| X for all z ∈ X and every tangent vector v ∈ T z X. If equality holds for some z even at a single non-zero vector v ∈ T z X, then ϕ is a local isometry between (a component of) X and (a component of) Y . In particular, if X is connected and X ⊂ Y is a strict inclusion, and ϕ : X → Y is the inclusion map, then ϕ is a strict contraction of the hyperbolic metrics. This leads, in the case when X is connected and X ⊂ Y ⊂ C, to the strict monotonicity of Poincaré densities: ρ X (z) > ρ Y (z) for all z ∈ X. The following comparison of Poincaré densities follows from monotonicity and will prove useful later. Lemma 4.2. Let Y ⊆ C \ R be an non-empty open set, and let z, w ∈ Y be such that Re z = Re w and |Im z| ≤ |Im w|. If z ∈ D(w, |Im w|) ⊆ Y , then 1 |Im z| ≤ ρ Y (z) ≤ 1 |Im z| 1 − 1 2 |Im z| |Im w| −1 . (4.2) Proof. Look at the inclusions D(w, |Im w|) ⊆ Y ⊆ C \ R and use (4.1) with z 0 = w and R = |Im w|. 4.2. Expansion of hyperbolic metric. It so happens that contraction sometimes leads to expansion. 
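The closed-form distance d_D(0, z) = log[(1 + |z|)/(1 − |z|)] and the comparison in Lemma 4.2 are elementary but easy to get wrong by a factor of 2, so a quick numerical sanity check may be welcome. The following sketch (our own illustration, with made-up helper names, not part of the original argument) integrates the Poincaré density of the disk along a radius and tests the bound (4.2) in the special case Re z = Re w:

```python
import math

def hyp_dist_origin(r, steps=100_000):
    """Hyperbolic distance from 0 to r in D, by midpoint-rule integration
    of the Poincare density rho_D(t) = 2/(1 - t^2) along [0, r]."""
    h = r / steps
    return sum(2.0 / (1.0 - ((i + 0.5) * h) ** 2) * h for i in range(steps))

def closed_form(r):
    return math.log((1 + r) / (1 - r))

for r in (0.1, 0.5, 0.9):
    assert abs(hyp_dist_origin(r) - closed_form(r)) < 1e-6

# Lemma 4.2 in the special case Re z = Re w: take z = x + iy, w = x + iY
# with 0 < y <= Y.  The Poincare density of the disk D(w, Y) at z is
# 2Y/(Y^2 - (Y - y)^2) by (4.1); monotonicity squeezes rho_Y(z) between
# this value and 1/y (the density of C \ R).  Since z sits right below w,
# the upper bound in (4.2) is attained exactly by the disk density.
x, y, Y = 0.3, 0.2, 1.0
rho_disk = 2 * Y / (Y ** 2 - (Y - y) ** 2)
assert 1.0 / y <= rho_disk <= (1.0 / y) / (1 - 0.5 * y / Y) + 1e-12
```

The check confirms, in particular, that the factor 2 in (4.1) is what makes the disk of curvature −1 rather than −4.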
If ψ : X → Y is a bi-holomorphic map between two hyperbolic Riemann surfaces and X ⊂ Y, then the inverse ψ^{−1}, viewed as a map from Y into Y, can be written as a composition of ψ^{−1} : Y → X with the inclusion X ⊂ Y. The first map in the composition is an isometry between the underlying hyperbolic metrics, whereas the second map is a contraction. Therefore ψ expands the hyperbolic metric of Y. In the present paper, we shall need a more quantitative version of this fact. This is given by the following lemma due to McMullen (see [47]).

Lemma 4.3. Let X ⊂ Y ⊆ C be hyperbolic Riemann surfaces and let ψ : X → Y be a bi-holomorphic map. Then for every x ∈ X and every tangent vector v ∈ T_x X we have

|Dψ(x)v|_Y ≥ Φ(s_{X,Y}(x))^{−1} |v|_X , (4.3)

where s_{X,Y}(x) = d_Y(x, Y \ X) and Φ(·) is the universal function given by⁴

Φ(s) = sinh(s) log[(1 + e^{−s}) / (1 − e^{−s})] . (4.4)

We remark that Φ(s) is a continuous monotone increasing function with Φ(0) = 0 and Φ(∞) = 1. Instead of (4.4), we shall need merely the estimate

Φ(s) < 1 − (1/3) e^{−2s} . (4.5)

This estimate is valid provided s > (1/2) log 2, and is easily proved with the help of Taylor's formula.

4.3. Non-linearity and conformal distortion. We will also need certain well-known results concerning the geometric distortion of holomorphic univalent maps. For details and some background, we recommend [16, §3.8]. Let ϕ : V → C be a holomorphic univalent map defined on an open set V ⊂ C. Then we have Koebe's pointwise estimate on the non-linearity ϕ′′/ϕ′; to wit, for every z ∈ V we have

|ϕ′′(z) / ϕ′(z)| ≤ 4 / dist(z, ∂V) , (4.6)

where dist(·, ·) denotes euclidean distance. This form of pointwise control of the non-linearity of ϕ has the following geometric consequence. Suppose D ⊂ V is a compact convex subset, and write

N_ϕ(D) = diam(D) sup_{z∈D} |ϕ′′(z) / ϕ′(z)| . (4.7)

Then for all z, w ∈ D we have

e^{−N_ϕ(D)} ≤ |ϕ′(z) / ϕ′(w)| ≤ e^{N_ϕ(D)} . (4.8)

When D is not convex, we can still get an estimate like (4.8) by covering D with small disks. The following result is by no means the sharpest of its kind, but it will be quite sufficient for our purposes. Lemma 4.4.
Let ϕ : V → C be holomorphic univalent, and let W ⊂ V be a non- empty compact connected set. Suppose M > 1 is such that 1 ≤ diam(V ) ≤ M and dist(∂V, ∂W ) ≥ M −1 . Also, let z 0 ∈ W be given. Then the following assertions hold. (i) There exists K 1 = K 1 (M) > 1 such that, for all z, w ∈ W , we have 1 K 1 ≤ ϕ ′ (z) ϕ ′ (w) ≤ K 1 . (4.9) In fact, we can take K 1 = e 32πM 4 . (ii) There exists K 2 = K 2 (M) > 0 such that max { ϕ ′ | W C 0 , ϕ ′′ | W C 0 } ≤ K 2 |ϕ ′ (z 0 )|. Proof. Cover W with a finite number m of non-overlapping closed squares Q j , 1 ≤ j ≤ m, each Q j having the same side ℓ = (2 √ 2M) −1 , and take m to be the smallest possible. Then Q j ∩ W = Ø, the diameter of Q j is (2M) −1 , and dist(Q j , ∂V ) ≥ (2M) −1 , for each 1 ≤ j ≤ m. Since the total area of these squares cannot exceed the area of V , which is less than πM 2 , we see that m < 8πM 4 . Moreover, from Koebe's estimate (4.7) we have for each j N ϕ (Q j ) ≤ (2M) −1 · 4 (2M) −1 = 4 . Now, since W is connected, given any pair of points z, w ∈ W , we can join them by a chain of pairwise distinct squares Q j 1 , Q j 2 , . . . , Q jn such that Q j k ∩ Q j k+1 = Ø, with z ∈ Q j 1 and w ∈ Q jn , say. Choose z k ∈ Q j k ∩ Q j k+1 for k = 1, 2, . . . , n − 1, and set z 0 = z, z n = w. Use (4.8) to get ϕ ′ (z) ϕ ′ (w) = n−1 k=0 ϕ ′ (z k ) ϕ ′ (z k+1 ) ≤ exp n k=1 N ϕ (Q j k ) ≤ e 4m . This establishes the upper bound in (4.9); the lower bound is obtained in the same way, or simply interchanging z and w. Hence assertion (i) is proved. Assertion (ii) follows from assertion (i) and the inequality (4.6). Quasiconformality and holomorphic motions. We need some non-trivial facts from the theory of quasiconformal mappings. Good references for what follows are [1] and [3]. Given a quasiconformal homeomorphism φ, we write µ φ (z) for the Beltrami form of φ at z, and K φ (z) = (1 + |µ φ (z)|)/(1 −|µ φ (z)|) for the dilatation of φ at z. 
We also denote by K_φ the maximal dilatation of φ, namely the supremum of K_φ(z) over all z in the domain of φ.

Lemma 4.5. Let φ : C → C be a K-quasiconformal homeomorphism. Then for each z ∈ C and all r > 0 and s > 0 we have

max_{|ζ−z|=rs} |φ(ζ) − φ(z)| / min_{|ζ−z|=s} |φ(ζ) − φ(z)| ≤ e^{πK} max{ r^{K} , r^{1/K} } .

Lemma 4.6. Let φ : D → C be a quasiconformal embedding of the disk with φ(0) = 0, and let 0 < r < 1. Then the restriction φ|_{D(0,r)} admits a homeomorphic K-quasiconformal extension to the entire plane, where K = [(1 + r)/(1 − r)] K_φ.

This lemma and its proof can be found in [3, p. 310]. We shall need also the following rather non-trivial result due to Slodkowski. Recall that a holomorphic motion of a set E ⊆ C is a map F : ∆ × E → C, where ∆ ⊂ C is a disk, such that (i) for each z ∈ E, the map t → F(t, z) is holomorphic in ∆; (ii) for each t ∈ ∆, the map ϕ_t : E → C given by ϕ_t(z) = F(t, z) is injective; (iii) for a certain t_0 ∈ ∆ we have ϕ_{t_0}(z) = z for all z ∈ E. The point t_0 is called the base point of the motion.

Theorem 4.7 (Slodkowski). Let F : ∆ × E → C be a holomorphic motion of a set E ⊆ C with base point t_0. Then there exists a map F̃ : ∆ × C → C with the following properties.

(i) The map F̃ is a holomorphic motion of C which extends F (in the sense that F̃(t, z) = F(t, z) for all z ∈ E and all t ∈ ∆).

(ii) For each t ∈ ∆, the map ψ_t(z) = F̃(t, z) is a global K_t-quasiconformal homeomorphism with K_t ≤ exp{d_∆(t, t_0)} (where d_∆ denotes the hyperbolic metric of ∆).

The following lemma contains a well-known result stating that every quasiconformal homeomorphism can be embedded in a holomorphic motion (see [3, ch. 12]). It will be used in combination with Slodkowski's theorem.

Lemma 4.8. Let ψ : C → C be a quasiconformal homeomorphism with k = ‖µ_ψ‖_∞ ≠ 0, and let z_0 ∈ C be such that ψ(z_0) = z_0.

(i) There exists a holomorphic motion ψ_t : C → C, t ∈ D, such that ψ_k = ψ and ψ_t(z_0) = z_0 for all t.

Proof. We may assume that z_0 = 0 (otherwise we simply conjugate ψ by the translation z → z − z_0 and work with the resulting map, which fixes 0).
For each t ∈ D, let ϕ_t : C → C be the unique solution to the Beltrami equation ∂̄ϕ_t = (t/k) µ_ψ ∂ϕ_t, normalized so that ϕ_t fixes 0, 1 and ∞. Define ψ_t : C → C by the formula

ψ_t(ζ) = [1 + (t/k)(ψ(1) − 1)] ϕ_t(ζ) . (4.11)

Note that ψ_t(0) = 0 for all t. Also, for t = k, we have ψ_k(ζ) = ψ(1)ϕ_k(ζ), so ψ_k(1) = ψ(1). Since the Beltrami form of ψ_k is the same as the Beltrami form of ϕ_k, which is µ_ψ, it follows from uniqueness of normalized solutions to the Beltrami equation that ψ_k = ψ. This proves (i).

Applying Lemma 4.5 to φ = ϕ_t, z = 0 and s = 1, we see that for all 0 < r < 1

max_{|ζ|=r} |ϕ_t(ζ)| ≤ e^{πK_t} r^{1/K_t} ,

where K_t is the maximal dilatation of ϕ_t, which satisfies K_t ≤ (1 + |t|)/(1 − |t|). In particular, since K_t < 3 for all t with |t| < 1/2, we have

ϕ_t(D(0, r)) ⊆ D(0, e^{3π} r^{1/3}) . (4.12)

Let us now estimate the scaling factor multiplying ϕ_t(ζ) on the right-hand side of (4.11). Applying Lemma 4.5 with φ = ψ, z = 0, s = r_0 and r = r_0^{−1}, and taking into account that the maximal dilatation of ψ is less than 3, we get, in particular, |ψ(1) − 1| ≤ 2Me^{3π} r_0^{−2}, and therefore, for all t with |t| ≤ 1/2,

|1 + (t/k)(ψ(1) − 1)| ≤ 2Me^{3π} k^{−1} r_0^{−2} .

Therefore ψ_t(D(0, r)) ⊆ D(0, R) for all t with |t| ≤ 1/2 and all 0 < r < 1, where R is given by (4.10). This proves (ii).

4.5. Quasi-isometry estimates for almost conformal maps. Our goal in this subsection is to make more precise a somewhat vague but intuitive assertion, namely that if a self-map of a hyperbolic domain (or Riemann surface) is almost conformal, then it is an almost isometry of the hyperbolic metric. For the sake of the dynamical applications we have in mind, what is needed is an infinitesimal version of this statement. The desired infinitesimal quasi-isometry property will be presented in two versions. In the first version we deal with the case when the quasiconformal map has small dilatation everywhere, and the quasi-isometry bounds we get are in terms of this global small dilatation.
In the second version we deal with the situation when the map is K-quasiconformal (with K not necessarily small) but the quasi-isometry bounds we get are local, near any point z ∈ D where the dilatation is bounded by some fixed power of the distance between z and ∂D. This last version is precisely what we need when studying the metric distortion properties of maps which are asymptotically holomorphic. Both versions are first established for quasiconformal diffeomorphisms of the unit disk, but at the end of this subsection we show how to transfer these results to the kind of simply-connected regions that matter to us.

First, let us introduce some notation. We denote by ρ_D(z) = 2(1 − |z|²)^{−1} the Poincaré density of the unit disk, as before. We also denote by ∆_z ⊂ D the closed euclidean disk {ζ : |ζ − z| ≤ (1/2)(1 − |z|)}. Given a C² map φ : D → D, we denote by m_φ(z) the C² norm of φ|_{∆_z}. We write J_φ(z) = det Dφ(z) for the euclidean Jacobian of φ at z, and

J^h_φ(z) = J_φ(z) [ρ_D(φ(z)) / ρ_D(z)]²

for the hyperbolic Jacobian of φ at z.

Proposition 4.9. For each 0 < θ < 1, there exists a universal continuous function A_θ : (1, ∞) × R⁺ → R⁺ for which the following holds. Let 0 < ǫ < 1 and α > 1 be given, and suppose φ : D → D is a C² quasiconformal diffeomorphism with K_φ ≤ 1 + ǫ. If z ∈ D is such that

α^{−1} ≤ ρ_D(φ(z)) / ρ_D(z) ≤ α , (4.13)

then J^h_φ(z) ≤ 1 + A_θ(α, m_φ(z)) ǫ^{1−θ}.

The proof, given later in this subsection, will use the following three lemmas.

Lemma 4.10. Let z ∈ D and 0 < r < 1 − |z|. Then

mod(D \ D(z, r)) ≤ log[(1 − |z|² + r|z|) / r] . (4.14)

Proof. We may assume that z is real and non-negative, say z = x ∈ [0, 1). Let ϕ ∈ Aut(D) be given by ϕ(ζ) = (ζ − x)/(1 − xζ), and define

α = ϕ(x − r) = −r / (1 − x² + rx) ; β = ϕ(x + r) = r / (1 − x² − rx) .

Then D′_r = ϕ(D(x, r)) is a disk with diameter (α, β) ⊂ (−1, 1). Since |α| ≤ β, we see that D′_r ⊇ D(0, |α|). Therefore

mod(D \ D(x, r)) = mod(D \ D′_r) ≤ mod(D \ D(0, |α|)) = log(1/|α|) = log[(1 − x² + rx) / r] ,

and this finishes the proof. Remark 4.11.
It follows from (4.14) that mod(D \ D(z, r)) ≤ log 2 r . This estimate will be useful when r is small compared to the distance from z to ∂D. If r = 1 2 δ(1 − |z|) with 0 < δ ≤ 1, then an easy manipulation of the right-hand side of (4.14) yields the estimate mod(D \ D(z, r)) ≤ log 5 δ . This remark will be used in the proof of Lemma 4.13 below. Lemma 4.12. Let α > 1 and suppose z, w ∈ D are such that α −1 ≤ ρ D (z) ρ D (w) ≤ α ,(4. 15) Then there exists ψ ∈ Aut(D) with ψ(z) = w such that the following inequalities hold for all ζ ∈ ∆ z : (i) 1 2α ≤ |ψ ′ (ζ)| ≤ 4α 2 ; (ii) |ψ ′′ (ζ)| ≤ 16α 3 . Proof. Write a = |z| and b = |w|, so that 0 ≤ a, b < 1. We have 1 − a 2 = ρ D (z) −1 and 1 − b 2 = ρ D (w) −1 , so (4.15) tells us that α −1 ≤ 1 − a 2 1 − b 2 ≤ α . (4.16) Let ϕ ∈ Aut(D) be the hyperbolic translation with axis (−1, 1) ⊂ D such that ϕ(a) = b. Then ϕ(ζ) = ζ − c 1 − cζ , where c = (a − b)/(1 − ab) ∈ (−1, 1) , as a simple calculation shows. Moreover, we have ϕ ′ (ζ) = 1 − c 2 (1 − cζ) 2 ,(4.17) as well as ϕ ′′ (ζ) = 2c(1 − c 2 ) (1 − cζ) 3 ,(4.18)Since 1 − c 2 = (1 − a 2 )(1 − b 2 )/(1 − ab) 2 , and since min{1 − a 2 , 1 − b 2 } ≤ 1 − ab ≤ max{1 − a 2 , 1 − b 2 }, it follows from (4.16) that α −1 ≤ 1 − c 2 ≤ 1 .(4.19) Now, if ζ ∈ ∆ a , then |ζ| ≤ (1 + a)/2. Hence |1 − cζ| ≥ 1 − |c| 1 + a 2 = 1 − |c| 2 + 1 − |c|a 2 > 1 − |c|a 2 . Here, there are two cases to consider. If a ≥ b, then c ≥ 0 and 1 − |c|a = 1 − ca = (1 − a 2 )/(1 − ab), so from (4.16) we deduce that 1 − |c|a ≥ α −1 . If however a < b, then c < 0, and in this case we see that 1 − |c|a = 1 − b 2 + (b − a) 2 1 − ab > 1 − b 2 1 − a 2 ≥ α −1 , where once again we have used (4.16). Thus, in either case we have 1 2α ≤ |1 − cζ| < 2 , for all ζ ∈ ∆ a . (4.20) Using both (4.19) and (4.20) in (4.17) and(4.18), we easily arrive at inequalities (i) and (ii) with ϕ replacing ψ (and ∆ a replacing ∆ z ). 
Finally, we define ψ = R b • ϕ • R a , where R a is the rigid rotation around 0 with R a (z) = a, and R b is the rigid rotation around 0 with R b (b) = w. Then ψ(z) = w, and since R a , R b are euclidean isometries and R a (∆ z ) = ∆ a , the inequalities (i) and (ii) for ψ follow from the corresponding inequalities for ϕ. For our final lemma, we introduce further notation. Given a C 2 map φ : D → D, a point z ∈ D and 0 < δ ≤ 1, we denote by m φ (z, δ) the C 2 norm of the restriction of φ to the disk {ζ : |ζ − z| ≤ δr z }, where r z = 1 2 (1 − |z|). In particular, m φ (z, 1) = m φ (z). Lemma 4.13. For each 0 < θ < 1 there exists a universal, continuous monotone function B θ : R + → R + such that the following holds. Given 0 < ǫ < 1, let φ : D → D be a C 2 quasiconformal diffeomorphism with K φ ≤ 1 + ǫ, and suppose that z ∈ D is a fixed point of φ. Then for each 0 < δ ≤ 1 we have J h φ (z) ≤ 1 + B θ m φ (z, δ) δ ǫ 1−θ . (4.21) Proof. The basic geometric idea behind the proof is to use macroscopic estimates on the moduli of certain annuli in order to bound a microscopic quantity, namely the hyperbolic Jacobian at z. Rotating the coordinate axes if necessary, we may also assume that Dφ(z) = S · T , where S = ρI = ρ 0 0 ρ , for some ρ > 0, and T = λ b 0 λ −1 , where λ ≥ 1 and b ∈ R. Here we obviously have ρ 2 = det Dφ(z) = J φ (z) = J h φ (z). We shall prove the lemma only in the case when b = 0 and λ > 1. The cases when b = 0 and/or λ = 1 are similarly handled. Note that the linear map Dφ(z) maps the circle of radius 1 about the origin onto an ellipse with major axis ρλ and minor axis ρ/λ. Since φ is (1 + ǫ)-qc, we have λ 2 ≤ 1 + ǫ. In what follows, we assume that ρ > λ + ǫ, as otherwise ρ 2 ≤ (λ + ǫ) 2 ≤ 1 + 6ǫ and there is nothing to prove. 
If ζ is such that |ζ − z| ≤ δr_z we can write, using Taylor's formula and the fact that φ(z) = z,

φ(ζ) = z + Dφ(z) · (ζ − z) + R_φ(ζ) , (4.22)

where the remainder R_φ(ζ) satisfies |R_φ(ζ)| ≤ C|ζ − z|², with C = C_0 m_φ(z, δ) > 0 (and C_0 > 0 an absolute constant). Let us choose 0 < r ≤ δr_z so small that

(ρ/λ) r − Cr² > [ρ/(λ + ǫ)] r . (4.23)

For definiteness, we take

r = min{ δr_z , ρǫ / [Cλ²(λ + ǫ)] } . (4.24)

Then (4.22) and (4.23) tell us that φ maps the disk D(z, r) onto a Jordan domain V_r which contains that disk and also the round annulus Ω = {ζ : r < |ζ − z| < [ρ/(λ + ǫ)] r}. Setting Ω_0 = V_r \ D(z, r), we have Ω_0 ⊇ Ω, and so

mod(Ω_0) ≥ mod(Ω) = log[ρ/(λ + ǫ)] . (4.25)

Consider the images of Ω_0 under the forward iterates of φ, i.e., Ω_n = φⁿ(Ω_0), n ≥ 0. The annuli Ω_n are pairwise disjoint, and ∪_{n=0}^∞ Ω_n ⊂ D \ D(z, r). By sub-additivity of the modulus, we have

Σ_{n=0}^∞ mod(Ω_n) ≤ µ_r = mod(D \ D(z, r)) . (4.26)

Now, since φ is (1 + ǫ)-qc, we know that φⁿ is (1 + ǫ)ⁿ-qc, and therefore

mod(Ω_n) ≥ mod(Ω_0) / (1 + ǫ)ⁿ . (4.27)

Combining (4.25), (4.26) and (4.27), we get

log[ρ/(λ + ǫ)] ≤ [ǫ/(1 + ǫ)] µ_r . (4.28)

Applying Lemma 4.10 and Remark 4.11 to our r as defined in (4.24), we see that

µ_r ≤ log(5/δ) when r = δr_z , and µ_r ≤ log[2Cλ²(λ + ǫ)/(ρǫ)] when r = ρǫ/[Cλ²(λ + ǫ)] . (4.29)

Regardless of which of the two cases occurs, we certainly have

µ_r ≤ log[10Cλ²(λ + ǫ)/(δρǫ)] < log[60C/(δǫ)] , (4.30)

where in the last step we have used that λ²(λ + ǫ) < 6 and ρ > 1. Combining (4.28) and (4.30), we deduce that

log[ρ/(λ + ǫ)] ≤ [ǫ/(1 + ǫ)] log[60C/(δǫ)] < ǫ log(60C/δ) + ǫ log(1/ǫ) . (4.31)

Since 0 < ǫ < 1, we have ǫ < ǫ^{1−θ} and ǫ^θ log(1/ǫ) ≤ (θe)^{−1}. Using these facts in (4.31), we get

ρ ≤ (λ + ǫ) exp{ [ (θe)^{−1} + log(60C/δ) ] ǫ^{1−θ} } (4.32)

≤ 1 + [ 2 + 180e^{1/(θe)} C/δ ] ǫ^{1−θ} , (4.33)

where we have used that λ + ǫ ≤ 1 + 2ǫ. From this, and the fact that C = C_0 m_φ(z, δ), it readily follows that

J^h_φ(z) = ρ² ≤ 1 + 3 [ 2 + 180e^{1/(θe)} C_0 m_φ(z, δ)/δ ]² ǫ^{1−θ} .

This proves (4.21), provided we take B_θ(t) = 3 (2 + 180e^{1/(θe)} C_0 t)².
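The two elementary facts invoked after (4.31), namely that ǫ < ǫ^{1−θ} for 0 < ǫ < 1 and that ǫ^θ log(1/ǫ) ≤ (θe)^{−1} (the maximum being attained at ǫ = e^{−1/θ}), can be confirmed numerically. The snippet below is our own sanity check, not part of the proof:

```python
import math

def g(eps, theta):
    """The quantity eps^theta * log(1/eps) appearing after (4.31)."""
    return eps ** theta * math.log(1.0 / eps)

for theta in (0.1, 0.5, 0.9):
    bound = 1.0 / (theta * math.e)
    # the maximum over (0, 1) is attained at eps = e^{-1/theta},
    # where g takes exactly the value 1/(theta * e) ...
    argmax = math.exp(-1.0 / theta)
    assert abs(g(argmax, theta) - bound) < 1e-9
    # ... and a grid scan over (0, 1) stays below the bound
    assert max(g(i / 10_000, theta) for i in range(1, 10_000)) <= bound + 1e-9
    # also eps < eps^{1-theta} for 0 < eps < 1
    for eps in (0.01, 0.5, 0.99):
        assert eps < eps ** (1 - theta)
```

The critical point follows by one differentiation: (d/dǫ)[ǫ^θ log(1/ǫ)] = 0 forces log(1/ǫ) = 1/θ.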
We are now ready for the proof of the first main result of this subsection. Proof of Proposition 4.9. The idea, of course, is to reduce the required estimate to the case treated in Lemma 4.13. Let ψ ∈ Aut(D) be the conformal automorphism given by Lemma 4.12, with ψ(z) = w = φ(z). Then the diffeomorphism F = ψ −1 • φ : D → D has a fixed point at z. Since ψ −1 is an isometry of the hyperbolic metric, we certainly have J h F (z) = J h φ (z). We would like to estimate J h F (z) using Lemma 4.13. For this, we need an estimate on the C 2 norm of the composition ψ −1 • φ in a suitable disk around z. By Koebe's one-quarter theorem, ψ(∆ z ) contains the disk D = ζ : |ζ − w| < 1 4 |ψ ′ (z)| · r z . Since we know from Lemma 4.12(i) that |ψ ′ (z)| ≥ (2α) −1 , it follows that ψ(∆ z ) ⊃ D(w, R), where R = r z /8α. Now let us define δ = 1 8αm φ (z) and M = sup ζ∈∆z |Dφ(ζ)| ≤ m φ (z) . Then we have φ(D(z, δr z )) ⊂ D(w, Mδr z ) ⊆ D(w, R) ⊂ ψ(∆ z ). We can now estimate the C 2 norm of F restricted to the disk D(z, δr z ), i.e. we can estimate m F (z, δ), with the help of Lemma 4.12. We do this by means of the following two steps. (i) By the chain rule for first derivatives, we have DF = Dψ −1 • φ · Dφ. Since ψ −1 is holomorphic, for each ζ ∈ D(z, δr z ) we have Dψ −1 (φ(ζ)) ≤ |(ψ −1 ) ′ (φ(ζ))| = |ψ ′ (ψ −1 • φ(ζ))| −1 ≤ 2α . (4.34) Hence the C 0 norm of DF in D(z, δr z ) is bounded by 2αm φ (z). (ii) By the chain rule for second derivatives, we have D 2 F = (D 2 ψ −1 • φ) · (Dφ ⊗ Dφ) + Dψ −1 • φ · D 2 φ . (4.35) Again, since ψ −1 is holomorphic, a simple calculation shows that (ψ −1 ) ′′ = − ψ ′′ • ψ −1 (ψ ′ • ψ −1 ) 3 . Therefore, for each ζ ∈ D(z, δr z ) we have, with the help of Lemma 4.12, D 2 ψ −1 (φ(ζ)) ≤ |(ψ −1 ) ′′ (φ(z))| ≤ 128α 6 . (4.36) Using (4.34), (4.36) and the fact that Dφ ⊗ Dφ ≤ Dφ 2 in (4.35), we deduce that the C 0 norm of D 2 F in the disk D(z, δr z ) is bounded by (128α 6 + 2α)m φ (z) < 130α 6 m φ (z). 
From steps (i) and (ii) above we deduce that m F (z, δ) ≤ 130α 6 m φ (z). Therefore, applying Lemma 4.13 for F yields J h φ (z) = J h F (z) ≤ 1 + B θ m F (z, δ) δ ǫ 1−θ ≤ 1 + B θ 1040α 7 (m φ (z)) 2 ǫ 1−θ . This completes the proof of our theorem, provided we take A θ (s, t) = B θ (1040s 7 t 2 ). Proposition 4.14. For each 0 < θ < 1, there exists a universal continuous function C θ : (1, ∞) × (1, ∞) × R + × R + → R + for which the following holds. Let α > 1 and β > 1 be given, and suppose φ : D → D is a C 2 quasiconformal diffeomorphism. If z ∈ D is such that α −1 ≤ ρ D (φ(z)) ρ D (z) ≤ α ,(4.37) and sup ζ∈∆z |µ φ (ζ)| ≤ b 0 (1 − |z|) β , (4.38) then J h φ (z) ≤ 1 + C θ (α, β, b 0 , m φ (z))(1 − |z|) β(1−θ) . (4.39) Proof. We present the proof of the required estimate under the additional assumption that z is a fixed-point of φ. The general case can be reduced to this one by post-composing φ with a suitable conformal automorphism of the unit disk, and proceeding just as in the proof of Proposition 4.9, mutatis mutandis. For the sake of clarity of exposition, we divide the proof into a series of steps. (i) First we introduce some notation. Throughout the proof we denote by c 0 , c 1 , . . . positive constants that are either absolute or depend on the given constants α, β, b 0 , M, where M = m φ (z). Let us write ǫ = b 0 (1 − |z|) β = (b 0 2 β )r β z . Also, let k 0 = sup ζ∈∆z |µ φ (ζ)| ≤ ǫ, and set r 0 = ǫr z . We may assume without loss of generality that ǫ is small, say ǫ < 1/32. (ii) The restricted map φ| ∆z : ∆ z → D is a 1+k 0 1−k 0 -quasiconformal embedding. By Lemma 4.6, the further restriction φ| D(z,r 0 ) can be extended to a global quasiconformal homeomorphism ψ : C → C with k = µ ψ ∞ satisfying 1 + k 1 − k ≤ 1 + ǫ 1 − ǫ · 1 + k 0 1 − k 0 ≤ 1 + ǫ 1 − ǫ 2 . (iii) In particular, k ≤ 16ǫ < 1 2 (by our assumption on ǫ in (i)). We may assume that k = 0 (if this is not the case, it is easy to perturb ψ slightly in a neighborhood of infinity). 
By Lemma 4.8(i), there exists a global holomorphic motion ψ t : C → C with ψ k = ψ and ψ t (z) = z for all t ∈ D. Now choose r 1 > 0 so small that R = 2Me 6π k 0 r 2 0 · r 1/3 1 < r z . For definiteness, take r 1 = c 1 k 3 r 6β+9 z , where c 1 = b 6 0 /(M 3 e 18π ). Then, by Lemma 4.8(ii), we have ψ t (D(z, r 1 )) ⊂ D(z, R) for all t with |t| < 1 2 (note that this includes the time t = k). (iv) We may now define, for each t ∈ D(0, 1 2 ), the map ψ t : D(z, r 1 ) ∪ (C \ D) → C by ψ t (ζ) =    ψ t (ζ) for ζ ∈ D(z, r 1 ) , ζ for ζ ∈ C \ D . Since D(z, R) ⊂ D, we have from step (iii) that ψ t (D(z, r 1 )) ∩ C \ D = Ø. Hence ψ t , |t| < 1 2 , is a holomorphic family of injections, i.e., a holomorphic motion of the set D(z, r 1 ) ∪ (C \ D). (v) Now apply Slodkowski's Theorem 4.7 to get a global extension ψ t : C → C of the motion ψ t , with time parameter t in D(0, 1 2 ). In particular, the map ψ = ψ k is K-quasiconformal with K = 1+2k 1−2k , and it maps the unit disk onto itself. Moreover, we have ψ| D(z,r 1 ) = ψ| D(z,r 1 ) = φ| D(z,r 1 ) . Thus, ψ is the desired modification of φ away from z. (vi) We are now in a position to use the same annulus trick we employed in the proof of Lemma 4.13. Let ρ > 0, λ > 1 and the absolute constant C 0 > 0 be as in the proof of that Lemma. In particular, ρ 2 = J h φ (z) = J h ψ (z), and thus our goal is to bound ρ from above. We have λ ≤ 1 + ǫ, and we may assume that ρ > λ + ǫ, otherwise there is nothing to prove. Now let r 2 > 0 be given by r 2 = ǫ 3C 0 M < ρǫ C 0 Mλ 2 (λ + ǫ) . Then for all r ≤ r 2 the inequality (4.23) holds. Let us choose r = min{r 1 , r 2 }. With this choice of r, using the Taylor expansion (4.22) as in the proof of Lemma 4.13 we see that Ω 0 = ψ(D(z, r)) \ D(z, r) = φ(D(z, r)) \ D(z, r) is a conformal annulus, with mod (Ω 0 ) ≥ log ρ λ + ǫ . (4.40) (vii) Now define Ω n = ψ n (Ω 0 ) for all n ≥ 0, and note that mod (Ω n ) ≥ 1 − 2k 1 + 2k n mod (Ω 0 ) . 
(4.41) Since ∪_{n≥0} Ω_n ⊂ D \ D(z, r), we deduce from (4.40) and (4.41) that

log[ρ/(λ + ǫ)] Σ_{n=0}^∞ [(1 − 2k)/(1 + 2k)]ⁿ ≤ log(2/r) . (4.43)

(viii) But from our choices of r_1 and r_2, we see that r = min{r_1, r_2} = c_2 k³ r_z^{6β+9}, for some constant c_2 > 0. Hence

log(2/r) ≤ log(2/c_2) + 3 log(1/k) + (6β + 9) log(1/r_z) .

Putting this back into (4.43) and using that k ≤ (const.) r_z^{β}, we deduce that, for each 0 < θ < 1,

log[ρ/(λ + ǫ)] ≤ c_3 k + c_4 k log(1/k) + c_5 k log(1/r_z) ≤ c_6 r_z^{β(1−θ)} + c_7 r_z^{β} log(1/r_z) ≤ c_8 r_z^{β(1−θ)} .

Here the constants c_6, c_7, c_8 depend on M, β, b_0 and also on θ. From this it follows that ρ ≤ 1 + c_9 r_z^{β(1−θ)}, and therefore

J^h_φ(z) = ρ² ≤ 1 + c_10 r_z^{β(1−θ)} ,

where the constant c_10 depends on M, β, b_0 and θ. Hence we have established (4.39), with c_10 playing the role of C_θ, in the case when z is a fixed-point of φ. As we already remarked, the general case follows from this one by post-composition of φ with a suitable automorphism of the disk, using the same procedure given in the proof of Proposition 4.9. It is here, and only here, that (4.37) is used. Hence the final constant C_θ indeed depends on M, α, β, b_0, and of course also on θ. This finishes the proof.

As we informally said in the beginning of this subsection, our goal is to develop bounds on the infinitesimal distortion, by a self-map (diffeomorphism) of a hyperbolic Riemann surface, of the underlying hyperbolic metric in terms of the local quasiconformal distortion of the map. So far we have only shown how to bound in such terms the hyperbolic Jacobian of these maps. Can we use such estimates on the Jacobian to bound the infinitesimal distortion of the hyperbolic metric? The answer is yes, and the reason lies in the fact that there is a simple relationship between the two concepts. More precisely, let φ : Y → Y be a quasiconformal diffeomorphism.
Then for each z ∈ Y and each non-zero tangent vector v ∈ T_z Y, we have

[1/K_φ(z)] J^h_φ(z) ≤ ( |Dφ(z)v|_Y / |v|_Y )² ≤ K_φ(z) J^h_φ(z) . (4.44)

This fact is classical (see for instance [47, p. 17]).

Theorem 4.15. Let U ⊂ V ⊂ C be Jordan domains intersecting the real axis, with U compactly contained in V, set Y = V \ R, and let φ : V → V be a C² quasiconformal diffeomorphism with φ(Y) = Y. Define

M = max{ diam(V), (dist(∂V, ∂U))^{−1}, ‖φ‖_{C²}, ‖φ^{−1}‖_{C²} } > 0 .

Then the following facts hold true for each 0 < θ < 1.

(i) If φ is (1 + δ)-quasiconformal (δ > 0), then for each z ∈ U ∩ Y with φ(z) ∈ U ∩ Y and all non-zero tangent vectors v ∈ T_z Y we have

(1 + C_θ δ^{1−θ})^{−1} ≤ |Dφ(z)v|_Y / |v|_Y ≤ 1 + C_θ δ^{1−θ} , (4.45)

where C_θ > 0 depends only on θ and M.

(ii) If φ is asymptotically holomorphic of order r, so that |µ_φ(z)| ≤ b_0 |Im z|^{r−1} for all z ∈ Y, then for each z ∈ U ∩ Y with φ(z) ∈ U ∩ Y and all non-zero tangent vectors v ∈ T_z Y we have

(1 + C_θ |Im z|^{(r−1)(1−θ)})^{−1} ≤ |Dφ(z)v|_Y / |v|_Y ≤ 1 + C_θ |Im z|^{(r−1)(1−θ)} , (4.46)

where C_θ > 0 depends only on θ, M and b_0.

Proof. The hard work has already been done in Propositions 4.9 and 4.14, and all we have to do is to show, with the help of (4.44), how to reduce the present theorem to the situation in those auxiliary results. There is no loss of generality in assuming that φ preserves Y⁺ = Y ∩ C⁺ (and therefore also Y⁻ = Y ∩ C⁻). Also, it suffices to establish the upper estimates in (4.45) and (4.46), since the lower estimates follow by replacing φ with its inverse. Moreover, by symmetry we only need to establish these upper estimates for points z ∈ U ∩ Y⁺. Let (a, b) = V ∩ R, and let ϕ : V → C be a holomorphic univalent map with ϕ(Y⁺) = D, ϕ(Y⁻) = C \ D, normalized so that ϕ(a) = −1, ϕ(b) = +1. Let W* = ∪_{ζ∈ϕ(U⁺)} ∆_ζ ⊂ D, and consider W = ϕ^{−1}(W*) ⊂ Y⁺. Note that W ⊃ U⁺. By Lemma 4.4 (ii), the C² norms of the restrictions ϕ|_W and ϕ^{−1}|_{ϕ(W*)} are both bounded by a constant that depends only on dist(∂V, ∂W), and it is not difficult (albeit a bit laborious) to see that this last distance is bounded by a constant that depends only on M.
These bounds also imply that there exists a constant K_1 > 1 depending only on M such that

(1/K_1)(1 − |ϕ(z)|) ≤ |Im z| ≤ K_1 (1 − |ϕ(z)|) (4.47)

for all z ∈ W. Now consider the C² diffeomorphism ψ : D → D given by ψ = ϕ ∘ φ ∘ ϕ^{−1}. Note that, by the chain rule and the bounds on ϕ, ϕ^{−1} stated above, the C² norm of ψ|_{W*} is also bounded by a constant that depends only on M. Given a point z ∈ Y⁺ and a vector v ∈ T_z Y⁺ ≡ T_z Y, let ζ = ϕ(z) ∈ D and w = Dϕ(z)v ∈ T_ζ D. Since ϕ yields an isometry between the hyperbolic metric of Y⁺ (i.e., of Y) and the hyperbolic metric of D, we have |v|_Y = |w|_D. Moreover, by the chain rule we have

|Dφ(z)v|_Y = |Dϕ^{−1}(ψ(ζ)) Dψ(ζ)w|_Y = |Dψ(ζ)w|_D ,

where in the last step we have used that ϕ^{−1} yields an isometry between the hyperbolic metric of D and the hyperbolic metric of Y⁺ (and therefore the derivative Dϕ^{−1}(ψ(ζ)) is an infinitesimal isometry between corresponding tangent spaces). This shows that for each z ∈ Y⁺ and each non-zero tangent vector v ∈ T_z Y, we have

|Dφ(z)v|_Y / |v|_Y = |Dψ(ζ)w|_D / |w|_D . (4.48)

In addition, since ϕ and ϕ^{−1} are conformal, we have that ψ and φ have the same dilatation at corresponding points, i.e., K_ψ(ζ) = K_φ(z) for all z ∈ Y⁺. Also, since ϕ and ϕ^{−1} are hyperbolic isometries, the hyperbolic Jacobians of ψ and φ agree on corresponding points, i.e., J^h_ψ(ζ) = J^h_φ(z). Putting these facts together, we see that the assertions (i) and (ii) in the statement (i.e., the estimates in (4.45) and (4.46)) will be proved for φ as soon as the corresponding assertions for ψ are proved. But assertion (i) for ψ follows by putting together Proposition 4.9 and (4.44), whereas assertion (ii) for ψ follows by putting together Proposition 4.14 and (4.44). To see why this is so, we need to check that, in each case, the hypotheses of the corresponding propositions are satisfied by ψ.

Case (i). If φ is (1 + δ)-quasiconformal, as in (i), then ψ is (1 + δ)-quasiconformal as well.
The hypotheses on φ imply that there exists a constant K_2 > 1 depending only on M such that

1/K_2 ≤ |Im z| / |Im φ(z)| ≤ K_2 (4.49)

for all z ∈ W. Applying this with z = ϕ^{−1}(ζ) for ζ ∈ W* and using (4.47), we deduce that there exists K_3 > 1 depending only on M such that

1/K_3 ≤ ρ_D(ζ) / ρ_D(ψ(ζ)) ≤ K_3

for all ζ ∈ W*. This shows that the inequality (4.13) in the hypothesis of Proposition 4.9 is satisfied for ψ. Moreover, for each ζ ∈ ϕ(U⁺) we have ∆_ζ ⊂ W*, and so, in the notation introduced before,

m_ψ(ζ) ≤ ‖ψ|_{W*}‖_{C²} ≤ K_4 ,

where K_4 > 0 is a constant that depends only on M. Hence all the hypotheses of Proposition 4.9 are satisfied by ψ. It follows that, for each 0 < θ < 1, there exists a constant K_θ depending only on θ and M such that

J^h_ψ(ζ) ≤ 1 + K_θ δ^{1−θ} , (4.50)

for all ζ ∈ ϕ(U⁺). Combining (4.50) with the general upper estimate in (4.44) (for ψ), we see that for each 0 < θ < 1 there exists a constant C_θ > 0 depending only on θ and M such that

|Dψ(ζ)w|_D / |w|_D ≤ 1 + C_θ δ^{1−θ} , (4.51)

for all ζ ∈ ϕ(U⁺) and each non-zero tangent vector w ∈ T_ζ D. Putting (4.51) together with (4.48) for z = ϕ^{−1}(ζ) ∈ U⁺ and v = Dϕ^{−1}(ζ)w ∈ T_z Y⁺, we deduce the upper estimate in (4.45), as desired.

Case (ii). If φ is asymptotically holomorphic (near the real axis) then so is ψ (near the boundary of the unit disk). Verifying the hypotheses of Proposition 4.14 for ψ in this case is similar to what was done in case (i), hence we omit the details.

Remark 4.16. In the application we have in mind, namely Theorem 5.4 below, the diffeomorphism φ will be the asymptotically holomorphic diffeomorphism appearing in the Stoilow decomposition of a high renormalization of an (infinitely renormalizable) AHPL-map. For such maps, we can always assume that the constant b_0 appearing in assertion (ii) is equal to one. The reason for this is embedded in the proof of a slightly improved version of the complex bounds (see Theorem 3.2 (iv)).
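McMullen's universal function Φ from (4.4) and the estimate (4.5) reappear in §5 (condition (vii) of Definition 5.1 and the proof of Theorem 5.4). The following small script (ours, purely illustrative) checks numerically that Φ is increasing with values in (0, 1) and that (4.5) holds on a range of s > ½ log 2:

```python
import math

def Phi(s):
    """McMullen's universal function (4.4)."""
    return math.sinh(s) * math.log((1 + math.exp(-s)) / (1 - math.exp(-s)))

# Phi increases from 0 towards 1 on the sampled range
prev = 0.0
for i in range(1, 1000):
    val = Phi(0.01 * i)
    assert prev < val < 1.0
    prev = val
assert Phi(9.99) > 0.999999

# the estimate (4.5): Phi(s) < 1 - exp(-2s)/3, valid for s > (1/2) log 2
s = 0.5 * math.log(2.0) + 1e-6
while s < 10.0:
    assert Phi(s) < 1.0 - math.exp(-2.0 * s) / 3.0
    s += 0.01
```

Asymptotically Φ(s) = 1 − (2/3)e^{−2s} + O(e^{−4s}), which is why the constant 1/3 in (4.5) leaves room to spare.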
Recurrence and expansion This section contains a crucial step towards the proof of our Main Theorem (as stated in the introduction), namely Theorem 5.4 below. We show that every AHPL-map arising as a deep renormalization of an infinitely renormalizable C r unimodal map with bounded combinatorics expands the hyperbolic metric of its co-domain minus the real axis. From this we deduce a few basic properties concerning the global dynamics of these AHPL-maps -such as the fact that all of their periodic points are expanding. The expansion property proved here will lead to much stronger results in §6, including, of course, the proof of the Main Theorem. 5.1. Controlled AHPL-maps. In order to establish the desired expansion property, we need to assume that our AHPL-maps satisfy certain geometric constraints. We call such maps controlled AHPL-maps. These geometric constraints may seem artificial, but the point is that they are always verified once we renormalize a given AHPL-map a sufficient number of times. Let us proceed with the formal definition. First, we need some notation. Given z = x + iy ∈ C \ R and α > 1, let z α = x + iαy. Definition 5.1. Let α, M > 1 and 0 < δ, θ < 1 be real constants, and let n 0 ∈ N. An AHPL-map f : U → V of class C r , r ≥ 3, is said to be (α, δ, θ, M, n 0 )-controlled if the following conditions are satisfied. 
(i) We have diam(V ) ≤ M and mod(V \ U) ≥ M −1 ; (ii) If f = φ•g is the Stoilow decomposition of f , with φ : V → V a C r -diffeomorphism and g : U → V holomorphic, then φ C 2 , φ −1 C 2 ≤ M; (iii) φ is (1 + δ)-quasiconformal on V ; (iv) The dilatation µ φ satisfies |µ φ (z)| ≤ M|Im z| r−1 ; (v) For all z ∈ U α = U ∩{w : |Im w| ≤ (αM) −1 }, we have D(z α , |Im z α |) ⊂ Y = V \ R; (vi) For all z ∈ U \ R we have M −1 ≤ |Im z|/|Im φ(z)| ≤ M, as well as M −1 ≤ ρ Y (z)/ρ Y (φ(z)) ≤ M; (vii) We have Φ(diam Y (U \ U α ) + 2n 0 log M) < 1 − C θ δ 1−θ , where Φ is McMullen's universal function (4.4) and C θ = C θ (M) is the constant appearing in Theorem 4.15 (i). Remark 5.2. It is possible to prove, with the help of Lemma 4.1 and the Riemann mapping theorem, that diam Y (U \ U α ) ≤ C + log α for some positive constant C = C(M). The following result is a straightforward consequence of the complex bounds, as given by Theorem 3.2, together with the C 2 bounds, as given by Theorem 3.1 and Remark 3.5. Theorem 5.3. For each positive integer N there exists M = M(N) > 1 such that the following holds. Let f : U → V be an AHPL-map of class C r , r ≥ 3, whose restriction to the real line is an infinitely renormalizable unimodal map with combinatorics bounded by N. Then for each α > 1 and 0 < θ < 1 and each n 0 ∈ N, there exist 0 < δ < 1 and n 1 = n 1 (f, α, θ, n 0 ) ∈ N such that, for all n ≥ n 1 , the n-th renormalization R n f : U n → V n is an (α, δ, θ, M, n 0 )-controlled AHPL map. Now, we have the following main theorem. Theorem 5.4. Given M > 1, r > 3 and 0 < θ < 1 so small that (r − 1)(1 − θ) > 2, there exists α 0 > 1 such that the following holds for all α > α 0 . Let f : U → V be an AHPL-map of class C r and assume that f is (α, δ, θ, M, n 0 )-controlled for some 0 < δ < 1 and some n 0 ∈ N. Suppose also that r, α, θ and n 0 are such that r > 1 + 4n 0 α (n 0 − 1)(1 − θ)(2α − 1) . (5.1) Then the following assertions hold true. 
(a) There exists a constant 0 < η < 1 such that |Df n (z)v| Y ≥ η|v| Y for all z ∈ Y ∩ U such that f i (z) ∈ Y for 0 ≤ i ≤ n, and all v ∈ T z Y . (b) If z is a point in the filled-in Julia set of f and its ω-limit set is not contained in the real axis, then |Df n (z)v| Y /|v| Y → ∞ as n → ∞, for each non-zero tangent vector v ∈ T z Y . (c) Every periodic orbit of f is expanding. (d) The expanding periodic points are dense in the set of all recurrent points. Proof. First we give an informal description of the argument. For a suitable constant 0 < λ < 1, we partition the domain of f = φ • g into a sequence of scales, the n-th scale being the set of points in the domain (off the real axis) whose distance to the real axis is of the order λ n . The rough idea then is that at each level the worst expansion of the hyperbolic metric of Y by g beats the best contraction of that metric by φ. In this, we are aided by Theorem 4.15 and Lemma 4.3. We warn the reader that, in what follows, whenever invoking Theorem 4.15, we denote by C θ the larger of the two constants with that name appearing in assertions (i) and (ii) of said theorem. Let us now present the formal proof. Assume we are given a large number α > 1; how large α must be will be determined in the course of the argument. To start with, note that by (4.2) in Lemma 4.2 we have, for all z ∈ U α ,
\[
\frac{1}{|\operatorname{Im} z|} \;\le\; \rho_Y(z) \;\le\; \frac{1}{|\operatorname{Im} z|}\left(1-\frac{1}{2\alpha}\right)^{-1}. \tag{5.2}
\]
Let us fix for the time being a real number 0 < λ < 1, which we will use to define the scales we mentioned above. For definiteness, we take λ = M −1 . For each n ≥ 1 we define
\[
W_n \;=\; \left\{\, z \in U_\alpha \;:\; \frac{\lambda^{n}}{\alpha M} \,\le\, |\operatorname{Im} z| \,<\, \frac{\lambda^{n-1}}{\alpha M} \,\right\}.
\]
Also, we set W 0 = U \ U α ⊂ Y . Then we have, of course, U \ R = ∪ ∞ n=0 W n . Claim. There exists a sequence of numbers ξ n > 1, n ≥ 0, with ξ n → 1 as n → ∞, having the following property: for each z ∈ W n and each tangent vector v ∈ T z Y , we have
\[
|D(g\circ\varphi)(z)v|_Y \;\ge\; \xi_n\,|v|_Y. \tag{5.3}
\]
Proof of Claim. In order to prove this claim, we analyse separately the expansion of the conformal map g and the (possible) contraction of the quasiconformal diffeomorphism φ.
We proceed through the following steps. (i) Let X ⊂ Y be the open set containing φ(z) such that g maps X univalently onto Y . Writing w = Dφ(z)v ∈ T φ(z) Y , and applying Lemma 4.3 together with the estimate (4.5), we deduce that |Dg(φ(z)) w| Y ≥ 1 + 1 3 e −2s X,Y (φ(z)) |w| Y . (5.4) Now we need to estimate s X,Y (φ(z)). (ii) Let us write p = φ(z) = x + iy and let q = x + i(αM) −1 y |y| ∈ U \ U α , which lies in the same vertical as p. There are two cases to consider: (1) We have p ∈ X but q / ∈ X. In this case, we have d Y (p, Y \ X) ≤ d Y (p, q). Using (5.2), we get s X,Y (φ(z)) ≤ d Y (p, q) ≤ 1 − 1 2α −1 log (αM) −1 |Im φ(z)| . But by property (vi) of Definition 5.1 we have |Im φ(z)| ≥ M −1 λ n (αM) −1 . Hence s X,Y (φ(z)) ≤ 1 − 1 2α −1 n log 1 λ + log M . (5.5) (2) We have p ∈ X and q ∈ X. In this case we have d Y (p, Y \ X) ≤ d Y (p, q) + d Y (q, Y \ X) ≤ d Y (p, q) + diam Y (U \ U α ) . Therefore s X,Y (φ(z)) ≤ C α + 1 − 1 2α −1 n log 1 λ + log M ,(5.6) where C α = diam Y (U \ U α ). Whichever case occurs, we see that (5.6) always holds. Combining these facts with (5.4) we deduce that |Dg(φ(z))w| Y ≥ 1 + K 1 λ 2n(1− 1 2α ) −1 |w| Y ,(5.7) where K 1 = K 1 (α, M) is the constant given by K 1 = 1 3 e −2Cα exp −2 1 − 1 2α −1 log M < 1 . (5.8) This gives us a lower bound on the amount of expansion of the hyperbolic metric of Y by the conformal map g for points at level n. (iii) Let us now bound the amount of contraction of the hyperbolic metric by the quasiconformal diffeomorphism φ at z ∈ W n . First we assume that n ≥ n 0 . Applying Theorem 4.15(ii), we have for all v ∈ T z Y the estimate |Dφ(z)v| Y ≥ 1 − C θ |Im z| (r−1)(1−θ) |v| Y ,(5.9) But since z ∈ W n , we know that |Im z| ≤ (αM) −1 λ n−1 . Carrying this information back into (5.9), we deduce that |Dφ(z)v| Y ≥ 1 − K 2 λ (n−1)(r−1)(1−θ) |v| Y ,(5.10) where K 2 = K 2 (α, θ, r, M) is the constant given by K 2 = C θ (αM) (1−r)(1−θ) . (5.11) (iv) Note that both constants K 1 and K 2 depend on α. 
We claim that the ratio K 2 /K 1 goes to zero as α → ∞. From (5.8) and (5.11), we see that K 2 K 1 < C 1 e 2Cα α (1−r)(1−θ) , where C 1 = 3C θ M (1−r)(1−θ) M 4 is independent of α. By Remark 5.2, we have C α < C 2 + log α, for some constant C 2 depending only on M. Hence K 2 K 1 < C 3 α 2−(r−1)(1−θ) ,(5.12) where C 3 = C 1 e 2C 2 . Since by hypothesis (r − 1)(1 − θ) > 2, it follows that the right-hand side of (5.12) indeed goes to zero as α → ∞. Hence we assume from now on that α is so large that 2K 2 < K 1 . (v) Thus, if for each n ≥ n 0 we let ξ n be given by ξ n = 1 + K 1 λ 2n(1− 1 2α ) −1 1 − K 2 λ (n−1)(r−1)(1−θ) ,(5.13) then we have |D(g • φ)(z)v| Y ≥ ξ n |v| Y for all z ∈ W n and each v ∈ T z Y . Note that ξ n → 1 as n → ∞, because λ < 1. We still need to check that ξ n > 1 for all n ≥ n 0 . This will be true provided K 1 λ 2n(1− 1 2α ) −1 > 2K 2 λ (n−1)(r−1)(1−θ) ,(5.14) for all n ≥ n 0 . Note that both sides of (5.14) are indeed smaller than 1, because from (5.8) and step (iv) we have 2K 2 < K 1 < 1, and λ < 1. Extracting logarithms from both sides of (5.14), we get 2n 1 − 1 2α −1 log λ > (n − 1)(r − 1)(1 − θ) log λ + log (2K −1 1 K 2 ) . Dividing both sides of the above inequality by (n − 1)(1 − θ) log λ < 0, we arrive at r > 1 + 2n (n − 1)(1 − θ) 1 − 1 2α + log (2K −1 1 K 2 ) (n − 1)(1 − θ) log 1 λ (5.15) But since 2K −1 1 K 2 < 1 (by our choice of α at the end of step (iv)), the third term on the right-hand side of (5.15) is negative and therefore can be safely ignored. Moreover, since n ≥ n 0 we have 2n/(n−1) ≤ 2n 0 /(n 0 −1). Therefore the inequality (5.14) will hold for all n ≥ n 0 provided r > 1 + 2n 0 (n 0 − 1)(1 − θ) 1 − 1 2α . But this is nothing but (5.1) in disguise! Hence we have established that the ξ n 's given by (5.13) satisfy ξ n > 1, for all n ≥ n 0 . (vi) In order to establish the claim, it remains to analyse what happens when z ∈ W 0 ∪ W 1 ∪ · · · ∪ W n 0 −1 . 
On the one hand, since φ is (1 + δ)-quasiconformal throughout, applying Theorem 4.15 for such z and any v ∈ T z Y yields the lower bound |Dφ(z)v| Y ≥ 1 − C θ δ 1−θ |v| Y . (5.16) On the other hand, using the estimate (5.6) above with n = n 0 we deduce that s X,Y (φ(z)) ≤ C α + 2(n 0 − 1) log 1 λ + 2 log M = C α + 2n 0 log M . Therefore, by McMullen's Lemma 4.3, we have for all w ∈ T φ(z) Y , |Dg(φ(z))w| Y ≥ Φ(s X,Y (φ(z)) −1 |w| Y (5.17) ≥ Φ (C α + 2n 0 log M) −1 |w| Y . (5.18) Combining (5.16) and (5.17) (with w = Dφ(z)v), we deduce that |D(g • φ)(z)v| Y ≥ Φ (C α + 2n 0 log M) −1 1 − C θ δ 1−θ |v| Y . Hence we can take ξ 0 = ξ 1 = · · · = ξ n 0 −1 = Φ (C α + 2n 0 log M) −1 1 − C θ δ 1−θ > 1 . This establishes (5.3) for all z ∈ W n , for all n ≥ 0, and completes the proof of our claim. With the Claim at hand, we proceed to the proof of the assertions in the statement of our theorem. Let z ∈ K f be a point whose iterates up to time n > 1 stay off the real axis -in other words, f i (z) ∈ Y for all 0 ≤ i ≤ n. Note that, since f = φ • g, we have f n = φ • (g • φ) n−1 • g. Write z 1 = g(z) and define inductively z j+1 = g • φ(z j ), for j = 1, . . . , n − 1. Then for each non-zero tangent vector v ∈ T z Y , we have by the chain rule Df n (z)v = Dφ(z n ) n−1 j=1 Dg(φ(z j ))Dφ(z j ) Dg(z)v . (5.19) Now, since the holomorphic map g expands the hyperbolic metric of Y , we have that |Dg(z)v| Y > |v| Y . Moreover, the amount of possible contraction of the hyperbolic metric by the (1 + δ)-quasiconformal diffeomorphism φ is bounded from below. Indeed, we have |Dφ(ζ)w| Y ≥ (1 − C θ δ 1−θ )|w| Y for all ζ ∈ Y and all w ∈ T ζ Y . Moreover, writing v 1 = Dg(z)v ∈ T z 1 Y and v j+1 = D(g • φ)(z j )v j ∈ T z j+1 Y for j = 1, . . . , n − 1, and applying the above Claim, we get |v j+1 | Y = |D(g • φ)(z j )v j | Y ≥ ξ k j |v j | Y , where k j ≥ 0 is the unique integer such that z j ∈ W k j . 
Setting η = 1 − C θ δ 1−θ < 1 and carrying these facts back into (5.19), we deduce that
\[
|Df^{\,n}(z)v|_Y \;>\; \eta \prod_{k=1}^{\infty} \xi_k^{\,N_{k,n}(z)}\,|v|_Y\,, \tag{5.20}
\]
where N k,n (z) is the total number of j's in the range 1 ≤ j ≤ n − 1 such that z j ∈ W k (in particular, the product appearing on the right-hand side is actually finite). This proves assertion (a). Now suppose that z is such that its ω-limit set accumulates at a point off the real axis, say p ∈ Y . This is the case, for instance, if z is a recurrent or periodic point for f . Then there exist k ≥ 0 and a sequence j ν → ∞ such that z jν → p as ν → ∞ and z jν ∈ W k for all ν. But this tells us that N k,n (z) → ∞ as n → ∞, and therefore, from (5.20), we deduce at last that |Df n (z)v| Y /|v| Y → ∞ as n → ∞. This proves the desired expansion property stated in assertion (b), and it also proves assertion (c). Hence it remains to prove assertion (d). Let z ∈ Y ∩ K f be a recurrent point. Let N ≥ 1 be such that |Df N (z)v| Y ≥ 3η −1 |v| Y for all v ∈ T z Y , where η is the constant of assertion (a). Such an N exists because of assertion (b). By continuity of ζ → Df N (ζ), we can find ǫ 0 > 0 such that |Df N (ζ)v| Y ≥ 2η −1 |v| Y for all ζ ∈ B Y (z, ǫ 0 ) and each v ∈ T ζ Y . Now, given 0 < ǫ < (1/4)ηǫ 0 , choose m > N such that f m (z) ∈ B Y (z, ǫ); this is possible because z is recurrent. Write O = B Y (f m (z), 2ǫ) ⊂ B Y (z, ǫ 0 ), and let O ′ ⊂ Y be the component of f −m (O) that contains z. Then f m | O ′ : O ′ → O is a diffeomorphism, and by assertion (a) the inverse diffeomorphism f −m | O : O → O ′ is Lipschitz with constant η −1 in the hyperbolic metric of Y ; therefore O ′ ⊂ B Y (z, η −1 · (2ǫ)) ⊂ B Y (z, ǫ 0 ). Now that we know this fact, writing f m = f m−N • f N we see that, for all ζ ∈ O ′ and each non-zero v ∈ T ζ Y ,
\[
\frac{|Df^{\,m}(\zeta)v|_Y}{|v|_Y} \;=\; \frac{|Df^{\,m-N}(f^{N}(\zeta))\,Df^{\,N}(\zeta)v|_Y}{|Df^{\,N}(\zeta)v|_Y}\cdot\frac{|Df^{\,N}(\zeta)v|_Y}{|v|_Y} \;\ge\; \eta\cdot\left(2\eta^{-1}\right) \;=\; 2\,.
\]
Equivalently, we have shown that |Df −m (ζ)v| Y ≤ (1/2)|v| Y for all ζ ∈ O and each v ∈ T ζ Y . In other words, f −m | O : O → O ′ is, in fact, a contraction of the hyperbolic metric of Y , with contraction constant 1/2. In particular, O ′ = f −m | O (O) ⊂ B Y (z, ǫ) ⋐ B Y (f m (z), 2ǫ) = O . This means that f −m | O maps the hyperbolic ball O strictly inside itself (and it is a contraction of the hyperbolic metric).
Hence there exists z * ∈ O ′ such that f m (z * ) = z * , and this periodic point is necessarily expanding, by assertion (c). Thus, we have proved that for each ǫ > 0 there exists an expanding periodic point ǫ-close to z. This establishes assertion (d) and completes the proof of our theorem. It is worth pointing out that, combining Theorem 5.4 with Theorem 5.3, we already deduce the following simple properties of the dynamics of all sufficiently deep renormalizations of a given AHPL-map. Considerably stronger results will be proved in §6 below. Corollary 5.5. Let f : U → V be an AHPL-map of class C r , with r > 3, whose restriction to the real line is an infinitely renormalizable unimodal map with bounded combinatorics. There exists n 1 = n 1 (f ) ∈ N such that, for all n ≥ n 1 , the n-th renormalization f n = R n f : U n → V n is an AHPL-map with the following properties. (a) Every periodic orbit of f n is expanding. (b) The expanding periodic points are dense in the set of all recurrent points. (c) There are no stable components of int(K fn ) whose closures are disjoint from the real axis. Proof. Choose 0 < θ < 1, as well as n 0 ∈ N and α > 1 large enough so that (5.1) holds true. This is possible because r > 3. Then, by Theorem 5.3, there exists n 1 ∈ N such that for all n ≥ n 1 , the n-th renormalization f n of f is an (α, δ, θ, M, n 0 )-controlled AHPL map, for some 0 < δ < 1. Hence assertions (a) and (b) follow from the corresponding assertions in Theorem 5.4. To prove (c), suppose Ω ⊂ Y n = V n \ R is a stable component of int(K fn ) whose closure is disjoint from the real axis. Let p ≥ 1 be such that f p n (Ω) = Ω. Also, consider the decomposition of the domain of f n into scales as in Theorem 5.4. Since the closure of Ω is a compact subset of U n \ R ⊂ Y n , it is contained in the union of finitely many scales. In each scale f n expands the hyperbolic metric of Y n by a definite amount. Hence so does f p n on Ω. But this is impossible, because Ω has finite hyperbolic area.
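Since condition (5.1) does all the quantitative work in this section, a quick numerical sanity check may be useful. The snippet below is purely illustrative and not part of the paper (the function name `rhs_5_1` is ours): it evaluates the right-hand side of (5.1) and confirms that the bound decreases towards 3 as α and n 0 grow and θ shrinks, which is exactly why, in the proof of Corollary 5.5, any fixed r > 3 admits admissible parameters.

```python
# Right-hand side of condition (5.1):
#     r > 1 + 4*n0*alpha / ((n0 - 1) * (1 - theta) * (2*alpha - 1)).
# Illustrative check only (not from the paper): the infimum of this bound over
# admissible parameters is 3, so every fixed r > 3 satisfies (5.1) for suitable
# choices of n0, alpha and theta.

def rhs_5_1(n0: int, alpha: float, theta: float) -> float:
    """Value that r must exceed in (5.1), for n0 >= 2, alpha > 1, 0 < theta < 1."""
    return 1.0 + 4.0 * n0 * alpha / ((n0 - 1) * (1.0 - theta) * (2.0 * alpha - 1.0))

if __name__ == "__main__":
    # For r = 4 (a C^4 map), moderate parameters already satisfy (5.1):
    print(rhs_5_1(10, 10.0, 0.05))      # roughly 3.46, so r = 4 works
    # Larger parameters push the bound towards its infimum 3:
    print(rhs_5_1(10**6, 10**6, 1e-6))  # just above 3
```

Note also that for n 0 = 2 and θ close to 0 the bound is 1 + 8α/(2α − 1) > 5 for every finite α, so low values of n 0 force higher regularity r.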
Topological conjugacy to polynomials and local connectivity of Julia sets

In this section, we will prove that an (α, δ, θ, M, n 0 )-controlled AHPL-mapping f : U → V which is infinitely renormalizable of bounded type is topologically conjugate to a real polynomial in a neighbourhood of its filled Julia set, so that, from the topological point of view, the dynamics of these mappings are the same as those of polynomials; in particular, such mappings do not have wandering domains. We will also prove that the Julia set of such an AHPL-mapping is locally connected. Specifically, we will assume that f satisfies the conditions of Theorem 5.4. In particular, we assume that f : U → V is a C r asymptotically holomorphic polynomial-like mapping that is (α, δ, θ, M, n 0 )-controlled, with
\[
r \;>\; 1 + \frac{4 n_0 \alpha}{(n_0-1)(1-\theta)(2\alpha-1)}\,,
\]
and that the conclusions of Theorem 5.4 all hold. By Theorems 3.2 and 5.3, for any r > 3, if g is a C r mapping of the interval which is infinitely renormalizable of bounded type, then for any n sufficiently large there is a renormalization R n g : U n → V n of g which is an AHPL-mapping that satisfies these assumptions. Proof of Lemma 6.1. It is sufficient to show that
\[
M\left(\frac{\lambda^{n-1}}{\alpha M}\right)^{r-1} \;\le\; K_1\,\lambda^{2n\left(1-\frac{1}{2\alpha}\right)^{-1}} - 2K_2\,\lambda^{(n-1)(r-1)(1-\theta)},
\]
see Equation (5.14). Factoring out λ (n−1)(r−1) on the right and cancelling it with the same term on the left, this is equivalent to:
\[
\frac{M}{(\alpha M)^{r-1}} \;\le\; K_1\,\lambda^{2n\left(1-\frac{1}{2\alpha}\right)^{-1}-(n-1)(r-1)} - 2K_2\,\lambda^{-\theta(n-1)(r-1)}. \tag{6.2}
\]
Since n > n 0 , we have that
\[
4n_0\alpha(n-1) - 4n\alpha(n_0-1) \;=\; 4\alpha\big(n_0(n-1) - n(n_0-1)\big) \;=\; 4\alpha(n-n_0) \;>\; 0. \tag{6.3}
\]
Now, since r ≥ 1 + 4n 0 α/((n 0 − 1)(1 − θ)(2α − 1)), we have that
\[
(r-1)(1-\theta)(n-1) \;\ge\; \frac{4 n_0 \alpha (n-1)}{(n_0-1)(2\alpha-1)}. \tag{6.4}
\]
So
\[
(r-1)(1-\theta)(n-1) - 2n\left(1-\frac{1}{2\alpha}\right)^{-1} \;\ge\; \frac{4n_0\alpha(n-1)}{(n_0-1)(2\alpha-1)} - \frac{2n\cdot 2\alpha}{2\alpha-1} \;=\; \frac{4n_0\alpha(n-1)-4n\alpha(n_0-1)}{(n_0-1)(2\alpha-1)} \;>\; 0,
\]
where the first inequality follows from (6.4) and the last inequality follows from (6.3). Thus we have
\[
2n\left(1-\frac{1}{2\alpha}\right)^{-1} - (n-1)(r-1) \;\le\; -\theta(n-1)(r-1);
\]
since both exponents on the right-hand side of (6.2), namely 2n(1 − 1/(2α)) −1 − (n − 1)(r − 1) and −θ(n − 1)(r − 1), are negative, equation (6.1) holds for n sufficiently large. Let
\[
K_{f^n}(z) \;=\; \frac{1+|\mu_{f^n}(z)|}{1-|\mu_{f^n}(z)|}
\]
be the quasiconformal distortion of f n at z. Recall that W k is the strip
\[
W_k \;=\; \left\{\, z \in U_\alpha \;:\; \frac{\lambda^{k}}{\alpha M} \,\le\, |\operatorname{Im} z| \,<\, \frac{\lambda^{k-1}}{\alpha M} \,\right\}.
\]
Corollary 6.2. For each N ∈ N there exists c > 0 such that the following holds. Let A be an open domain in C. Suppose that f n : A → B is onto and let {B j } n j=0 be the chain with B 0 = A and B n = B. Assume, for each 0 ≤ j ≤ n, that #{k : B j ∩ W k ≠ ∅} ≤ N. Then
\[
c\cdot \sup_{z\in A} \log K_{f^n}(z) \;\le\; \inf_{z\in A} \log |Df^{\,n}(z)v|_Y
\]
for each unit tangent vector v ∈ T z Y . Proof. Let us express f n : B 0 → B n as φ • (g • φ) n−1 • g. For each 0 ≤ j < n, g : B j → φ −1 (B j+1 ). Since φ is a (1 + ε(δ))-quasi-isometry in the hyperbolic metric on Y , where ε(δ) → 0 as δ → 0, there exists N 1 , depending only on N, so that φ −1 (B j ) intersects at most N 1 strips W k . For each B j , let n j be minimal so that φ −1 (B j ) ∩ W n j ≠ ∅. Then for any g(z) ∈ φ −1 (B j ), 1 ≤ j < n, we have that
\[
\left|\frac{\bar\partial (g\circ\varphi)}{\partial (g\circ\varphi)}(g(z))\right| \;=\; \left|\frac{\bar\partial \varphi}{\partial \varphi}(g(z))\right| \;\le\; M\left(\frac{\lambda^{n_j-1}}{\alpha M}\right)^{r-1}.
\]
By equation (5.3) and Lemma 6.1, we have that for all v ∈ T z Y with |v| Y = 1,
\[
|D(g\circ\varphi)(z)v|_Y \;\ge\; 1 + M\left(\frac{\lambda^{n_j-1+N_1}}{\alpha M}\right)^{r-1} \;=\; 1 + M\,\lambda^{N_1(r-1)}\left(\frac{\lambda^{n_j-1}}{\alpha M}\right)^{r-1},
\]
so that
\[
|D(g\circ\varphi)(z)v|_Y \;\ge\; \left(1 + \lambda^{N_1(r-1)} \sup_{z\in B_j} \left|\frac{\bar\partial (g\circ\varphi)}{\partial (g\circ\varphi)}(g(z))\right|\right)|v|_Y.
\]
Thus we have that
\[
\inf_{z\in B_j} |D(g\circ\varphi)(z)v|_Y \;\ge\; \left(1 + \lambda^{N_1(r-1)} \sup_{z\in B_j} \left|\frac{\bar\partial (g\circ\varphi)}{\partial (g\circ\varphi)}(g(z))\right|\right)|v|_Y.
\]
For each i, let k i = #{j : B j ∩ W i ≠ ∅, and for all i ′ < i, B j ∩ W i ′ = ∅}, and let us reindex the B j as follows: For each i ∈ N ∪ {0}, let B i 0 , . . .
, B i k i be an enumeration of all B j so that B j ∩ W i = Ø and for all 0 ≤ i ′ < i, B j ∩ W i ′ = Ø. Notice that n = ∞ i=0 k i . By the chain rule and Theorem 3.1, we have that there exists a constant c 1 > 0 so that inf z∈B 0 |Df n (z)v| Y ≥ c 1 ∞ i=0 k i j=0 (1 + λ N 1 (r−1) sup z∈B i j |µ f (z))|) Now, there exists a constant c 2 > 0 such that log ∞ i=0 k i j=0 (1 + λ N 1 (r−1) sup z∈B i j |µ f (z)|) = ∞ i=0 k i j=0 log(1 + λ N 1 (r−1) sup z∈B i j |µ f (z)|) ≥ c 2 ∞ i=0 k i j=0 λ N 1 (r−1) sup z∈B i j |µ f (z)| = c 2 λ N 1 (r−1) 2 ∞ i=0 k i j=0 sup z∈B i j |µ f (z)| − (−|µ f (z)|) ≥ c 2 λ N 1 (r−1) 2 ∞ i=0 k i j=0 sup z∈B i j log 1 + |µ f (z)| 1 − |µ f (z)| = c 2 λ N 1 (r−1) 2 log ∞ i=0 k i j=0 sup z∈B i j 1 + |µ f (z)| 1 − |µ f (z)| . Hence there exists a constant c so that, inf z∈B 0 log |Df n (z)v| Y ≥ c · log sup z∈B 0 K f n (z). 6.2. Puzzle pieces. Let us construct external rays for f . These will allow us to construct Yoccoz puzzle pieces for f where the role of equipotentials is played by the curves f −i ∂V . To construct these rays, we use a method analogous to the one used by Levin-Przytycki in [37] to construct external rays for holomorphic polynomial-like maps. First, we associate to f an external map, h f as follows: Let X 0 = V and for i ∈ N, set X i+1 = f −1 (X i ). Notice that since U ⋐ V , f : U → V is a branched covering of V , ramified at a single point, 0, and f i (0) ∈ U for all i, we have that X i = f −i (V ) is a connected and simply connected topological disk for all i ∈ N ∪ {0}, and X i+1 ⋐ X i . Let M = mod (V \ K f ), and let φ : D(0, e M ) \ D → V \ K f be the uniformization of V \ K f by a round annulus. Let D i = φ −1 (X i ), we have that each annulus D i \ D i+1 is mapped as a d-to-1 covering map onto D i−1 \ D i by h f = φ −1 • f • φ. The mapping h f extends continuously to ∂D, and by Schwarz reflection, h f can be defined as a mapping between annuli W ′ ⊂ W , each with the same core curve, ∂D. 
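The external-map construction just described can be summarized in one display; this is a restatement of the definitions above, with no new objects introduced (recall that φ here denotes the uniformizing map of V \ K f and M = mod(V \ K f ) as in the text):

```latex
% Restatement of the external map construction (notation as in the text):
%   X_0 = V,  X_{i+1} = f^{-1}(X_i),  D_i = \phi^{-1}(X_i),
% where \phi : D(0,e^{M}) \setminus \overline{\mathbb{D}} \to V \setminus K_f
% is the uniformization by a round annulus and M = \mathrm{mod}(V \setminus K_f).
\[
h_f \;=\; \phi^{-1}\circ f\circ \phi, \qquad
h_f \colon D_i\setminus D_{i+1} \;\xrightarrow{\;d\text{-to-}1\;}\; D_{i-1}\setminus D_i
\qquad (i\ge 1),
\]
% i.e. h_f is a degree-d covering map between consecutive fundamental annuli; it
% extends continuously to \partial\mathbb{D} and, by Schwarz reflection, to a
% mapping between annuli W' \subset W with common core curve \partial\mathbb{D}.
```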
We have that h f is a C 3 expanding mapping of S 1 (see the proof of [11, Lemma 10.17]) and that the dilatation of h f on W ′ is the same as the dilatation of f . Foliate W \ W ′ by C r , h f -invariant rays connecting ∂W ′ and ∂W , and pull them back by h f . We obtain a foliation by C r rays of W ′ \ ∂D that is continuous on W ′ . Pulling back this foliation of W ′ by φ, we obtain a foliation of V \ K f . The leaves of this foliation are the external rays of f . Remark 6.3. Observe that since h f | S 1 is a degree d expanding mapping of the circle, it is topologically conjugate to z → z d on a neighbourhood of S 1 . Consequently, one can carry out this construction simultaneously for two mappings f : U → V and f̃ : Ũ → Ṽ to obtain a mapping H : V → Ṽ such that H • f (z) = f̃ • H(z) for any z ∈ U contained in an equipotential or ray. For each z ∈ V \ K f , we let R z denote the ray through z. We parameterize R z by R z (t), t ≥ 0, in such a way that for each n ∈ N, R z (n) is the unique point of R z on ∂X n . We say that a ray R z lands at a point p if lim t→∞ R z (t) = p. To prove that certain rays land, we will need the following lemma. Lemma 6.4 ([6, Lemma 2.3]). Let Ω ⊂ C be a hyperbolic region. Let γ n : [0, 1] → Ω be a family of curves with uniformly bounded hyperbolic length and such that γ n (0) → ∂Ω. Then diam(γ n ) → 0. Lemma 6.5. If R z accumulates on a real repelling periodic point p, then R z lands at p. Proof. Compare [37, Lemma 2.1] and [6]. Suppose that p is a real repelling periodic point of period s. Then one can repeat the proof of linearization near repelling periodic points of holomorphic maps to prove that there exists a neighbourhood B of p such that f s is conjugate to z → λz near p, where λ = Df s (p); see [50]. Let R z ([n − 1, n]) be the segment of the ray connecting ∂X n−1 and ∂X n . Let us show that diam(R z ([n − 1, n])) → 0 as n → ∞.
By Lemma 6.4, and since φ is an isometry in the hyperbolic metric, it is sufficient to show that the curves φ −1 (R z ([n − 1, n])) have uniformly bounded hyperbolic lengths. This follows from the fact that Dh f (z) > 1 in the hyperbolic metric for z sufficiently close to ∂D, which was proved in the proof of [11, Lemma 10.17]. Thus we have that diam(R z ([n − 1, n])) → 0 as n → ∞. So there exists n 0 ∈ N such that for all n ≥ n 0 , we have that R z ([n, n + 1]) ⊂ (f | B ) −s(n−n 0 ) (B). Since f s | B is qc-conjugate to z → λz with λ > 1 in a neighbourhood of 0, we have that ∩ ∞ n=n 0 (f | B ) −s(n−n 0 ) (B) = {p}. So the only accumulation point of the ray is p. We define puzzle pieces for f as follows. Let us index the renormalizations R n f : U n → V n of f by f n : U n → V n , so that f n = f qn | Un . Let I n = K fn ∩ R denote the invariant interval for f n . Let τ : I 0 → I 0 be the dynamical symmetry about the (even) critical point at 0. Let β n ∈ ∂I n be the orientation-preserving fixed point of f n in ∂I n . By real symmetry, there exist two rays, labeled R βn and R ′ βn , that land at β n . Let R τ (βn) and R ′ τ (βn) denote the preimages under f qn of R βn and R ′ βn , respectively, which land at τ (β n ). For each n ∈ N, the initial configuration of puzzle pieces at level n consists of the components of V \ (R βn ∪ R ′ βn ∪ R τ (βn) ∪ R ′ τ (βn) ∪ {β n , τ (β n )}). We denote this union of puzzle pieces by Y (n) 0 . Proof of Lemma 6.6. For all j ∈ N, K fn ⊂ Y (n) j . Let q n be the period of the renormalization f n of f . Let K j = comp 0 f −qnj (Y (n) 0 ). Since K j ⊂ K j−1 and f qn : K j → K j−1 , and ∩ ∞ j=0 K j is a compact connected set, we have that K fn ⊂ ∩ ∞ j=0 K j ⊂ U n . Proposition 6.7. Suppose that z ∈ K f . Then there exist arbitrarily small neighbourhoods P of z such that P is a union of puzzle pieces. Proof. Observe that Lemma 6.6 implies that there are arbitrarily small puzzle pieces containing the critical point of f .
Let us start by spreading this information throughout the filled Julia set of f . Let z ∈ K f . Case 1: Assume that 0 ∈ ω(z). For each n, let C n ⊂ U n be the puzzle piece given by Lemma 6.6. Let r n be minimal so that f rn (x) ∈ C n and let C 0 n = comp x f −rn (C n ). Each C n is contained in the topological disk, Γ n , bounded by the core curve γ n of the annulus V n \ U n . By Theorem 3.2, there exists C > 0 such that for all n ∈ N, we have that mod(V n \ U n ) ≥ C −1 . Thus the domain Γ n is a K = K(C)-quasidisk. Let V 0 n = Comp x f −rn (V n ), and Γ 0 n = Comp x f −rn (Γ n ). It is not hard to see that f rn : V 0 n → V n is a diffeomorphism: Suppose that there exists 0 < j < r n so that f j (V 0 n ) ∋ 0, but f j (V 0 n ) is not contained in U n , so that f j (V 0 n ) ∩ ∂U = Ø. Since f sn : U n → V n is a first return mapping to V n , for all k ∈ N, f j+ksn (V 0 n ) intersects both K fn and ∂V n , and we have that there exists no j 1 ∈ N such that f j 1 (V 0 n ) = V n . Thus we have that if for some j, f j (V 0 n ) ∋ 0, then f j (V 0 n ) ⊂ U n , but then since for all k ∈ N, f −ksn (C n ) ∩ (V n \ C n ) = Ø, j = r n , and so f rn : V 0 n → V n is a diffeomorphism. Case 1a: Suppose that 0 ∈ ω(z) and z ∈ R ∩ K f . Then, by the complex bounds, we have that there exists K > 1 for each n, the mapping f rn : V 0 n → V n is a diffeomorphism with quasiconformal distortion bounded by K. Hence there exists m > 0 depending only on K and M such that for all n, mod (Γ 0 n \ Γ 0 n+1 ) > m. Thus the puzzle pieces C 0 n have diameters converging to 0. Case 1b: Suppose that 0 ∈ ω(z), ω(z) ⊂ R, and for all j, f j (z) / ∈ R. We consider the case when the mappings f j have uniformly bounded quasiconformal distortion near z, and the case when they have unbounded quasiconformal distortion near z, separately. 
First, suppose that there exists K x ≥ 1 such that for each n the mapping f rn : C 0 n → C n extends to a mapping from V 0 n onto V n with quasiconformal distortion bounded by K x . We have that each Γ 0 n is a K 1 -quasidisk, for some K 1 > 1 depending on x, and there exists a constant m > 0 such that for all n, mod (Γ 0 n \ Γ 0 n+1 ) ≥ m, and so the puzzle pieces C 0 n shrink to z. Suppose now that the quasiconformal distortion of f rn : V 0 n → V n tends to infinity as n tends to infinity. For each n, let {V j n } rn j=0 be the chain with V rn n = V n and V 0 n = Comp z f −rn (V n ), and let {Γ j n } rn j=0 be the chain with Γ rn n = Γ n and Γ 0 n = Comp z f −rn (Γ n ). For all n sufficiently large, there exists 0 ≤ j n < r n maximal so that the set V jn n = Comp f jn (z) f −(rn−jn) (V n ) does not intersect the real line (see case 2a below). Let Γ jn n = Comp f jn (z) f −(rn−jn) (Γ n ). Since ∂Γ jn n is the core curve V n \ U n , and f (rn−jn) has bounded quasiconformal distortion, we have that there exists m 1 > 0 such that mod(V jn n \Γ jn n ) > m 1 . But this implies that there exists m 2 > 0 such that dist(∂V jn n , Γ jn n ) > m 2 diam(Γ jn n ), which immediately gives us that there exists m 3 > 0 so that dist(Γ jn n , R) > m 3 diam(Γ jn n ). It follows that there exists ξ > 0 such that for all n, diam Y (Γ jn n ) < ξ. Let us inductively choose a subsequence V n i of of the levels V n so that the landing maps from V jn i n i to V jn i+1 n i+1 all have definite expansion. Let η ∈ (0, 1) be the constant from Theorem 5.4, so that |Df i (z)v| Y ≥ η|v| Y . Then we have that if X, a component of f −i 0 (V jn n ), is a pullback of V jn n such that the quasiconformal distortion of f i 0 | X is bounded by 2(1 + δ)/η, then there exists N ∈ N such that for each i ≤ i 0 , for each element X i = f i (X) in the chain associated to the pullback, X i intersects at most N of the strips W k . Let c > 0 be the constant associated to N from Corollary 6.2. 
Let k 0 > 0 be minimal Combining Cases (1a) and (1b), we have that for all z such that 0 ∈ ω(z), that there are arbitrarily small puzzle pieces P ∋ z. Now we treat the cases when 0 / ∈ ω(z). Case 2a: Suppose that there exists n ∈ N such that ω(z) ⊂ R \ V n . Let Y (n) 0 , be the initial configuration of puzzle pieces at level n. Let x 0 ∈ ω(z), then, since the real traces of puzzle pieces shrink to points, there exist m 0 > 0 and a union of (closed) puzzle pieces of Y (n) m 0 , denoted by Q 0 , such that Q 0 ∩ ω(0) = Ø and x 0 ∈ int(Q 0 ). Let Y (n) j (x 0 ) denote the closure of the set of puzzle pieces P in Y (n) j with x 0 ∈ P . Let Q = ∩ ∞ j=0 Y (n) j (x 0 ). Let us show that Q = {x 0 }. If diam(Q) > 0, then, since ∪ n f n (Q) is a bounded set, there exists C > 0, x ∈ Q and a vector v ∈ T x C such that |Df k i (x)v| < C. If ω(x) is not contained in the real-line, then in a small neighbourhood of x, the hyperbolic metric on Y is comparable to the Euclidean metric, but now |Df k i (x)v| < C contradicts Theorem 5.4 (b). So we can assume that ω(x) ⊂ R, but then ω(x) is contained in the hyperbolic set of points that avoid V n , and we have that |Df k i (x)v| → ∞ for any v ∈ T x C, and so diam(Q) = 0. Let us point out that this argument shows that if z ∈ R is contained in a hyperbolic set, then for any n sufficiently big, diam(Y (n) j (z)) → 0 as j → ∞, and indeed that J f is locally connected at any point in J f ∩ R that is contained in a hyperbolic set. Suppose that for all j ∈ N ∪ {0}, f j (z) / ∈ R. Let r 0 be the first return time of x 0 to Q 0 , and let Q 1 = Comp x 0 f −r 0 (Q 0 ). Inductively define Q i+1 by taking r i to be the first return time of x 0 to Q i and setting Q i+1 = Comp x 0 f −r i (Q i ). Let ε > 0 be so small that if z = x + iy satisfies dist(z, R) < ε and z / ∈ V n , then dist(x, 0) > diam(V n )/2. Since x 0 ∈ ω(z), there exist n i → ∞ with the property that n i is minimal with f n i (z) ∈ Q i . 
It is sufficient to show that there exists a constant c > 0 so that for all i, Df n i (z) ≥ c. Fix some i ∈ N. Let j 0 ≥ n 0 be minimal so that dist(f j 0 (z), R) > ε, and let j 1 ≤ n i be maximal so that dist(f j 1 (z), R) > ε, then there exists a constant c 1 > 0 so that Df n i (z) ≥ c 1 η Df n i −j 1 (f j 1 (z)) Df j 0 −n 0 (f n 0 (z)) Df n 0 (z) . Thus it suffices to bound Df n i −j 1 (f j 1 (z)) and Df j 0 −n 0 (f n 0 (z)) from below. Let z 0 = f n 0 (z) and define z i = f i (z 0 ), x i = f i (x 0 ). Then there exist constants c 2 , c 3 so that Df j 0 −n 0 (z 0 ) ≥ c 2 j 0 −n 0 i=0 Df (x i ) j 0 −n 0 i=0 (1 − c 3 |z i − x i | Df (x i ) ). By our choice of ε, and since x 0 is contained in a hyperbolic Cantor set, we have that there exists a constant c 4 > 0 and Λ > 1 so that j 0 −n 0 i=0 |z i − x i | Df (x i ) ≤ 1 2diam(V n ) j 0 −n 0 i=0 |z i − x i | ≤ 1 2diam(V n ) c 4 ε 1 − Λ −1 . Thus we have that Df j 0 −n 0 (z 0 ) is bounded from below. The proof that Df n i −j 1 (f j 1 (z)) is bounded from below is similar. Case 2b: Suppose that ω(z) ⊂ R. Let z 0 be an accumulation point of ω(z) that is not contained in R. Since the real puzzle pieces shrink to points, there exist n and m and a union Q of puzzle pieces in Y (n) m and a sequence k i → ∞ such that Q ∩ R = Ø, and f k i (z) ∈ Q for all i. By Theorem 5.4 (b), we have that diam(Comp f k 0 (z) (f −(k i −k 0 ) (Q))) → 0 as i → ∞. that Q is a union of puzzle pieces. Since J f ∩ P is connected for any puzzle piece P , we have that J f ∩ Q is connected too. Let us remark that since f is topologically conjugate to a polynomial, we obtain that the repelling periodic points of f are dense in J f . We also point out that this implies that f has no wandering domains, but that this fact can be deduced immediately from the fact that the puzzle pieces shrink to points. Corollary 2.2 (C 2 real bounds). 
Under the assumptions of Theorem 2.1, the successive renormalizations of f are uniformly bounded in the C 2 topology, and the bound is beau in the sense of Sullivan. The following consequence of the real bounds, namely Lemma 2.3 below, is adapted from [14, Lemma A.5, page 379], and also from [18, §2.1]. Lemma 2.3. There exists a constant B 1 = B 1 (N) > 0 with the following property. For each infinitely renormalizable unimodal map f of combinatorial type bounded by N, there exists n 1 = n 1 (f ) ∈ N such that, for all n ≥ n 1 , we have S n ≤ B 1 . Lemma 4.3. Let X, Y be hyperbolic Riemann surfaces with X ⊂ Y , and let ψ : X → Y be holomorphic, univalent and onto. Then for all x ∈ X and each tangent vector ... For a proof of this lemma, see [3, pp. 312-313]. Theorem 4.7. Let F : ∆ × E → C be a holomorphic motion of a set E ⊆ C with base point t 0 ∈ ∆. Then there exists a continuous map F : ∆ × C → C with the following properties. ... (ii) If 0 < r 0 < 1 and M > 1 are such that ψ(D(z 0 , r 0 )) ⊆ D(z 0 , Mr 0 ), then for all 0 ≤ r < 1 and all t with |t| < 1/2 we have ψ t (D(z 0 , r)) ⊆ D(z 0 , R), where R = 2Me 6π r 1/3 kr 2 0 . (4.10) ... for all t with |t| ≤ 1/2. Combining this fact with (4.12), it follows that for all such t we have max |ζ|≤r |ψ t (ζ)| ≤ 2Me 6π r ... Lemma 4.10. Let z ∈ D and let 0 < r < 1 − |z|. Then
\[
\operatorname{mod}\big(\mathbb D \setminus \overline{D(z,r)}\big) \;\le\; \log \frac{1-|z|^2+|z|\,r}{r}\,.
\]
Theorem 4.15. Let U, V ⊂ C be Jordan domains, symmetric about the real axis, with U ⊂ V , and let Y = V \ R. Let φ : V → V be a C r diffeomorphism which is symmetric about the real axis, and write ... (c) Every periodic orbit of f is expanding. (d) The expanding periodic points are dense in the set of all recurrent points. ... ǫ); this is possible because z is recurrent. Write O = B Y (f m (z), 2ǫ) ⊂ B Y (z, ǫ 0 ), and let O ′ ⊂ Y be the component of f −m (O) that contains z. Then f m | O ′ : O ′ → O is a diffeomorphism.
By assertion (a), the inverse diffeomorphism f^{−m}|_O : O → O′ is Lipschitz with constant η^{−1} in the hyperbolic metric of Y. Therefore …

6.1. Dilatation and expansion. The proof of the following lemma is implicit in the proof of Theorem 5.4; it makes the lower bound in Equation (5.3) explicit.

Lemma 6.1. Let ξ_n be the constant defined in Equation (5.13). There exists N ≥ n_0 such that if n ≥ N, then

1 + M λ^{n−1} α M^{r−1} ≤ ξ_n. (6.1)

… be the quasiconformal distortion of f^n at z. A chain of domains is a sequence of domains {B_j}_{j=0}^n where B_j is a component of f^{−1}(B_{j+1}) for all j = 0, 1, 2, …, n − 1 and B_n is a domain in C. To a mapping f^n : A → B, we associate the chain of domains {B_j}_{j=0}^n, where B_n = B and B_j = Comp_{f^j(B)} f^{−(n−j)}(B) for j = 0, …, n − 1. For j ∈ N ∪ {0}, we define Y^{(n)}_j to be the union of the connected components of f^{−j}(Y^{(n)}_0). Given any z ∈ K_f, we let Y be the component that contains the critical point.

Lemma 6.6. For each n ∈ N, there exists j so that K_{f_n} ⊂ Y^{(n)}_j ⊂ U_n.

We use the abbreviation A^{⊗m} = A ⊗ A ⊗ ⋯ ⊗ A (m times). Recall that ∆_{q_n,n} = ∆_{0,n}. In [47] McMullen gives Φ(s) = 2|t log t| / (1 − t^2), where 0 ≤ t < 1 is such that s = d_D(0, t). Eliminating t yields (4.4).

Acknowledgements. We would like to thank Dennis Sullivan and Davoud Cheraghi for their general comments, and Genadi Levin for his keen remarks concerning the proof of Lemma 6.5.

Let 0 ≤ j′_{k_0} < j_{k_0} be maximal so that … Then, since f is (1 + δ)-qc, we have that … Thus by Corollary 6.2 we have that … and by Theorem 5.4, we have that … We now repeat the argument: let k_1 > k_0 be minimal so that … and let 0 ≤ j′_{k_1} < j_{k_1} be maximal so that … Then, since f is (1 + δ)-qc, we have that … Again by Corollary 6.2 we have that … and by Theorem 5.4, we have that … Combining this with the first step, we have that diam_Y(Γ^0_{k_1}) < ξ/4.
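Several of the distortion estimates above (for instance the lower bound on |Df^{j_0 − n_0}(z_0)| in Case 2a) rest on the elementary fact that a finite product of factors (1 − a_i) is controlled from below by the sum of the a_i. A minimal derivation of that step, stated here for completeness and not taken from the source:

```latex
% For a_i \in [0,1), induction on n gives a lower bound on the product
% in terms of the partial sum: the base case n=0 is trivial, and if the
% bound holds for n-1, multiplying by the positive factor (1-a_n) yields
\prod_{i=0}^{n}(1-a_i)
  \;\ge\; \Bigl(1-\sum_{i=0}^{n-1}a_i\Bigr)(1-a_n)
  \;\ge\; 1-\sum_{i=0}^{n}a_i .
```

Hence, once ∑ c_3 |z_i − x_i| / |Df(x_i)| is bounded by a constant strictly less than 1 (as the geometric-series estimate in Case 2a guarantees for ε small), the product ∏ (1 − c_3 |z_i − x_i| / |Df(x_i)|) is bounded below by a positive constant, which is exactly what bounding |Df^{j_0 − n_0}(z_0)| from below requires.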
If the quasiconformal distortion of f^n diverges at x, we see that we can repeat this argument infinitely many times to obtain a nest of puzzle pieces … Thus by Theorem 5.4 (a), …

Proposition 6.7 has several important consequences.

Corollary 6.8. Suppose that f ∈ C^r is an asymptotically holomorphic polynomial-like mapping, which is (α, δ, θ, M, n_0)-controlled, and that … Then the following hold:
(1) J_f = K_f.
(2) f : U → V is topologically conjugate to a polynomial mapping in a neighbourhood of its Julia set. In particular, f : U → V has no wandering domains.
(3) J_f is locally connected.

Proof. (1). To see that J_f = K_f, observe that for each z ∈ K_f there are arbitrarily small puzzle pieces containing z, so z is a limit of points whose orbits eventually land in V \ U. Thus z ∈ J_f. In particular, K_f has empty interior.

(2). Let us now show that f : U → V is topologically conjugate to a polynomial mapping in a neighbourhood of its Julia set. Let I ⊂ U ∩ R denote the invariant interval for f. Since f|_I has negative Schwarzian derivative, there exists a real polynomial p with a critical point of the same degree as the critical point of f such that f is topologically conjugate to p on I. Let h : I → I be the continuous mapping such that h ∘ f|_I = p ∘ h. Let Ṽ be a domain containing J_p that is bounded by some level set of the Green's function for p. Let Ũ = p^{−1}(Ṽ).

Let H_0 : V → Ṽ be a homeomorphism such that
• for each z ∈ ∂U, H_0 ∘ f(z) = p ∘ H_0(z),
• for each z ∈ ∪_n (R_{β_n} ∪ R_{τ(β_n)}), we have that H_0 ∘ f(z) = p ∘ H_0(z), and
• H_0|_I = h.
See Remark 6.3 for a description of how to construct such an H_0.

Given H_i, … Since each H_i is a conjugacy on J between f and p that maps the critical value of f to the critical value of p, this pullback is always well-defined and continuous. Observe that for each z ∈ U \ K_f, H_i eventually stabilizes. Let H : V → Ṽ be a limit of the H_i. To see that H is continuous, take any z ∈ U and let {z_n} be a sequence of points such that z_n → z.
If z ∉ K_f, then there exists a neighbourhood W of z and i_0 ∈ N, large, such that for all i ≥ i_0 and w ∈ W, H_i(w) = H_{i_0}(w). Hence H(z_n) → H(z). So suppose that z ∈ K_f; then, since the nests of puzzle pieces about z and H(z) both shrink to points and H maps puzzle pieces for f to corresponding puzzle pieces for p, H(z_n) → H(z). Also, since for each z ∈ U \ K_f the maps H_i eventually stabilize, H : U → Ũ satisfies H ∘ f(z) = p ∘ H(z) for all z ∈ U \ K_f, and since K_f has empty interior, we have that H is a conjugacy between f and p on U.

(3). Finally, let us show that J_f is locally connected. Let z ∈ J_f, and let B be any open set that contains z. By Proposition 6.7, there exists a neighbourhood Q ⊂ B of z such that Q is a union of puzzle pieces.

References

[1] L. Ahlfors, Lectures on Quasiconformal Mappings. Van Nostrand, 1966.
[2] J.M. Anderson and A. Hinkkanen, Quasiconformal self-mappings with smooth boundary values, Bull. London Math. Soc. 26(6) (1994), 549-556.
[3] K. Astala, T. Iwaniec and G. Martin, Elliptic Partial Differential Equations and Quasiconformal Mappings in the Plane. Princeton Mathematical Series 48, Princeton University Press, 2009.
[4] A. Avila and R. Krikorian, Monotonic cocycles. Invent. Math. 202(1) (2015), 271-331.
[5] A. Avila and M. Lyubich, The full renormalization horseshoe for unimodal maps of higher degree: exponential contraction along hybrid classes. Publ. Math. IHES 114(1) (2011), 171-223.
[6] A.M. Benini and M. Lyubich, Repelling periodic points and landing of rays for post-singularly bounded exponential maps. Ann. Inst. Fourier (Grenoble) 64(4) (2014), 1493-1520.
[7] A.M. Blokh and M.Yu. Lyubich, Nonexistence of wandering intervals and structure of topological attractors of one-dimensional dynamical systems. II. The smooth case, Ergod. Th. & Dynam. Sys. 9 (1989), 751-758.
[8] L. Carleson, On mappings, conformal at the boundary. J. Analyse Math. 19 (1967), 1-13.
[9] D. Cheraghi and M. Shishikura, Satellite renormalization of quadratic polynomials. arXiv:1509.07843.
[10] T. Clark and S. van Strien, Quasisymmetric rigidity in dimension one. Manuscript, 2018.
[11] T. Clark, S. van Strien and S. Trejo, Complex bounds for real maps, Comm. Math. Phys. 355(3) (2017), 1001-1119.
[12] A. Douady and J.H. Hubbard, On the dynamics of polynomial-like mappings. Ann. Sci. École Norm. Sup. (4) 18(2) (1985), 287-343.
[13] E. Dyn'kin, Estimates for asymptotically conformal mappings, Ann. Acad. Sci. Fenn. Math. 22(2) (1997), 275-304.
[14] E. de Faria and W. de Melo, Rigidity of critical circle mappings I, J. Eur. Math. Soc. 1 (1999), 339-392.
[15] E. de Faria and W. de Melo, Rigidity of critical circle mappings II, J. Amer. Math. Soc. 13 (2000), 343-370.
[16] E. de Faria and W. de Melo, Mathematical Tools for One-dimensional Dynamics, Cambridge Studies in Advanced Mathematics 115, Cambridge University Press, 2008.
[17] E. de Faria, W. de Melo and A. Pinto, Global hyperbolicity of renormalization for C^r unimodal mappings, Ann. of Math. 164 (2006), 731-824.
[18] E. de Faria and P. Guarino, Real bounds and Lyapunov exponents. Discrete and Continuous Dynamical Systems A 36 (2016), 1957-1982.
[19] F. Gardiner and D. Sullivan, Symmetric structures on a closed curve, Amer. J. Math. 114 (1992), 683-736.
[20] F. Gardiner and D. Sullivan, Lacunary series as quadratic differentials in conformal dynamics, Contemp. Math. 169 (1994), 307-330.
[21] P. Guarino and W. de Melo, Rigidity of smooth critical circle maps. J. Eur. Math. Soc. 19(6) (2017), 1729-1783.
[22] P. Guarino, M. Martens and W. de Melo, Rigidity of critical circle maps. arXiv:1511.02792.
[23] J. Graczyk and G. Swiatek, Polynomial-like property for real quadratic polynomials, Topology Proc. 21 (1996), 33-112.
[24] J. Graczyk and G. Swiatek, Generic hyperbolicity in the logistic family. Ann. of Math. (2) 146 (1997), 1-52.
[25] J. Graczyk, D. Sands and G. Swiatek, Decay of geometry for unimodal maps: negative Schwarzian case. Ann. of Math. (2) 161(2) (2005), 613-677.
[26] J. Guckenheimer, Sensitive dependence to initial conditions for one-dimensional maps. Comm. Math. Phys. 70(2) (1979), 133-160.
[27] J.H. Hubbard, Local connectivity of Julia sets and bifurcation loci: three theorems of J.-C. Yoccoz. Topological methods in modern mathematics (Stony Brook, NY, 1991), 467-511, Publish or Perish, Houston, TX, 1993.
[28] H. Inou and M. Shishikura, The renormalization for parabolic fixed points and their perturbation. Preprint available at https://www.math.kyoto-u.ac.jp/~mitsu/pararenorm/, 2006.
[29] J. Kahn, A priori bounds for some infinitely renormalizable maps: I. Bounded primitive combinatorics, Preprint, IMS at Stony Brook, 2006/05, 2006.
[30] J. Kahn and M. Lyubich, A priori bounds for some infinitely renormalizable quadratics. II. Decorations, Ann. Sci. Éc. Norm. Supér. (4) 41(1) (2008), 57-84.
[31] J. Kahn and M. Lyubich, A priori bounds for some infinitely renormalizable quadratics. III. Molecules. Complex dynamics, 229-254, A K Peters, Wellesley, MA, 2009.
[32] K. Khanin and A. Teplinsky, Robust rigidity for circle diffeomorphisms with singularities. Invent. Math. 169(1) (2007), 193-218.
[33] O. Kozlovski and S. van Strien, Local connectivity and quasi-conformal rigidity of non-renormalizable polynomials. Proc. Lond. Math. Soc. (3) 99(2) (2009), 275-296.
[34] O. Kozlovski, W. Shen and S. van Strien, Rigidity for real polynomials. Ann. of Math. (2) 165(3) (2007), 749-841.
[35] O. Kozlovski, W. Shen and S. van Strien, Density of hyperbolicity. Ann. of Math. (2) 166(1) (2007), 145-182.
[36] O.E. Lanford, A computer assisted proof of the Feigenbaum conjectures. Bull. Amer. Math. Soc. 6 (1982), 427-434.
[37] G. Levin and F. Przytycki, External rays to periodic points, Israel Journal of Mathematics 94 (1995), 29-57.
[38] G. Levin and S. van Strien, Local connectivity of the Julia set of real polynomials. Ann. of Math. (2) 147(3) (1998), 471-541.
[39] M.Yu. Lyubich, Non-existence of wandering intervals and structure of topological attractors of one dimensional dynamical systems: 1. The case of negative Schwarzian derivative. Ergod. Th. & Dynam. Sys. 9 (1989), 737-749.
[40] M. Lyubich, Teichmüller space of Fibonacci maps. arXiv:9311213v1.
[41] M. Lyubich, Dynamics of quadratic polynomials. I, II, Acta Math. 178 (1997), 185-247, 247-297.
[42] M. Lyubich, Feigenbaum-Coullet-Tresser universality and Milnor's hairiness conjecture. Ann. of Math. (2) 149(2) (1999), 319-420.
[43] M. Lyubich and M. Yampolsky, Dynamics of quadratic polynomials: complex bounds for real maps, Ann. Inst. Fourier (Grenoble) 47 (1997), 1219-1255.
[44] M. Martens, The periodic points of renormalization. Ann. of Math. (2) 147(3) (1998), 543-584.
[45] M. Martens, W. de Melo and S. van Strien, Julia-Fatou-Sullivan theory for real one-dimensional dynamics. Acta Math. 168(3-4) (1992), 273-318.
[46] J. Manton, Differential Calculus, tensor products and the importance of notation, arXiv:1208.0197v2.
[47] C. McMullen, Renormalization and 3-manifolds which fiber over the circle, Ann. of Math. Studies 142, Princeton University Press, 1996.
[48] W. de Melo and S. van Strien, A structure theorem in one-dimensional dynamics. Ann. of Math. (2) 129(3) (1989), 519-546.
[49] W. de Melo and S. van Strien, One-dimensional Dynamics, Springer-Verlag, New York, 1993.
[50] J. Milnor, Dynamics in One Complex Variable. Annals of Mathematics Studies 160, Princeton University Press, 2006.
[51] J. Milnor, Local connectivity of Julia sets: expository lectures. The Mandelbrot set, theme and variations, London Mathematical Society Lecture Note Series 274, Cambridge University Press, Cambridge, 2000, 67-116.
[52] W. Shen, On the metric properties of multimodal interval maps and C^2 density of Axiom A. Invent. Math. 156(2) (2004), 301-403.
[53] D. Smania, Complex bounds for multimodal maps: bounded combinatorics. Nonlinearity 14(5) (2001), 1311-1330.
[54] D. Smania, Phase space universality for multimodal maps. Bulletin of the Brazilian Mathematical Society 36(2) (2005), 225-274.
[55] D. Smania, On the hyperbolicity of the period-doubling fixed point. Transactions of the American Mathematical Society 358(4) (2006), 1827-1846.
[56] D. Smania, Solenoidal attractors with bounded combinatorics are shy. arXiv:1603.06300.
[57] D. Sörensen, Infinitely renormalizable quadratic polynomials, with non-locally connected Julia set, J. Geom. Anal. 10(1) (2000), 169-206.
[58] S. van Strien and E. Vargas, Real bounds, ergodicity and negative Schwarzian for multimodal maps. J. Amer. Math. Soc. 17(4) (2004), 749-782.
[59] D. Sullivan, Quasiconformal homeomorphisms and dynamics. I. Solution of the Fatou-Julia problem on wandering domains. Ann. of Math. (2) 122(3) (1985), 401-418.
[60] D. Sullivan, Bounds, quadratic differentials, and renormalization conjectures. AMS Centennial Publications, 2, Mathematics into the Twenty-first Century, 1988.
[61] M. Yampolsky, Hyperbolicity of renormalization of critical circle maps. Publ. Math. Inst. Hautes Etudes Sci. 96 (2002), 1-41.
[62] J.-C. Yoccoz, On the local connectivity of the Mandelbrot set. Unpublished, 1990.
[]
[ "arXiv:cond-mat/9508025v1 8 Aug 1995 Diffusion Processes and Growth on Stepped Metal Surfaces" ]
[ "J Merikoski \nResearch Institute for Theoretical Physics\nUniversity of Helsinki\nP.O. Box 9FIN-00014Finland\n", "T Ala-Nissila \nResearch Institute for Theoretical Physics\nUniversity of Helsinki\nP.O. Box 9FIN-00014Finland\n\nDepartment of Physics\nBrown University\nBox 1843R.I. 02912Providence\n\nU.S.A\nTampere University of Technology\nP.O. Box 692FIN-33101TampereFinland\n" ]
[ "Research Institute for Theoretical Physics\nUniversity of Helsinki\nP.O. Box 9FIN-00014Finland", "Research Institute for Theoretical Physics\nUniversity of Helsinki\nP.O. Box 9FIN-00014Finland", "Department of Physics\nBrown University\nBox 1843R.I. 02912Providence", "U.S.A\nTampere University of Technology\nP.O. Box 692FIN-33101TampereFinland" ]
[]
15 May 1995 (to appear in Phys. Rev. B Rapid Comm.)

We study the dynamics of adatoms in a model of vicinal (11m) fcc metal surfaces. We examine the role of different diffusion mechanisms and their implications to surface growth. In particular, we study the effect of steps and kinks on adatom dynamics. We show that the existence of kinks is crucially important for adatom motion along and across steps. Our results are in agreement with recent experiments on Cu(100) and Cu(1,1,19) surfaces. The results also suggest that for some metals exotic diffusion mechanisms may be important for mass transport across the steps.

PACS numbers: 61.50.Cj, 68.35.Fx, 68.55.Bd
10.1103/physrevb.52.r8715
[ "https://arxiv.org/pdf/cond-mat/9508025v1.pdf" ]
26935375
cond-mat/9508025
61f4dee4c2f406cb31d9f6c6ce78a64c786d3982
arXiv:cond-mat/9508025v1, 8 Aug 1995

Diffusion Processes and Growth on Stepped Metal Surfaces

J. Merikoski (Research Institute for Theoretical Physics, University of Helsinki, P.O. Box 9, FIN-00014, Finland)
T. Ala-Nissila (Research Institute for Theoretical Physics, University of Helsinki, P.O. Box 9, FIN-00014, Finland; Department of Physics, Brown University, Box 1843, Providence, R.I. 02912, U.S.A.; Tampere University of Technology, P.O. Box 692, FIN-33101 Tampere, Finland)

15 May 1995 (to appear in Phys. Rev. B Rapid Comm.)

We study the dynamics of adatoms in a model of vicinal (11m) fcc metal surfaces. We examine the role of different diffusion mechanisms and their implications to surface growth. In particular, we study the effect of steps and kinks on adatom dynamics. We show that the existence of kinks is crucially important for adatom motion along and across steps.
Our results are in agreement with recent experiments on Cu(100) and Cu(1,1, 19) surfaces. The results also suggest that for some metals exotic diffusion mechanisms may be important for mass transport across the steps. Diffusion of adatoms on solid surfaces is an extensively studied subject [1]. In particular, adatom dynamics on vicinal metal and semiconductor surfaces has important implications to surface growth under non-equilibrium conditions [2,3]. However, barring a few special cases, the atomistic details of diffusion processes near steps and kinks are not known. Current experimental techniques are now able to yield atomistic information about adatom dynamics near steps and kinks [4], and growth of surfaces [3,5]. Clearly, careful microscopic calculations are needed to understand these phenomena. Our aim in this Letter is to study models of surfaces of fcc metals vicinal to the (100) plane. The open structure of the (100) facets can be expected to give rise to some unconventional diffusion processes that are not seen for instance on stepped surfaces with fcc(111) terraces. First, we want to identify the various microscopic mechanisms relevant to self-diffusion near steps and kinks. Second, we shall discuss the implications of our results to the morphological stability of these surfaces under growth [6], and suggest a phenomenological model for step growth. Our results are consistent with experiments on Cu(100) [3] and Cu(1,1, 19) [4] surfaces. The geometric structure of an fcc(119) surface is shown in Fig. 1. An ideal fcc(11m) facet, with odd m > 1, consists of (100) terraces of width (m − 1)r nn /2 separated by (111) steps of height r nn / √ 2, where r nn is the distance between nearest neighbor atoms. Due to the geometry, only one kind of steps (of monolayer height) with close-packed edges exist on these surfaces. The metallic interactions between atoms in our model are derived from the semi-empirical Effective Medium Theory (EMT). 
The formalism of EMT is presented in Ref. [7], and a description of the implementation for molecular dynamics (MD) simulations of the present work can be found in Ref. [8]. In the case of copper, EMT has been shown to give a reasonably accurate quantitative description of many different surface phenomena [7][8][9][10][11], which motivates its use for the present case. We shall divide the discussion of the microscopic mechanisms near step edges into three parts: standard hopping events (denoted by H), exchange and other exotic mechanisms (X), and the effect of kinks on diffusion near and across step edges (K). For each mechanism M the activation barrier is denoted by E_M, and that of the reversed process by E^rev_M. In Fig. 2(a) we show a contour plot of the adiabatic energy surface E(x, y) experienced by a single adatom on the Cu(119) surface at zero temperature. The potential across the terrace is shown in Fig. 2(b), indicating the activation energy for diffusion on the terrace E_A, and the height of the Schwoebel step barrier E^rev_H1 [12]. It is evident that the barrier in the x direction is appreciably modulated only in the immediate vicinity of the steps [13]. We have verified this for the Cu(1,1,15) surface also. Activation energies for simple hopping mechanisms on surfaces of several fcc metals with different orientations, as given by EMT, have been extensively tabulated in Ref. [10]. Our results for copper are fully consistent. The barrier height for a single jump on a flat terrace far from step edges is found to be E_A = 0.399 eV, and that for diffusion of a vacancy in the first layer of the terrace is E_V = 0.473 eV. The lowest barrier is that of an adatom diffusing along a straight step edge, and has a value E_H2 = 0.258 eV. As expected, on Cu(11m) surfaces we find E_H2 < E_A < E^rev_H1 < E_H1. The inequality E_H2 < E_A is consistent with experimental results [14].
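The barrier hierarchy quoted above translates directly into a hierarchy of hopping rates. As a rough illustration (not part of the paper's analysis), the sketch below evaluates Arrhenius rates ν·exp(−E/k_B T) at the lower end of the simulation temperature range; the common attempt frequency ν = 10^12 s⁻¹ is an assumed typical value, not a number from the text.

```python
import math

# Arrhenius estimate of hopping rates from the EMT barriers quoted in the text.
# The attempt frequency NU is an assumed typical value (not given in the paper).
KB = 8.617e-5          # Boltzmann constant in eV/K
NU = 1.0e12            # assumed attempt frequency in 1/s

def rate(barrier_ev, temp_k):
    """Arrhenius rate NU * exp(-E/(kB*T)) for a single activated hop."""
    return NU * math.exp(-barrier_ev / (KB * temp_k))

barriers = {"E_H2 (along straight step edge)": 0.258,
            "E_A  (flat terrace)":             0.399,
            "E_V  (vacancy in first layer)":   0.473}

T = 700.0  # K, lower end of the feasible MD temperature range
for name, E in barriers.items():
    print(f"{name}: {rate(E, T):.3e} 1/s")
```

With these assumptions the rate along a straight step edge exceeds the terrace rate by roughly an order of magnitude at 700 K, consistent with the ordering E_H2 < E_A < E_V.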
In addition to ground-state calculations we have performed extensive MD simulations [15] to identify possible exotic diffusion mechanisms [16] and to study entropic contributions to the rates [17]. A well-known mechanism for step crossing, the replacement of an edge atom by an adatom from the terrace (mechanism X1), is observed in our simulations. In our model the activation energies for hopping and the simple exchange across the step edge are approximately equal: E_H1 ≈ E_X1 [18,19]. We have also found more complicated mechanisms for step crossing. In Fig. 3 we show two examples: a "coherent" chain transfer and an atom-by-atom replacement mechanism (vacancy diffusion). A possible explanation for the first mechanism is the local close-packed-like order of the surface atoms in the second configuration of Fig. 3(a), which may lower the local free energy. This is obviously an effect characteristic of stepped surfaces with fcc(100) terraces: e.g. on an fcc(111) terrace the atomic rows are more densely packed and cannot easily slide with respect to each other. The second process shown in Fig. 3(b) can be described as a "popping up" of a surface atom onto the step edge and the diffusion of a vacancy towards the descending step. By reaching the step, the vacancy turns into a hole or a pair of kinks at the step edge, which can then be filled by a surface atom or an adatom from the terrace below. Repeating this procedure, e.g. under an external field driving the atoms into the negative x direction, can result in mass transfer across steps which can then enhance growth instability [6]. The activation energy of the first stage of the process shown in Fig. 3(b), i.e. the pop-up of a surface atom, was reduced by the existence of a kink at the step edge. For the processes with and without a kink we find E_K3 < E_X3, respectively (cf. Fig. 4). Indeed, the effect of kinks on the energetics of diffusion processes near step edges seems to be crucial. In Fig.
4 we show the most important diffusion routes near straight and kinked step edges. It turns out that the hopping of a single adatom across the step edge in the vicinity of a kink site (K1) is not much more favorable than climbing across a straight edge (H1). On the other hand, the activation barrier for the escape of an adatom from a kink (K4) is higher than E_A, while going "around the corner" along a kinked step edge (K2) is even more expensive, i.e. E_H2 < E_A < E_K4 < E_K2. From experiments on Cu(1,1,19), the activation energy for mass transport along kinked step edges was determined to be 10300 ± 1630 K [4], which was assumed to be due to the process K4. For K4 we obtain 6011 K in agreement with Ref. [18]. However, if we assume that K2 is the rate-limiting process we get an activation energy of E_K2 = 9075 K, which is within the experimental error bars. The activation barriers for the processes shown in Fig. 4 for Cu(11m) are summarized in Table I. For any mechanism of step crossing in either direction, the barrier height is found to be well above E_A, i.e. a clear Schwoebel barrier exists. Our results thus indicate that under growth conditions the currents should go upward, which makes the (100) surface unstable [6]. This is consistent with the experimental results of Ref. [3] on growth of the Cu(100) surface. In the case of our copper model, the activation barriers for H1, X1, and K3 across the step edge are almost the same. This means that a finite density of kinks promotes step crossing. Due to small differences in activation energy, the relative occurrence of the different mechanisms at low temperatures is expected to be strongly model and material dependent. In particular, our results suggest the possibility that for (11m) surfaces of other fcc metals, exotic mechanisms such as K3 could play a more important role in mass transport across step edges. In addition, some processes such as X2 are influenced by the step density.
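Since the text quotes barriers both in eV (EMT values) and in kelvin (experimental values), it is convenient to convert via E[K] = E[eV]/k_B. The following sketch, not part of the original paper, converts the quoted K4 and K2 values and checks which one lies inside the experimental window 10300 ± 1630 K of Ref. [4].

```python
# Activation energies are quoted both in eV and in kelvin (E[K] = E[eV]/kB).
# This sketch converts the EMT barriers for K4 and K2 to eV and checks which
# one falls inside the experimental window 10300 +/- 1630 K quoted from [4].
KB = 8.617e-5  # Boltzmann constant in eV/K

def kelvin_to_ev(e_k):
    return e_k * KB

E_K4, E_K2 = 6011.0, 9075.0              # EMT values in K (from the text)
lo, hi = 10300.0 - 1630.0, 10300.0 + 1630.0

for name, E in [("K4", E_K4), ("K2", E_K2)]:
    inside = lo <= E <= hi
    print(f"{name}: {E:.0f} K = {kelvin_to_ev(E):.3f} eV, "
          f"within experimental error bars: {inside}")
```

As stated in the text, K2 (about 0.78 eV) falls inside the experimental window while K4 does not.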
More systematic studies on the effects of inclination and finite temperatures will be published elsewhere [17]. As already mentioned, our results have important implications for growth processes on copper surfaces vicinal to the (100) plane. First, under molecular beam epitaxy (MBE) conditions step crossing should be very rare. Thus, our data can be used to construct a growth model for individual steps. In the case of copper, such a growth model [17] should include the following features: adatom motion along a straight step edge which is very fast, and motion through a kink site at the step edge which in turn is much slower than simple diffusion on a flat terrace. In the simplest approximation, step crossing and evaporation can be neglected. The activation energies for these processes are well separated from each other and well below those of the neglected ones, which makes the model simple and hopefully applicable to a variety of vicinal fcc metal surfaces. This is supported by experiments [3,4,14]. For some fcc metals step crossings could be more significant, and then the detailed interplay between different mechanisms at step edges and kink statistics has to be taken into account. Note that during growth a non-vanishing kink concentration is naturally provided by adatoms deposited on the terraces. The growth rules suggested above differ substantially from those expected to describe MBE on stepped surfaces of silicon [20], where the strong anisotropy of diffusion [21] results in an interesting step morphology [5,20]. Discussions with I. Bukharev and H. Häkkinen are gratefully acknowledged. J. Krug and S. C. Ying are acknowledged for a critical reading of the manuscript. This work has been in part supported by a joint grant between the Academy of Finland and Deutscher Akademischer Austauschdienst (DAAD).

FIG. 1. Ideal fcc(119) surface: (a) Perspective view and (b) top view. The size of the unit cell is shown by a dashed line.

FIG. 2. The adiabatic potential experienced by an adatom on the Cu(119) surface as given by EMT at zero temperature: (a) a contour plot of the potential, where the global minimum is at (0,0) and the energy difference between each contour is 0.1 eV, and (b) minimum energy route of an adatom diffusing in the x direction.

FIG. 3. Snapshots of two exotic diffusion events from MD simulations of the Cu(119) surface (a) at T = 700 K and (b) at T = 750 K. These events only include surface atoms, which are colored black. Atoms in the adjacent layers are shown in grey. Only part of the simulation cell is shown. In (a), the surface atom marked with a cross is pushed up and a hole is left behind at the descending step edge. In (b), the surface atom marked with a cross is pushed up and the vacancy left behind diffuses to the lower step on the right.

FIG. 4. Dominant diffusion mechanisms at the step edge on an fcc(11m) surface (top view). Black circles are adatoms and open circles denote surface atoms. A shaded circle shows the position of an atom after the diffusion event has taken place. K1 is the mechanism with the lowest activation barrier of the several possibilities for hopping across a step near a kink. The five stages of the process X3 are shown by numbers in parentheses. The processes seen in Fig. 3(a) and 3(b) correspond to X2 and X3, respectively, albeit without an initial adatom on the lower terrace.

TABLE I. Activation energies of some diffusion mechanisms near step edges on Cu(11m) surfaces as given by EMT. The whole system around the adatom was allowed to relax in the calculations. The labels in the first column are those used in Fig. 4. The third column shows the barriers of the corresponding reversed processes. For the mechanism X3 the height of the highest barrier (stage 1) is given.
M (mechanism)  E_M (

PACS numbers: 61.50.Cj, 68.35.Fx, 68.55.Bd

* Permanent address: Department of Physics, University of Jyväskylä, P.O. Box 35, FIN-40351 Jyväskylä, Finland.

[1] R. Gomer, Rep. Prog. Phys. 53, 917 (1990); T. Ala-Nissila and S. C. Ying, Prog. Surf. Sci. 39, 227 (1992).
[2] J. Krug and H. Spohn, in Solids Far from Equilibrium, ed. G. Godreche (Cambridge University Press, Cambridge, 1991).
[3] H. J. Ernst, F. Fabre, R. Folkerts, and J. Lapujoulade, Phys. Rev. Lett. 72, 112 (1994).
[4] M. Giesen-Seibert, R. Jentjens, M. Poensgen, and H. Ibach, Phys. Rev. Lett. 71, 3521 (1993).
[5] F. Wu, S. G. Jaloviar, D. E. Savage, and M. G. Lagally, Phys. Rev. Lett. 71, 4190 (1993); H. J. W. Zandvliet, H. Wormeester, D. J. Wentink, and A. van Silfhout, Phys. Rev. Lett. 70, 2122 (1993).
[6] J. Krug, M. Plischke, and M. Siegert, Phys. Rev. Lett. 70, 3271 (1993); J. Krug and H. T. Dobbs, Phys. Rev. Lett. 73, 1947 (1994).
[7] K. W. Jacobsen, J. K. Nørskov, and M. J. Puska, Phys. Rev. B 35, 7423 (1987); Phys. Rev. Lett. 60, 2496 (1988); in The Structure of Surfaces II, edited by J. F. van der Veen and M. A. van Hove (Springer, Berlin, 1987).
[8] H. Häkkinen and M. Manninen, Phys. Rev. B 46, 1725 (1992).
[9] L. Hansen, P. Stoltze, K. W. Jacobsen, and J. K. Nørskov, Phys. Rev. B 44, 6523 (1991); Surf. Sci. 289, 68 (1993).
[10] P. Stoltze, J. Phys. Condens. Matter 6, 9495 (1994).
[11] J. Merikoski, H. Häkkinen, M. Manninen, J. Timonen, and K. Kaski, Int. J. Mod. Phys. B 8, 3175 (1994), and references therein.
[12] R. L. Schwoebel, J. Appl. Phys. 44, 614 (1969).
[13] At low temperatures even a small variation in barrier height can have a drastic effect on diffusion, see e.g. S. C. Wang and G. Ehrlich, Phys. Rev. Lett. 70, 41 (1993).
[14] M. Breeman and D. O. Boerma, Surf. Sci. 287, 881 (1993).
[15] For the Cu(119) surface we have simulation cells of the sizes (Ns, Ly, Lz) = (6, 10, 12) and (Ns, Ly, Lz) = (4, 20, 12), where Ns is the number of steps in the cell, Ly r_nn is the size of the cell in the y direction, and Lz is the number of atomic layers. In simulations, the feasible temperature range was between 700-800 K, and the simulation length was of the order of a few nanoseconds. Computational details will be reported elsewhere [17].
[16] P. J. Feibelman, Phys. Rev. Lett. 65, 729 (1990); J. E. Black and Z.-J. Tian, Phys. Rev. Lett. 71, 2445 (1993).
[17] J. Merikoski et al., to be published.
[18] This disagrees with the Embedded Atom Method result reported by Z. J. Tian and T. S. Rahman, Phys. Rev. B 47, 9751 (1993). In their work they find no Schwoebel barrier in the direction of descending steps.
[19] In Ref. [9] it was shown that a "one-electron" correction to EMT considerably lowers the activation energy for exchange on a flat Cu(100) surface. A similar but much smaller reduction is given by a kinetic-exchange-correlation term in the "corrected effective medium" (CEM) approach, see L. S. Perkins and A. E. DePristo, Surf. Sci. 294, 67 (1993). For diffusion near step edges, however, this effect has not been studied.
[20] T. Ala-Nissila, I. Bukharev, and J. M. Kosterlitz, unpublished (1995).
[21] C. Roland and G. Gilmer, Phys. Rev. Lett. 67, 3188 (1991); Phys. Rev. B 46, 13437 (1992).
[]
[ "On supersymmetric Minkowski vacua in IIB orientifolds", "On supersymmetric Minkowski vacua in IIB orientifolds" ]
[ "Daniel Krefl [email protected] \nArnold Sommerfeld Center for Theoretical Physics Department für Physik\nLudwig-Maximilians-Universität München\n80333MunichGermany\n", "Dieter Lüst \nArnold Sommerfeld Center for Theoretical Physics Department für Physik\nLudwig-Maximilians-Universität München\n80333MunichGermany\n\nMax-Planck-Institut für Physik\n80805MunichGermany\n" ]
[ "Arnold Sommerfeld Center for Theoretical Physics Department für Physik\nLudwig-Maximilians-Universität München\n80333MunichGermany", "Arnold Sommerfeld Center for Theoretical Physics Department für Physik\nLudwig-Maximilians-Universität München\n80333MunichGermany", "Max-Planck-Institut für Physik\n80805MunichGermany" ]
[]
Supersymmetric Minkowski vacua in IIB orientifold compactifications based on orbifolds with background fluxes and non-perturbative superpotentials are investigated. Especially, microscopic requirements and difficulties to obtain such vacua are discussed. We show that orbifold models with one and two complex structure moduli and supersymmetric 2-form flux can be successfully stabilized to such vacua. By taking additional gaugino condensation on fixed space-time filling D3-branes into account also models without complex structure can be consistently stabilized to Minkowski vacua.
10.1088/1126-6708/2006/06/023
[ "https://arxiv.org/pdf/hep-th/0603166v2.pdf" ]
2,605,620
hep-th/0603166
b5c94f5a9944946bb1f62a0d46ffb644fd0cb9b8
On supersymmetric Minkowski vacua in IIB orientifolds

5 Apr 2006

Daniel Krefl ([email protected])
Arnold Sommerfeld Center for Theoretical Physics, Department für Physik, Ludwig-Maximilians-Universität München, 80333 Munich, Germany

Dieter Lüst
Arnold Sommerfeld Center for Theoretical Physics, Department für Physik, Ludwig-Maximilians-Universität München, 80333 Munich, Germany
Max-Planck-Institut für Physik, 80805 Munich, Germany

arXiv:hep-th/0603166v2, LMU-ASC 20/06, MPP-2006-27

Supersymmetric Minkowski vacua in IIB orientifold compactifications based on orbifolds with background fluxes and non-perturbative superpotentials are investigated. Especially, microscopic requirements and difficulties to obtain such vacua are discussed. We show that orbifold models with one and two complex structure moduli and supersymmetric 2-form flux can be successfully stabilized to such vacua. By taking additional gaugino condensation on fixed space-time filling D3-branes into account also models without complex structure can be consistently stabilized to Minkowski vacua.

Introduction

Moduli stabilization in superstring theory has been an unsolved problem for a long time. However, during recent time significant progress has been made. An important step was to recognize the importance of flux backgrounds [1] for moduli stabilization issues. E.g. turning on 3-form fluxes in type IIB orientifolds generates a potential for the axion-dilaton and the complex structure moduli. However, in general Kähler moduli stay unstabilized. To overcome this difficulty, KKLT [2] proposed to lift the remaining flat directions by considering non-perturbative effects. In particular, gaugino condensation in super Yang-Mills theory of D7-branes wrapping internal 4-cycles or instanton effects via euclidean D3-branes also wrapping 4-cycles may give a proper non-perturbative term to the superpotential to lift the flat directions.
Alternatively, one might also consider the possibility that α′ and perturbative effects might be sufficient to lift the flat directions [3,4,5]. After having stabilized all moduli to an AdS space the KKLT scenario in addition proposes to uplift the AdS vacuum to a dS vacuum by introducing anti-D3-branes. Despite this remarkable success the situation concerning full moduli stabilization is still not settled. More detailed investigations and applications of the KKLT scheme to more complicated models quickly uncovered that the consistency of the scheme is strongly model dependent. In particular, the generation of non-perturbative potentials for the Kähler moduli strongly depends on the fluxes and the topology of the compactification manifold [6,7,8,9,10,11]. Moreover the proposal to first integrate out the heavy fields before adding the non-perturbative potentials to the superpotential seems unnatural. Indeed, if one naively integrates in the heavy fields, inconsistencies can arise [12,13,14], because tachyonic directions may emerge in models without complex structure moduli, which will be a problem after uplift to dS space. Specifically, the moduli stabilization procedure to AdS vacua was studied in [15] for the T^6/Z_2 × Z_2 orientifold, with the result that all moduli indeed can be fixed. Moreover all other Z_N and Z_N × Z_M orientifolds were studied in great detail, both at the orbifold point [13] and also for blowing up the orbifold singularities [16,17]. Finally, the process of uplifting is still poorly understood. The uplifting by anti-D3-branes breaks supersymmetry explicitly, hence making a controlled uplift difficult. An alternative proposal is to consider D-terms due to non-supersymmetric 2-form flux on the worldvolume of D7-branes as uplifting terms [18]. Recently, progress has been made in this direction [19,20,21].
However, one should keep in mind that all results obtained so far are only valid in the large volume limit, such that the backreaction on the geometry due to the fluxes is negligible and perturbative α′-corrections are under control. The main focus of this work will be on applying the refined KKLT scenario of [22], namely to stabilize all moduli in a Minkowski vacuum instead of an AdS vacuum, to the orbifold models of [13]. This is interesting since several problems related to the uplift in the original KKLT scenario can be avoided in the scheme of [22]. In particular, whereas the toroidal orientifold models without complex structure generally suffer from tachyonic directions in the minimized scalar potential after the uplift, Minkowski vacua guarantee the absence of tachyonic directions without any further input. As we will see, however, models without a complex structure modulus still pose a problem, since the axion-dilaton stays unstabilized. It will be found that these difficulties can in principle be solved by taking an additional effect into account, namely gaugino condensation on a stack of space-time filling fixed D3-branes. Further, supersymmetric Minkowski vacua show the nice property of being qualitatively independent of perturbative corrections to the Kähler potential. The outline of the paper is as follows: In section 2, the general conditions for a supersymmetric Minkowski vacuum are given. Two possible ways to fulfill the consistency condition of vanishing superpotential are discussed, and a comment is made about the independence of supersymmetric Minkowski vacua of perturbative corrections to the Kähler potential. Section 3 deals with microscopic details of obtaining a racetrack scheme. It is argued that in type IIB only gaugino condensation should be a source for a racetrack potential. The arising difficulties in constructing a microscopic model are explained, and the scheme of [22] is generalized to include supersymmetric 2-form flux on D7-branes.
In section 4 toroidal orientifold models with one or two complex structure moduli in the orbifold limit are considered and it is shown that they indeed possess supersymmetric Minkowski vacua. In section 5 an additional gaugino condensate on a stack of space-time filling D3-branes is used to construct consistent supersymmetric Minkowski vacua for orientifold models without complex structure moduli. Finally, section 6 gives the conclusion.

Minkowski vacua conditions

Minkowski vacua are characterized by vanishing cosmological constant. For supersymmetric vacua in N = 1 supergravity, a vanishing cosmological constant is equivalent to a vanishing scalar potential. We limit our discussion to the F-term scalar potential, which is given by

V_F = e^K ( G^{I\bar{J}} D_I W \bar{D}_{\bar{J}} \bar{W} − 3 |W|^2 ),   (1)

where I, J run over all moduli fields φ_I, K denotes the Kähler potential, W the superpotential and G^{I\bar{J}} the inverse Kähler metric. For simplicity of notation, the set of complex structure moduli (Z_1, ..., Z_m) will be denoted as Z and the Kähler moduli (T_1, ..., T_n) as T. S denotes the axion-dilaton. The respective vacuum expectation values will be denoted as T_0, S_0 and Z_0. The local supersymmetry conditions are given by

D_I W = ∂_I W + (∂_I K) W = 0,   (2)

for all moduli I. At supersymmetric points, the scalar potential (1) reduces to

V_F^susy = −3 e^K |W(T_0, S_0, Z_0)|^2.   (3)

A vanishing cosmological constant then requires

W(T_0, S_0, Z_0) = 0.   (4)

At such points, the local supersymmetry conditions reduce to the global ones:

∂_I W = 0.   (5)

Hence, moduli expectation values for supersymmetric Minkowski vacua can be obtained by solving (4) and (5). Eq. (5) can be solved in two ways: first, the superpotential W does not depend at all on a particular scalar field φ_I, i.e. ∂_I W ≡ 0; this is of course not what we want, since φ_I stays a flat direction in the potential. Therefore we are looking for non-trivial solutions of eq. (5) with all scalar fields φ_I fixed to specific values.
As we will see this requirement may cause problems for some concrete models. Note that in contrast to the Minkowski vacua, it is not possible to get from an F-term scalar potential non-trivial supersymmetric AdS vacua with negative cosmological constant which nevertheless still possess some complex flat, undetermined moduli directions. The proof goes as follows: Let X be a set of moduli and φ a modulus which is a flat direction of the scalar potential, i.e. ∂_φ V ≡ 0. Further, assume that V possesses an extremal point X_0 which stabilizes the moduli X. If in addition the X and φ satisfy the supersymmetry conditions

D_I W = ∂_I W + (∂_I K) W = 0,   (6)

where I = (X, φ), at X_0 for all φ, the flat direction of V is called a supersymmetric flat direction. Note that due to this definition the X_0 are necessarily independent of φ. If (∂_φ K)|_{X_0} ≠ 0 for all φ, D_φ W|_{X_0} = 0 requires that W|_{X_0} ≡ (∂_φ W)|_{X_0} ≡ 0, since W is holomorphic and K is not. Hence such points are automatically Minkowski. For some φ, (∂_φ K)|_{(X_0, φ)} = 0 might occur, but still W needs to vanish at such points since otherwise φ would not be a flat supersymmetric direction. Hence, flat complex supersymmetric directions in the scalar potential lead automatically to Minkowski vacua. Therefore, a supersymmetric AdS vacuum does not possess such flat directions and the associated unstabilized moduli. One immediately sees that the original KKLT scheme cannot lead to supersymmetric Minkowski vacua, since the superpotential is given by

W = W_0 + C e^{−aT},   (7)

where W_0, C, a are constants and the second term is of non-perturbative origin. Here T denotes a single Kähler modulus. Hence ∂_T W = 0 cannot be satisfied non-trivially for finite values of T. This changes if one introduces additional non-perturbative T-dependent terms. The simplest case is the racetrack scheme

W = W_0 + C e^{−aT} − D e^{−bT},   (8)

with C, D, a, b real positive constants.
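The difference between (7) and (8) can be made concrete numerically. The sketch below (illustrative only; the parameter values C, D, a, b are assumptions, not taken from any model in the paper) verifies that the racetrack derivative vanishes at T_0 = ln(aC/bD)/(a−b), while the KKLT derivative is strictly negative for all finite T, and computes the W_0 needed for W(T_0) = 0.

```python
import math

# Illustrative check (parameter values are assumptions, not from the paper):
# the KKLT superpotential W = W0 + C*exp(-a*T) has dW/dT = -a*C*exp(-a*T),
# which never vanishes at finite T, whereas the racetrack form
# W = W0 + C*exp(-a*T) - D*exp(-b*T) is stationary at
# T0 = ln(a*C/(b*D)) / (a - b).
C, D, a, b = 1.0, 1.0, 0.1, 0.2

T0 = math.log(a * C / (b * D)) / (a - b)               # racetrack stationary point
dW_racetrack = -a * C * math.exp(-a * T0) + b * D * math.exp(-b * T0)
W0 = -(C * math.exp(-a * T0) - D * math.exp(-b * T0))  # tunes W(T0) = 0

print(f"T0 = {T0:.4f}, dW/dT at T0 = {dW_racetrack:.2e}, required W0 = {W0:.4f}")

# The KKLT derivative, by contrast, is strictly negative for every finite T:
assert all(-a * C * math.exp(-a * t) < 0 for t in (1.0, 10.0, 100.0))
```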
Such racetrack superpotentials with vanishing W_0 have already been introduced some time ago in the context of heterotic strings [23,24,25] to stabilize the dilaton and break supersymmetry. Lately, such potentials with non-vanishing W_0 gained again attention in the IIB KKLT setup [26,27,28,22] since they possess nice cosmological properties and a positive-definite mass matrix M_{I\bar{J}} = ∂_I ∂_{\bar{J}} V in supersymmetric Minkowski vacua, avoiding stability problems after uplifting to dS vacua. The positive-definiteness of M in supersymmetric Minkowski vacua can easily be verified, since only terms which do not involve W or a first derivative of W contribute to M at such vacua, due to the conditions (4) and (5). Thus,

M_{MN} = 0,   (9)

M_{\bar{M}N} = e^K G^{I\bar{J}} (∂_N ∂_I W)(\bar{∂}_{\bar{M}} \bar{∂}_{\bar{J}} \bar{W}).   (10)

Since the Kähler metric G is positive-definite, so is M. Hence in supersymmetric Minkowski vacua, extrema of the scalar potential are always minima.^1 Another interesting possibility to obtain supersymmetric Minkowski vacua would be non-perturbative superpotentials of the following form:

W = W_0 + C T e^{−aT},   (11)

where the prefactor of the non-perturbative potential is linear in T. For a simple one-modulus system with constant flux superpotential W_0, the conditions (4) and (5) give:

T_0 = 1/a,   (12)

W_0 = −C/(a e).   (13)

However, it is unclear if it is possible to obtain such superpotentials in a type IIB setup, e.g. by considering gauge threshold corrections in orientifold models [30]. It would be interesting to investigate this in future work. From now on we will stick to the classical racetrack scheme, and assume that the geometry of the compactification manifold allows a non-perturbative potential of racetrack form for each Kähler modulus T_i:

W_np = Σ_i ( C_i e^{−a_i T_i} − D_i e^{−b_i T_i} ).   (14)

Some microscopic details about such racetrack potentials in IIB string compactifications will be discussed in section 3. At the moment, it is just assumed that C_i, D_i are positive real constants.
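The claim that extrema at supersymmetric Minkowski points are always minima can be checked numerically for a one-modulus racetrack. The sketch below assumes the no-scale Kähler potential K = −3 ln(T + T̄) and illustrative racetrack parameters (neither is taken from the paper); it evaluates the F-term potential (1) near the Minkowski point by finite differences and verifies that the real Hessian is positive-definite.

```python
import math
import cmath

# Numerical check that an extremum of V at a supersymmetric Minkowski point is
# a minimum.  Single Kaehler modulus T with the assumed Kaehler potential
# K = -3 ln(T + Tbar); racetrack parameters below are illustrative only.
C, D, a, b = 1.0, 1.0, 0.1, 0.2
T0 = math.log(a * C / (b * D)) / (a - b)
W0 = -(C * math.exp(-a * T0) - D * math.exp(-b * T0))   # enforces W(T0) = 0

def V(t, tau):
    """F-term potential V = e^K (G^{T Tbar} |D_T W|^2 - 3 |W|^2)."""
    T = complex(t, tau)
    W = W0 + C * cmath.exp(-a * T) - D * cmath.exp(-b * T)
    Wt = -a * C * cmath.exp(-a * T) + b * D * cmath.exp(-b * T)
    two_t = (T + T.conjugate()).real
    Kt = -3.0 / two_t                 # dK/dT
    Ginv = two_t ** 2 / 3.0           # inverse Kaehler metric G^{T Tbar}
    eK = two_t ** -3                  # e^K
    DW = Wt + Kt * W                  # covariant derivative D_T W
    return eK * (Ginv * abs(DW) ** 2 - 3.0 * abs(W) ** 2)

# Finite-difference Hessian in (Re T, Im T) at the Minkowski point.
h = 1e-3
V0 = V(T0, 0.0)
Vtt = (V(T0 + h, 0.0) - 2 * V0 + V(T0 - h, 0.0)) / h**2
Vpp = (V(T0, h) - 2 * V0 + V(T0, -h)) / h**2
Vtp = (V(T0 + h, h) - V(T0 + h, -h) - V(T0 - h, h) + V(T0 - h, -h)) / (4 * h**2)
print(f"V(T0) = {V0:.1e}; Vtt = {Vtt:.3e}, det = {Vtt * Vpp - Vtp**2:.3e}")
```

Both leading minors of the Hessian come out positive, as expected from (9) and (10).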
The full superpotential is then given by

W = W_flux + Σ_{i=1}^{n} ( C_i e^{−a_i T_i} − D_i e^{−b_i T_i} ),   (15)

where W_flux denotes the Gukov-Vafa-Witten superpotential arising in flux compactifications [31,32,33,34]:

W_flux = ∫_{X_6} G_{(3)} ∧ Ω.   (16)

Here X_6 denotes the compact Calabi-Yau space, G_{(3)} is the combined 3-form flux and Ω denotes the unique globally defined harmonic (3,0)-form on X_6. The flux potential can be parameterized as

W_flux = A(Z_1, ..., Z_m) + B(Z_1, ..., Z_m) S,   (17)

where A, B are flux dependent functions. The Minkowski vacuum conditions (4) and (5) lead to the following set of equations to be solved for the vacuum expectation values of the moduli:

T_i^0 = 1/(a_i − b_i) ln( a_i C_i / (b_i D_i) ),   (18)

B(Z_0) = 0,   (19)

∂_{Z_j} A(Z)|_{Z_0} + S_0 ∂_{Z_j} B(Z)|_{Z_0} = 0,   (20)

A(Z_0) + ω_0 = 0,   (21)

where ω_0 has been defined as

ω_0 = Σ_{i=1}^{n} ( C_i e^{−a_i T_i^0} − D_i e^{−b_i T_i^0} ).   (22)

This set of equations is identical to the original one of [22]. The authors of [22] proposed to use equations (19) and (20) to fix the complex structure moduli and the axion-dilaton, and to ensure by specific choice of C_i, D_i, a_i, b_i that equation (21) is satisfied. Alternatively to the approach of [22], one might think about satisfying (21) by an appropriate choice of flux. Some comments are in order. Treating C_i, D_i, a_i, b_i as free parameters is not necessarily justified, since a_i, b_i, C_i, D_i are fixed by the specific compactification construction and low-energy physics, as long as threshold corrections are neglected.^2 However, solving (21) by tuning of fluxes is also not necessarily possible, since flux can only be tuned discretely. Nevertheless, since it is the simplest approach, in the following we will assume that flux degrees of freedom can be chosen such that (21) is satisfied, keeping in mind that this may not always be possible.
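The chain of conditions (18)-(21) can be illustrated with a toy flux sector containing a single complex structure modulus. The linear functions A(Z), B(Z) and all numbers below are assumptions for illustration, not fluxes of an actual orientifold; the sketch fixes T_0 from (18), Z_0 from (19), S_0 from (20), and then tunes A so that (21) holds.

```python
import math

# Toy illustration of the stabilization chain (18)-(21) with one complex
# structure modulus Z.  The flux functions A(Z), B(Z) and the racetrack data
# are assumed for illustration only.
a1, b1, C1, D1 = 0.1, 0.2, 1.0, 1.0           # racetrack data for a single T

# Eq. (18): Kaehler modulus vev; eq. (22): omega_0
T0 = math.log(a1 * C1 / (b1 * D1)) / (a1 - b1)
omega0 = C1 * math.exp(-a1 * T0) - D1 * math.exp(-b1 * T0)

# Toy flux functions: B(Z) = beta0 + beta1*Z, A(Z) = alpha0 + alpha1*Z
beta0, beta1 = -2.0, 1.0
alpha1 = -0.5
Z0 = -beta0 / beta1                # eq. (19): B(Z0) = 0
S0 = -alpha1 / beta1               # eq. (20): A'(Z0) + S0 * B'(Z0) = 0
alpha0 = -alpha1 * Z0 - omega0     # tunes eq. (21): A(Z0) = -omega0

A = lambda z: alpha0 + alpha1 * z
print(f"T0 = {T0:.4f}, Z0 = {Z0}, S0 = {S0}, A(Z0) + omega0 = {A(Z0) + omega0:.1e}")
```

In an actual compactification the last step is the delicate one: alpha0 is quantized by the flux, so (21) can only be satisfied approximately, as the text cautions.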
Also note that the vacuum expectation values of the moduli are stable against perturbative corrections to the Kähler potential, since the expectation values are completely determined by the superpotential. The same holds for the positive-definiteness property of the mass matrix, since (10) only depends on the Kähler potential via the e^K prefactor and the Kähler metric, which stays positive-definite under perturbative corrections.

Racetrack potentials

There are two known sources of possible non-perturbative corrections to superpotentials in type IIB orientifold compactifications: namely, instanton effects due to euclidean D3-branes wrapping 4-cycles in X_6, and gaugino condensation in supersymmetric gauge theories on the worldvolume of D-branes. The instantons yield the following non-perturbative superpotential [6]:

W_np = C(Z) e^{−2πT},   (23)

where C(Z) is a complex structure dependent one-loop determinant and T the Kähler modulus associated to the volume of the 4-cycle wrapped by the euclidean D3-branes. The explicit form of C(Z) is unknown in general. Generally, the existence of such instantons is only possible if the 4-cycle satisfies certain topological properties. It is reasonable to expect that a racetrack potential cannot be generated by two instantons on the same cycle, since in this case the two non-perturbative terms should combine into a single KKLT-type term. Also D3-branes and higher-dimensional D-branes on the same cycle should combine into a single stack of D-branes, preventing the simultaneous creation of an instanton and a gaugino condensate. For these reasons, only gaugino condensation might be seen as a candidate for generating racetrack potentials. Gaugino condensation is a low energy effect in supersymmetric gauge theories.
If no additional matter is present (pure super Yang-Mills), the following non-perturbative superpotential is generated:

W_np ∼ b e^{−3f/(2b)},   (24)

where b is the β-function coefficient of the gauge group and f the gauge kinetic function. For gauge theories on the world-volume of D-branes, the gauge kinetic functions are related to moduli. Of special interest are stacks of D3- and D7-branes, since these can occur simultaneously in a supersymmetric IIB orientifold compactification. To first order, one finds for gauge theories on stacks of D7-branes filling space-time and wrapping a 4-cycle of X_6:

f_D7 = T,   (25)

while for gauge theories on stacks of space-time filling D3-branes:

f_D3 = S.   (26)

For pure SU(N) super Yang-Mills, b is given by the quadratic Casimir of SU(N). In this case, the non-perturbative potential is given by

W_np = N C e^{−2πf/N},   (27)

where C is an O(1) constant determined by low-energy physics and N is the rank of SU(N). The existence of such gaugino condensates, giving non-perturbative potentials for the Kähler moduli of the orientifold, puts strong constraints on the topology of X_6. The constraints arising for toroidal orientifolds were discussed in [13]. Similar constraints hold in more general orientifolds. In detail, additional fundamental, bi-fundamental and adjoint matter, due to intersecting D-branes, Wilson lines and/or variable D-brane positions, may spoil the gaugino condensate. One must make sure that such additional matter does not exist or becomes massive, e.g. by appropriately switching on 3-form fluxes. In order to obtain a racetrack scheme, one needs to break the gauge group G of a stack of N D7-branes wrapping a 4-cycle down to a product gauge group G_c × G_d, such that gaugino condensation occurs independently in each gauge sector.
The standard procedure to achieve this is to use the translational degrees of freedom in the space transverse to the four-cycle to fix $N_c$ and $N_d$ branes (with $N_c + N_d = N$) at different transverse positions. If the four-cycle of $X_6$ has no transverse degrees of freedom, one needs to break the gauge group by Wilson lines or by switching on different 2-form fluxes on the branes. Note that for the first two possibilities, the structure of the broken gauge group is determined by the vacuum expectation values of the scalar fields associated with the D7-brane positions and Wilson lines. Hence, 3-form flux must be switched on such that these fields have no flat directions in the scalar potential and become massive, so that gaugino condensation can occur.

If one additionally switches on 2-form fluxes on the worldvolume of the D7-branes in toroidal orientifolds, the gauge kinetic function $f_{D7}$ changes as follows [35,36]:³

$f_{D7} = T - \gamma S$,   (28)

where $\gamma$ is a complex constant parameterizing the 2-form flux. One should keep in mind that, due to the switched-on 2-form flux, D7-branes can also contribute to the Ramond-Ramond 4-form tadpole conditions. In addition, it is assumed that the 2-form flux preserves supersymmetry, i.e. that the associated D-term potential vanishes. Additional moduli dependence may also occur through threshold corrections to the gauge kinetic function; however, with the axion-dilaton stabilized and the volume sufficiently large, these corrections will be neglected in the following.
If the $X_6$ under consideration supports a consistent D7-brane setup leading to pure super Yang-Mills with product gauge group $SU(N_c) \times SU(N_d)$ for each Kähler modulus, gaugino condensation in both gauge sectors gives the following superpotential if no 2-form flux is switched on:

$W = W_{flux} + \sum_i^n \left( N_c^i C_i\, e^{-\frac{2\pi}{N_c^i} T_i} - N_d^i D_i\, e^{-\frac{2\pi}{N_d^i} T_i} \right)$.   (29)

In this setup, equation (18) reads

$T_i^0 = \frac{1}{2\pi} \frac{N_c^i N_d^i}{N_d^i - N_c^i} \ln \frac{C_i}{D_i}$.   (30)

Note that for real $C_i$, $D_i$ it is necessary that the prefactors of the two non-perturbative terms differ in sign. Since the gauge interactions do not fix the phase of gaugino condensates [23,37], this should be possible as long as the rank of one of the gauge groups is even. One immediately sees that $N_c^i \neq N_d^i$ is required; otherwise one is back to the standard KKLT scheme. Moreover, positivity of the Kähler moduli requires one of the two following conditions to be satisfied:

$N_d^i > N_c^i$, $C_i > D_i$,   (31)

or

$N_d^i < N_c^i$, $C_i < D_i$.   (32)

In the following it will be assumed that the parameters satisfy the first case. One immediately sees that for realistic gauge group ranks, a stabilization at large $T_i$ values requires $N_c^i$ to be close to $N_d^i$. The largest possible $T_i$ value is obtained for $N_d^i = N_c^i + 1$:

$T_i^0 = \frac{1}{2\pi} N_c^i (N_c^i + 1) \ln \Theta_i$,   (33)

with

$\Theta_i = \frac{C_i}{D_i}$.   (34)

Figure 1 shows $T_i$ as a function of $N_c^i$ for several values of $\Theta_i$, and as a function of $\Theta_i$ for several values of $N_c^i$. Clearly, stabilization at large volume requires that both $N_c^i$ and $\Theta_i$ are large. This may become problematic for resolved toroidal orientifold models, since these generally possess a large number of Kähler moduli. As argued before, only gaugino condensation should be a source of racetrack potentials, and hence every four-cycle must be wrapped by a stack of approximately twenty D7-branes to achieve large-volume supersymmetric Minkowski vacua.
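The interplay in eqs. (30)-(34) between the gauge ranks, the prefactor ratio $\Theta_i$ and the stabilized volume can be checked numerically. The following sketch (Python; the parameter values are illustrative, not taken from the paper) evaluates eq. (30) and its $N_d = N_c + 1$ special case (33), reproducing the trend of figure 1 that $T^0$ grows with both $N_c$ and $\Theta$:

```python
import math

def racetrack_modulus(n_c, n_d, c, d):
    # Eq. (30): critical point of the racetrack superpotential for one Kaehler modulus.
    return n_c * n_d * math.log(c / d) / (2.0 * math.pi * (n_d - n_c))

def racetrack_modulus_adjacent(n_c, theta):
    # Eq. (33): the largest attainable T0, realized for N_d = N_c + 1.
    return n_c * (n_c + 1) * math.log(theta) / (2.0 * math.pi)

# Illustrative sample: N_c = 14, N_d = 15, C = 3, D = 1 (i.e. Theta = 3).
t0 = racetrack_modulus(14, 15, 3.0, 1.0)
print(round(t0, 2))  # prints 36.72
```

For $N_d = N_c + 1$ both functions agree, and $T^0$ increases monotonically in $N_c$ and $\Theta$, confirming that large volume forces large ranks and a large prefactor ratio.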
In total, it is reasonable to expect that several hundred D7-branes are needed to obtain such vacua via racetrack potentials, making the cancellation of the D7-charge and Ramond-Ramond 4-form charge tadpole conditions difficult.

For later convenience, define as in (22) $\omega_0 = W_{np}(T^0)$:

$\omega_0 = \sum_i D_i\, \Theta_i^{-N_c^i}$.   (35)

In the identical setup, but with 2-form flux turned on, the superpotential becomes:

$W = W_{flux} + \sum_i^n \left( N_c^i C_i\, e^{-\frac{2\pi}{N_c^i} T_i + \Lambda_c^i} - N_d^i D_i\, e^{-\frac{2\pi}{N_d^i} T_i + \Lambda_d^i} \right)$,   (36)

with

$\Lambda_l^i = \frac{2\pi}{N_l^i}\, \gamma_i S$,   (37)

for $l = c, d$. Note that necessarily $\Lambda_c^i \neq \Lambda_d^i$ since $N_c \neq N_d$, and that the flux factor $\gamma$ is assumed to be identical for both gauge sectors. It is further assumed that the gauge group is broken by translation in transverse space or by Wilson lines. The Kähler moduli are then stabilized at:

$T_i^0 = \frac{1}{2\pi} N_c^i (N_c^i + 1) \left( \ln \Theta_i + \Lambda_c^i - \Lambda_d^i \right) = \frac{1}{2\pi} N_c^i (N_c^i + 1) \ln \Theta_i + \gamma_i S^0$.   (38)

Observe that the vacuum expectation value of the non-perturbative superpotential with 2-form flux, $W_{np}(T^0, S^0)$, equals the vacuum expectation value $\omega_0$ of the non-perturbative superpotential without 2-form flux. Further, $(\partial_S W_{np}(T, S))|_{T^0}$ vanishes.⁴ Hence, the set of equations (19)-(21) is identical for models with and without supersymmetric 2-form flux; the only difference is that the vacuum expectation values of the Kähler moduli receive an additional flux- and axion-dilaton-dependent term. Thus, the generalized set of equations determining the vacuum expectation values of the moduli, with and without 2-form flux, in a Minkowski vacuum for $SU(N_c^i) \times SU(N_c^i + 1)$ gauge theories living on D7-branes is given by:

$T_i^0 = \frac{1}{2\pi} N_c^i (N_c^i + 1) \ln \Theta_i + \gamma_i S^0$,
$B(Z^0) = 0$,
$(\partial_{Z_j} A(Z))|_{Z^0} + S^0\, (\partial_{Z_j} B(Z))|_{Z^0} = 0$,
$A(Z^0) + \omega_0 = 0$.   (39)

D7-charge cancellation in this setup is less problematic, since fewer D7-branes are needed for a large-volume stabilization if 2-form flux is properly switched on. In sections 4 and 5 this scheme will be explicitly applied to some toroidal orientifold models.
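The claims around eqs. (36)-(38) — that switching on 2-form flux shifts the stationary point of $W_{np}$ to $T^0 + \gamma S^0$ while leaving its value there unchanged, and that $\partial_S W_{np}|_{T^0}$ vanishes — can be verified numerically for one modulus. A minimal sketch (Python; all parameter values are illustrative, $\gamma$ and $S^0$ are taken real for simplicity, and only the magnitude of $\omega_0$ is compared, to stay agnostic about overall sign conventions):

```python
import math

def w_np(t, s, n_c, n_d, c, d, gamma):
    # One-modulus racetrack term of eq. (36); gamma = 0 recovers eq. (29).
    x = t - gamma * s  # 2-form flux only enters through T - gamma*S, cf. eq. (28)
    return (n_c * c * math.exp(-2 * math.pi * x / n_c)
            - n_d * d * math.exp(-2 * math.pi * x / n_d))

n_c, n_d, c, d, gamma, s0 = 14, 15, 3.0, 1.0, 0.5, 1.4   # illustrative values
t0_noflux = n_c * n_d * math.log(c / d) / (2 * math.pi * (n_d - n_c))  # eq. (30)
t0_flux = t0_noflux + gamma * s0                                       # eq. (38)

# Central-difference derivatives at the flux-shifted stationary point.
eps = 1e-6
dT = (w_np(t0_flux + eps, s0, n_c, n_d, c, d, gamma)
      - w_np(t0_flux - eps, s0, n_c, n_d, c, d, gamma)) / (2 * eps)
dS = (w_np(t0_flux, s0 + eps, n_c, n_d, c, d, gamma)
      - w_np(t0_flux, s0 - eps, n_c, n_d, c, d, gamma)) / (2 * eps)
```

Both derivatives vanish at $(T^0, S^0)$, and the value of $W_{np}$ there coincides with its no-flux value, whose magnitude is $D\,\Theta^{-N_c}$ per modulus as in eq. (35).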
4 Toroidal orientifold models with complex structure moduli (CSM)

One CSM

In [13], AdS moduli stabilization in $Z_N$ and $Z_N \times Z_M$ orientifold models was discussed. Here we are interested in whether these models can also lead to Minkowski vacua. If the consistency conditions discussed in section 3 are satisfied, the models with $h^{(1,1)} = n$, $h^{untw}_{(2,1)} = 1$, 3-form flux and additional 2-form flux possess the following superpotential:

$W = (\alpha_1 + \alpha_2 Z) + (\alpha_3 + \alpha_4 Z) S + \sum_i^n \left( N_c^i C_i\, e^{-\frac{2\pi}{N_c^i} T_i + \Lambda_c^i} - N_d^i D_i\, e^{-\frac{2\pi}{N_d^i} T_i + \Lambda_d^i} \right)$.   (40)

This captures the $Z_{6-II}$, $Z_2 \times Z_3$ and $Z_2 \times Z_6$ models in the orbifold limit, and also the $Z_{6-II'}$ model after blow-up, since $h^{twist}_{2,1} = 0$ [13]. For convenience, the 3-form flux matrix $G_3$ will be defined as

$G_3 = \begin{pmatrix} \alpha_1 & \alpha_2 \\ \alpha_3 & \alpha_4 \end{pmatrix}$,   (41)

where the $\alpha_i$ are 3-form-flux-dependent constants. Application of equations (39) gives:

$Z^0 = -\frac{\alpha_3}{\alpha_4}$, $\quad S^0 = -\frac{\alpha_2}{\alpha_4}$, $\quad \alpha_1 = -\alpha_2 Z^0 - \omega_0$.   (42)

Substitution of the first equation into the last gives the condition:

$\det(G_3) = -\alpha_4\, \omega_0$.   (43)

The choice of 3-form flux

$G_3 = \begin{pmatrix} -\omega_0 - \alpha_2 \alpha_3 & \alpha_2 \\ \alpha_3 & -1 \end{pmatrix}$   (44)

gives a consistent supersymmetric Minkowski vacuum with tunable $S^0$ and $Z^0$:

$S^0 = \alpha_2$, $\quad Z^0 = \alpha_3$.   (45)

The stabilized Kähler moduli are given in (33).

Two CSM

Toroidal orientifolds with $h^{(1,1)} = n$ and $h^{untw}_{(2,1)} = 2$ which fulfill the tadpole conditions do not exist. However, if one identifies two of the complex structure moduli, this case captures the $Z_2 \times Z_2$ model; for simplicity, we will stick to this case. Since the $Z_2 \times Z_2$ model has $h^{twist}_{(2,1)} = 0$, the resolved case [15] is captured as well. The superpotential is given by:

$W = (\alpha_1 + \alpha_2 Z_1) + (\alpha_3 + \alpha_4 Z_1) S + (\alpha_5 + \alpha_6 S) Z_2 + (\alpha_7 + \alpha_8 S) Z_1 Z_2 + \sum_i^n \left( N_c^i C_i\, e^{-\frac{2\pi}{N_c^i} T_i + \Lambda_c^i} - N_d^i D_i\, e^{-\frac{2\pi}{N_d^i} T_i + \Lambda_d^i} \right)$.   (46)

Again, a racetrack-type superpotential with possible 2-form flux was taken into account.
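The one-CSM flux choice (44) can be checked directly against the conditions (42)-(43). The sketch below (Python; the values of $\omega_0$, $\alpha_2$, $\alpha_3$ are illustrative) builds $G_3$ as in (44) and confirms $Z^0 = \alpha_3$, $S^0 = \alpha_2$ and $\det(G_3) = -\alpha_4 \omega_0$:

```python
omega0, a2, a3 = 2.0e-7, 0.5, 1.3           # illustrative flux-dependent constants
g3 = [[-(omega0 + a2 * a3), a2],
      [a3, -1.0]]                           # the flux choice of eq. (44)
a1, a4 = g3[0][0], g3[1][1]

z0 = -a3 / a4                               # first relation of eq. (42)
s0 = -a2 / a4                               # second relation of eq. (42)
det_g3 = g3[0][0] * g3[1][1] - g3[0][1] * g3[1][0]
```

With this choice $z0 = a3$ and $s0 = a2$ hold exactly, the determinant condition (43) is satisfied, and the last relation of (42), $\alpha_1 = -\alpha_2 Z^0 - \omega_0$, follows automatically.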
For convenience, the 3-form flux matrix $G_3$ will be defined as

$G_3 = \begin{pmatrix} \alpha_1 & \alpha_2 & \alpha_3 \\ \alpha_4 & \alpha_5 & \alpha_6 \\ \alpha_7 & \alpha_8 & 0 \end{pmatrix}$.   (47)

Equations (39) lead to:

$\alpha_3 + \alpha_4 Z_1^0 + \alpha_6 Z_2^0 + \alpha_8 Z_1^0 Z_2^0 = 0$,
$\alpha_2 + \alpha_4 S^0 + \alpha_7 Z_2^0 + \alpha_8 S^0 Z_2^0 = 0$,
$\alpha_5 + \alpha_6 S^0 + \alpha_7 Z_1^0 + \alpha_8 S^0 Z_1^0 = 0$,
$\alpha_1 + \alpha_2 Z_1^0 + \alpha_5 Z_2^0 + \alpha_7 Z_1^0 Z_2^0 + \omega_0 = 0$.   (48)

The first three equations simplify to:

$S^0 = -\frac{\alpha_5 + \alpha_7 Z_1^0}{\alpha_6 + \alpha_8 Z_1^0}$, $\quad Z_1^0 = -\frac{\alpha_3 + \alpha_6 Z_2^0}{\alpha_4 + \alpha_8 Z_2^0}$, $\quad Z_2^0 = -\frac{\alpha_2 + \alpha_4 S^0}{\alpha_7 + \alpha_8 S^0}$.   (49)

If the flux parameters satisfy certain determinant conditions, the vacuum expectation values of the moduli are related among each other by projective conformal transformations of the group $PSL(2,\mathbb{C})$. Choosing the 3-form flux as

$G_3 = \begin{pmatrix} -(\omega_0 + \alpha_5^2/4) & 0 & 0 \\ 1 & \alpha_5 & 1 \\ 1 & 0 & 0 \end{pmatrix}$   (50)

gives:

$Z_1^0 = S^0$,   (51)
$Z_2^0 = -S^0$,   (52)
$S^0 = -\frac{\alpha_5}{2}$,   (53)

hence a consistent supersymmetric Minkowski vacuum with tunable $S^0$. The stabilized Kähler moduli are given in (33).

Models without CSM

The general racetrack superpotential for orientifolds with $h^{(1,1)} = n$, $h^{(2,1)} = 0$ and possible 2-form flux is given by:

$W = \alpha_1 + \alpha_2 S + \sum_i^n \left( N_c^i C_i\, e^{-\frac{2\pi}{N_c^i} T_i + \Lambda_c^i} - N_d^i D_i\, e^{-\frac{2\pi}{N_d^i} T_i + \Lambda_d^i} \right)$,   (54)

where the $\alpha_i$ are complex constants determined by 3-form fluxes. Since it is reasonable to expect that $W$ holds even after blowing up toroidal orbifolds, as long as $h^{twist}_{2,1} = 0$, this includes in particular the $Z_3$, $Z_7$, $Z_3 \times Z_3$, $Z_6 \times Z_6$ and $Z_2 \times Z_{6'}$ toroidal orientifold models before and after blow-up, and the $Z_{6-I}$, $Z_{12-I}$ and $Z_3 \times Z_6$ models in the orbifold limit [13]. The Minkowski vacuum condition $\partial_S W = 0$ immediately shows that one is forced to set $\alpha_2 = 0$. In this case, $S$ is a flat direction of the superpotential⁵ and the axion-dilaton stays unstabilized. Hence, the scheme fails for models without complex structure moduli.
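The solution (51)-(53) for the flux choice (50) can be verified by substituting it into the four conditions (48). A minimal numeric sketch (Python; $\omega_0$ and $\alpha_5$ are illustrative, and the $(1,1)$ entry of the flux matrix is taken as $-(\omega_0 + \alpha_5^2/4)$, which is the value the fourth condition of (48) requires for this ansatz):

```python
omega0, a5 = 2.0e-7, 1.6                     # illustrative values
# alpha_1 ... alpha_8 read off row by row from the flux matrix (50):
a1, a2, a3, a4, a5_, a6, a7, a8 = (-(omega0 + a5**2 / 4), 0.0, 0.0,
                                   1.0, a5, 1.0,
                                   1.0, 0.0)

s0 = -a5 / 2                                 # eq. (53)
z1, z2 = s0, -s0                             # eqs. (51)-(52)

residuals = [                                # the four conditions of eq. (48)
    a3 + a4 * z1 + a6 * z2 + a8 * z1 * z2,
    a2 + a4 * s0 + a7 * z2 + a8 * s0 * z2,
    a1 + a2 * z1 + a5_ * z2 + a7 * z1 * z2 + omega0,
    a5_ + a6 * s0 + a7 * z1 + a8 * s0 * z1,
]
```

All four residuals vanish, confirming that (50) stabilizes the two complex structure moduli and the axion-dilaton as claimed.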
However, if an additional stack of fixed D3-branes is present, gaugino condensation can occur in the corresponding gauge theory with gauge coupling

$f_{D3} = S$.   (55)

This leads to an additional term in the non-perturbative superpotential of the form

$W_{D3} = N_e E\, e^{-\frac{2\pi}{N_e} S}$,   (56)

where $E$ is an $O(1)$ constant. The condition $\partial_S W = 0$ then gives

$S^0 = \frac{N_e}{2\pi} \ln \frac{2\pi E}{\alpha_2}$.   (57)

Positivity of $S^0$ requires that

$\alpha_2 < 2\pi E$.   (58)

The consistency condition $W(T^0, S^0) = 0$ can be fulfilled by setting

$\alpha_1 = -\left[ \left( S^0 + \frac{N_e}{2\pi} \right) \alpha_2 + \omega_0 \right]$.   (59)

The $T_i^0$ values are unaffected and given in (33). Hence, with this modified scheme it is possible to stabilize all moduli in a supersymmetric Minkowski vacuum with tunable $S^0$. For illustration, the scalar potential (1) is plotted in figure 2 for a sample choice of purely real parameters, using the standard Kähler potential with identified Kähler moduli,

$K = -n \ln(T + \bar{T}) - \ln(S + \bar{S})$,   (60)

valid for the mentioned $Z_7$, $Z_{12-I}$, $Z_3 \times Z_3$, $Z_6 \times Z_6$, $Z_3 \times Z_6$ and $Z_3 \times Z_{6'}$ models in the orbifold limit. Note that this scheme gives the first realization of moduli stabilization without tachyonic directions in toroidal orientifold models without complex structure moduli and without first integrating out the axion-dilaton.

Conclusion

In this paper we discussed the possibility of obtaining Minkowski vacua in type IIB orientifold models with all moduli stabilized. We first showed that there are two serious obstacles for explicit IIB orientifold Minkowski vacua. Besides the complications to be overcome for gaugino condensation to occur, the situation becomes even more problematic since Minkowski vacua require that the consistency condition $W(T^0, Z^0, S^0) = 0$ is fulfilled. There are two possibilities to achieve this: tuning of 3-form fluxes, or restrictions on the parameters $a_i$, $b_i$, $C_i$, $D_i$.
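The stationarity and consistency conditions (57)-(59) for the D3-brane condensate can be checked numerically with sample parameters close to those quoted for figure 2 ($N_e = 2$, $\alpha_2 \approx 0.072$; $E = 1$ and $\omega_0$ are illustrative, and the sign convention $W_{np}(T^0) = +\omega_0$ of eq. (59) is assumed):

```python
import math

n_e, e_cond, a2, omega0 = 2, 1.0, 0.072, 2.0e-7   # sample values (illustrative)
s0 = n_e / (2 * math.pi) * math.log(2 * math.pi * e_cond / a2)   # eq. (57)

# dW/dS for W = alpha1 + alpha2*S + omega0 + N_e*E*exp(-2*pi*S/N_e);
# the racetrack part sits at its T-critical point and contributes omega0.
dw_ds = a2 - 2 * math.pi * e_cond * math.exp(-2 * math.pi * s0 / n_e)

a1 = -((s0 + n_e / (2 * math.pi)) * a2 + omega0)                 # eq. (59)
w_at_vacuum = (a1 + a2 * s0 + omega0
               + n_e * e_cond * math.exp(-2 * math.pi * s0 / n_e))
```

With these inputs $S^0 > 0$ (the bound (58) holds), $\partial_S W$ vanishes at $S^0$, and the choice (59) for $\alpha_1$ makes $W$ vanish in the vacuum, as required for a supersymmetric Minkowski minimum.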
Tuning of fluxes gives a convenient way to fulfill the consistency condition, with the major drawback that this may not be possible for every model, since flux is only tunable discretely. The second possibility, treating the parameters $a_i$, $b_i$, $C_i$, $D_i$ as free, means further strong restrictions on the compactification manifold, since $a_i$, $b_i$ depend on the number of D-branes, and $C_i$, $D_i$ are determined by low-energy physics as long as one does not take threshold effects into account. Hence, no matter which way one chooses to fulfill the consistency condition, finding an explicit compactification setup is much more restrictive and difficult than in the standard KKLT scheme.

However, in case they are realized, Minkowski vacua in IIB orientifold compactifications offer very nice phenomenological features. Besides the properties already observed in [27,22], namely the possibility of high-energy-scale inflation with low-energy supersymmetry breaking and the solution of stability problems in the uplifting process, two new features were observed. Firstly, properly switched-on 2-form flux only affects the Kähler moduli vacuum expectation values. Secondly, the moduli vacuum expectation values and the positive-definiteness of the mass matrix are stable against perturbative corrections to the Kähler potential. While tachyonic directions are automatically absent in supersymmetric Minkowski vacua, flat directions may occur; interestingly, supersymmetric AdS vacua show the opposite properties. In a concrete model without complex structure, the flat direction can be lifted by taking an additional effect into account, namely gaugino condensation on space-time-filling D3-branes.

⁵ Strictly, this is only valid when the 2-form flux $\gamma$ is identical for both gauge sectors, since only in this case does the additional axion-dilaton dependence due to the 2-form fluxes vanish in the vacuum.
Finally, note that also in KKLT scenarios with AdS vacua, additional gaugino condensation on fixed D3-branes may solve the stability problems [12,41,13] in models without a complex structure modulus.⁶

Figure 1: Left: T moduli as a function of $N_c$ for $\Theta = 1.2$ (red line), $\Theta = 3$ (green line), $\Theta = 9$ (blue line). Right: T moduli as a function of $\Theta$ for $N_c = 5$ (red line), $N_c = 10$ (green line), $N_c = 15$ (blue line).

Figure 2: Slices of the F-term scalar potential for models without complex structure, Kähler potential as in (60), racetrack potential (without 2-form flux) and D3-brane gaugino condensation. Left: $V$ with Kähler moduli fixed at $t \approx T^0$ ($V$ multiplied by $10^8$). Right: $V$ with axion-dilaton fixed at $s \approx S^0$ ($V$ multiplied by $10^{16}$). Choice of parameters: $n = 3$, $N_c = 14$, $N_d = 15$, $N_e = 2$, $C = 3$, $D = 1$, $F = 1$, $\alpha_2 \approx 0.072$.

Footnotes:
1. Strictly, $M$ is only positive semi-definite; however, the semi-definite case corresponds to a flat direction in the scalar potential [22]. Further note that the stability of non-supersymmetric Minkowski vacua is model dependent [29].
2. It might be reasonable to expect that threshold corrections [30] will lead at least to a complex structure dependence of the gauge kinetic function, which can be seen as a complex structure dependence of the prefactors $C_i$, $D_i$.
3. Racetrack models with similar gauge coupling, but with $W_{flux} = 0$, were considered in [38,39].
4. Note that this only holds for $N_d = N_c + 1$; therefore we will always consider this case in the following models if 2-form flux is switched on.
6. A similar observation was mentioned in a footnote of [40].

Acknowledgements

We would like to thank M. Haack, S. Reffert and S. Stieberger for many valuable discussions. This work is partly supported by EU contract MRTN-CT-2004-005104.

References

[1] M. Grana, Phys. Rept. 423, 91 (2006) [arXiv:hep-th/0509003].
[2] S. Kachru, R. Kallosh, A. Linde and S. P. Trivedi, Phys. Rev. D 68 (2003) 046005 [arXiv:hep-th/0301240].
[3] G. von Gersdorff and A. Hebecker, Phys. Lett. B 624 (2005) 270 [arXiv:hep-th/0507131].
[4] V. Balasubramanian, P. Berglund, J. P. Conlon and F. Quevedo, JHEP 0503, 007 (2005) [arXiv:hep-th/0502058].
[5] M. Berg, M. Haack and B. Körs, arXiv:hep-th/0508171.
[6] E. Witten, Nucl. Phys. B 474, 343 (1996) [arXiv:hep-th/9604030].
[7] L. Görlich, S. Kachru, P. K. Tripathy and S. P. Trivedi, JHEP 0412 (2004) 074 [arXiv:hep-th/0407130].
[8] R. Kallosh, A. K. Kashani-Poor and A. Tomasiello, JHEP 0506, 069 (2005) [arXiv:hep-th/0503138].
[9] L. Martucci, J. Rosseel, D. Van den Bleeken and A. Van Proeyen, Class. Quant. Grav. 22, 2745 (2005) [arXiv:hep-th/0504041].
[10] E. Bergshoeff, R. Kallosh, A. K. Kashani-Poor, D. Sorokin and A. Tomasiello, JHEP 0510 (2005) 102 [arXiv:hep-th/0507069].
[11] P. Berglund and P. Mayr, arXiv:hep-th/0504058.
[12] K. Choi, A. Falkowski, H. P. Nilles, M. Olechowski and S. Pokorski, JHEP 0411, 076 (2004) [arXiv:hep-th/0411066].
[13] D. Lüst, S. Reffert, W. Schulgin and S. Stieberger, arXiv:hep-th/0506090.
[14] S. P. de Alwis, Phys. Lett. B 626 (2005) 223 [arXiv:hep-th/0506266].
[15] F. Denef, M. R. Douglas, B. Florea, A. Grassi and S. Kachru, arXiv:hep-th/0503124.
[16] S. Reffert and E. Scheidegger, arXiv:hep-th/0512287.
[17] D. Lüst, S. Reffert, E. Scheidegger, W. Schulgin and S. Stieberger, paper to appear.
[18] C. P. Burgess, R. Kallosh and F. Quevedo, JHEP 0310, 056 (2003) [arXiv:hep-th/0309187].
[19] G. Villadoro and F. Zwirner, Phys. Rev. Lett. 95 (2005) 231602 [arXiv:hep-th/0508167].
[20] A. Achucarro, B. de Carlos, J. A. Casas and L. Doplicher, arXiv:hep-th/0601190.
[21] S. L. Parameswaran and A. Westphal, arXiv:hep-th/0602253.
[22] J. J. Blanco-Pillado, R. Kallosh and A. Linde, arXiv:hep-th/0511042.
[23] N. V. Krasnikov, Phys. Lett. B 193 (1987) 37.
[24] L. J. Dixon, SLAC-PUB-5229, invited talk given at the 15th APS Div. of Particles and Fields General Mtg., Houston, TX, Jan 3-6, 1990.
[25] M. Dine and Y. Shirman, Phys. Rev. D 63 (2001) 046005 [arXiv:hep-th/9906246].
[26] C. Escoda, M. Gomez-Reino and F. Quevedo, JHEP 0311, 065 (2003) [arXiv:hep-th/0307160].
[27] R. Kallosh and A. Linde, JHEP 0412 (2004) 004 [arXiv:hep-th/0411011].
[28] J. J. Blanco-Pillado et al., JHEP 0411, 063 (2004) [arXiv:hep-th/0406230].
[29] M. Gomez-Reino and C. A. Scrucca, arXiv:hep-th/0602246.
[30] D. Lüst and S. Stieberger, arXiv:hep-th/0302221.
[31] S. Gukov, C. Vafa and E. Witten, Nucl. Phys. B 584 (2000) 69 [Erratum-ibid. B 608 (2001) 477] [arXiv:hep-th/9906070].
[32] T. R. Taylor and C. Vafa, Phys. Lett. B 474, 130 (2000) [arXiv:hep-th/9912152].
[33] P. Mayr, Nucl. Phys. B 593, 99 (2001) [arXiv:hep-th/0003198].
[34] S. B. Giddings, S. Kachru and J. Polchinski, Phys. Rev. D 66, 106006 (2002) [arXiv:hep-th/0105097].
[35] D. Lüst, P. Mayr, R. Richter and S. Stieberger, Nucl. Phys. B 696, 205 (2004) [arXiv:hep-th/0404134].
[36] D. Lüst, S. Reffert and S. Stieberger, Nucl. Phys. B 706 (2005) 3 [arXiv:hep-th/0406092].
[37] J. A. Casas, Z. Lalak, C. Munoz and G. G. Ross, Nucl. Phys. B 347 (1990) 243.
[38] H. Abe, T. Higaki and T. Kobayashi, arXiv:hep-th/0512232.
[39] H. Abe, T. Higaki and T. Kobayashi, arXiv:hep-th/0511160.
[40] J. P. Conlon, F. Quevedo and K. Suruliz, JHEP 0508 (2005) 007 [arXiv:hep-th/0505076].
[41] K. Choi, A. Falkowski, H. P. Nilles and M. Olechowski, Nucl. Phys. B 718 (2005) 113 [arXiv:hep-th/0503216].
[ "Insight-HXMT observations of Swift J0243.6+6124: the evolution of RMS pulse fractions at super-Eddington luminosity" ]
[ "P J Wang \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "2★L D Kong \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "S 2★ \nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "Zhang \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "Y P Chen \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "S N Zhang \nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "J L Qu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "L Ji \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nInstitut für Astronomie und Astrophysik\nKepler Center for Astro and Particle Physics\nEberhard Karls Universität\n72076TübingenGermany\n", "L Tao \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "M Y Ge \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "F J Lu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "L Chen \nDepartment of Astronomy\nBeijing Normal University\n100088BeijingChina\n", "L M Song \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of 
Sciences\n100049BeijingChina\n\nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "T P Li \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n\nDepartment of Astronomy\nTsinghua University\n100084BeijingChina\n", "Y P Xu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "X L Cao \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "Y Chen ", "C Z Liu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "Q C Bu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nInstitut für Astronomie und Astrophysik\nKepler Center for Astro and Particle Physics\nEberhard Karls Universität\n72076TübingenGermany\n", "C Cai \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "Z Chang \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "G Chen ", "T X Chen \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "Y B Chen \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nDepartment of Physics\nTsinghua University\n100084BeijingChina\n", "W Cui \nInstitute of High Energy Physics\nKey 
Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nDepartment of Astronomy\nTsinghua University\n100084BeijingChina\n", "W W Cui ", "J K Deng \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nDepartment of Physics\nTsinghua University\n100084BeijingChina\n", "Y W Dong \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "Y Y Du \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "M X Fu \nDepartment of Physics\nTsinghua University\n100084BeijingChina\n", "G H Gao \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "H Gao ", "M Gao \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "Y D Gu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "J Guan \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "C C Guo \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "D W Han \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "Y Huang \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nUniversity of 
Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "J Huo \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "S M Jia \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "L H Jiang \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "W C Jiang \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "J Jin \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "Y J Jin \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nDepartment of Engineering Physics\nTsinghua University\n100084BeijingChina\n", "B Li \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "C K Li ", "G Li \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "M S Li ", "W Li \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "X Li ", "X B Li \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "X F Li \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "Y G Li \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "Z W Li \nInstitute of High Energy Physics\nKey Laboratory of Particle 
Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "X H Liang \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "J Y Liao \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "B S Liu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "G Q Liu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nDepartment of Physics\nTsinghua University\n100084BeijingChina\n", "H W Liu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "X J Liu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "Y N Liu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nDepartment of Engineering Physics\nTsinghua University\n100084BeijingChina\n", "B Lu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "X F Lu \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "Q Luo \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n\nUniversity of Chinese Academy of Sciences\nChinese Academy of Sciences\n100049BeijingChina\n", "T Luo \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "X Ma \nInstitute of High Energy Physics\nKey Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "B Meng \nInstitute of High Energy Physics\nKey 
Laboratory of Particle Astrophysics\nChinese Academy of Sciences\n100049BeijingChina\n", "Y Nang", "J Y Nie", "G Ou", "N Sai", "R C Shang", "X Y Song", "L Sun", "Y Tan", "Y L Tuo", "C Wang", "G F Wang", "J Wang", "L J Wang", "W S Wang", "Y S Wang", "X Y Wen", "B Y Wu", "B B Wu", "M Wu", "G C Xiao", "S Xiao", "S L Xiong", "J W Yang", "S Yang", "Yan Ji Yang", "Yi Jung Yang", "Q B Yi", "Q Q Yin", "Y You", "A M Zhang", "C M Zhang", "F Zhang", "H M Zhang", "J Zhang", "T Zhang", "W C Zhang", "W Zhang", "W Z Zhang", "Y Zhang", "Y F Zhang", "Y J Zhang", "Y Zhang", "Zhao Zhang", "Zhi Zhang", "Z L Zhang", "H S Zhao", "X F Zhao", "S J Zheng", "Y G Zheng", "D K Zhou", "J F Zhou", "Y X Zhu", "Y Zhu", "R L Zhuang", "P J Wang" ]
[ "Institute of High Energy Physics, Key Laboratory of Particle Astrophysics, Chinese Academy of Sciences, 100049 Beijing, China", "University of Chinese Academy of Sciences, Chinese Academy of Sciences, 100049 Beijing, China", "Institut für Astronomie und Astrophysik, Kepler Center for Astro and Particle Physics, Eberhard Karls Universität, 72076 Tübingen, Germany", "Department of Astronomy, Beijing Normal University, 100088 Beijing, China", "Department of Astronomy, Tsinghua University, 100084 Beijing, China", "Department of Physics, Tsinghua University, 100084 Beijing, China", "Department of Engineering Physics, Tsinghua University, 100084 Beijing, China", "Computing Division, Institute of High Energy Physics, Chinese Academy of Sciences, 100049 Beijing, China", "Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences, 100012 Beijing, China", "School of Physics and Optoelectronics, Xiangtan University, 411105 Xiangtan, China", "College of physics Sciences & Technology, Hebei University, 071002 Baoding, Hebei Province, China", "College of Physics, Jilin University, 130012 Changchun, China" ]
[]
Based on Insight-HXMT data, we report on the pulse fraction evolution during the 2017-2018 outburst of Swift J0243.6+6124, the first Galactic ultraluminous X-ray source (ULX). The pulse fractions of 19 observation pairs, selected in the rising and fading phases at similar luminosity, are investigated. The results show a general trend of the pulse fraction increasing with luminosity and energy at super-critical luminosity. However, the relative strength of the pulsation within each pair evolves strongly with luminosity: the pulse fraction in the rising phase is larger at luminosities below 7.71 × 10^38 erg s^-1, but smaller above it. The transition luminosity is found to be energy independent. This phenomenon is confirmed for the first time with Insight-HXMT observations, and we speculate that it may be related to a radiation-pressure-dominated accretion disk.
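The RMS pulse fraction discussed in the abstract is conventionally computed from a folded pulse profile as the root-mean-square deviation of the phase-binned rates divided by the mean rate. The sketch below illustrates that standard definition only; it is not the authors' Insight-HXMT pipeline, which would additionally correct for Poisson noise in each phase bin.

```python
import numpy as np

def rms_pulse_fraction(profile):
    """RMS pulse fraction of a folded pulse profile.

    profile : array of count rates in N phase bins.
    Returns sqrt(mean squared deviation from the mean) / mean rate.
    """
    p = np.asarray(profile, dtype=float)
    mean = p.mean()
    return np.sqrt(np.mean((p - mean) ** 2)) / mean

# Example: a sinusoidal profile with 20% fractional amplitude.
# For a pure sinusoid the RMS pulse fraction is amplitude / sqrt(2).
phase = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
profile = 100.0 * (1.0 + 0.2 * np.sin(phase))
pf = rms_pulse_fraction(profile)  # 0.2 / sqrt(2) ≈ 0.141
```

Energy-resolved pulse fractions, as studied in the paper, would apply the same formula to profiles folded in separate energy bands.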
10.1093/mnras/staa2448
[ "https://arxiv.org/pdf/2012.13228v1.pdf" ]
224,847,880
2012.13228
2e8eef16b58d82a50175804602e07decce75f9ad
Insight-HXMT observations of Swift J0243.6+6124: the evolution of RMS pulse fractions at super-Eddington luminosity

P J Wang, L D Kong, S Zhang, Y P Chen, S N Zhang, J L Qu, L Ji, L Tao, M Y Ge, F J Lu, L Chen, L M Song, T P Li, Y P Xu, X L Cao, Y Chen, C Z Liu, Q C Bu, C Cai, Z Chang, G Chen, T X Chen, Y B Chen, W Cui, W W Cui, J K Deng, Y W Dong, Y Y Du, M X Fu, G H Gao, H Gao, M Gao, Y D Gu, J Guan, C C Guo, D W Han, Y Huang, J Huo, S M Jia, L H Jiang, W C Jiang, J Jin, Y J Jin, B Li, C K Li, G Li, M S Li, W Li, X Li, X B Li, X F Li, Y G Li, Z W Li, X H Liang, J Y Liao, B S Liu, G Q Liu, H W Liu, X J Liu, Y N Liu, B Lu, X F Lu, Q Luo, T Luo, X Ma, B Meng, Y Nang, J Y Nie, G Ou, N Sai, R C Shang, X Y Song, L Sun, Y Tan, Y L Tuo, C Wang, G F Wang, J Wang, L J Wang, W S Wang, Y S Wang, X Y Wen, B Y Wu, B B Wu Institute of
High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina M Wu Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina G C Xiao Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina University of Chinese Academy of Sciences Chinese Academy of Sciences 100049BeijingChina S Xiao Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina S L Xiong Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina University of Chinese Academy of Sciences Chinese Academy of Sciences 100049BeijingChina J W Yang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina S Yang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina Yan Ji Yang Yi Jung Yang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina Q B Yi Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina School of Physics and Optoelectronics Xiangtan University 411105XiangtanChina Q Q Yin Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina Y You Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina University of Chinese Academy of Sciences Chinese Academy of Sciences 100049BeijingChina A M Zhang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina C M Zhang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 
100049BeijingChina F Zhang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina H M Zhang J Zhang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina T Zhang W C Zhang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina W Zhang University of Chinese Academy of Sciences Chinese Academy of Sciences 100049BeijingChina W Z Zhang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina Department of Astronomy Beijing Normal University 100088BeijingChina Y Zhang Y F Zhang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina Y J Zhang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina Y Zhang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina Zhao Zhang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina University of Chinese Academy of Sciences Chinese Academy of Sciences 100049BeijingChina Department of Physics Tsinghua University 100084BeijingChina Zhi Zhang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina Department of Engineering Physics Tsinghua University 100084BeijingChina Z L Zhang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina H S Zhao Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina X F Zhao Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina University 
of Chinese Academy of Sciences Chinese Academy of Sciences 100049BeijingChina S J Zheng Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina Y G Zheng Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina College of physics Sciences & Technology Hebei University 071002Baoding Hebei Province China D K Zhou Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina University of Chinese Academy of Sciences Chinese Academy of Sciences 100049BeijingChina J F Zhou Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina Department of Engineering Physics Tsinghua University 100084BeijingChina Y X Zhu Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina College of Physics Jilin University 130012ChangchunChina Y Zhu Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina R L Zhuang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina University of Chinese Academy of Sciences Chinese Academy of Sciences 100049BeijingChina Department of Engineering Physics Tsinghua University 100084BeijingChina P J Wang Institute of High Energy Physics Key Laboratory of Particle Astrophysics Chinese Academy of Sciences 100049BeijingChina Insight-HXMT observations of Swift J0243.6+6124: the evolution of RMS pulse fractions at super-Eddington luminosity Accepted XXX. 
Received YYY; in original form ZZZ. Compiled using MNRAS LaTeX style file v3.0.

Key words: pulse fraction - pulse profile - stars: neutron - pulsars: individual (Swift J0243.6+6124) - X-rays: binaries

Based on Insight-HXMT data, we report on the pulse fraction evolution during the 2017-2018 outburst of Swift J0243.6+6124, the newly discovered first Galactic ultraluminous X-ray source (ULX). The pulse fractions of 19 observation pairs, selected in the rising and fading phases at similar luminosity, are investigated. The results show a general trend of the pulse fraction increasing with luminosity and energy at super-critical luminosity. However, the relative strength of the pulsation within each pair evolves strongly with luminosity: the pulse fraction in the rising phase is larger at luminosities below 7.71 × 10^38 erg s^-1, but smaller above it. This transition luminosity is found to be energy independent. Such a phenomenon is first confirmed by the Insight-HXMT observations, and we speculate that it may be related to a radiation-pressure-dominated accretion disk.

INTRODUCTION

Ultraluminous X-ray sources (ULXs) are point-like, non-nuclear X-ray sources whose X-ray luminosities exceed the Eddington limit of Galactic stellar-mass black holes (assuming isotropic emission, typically 10^39 erg s^-1; for a black hole of mass M_BH, L_Edd = 1.3 × 10^38 (M_BH/M_⊙) erg s^-1). Some ULXs exhibit X-ray pulsations, e.g., M82 X-2 (Bachetti et al. 2014), NGC 7793 P13 (Fürst et al. 2016; Israel et al. 2017a), NGC 5907 ULX1 (Israel et al. 2017b), NGC 300 ULX1 (Carpano et al. 2018), NGC 1313 X-2 (Sathyaprakash et al. 2019) and M51 ULX-7 (Rodríguez Castillo et al. 2019), which are considered to be ultraluminous X-ray pulsars whose luminosity is produced by super-Eddington accretion onto a neutron star (NS). For an X-ray pulsar, the configuration of the emitting region depends on the mass accretion rate.
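The Eddington scaling quoted above is simple to evaluate directly. A minimal sketch (the function name is ours, chosen for illustration):

```python
# Eddington luminosity for an object of mass M (in solar masses), using the
# scaling quoted in the text: L_Edd = 1.3e38 * (M / M_sun) erg/s.
def eddington_luminosity(mass_msun):
    return 1.3e38 * mass_msun

l_ns = eddington_luminosity(1.4)   # a typical 1.4 M_sun neutron star
l_bh = eddington_luminosity(10.0)  # a 10 M_sun stellar-mass black hole
```

For a 10 M_⊙ black hole this gives 1.3 × 10^39 erg s^-1, matching the "typically 10^39 erg s^-1" quoted for Galactic stellar-mass black holes.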
At sub-critical luminosities, the accreting plasma falls directly onto the neutron star surface and heats the neutron star atmosphere to generate X-rays. When the source reaches super-critical luminosity, the accreting material is decelerated in a radiation-dominated shock, and an accretion column forms below the shock surface (Basko & Sunyaev 1976). The accretion disk may have three distinct zones (Shakura & Sunyaev 1973): an inner zone (zone A), an intermediate zone (zone B) and an outer zone (zone C), dominated by radiation pressure, by gas pressure and electron scattering, and by gas pressure and Kramers opacity, respectively. When the luminosity is high enough, the inner edge of the disk may reach zone A, i.e., the disk becomes dominated by radiation pressure. Observational signatures of a radiation-pressure-dominated accretion disk are reported by Ji et al. (2019) and Mönkkönen et al. (2019). Swift J0243.6+6124 is the first Galactic ULX harboring a NS, discovered by the Neil Gehrels Swift Observatory during the 2017-2018 outburst (Kennea et al. 2017). It has a pulse period of about 9.86 s and exhibits spin-up during the outburst (Zhang et al. 2019). Its companion star was identified as a Be star (Kouroubatzakis et al. 2017), owing to the presence of hydrogen and helium emission lines in the optical band. A NS magnetic field strength B < 10^13 G was estimated by Tsygankov et al. (2018) based on the upper limit on the propeller luminosity, L_prop < 6.8 × 10^35 erg s^-1. The timing analysis by Doroshenko et al. (2019) reveals two critical luminosities: L_1 = 1.5 × 10^38 erg s^-1, at which the source is expected to transition from pencil to fan beam mode, and L_2 = 4.4 × 10^38 erg s^-1, at which the accretion mode of the disk is supposed to change from gas to radiation dominated. NuSTAR snapshots show that the spectrum can generally be fitted with a cutoff power law plus one blackbody (Jaisawal et al.
2018), but needs multiple blackbody components at high luminosities (Tao et al. 2019). Part of the latter may be related to the outflow (van den Eijnden et al. 2019). Swift J0243.6+6124 was monitored thoroughly by Insight-HXMT with high cadence and high statistics. In this paper, we report the pulse fraction evolution of Swift J0243.6+6124 during the 2017-2018 outburst over a broad energy range obtained by Insight-HXMT. The observations and data reduction are described in Section 2. In Section 3, we present the results of the pulse fraction and pulse profile analyses. Finally, our arguments to explain the evolution of the pulse fractions are discussed in Section 4.

OBSERVATIONS AND DATA ANALYSIS

Insight-HXMT is the first Chinese X-ray satellite; it was launched on June 15th 2017. Insight-HXMT was designed in a collimated mode and is composed of three telescopes: the High Energy X-ray telescope (HE, phoswich NaI/CsI, 20-250 keV), the Medium Energy X-ray telescope (ME, Si-PIN detector, 5-30 keV) and the Low Energy X-ray telescope (LE, SCD detector, 0.7-13 keV), working in scanning and pointing observational modes and a GRB mode. To investigate the pulse properties in the rising and fading phases at similar luminosity, as shown in Figure 1, we select the Insight-HXMT observational pairs available within MJD 58043-58100, when the source stayed at super-Eddington luminosities. This paper focuses on the timing analysis of the pulse fraction; the luminosity estimation follows the procedure in Zhang et al. (2019), where the spectral model is cons*TBabs*(cutoffpl+bbody+gaussian). The luminosity is obtained in 2-150 keV for Swift J0243.6+6124 by assuming a distance of 6.8 kpc; the adopted observational pairs are listed in Table 1. For the timing analysis, the Insight-HXMT data analysis software package HXMTDAS v2.01 is used. Insight-HXMT data are processed according to the standard processing procedures described in the Insight-HXMT Data Reduction Guide v2.01¹.
The arrival time of each photon is corrected to the solar system barycenter with the HXMTDAS tool hxbary and corrected for the orbital modulation by adopting the ephemeris from the GBM Pulsar Spin Histories². Using the Stingray³ package in Python, the spin period of Swift J0243.6+6124 is measured in each observation. In the timing analysis the entire Insight-HXMT band is subdivided into 12 energy bins, the details of which are shown in Table 2. Finally, pulse fractions and profiles are obtained in each energy bin of the observational pairs.

RESULTS

Energy dependence of the pulse fraction

The root mean squared (RMS) pulse fraction, denoted as f_rms, is computed for each energy bin of the entire 33 observations. Here f_rms is defined as:

f_rms = [ Σ_{i=1}^{N} (R_i − R̄)² / N ]^{1/2} / R̄ ,   (1)

where R̄ is the phase-averaged count rate, R_i the count rate in phase bin i, and N = 32 the total number of phase bins. The error of f_rms is estimated by error propagation and quoted at the 90% confidence level. We find that in all the observations f_rms increases with energy, which is consistent with the result obtained with a simple definition of the pulse fraction, (F_max − F_min)/(F_max + F_min), reported by Tao et al. (2019) with NuSTAR data. An example of such an energy dependence of f_rms for two pairs with a relatively lower luminosity (∼ 3.95 × 10^38 erg s^-1) and a higher luminosity (∼ 1.09 × 10^39 erg s^-1) is shown in Figure 2, where the energy dependence of the pulse fraction is obvious. The pulse fraction of the rising phase is larger at lower luminosity but smaller at higher luminosity than that of the fading phase.
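The folding and pulse-fraction measurement described above can be illustrated with a minimal stand-in for the HXMTDAS/Stingray pipeline (the simulated event list, bin counts and function names are ours, chosen for illustration). It folds simulated events, recovers the spin period with a chi-square epoch-folding search, and evaluates both Eq. (1) and the simple (F_max − F_min)/(F_max + F_min) definition:

```python
import numpy as np

def fold_events(times, period, nbin=32):
    """Fold event arrival times at a trial period into nbin phase bins."""
    phases = (times / period) % 1.0
    counts, _ = np.histogram(phases, bins=nbin, range=(0.0, 1.0))
    return counts

def chi2_period_search(times, trial_periods, nbin=32):
    """Epoch-folding search: the trial period maximizing the chi-square of
    the folded profile against a flat profile estimates the spin period."""
    stats = []
    for p in trial_periods:
        prof = fold_events(times, p, nbin)
        mean = prof.mean()
        stats.append(((prof - mean) ** 2 / mean).sum())
    return trial_periods[int(np.argmax(stats))]

def rms_pulse_fraction(profile):
    """RMS pulse fraction, Eq. (1): sqrt(sum_i (R_i - Rbar)^2 / N) / Rbar."""
    r = np.asarray(profile, dtype=float)
    rbar = r.mean()
    return np.sqrt(((r - rbar) ** 2).sum() / r.size) / rbar

def simple_pulse_fraction(profile):
    """Simple definition: (F_max - F_min) / (F_max + F_min)."""
    return (profile.max() - profile.min()) / (profile.max() + profile.min())

# Simulated pulsar events: a 9.86 s period with a sinusoidal pulsed flux.
rng = np.random.default_rng(0)
true_p = 9.86
t = np.sort(rng.uniform(0.0, 5000.0, 200000))
keep = rng.uniform(0.0, 1.0, t.size) < 0.5 * (1 + 0.6 * np.sin(2 * np.pi * t / true_p))
events = t[keep]

best_p = chi2_period_search(events, np.linspace(9.80, 9.92, 241))
profile = fold_events(events, best_p)
f_rms = rms_pulse_fraction(profile)
f_simple = simple_pulse_fraction(profile)
```

For a sinusoidal profile with fractional amplitude a, the simple definition gives a while Eq. (1) gives a/√2, so the two pulse-fraction measures track each other up to a constant factor.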
Such a trend is seen more clearly in Figure 3, where the f_rms ratio of the rising to the fading observation of each pair is plotted against energy. Along with increasing luminosity, this ratio moves from above unity toward below unity. The ratios averaged over each energy bin, separately for data with ratio above and below unity, are shown in Figure 4 and clearly display the distinct evolution behavior. It seems that f_rms of the rising and fading phases tends to become comparable at around a specific luminosity.

Figure 4. Same as Figure 3, but averaged over observations with ratio above 1 (blue squares) and below 1 (red circles).

Such a trend is indicated in NICER observations at energies below 12 keV (Wilson-Hodge et al. 2018). We hence investigate this in what follows by looking into the luminosity dependence in each energy bin.

Luminosity dependence of the pulse fraction

In Figures 5, 6 and 7, f_rms for each pair is plotted in each of the 12 energy bins. One sees that, along with increasing luminosity, f_rms evolves in a similar way in each pair for all the energy bins: f_rms in the rising phase is larger at lower luminosity, then gradually approaches that in the fading phase, and finally the trend reverses once the luminosity passes through a certain value. To find the energy dependence of the cross luminosity, we plot in Figure 8 the f_rms ratios of each pair in the 9.8-12.9 keV energy bin. In order to obtain the range of the cross luminosity, we fit the f_rms ratio with a quadratic polynomial, as shown in Figure 8. To investigate the energy dependence of the cross luminosity, the data in Figure 8 are re-sampled via bootstrapping for error estimation. Figure 9 shows the luminosity distribution and the energy evolution of the cross luminosity obtained via the quadratic polynomial fits. The histogram gives a mean cross luminosity of 7.72 × 10^38 erg s^-1, which is well consistent with the value of 7.71 +0.12/−0.14 × 10^38 erg s^-1 obtained by averaging over the adopted energies.
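The quadratic-fit-plus-bootstrap procedure just described can be sketched as follows. The (luminosity, ratio) points below are synthetic, mimicking the Figure 8 trend; the numbers and function name are ours, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (luminosity, f_rms ratio) points mimicking the trend in Figure 8:
# ratio > 1 below the cross luminosity and < 1 above it (values illustrative).
L = np.linspace(4.0, 12.0, 15)              # in units of 1e38 erg/s
true_cross = 7.7
ratio = 1.0 - 0.04 * (L - true_cross) + rng.normal(0.0, 0.01, L.size)

def cross_luminosity(lum, rat):
    """Fit ratio(L) with a quadratic polynomial and return the root inside
    the sampled range where the fitted ratio crosses unity."""
    coeff = np.polyfit(lum, rat - 1.0, 2)
    roots = np.roots(coeff)
    real = roots[np.isreal(roots)].real
    inside = real[(real > lum.min()) & (real < lum.max())]
    if inside.size == 0:
        return np.nan
    return inside[np.argmin(np.abs(inside - lum.mean()))]

best = cross_luminosity(L, ratio)

# Bootstrap the points to estimate the uncertainty of the crossing.
boot = []
for _ in range(200):
    idx = rng.integers(0, L.size, L.size)
    boot.append(cross_luminosity(L[idx], ratio[idx]))
err = float(np.nanstd(boot))
```

Repeating this per energy bin yields the distribution of crossing luminosities shown in Figure 9.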
One sees from Figure 9 that the cross luminosity is most likely energy independent.

Evolution of the pulse profile

Nineteen pairs of pulse profiles are presented in Figures A1-A6, with adjacent energies combined due to their similarity in pulse profile. Each pulse profile is normalized to its mean count rate. Pulse profiles for each pair are co-aligned with respect to the phase with the minimum count rate.

Figure 8. The f_rms ratio evolution against luminosity in 9.8-12.9 keV. The green squares and black line represent the data and the quadratic polynomial fit, respectively. The black dashed line marks where the ratio is equal to unity.

From these plots, we see that the pulse profiles are in general similar for each pair between the rising and the fading phases. Along with increasing luminosity, the peak flux of the pulses in the rising phase decreases and finally becomes smaller than that in the fading phase once the source is brighter than the cross luminosity of 7.71 × 10^38 erg s^-1.

DISCUSSION AND SUMMARY

The high-cadence Insight-HXMT observations of the first Galactic ULX Swift J0243.6+6124 allow us for the first time to compare the pulse properties of the rising and fading phases in detail at similar luminosities. We select 19 pairs of observations covering the outburst evolution with luminosities above L_1. Previously such studies were relatively rare, owing to the sporadic coverage of HMXB outbursts and the faintness of the ULXs located in neighboring galaxies. For Swift J0243.6+6124, although the NuSTAR snapshots show an evolving RMS pulse fraction spectrum, and a difference in the RMS pulse fraction between the rising and fading phases is hinted at in NICER observations at soft X-rays, details were still lacking, especially in a broad energy band for a well-sampled outburst (Tao et al. 2019; Wilson-Hodge et al. 2018).
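The profile normalization and co-alignment described above (normalize to the mean count rate, cyclically shift the minimum to phase zero) can be sketched as follows; the two synthetic profiles are ours, for illustration only:

```python
import numpy as np

def align_profiles(p1, p2):
    """Normalize each folded profile to its mean count rate and cyclically
    shift it so the bin with the minimum count rate sits at phase zero."""
    out = []
    for p in (p1, p2):
        p = np.asarray(p, dtype=float)
        p = p / p.mean()
        out.append(np.roll(p, -int(np.argmin(p))))
    return out

# Two synthetic 32-bin profiles with different normalizations and an
# arbitrary relative phase offset, mimicking a rising/fading pair.
phase = np.arange(32) / 32.0
rise = 500.0 * (1.0 + 0.3 * np.sin(2.0 * np.pi * phase))
fade = 80.0 * (1.0 + 0.3 * np.sin(2.0 * np.pi * (phase + 0.25)))

rise_al, fade_al = align_profiles(rise, fade)
```

After alignment both profiles have unit mean and their minima at phase zero, so pairs with very different count rates and phase zero-points can be overplotted directly, as in Figures A1-A6.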
Our results show that the RMS pulse fraction spectrum is in general consistent with that reported by NuSTAR, but leads to the discovery that the relative strength between the rising and fading phases evolves with luminosity: above L_2, the RMS pulse fraction of the rising phase is in general larger than that in the fading phase and, along with increasing luminosity, this trend reverses. Further investigation of the energy and luminosity dependence of this behavior shows that the turnaround of the RMS pulse fraction occurs at a luminosity of around 7.71 × 10^38 erg s^-1, with little dependence on energy. The pulse profiles of the rising/fading phases are similar in each pair, but the evolving pulse profile peak may provide a hint toward understanding this peculiar RMS pulse fraction behavior.

It is generally believed that in a HMXB the matter from the companion is accreted towards the compact star via the stellar wind. If the system harbors a magnetized neutron star, the accretion rate must be high enough to overcome the magnetic barrier of the so-called propeller effect, so that the accreting matter can be channeled along the magnetic field lines onto the magnetic poles of the neutron star, where part of the gravitational energy is released in the form of X-ray emission. Depending on the accretion rate, the accreted matter will form a mound or a column at the magnetic poles, and pencil or fan beam modes are expected to be at work, respectively.

Figure 10. Schematic drawing of the evolution of the accretion disk and accretion column; the latter produces the observed pulsed emission. The inset in the middle shows the observed evolution of the RMS pulse fraction (f_rms) during the rising phase (blue crosses and arrows) and fading phase (red crosses and arrow). Panels a, b, c and d show the proposed states of the accretion disk and accretion column during the rising phase (a and b) and fading phase (c and d), respectively. In panel a, L < L_2, the disk is gas pressure dominated. In panel b, L > L_2, the disk is radiation pressure dominated and outflow from the disk is produced; f_rms increases slightly due to the slightly broader accretion column caused by the smaller magnetospheric radius at higher accretion rate. In panel c, some of the outflowing matter returns to the system, following a broader range of magnetic field lines to form a broadened accretion column, thus producing a higher f_rms for the same luminosity compared to that in panel b. In panel d, the disk is back to the gas-pressure-dominated state and f_rms is reduced due to the larger magnetospheric radius than that in panel a for the same luminosity, if the same physical process happens as that observed in the HMXB V 0332+53 (Doroshenko et al. 2017).

The critical luminosity for the transition between these two modes is estimated as L_* ≈ (l_0 c / κ_eff)(GM/R) (Mushtukov et al. 2015a), where κ_eff, l_0, M and R are the effective opacity, the annular arc thickness, the NS mass and the NS radius. The X-rays are most likely emitted radially along the field lines in the pencil mode and perpendicular to the field lines in the fan mode. For Swift J0243.6+6124, this luminosity appears as the critical luminosity of 1.5 × 10^38 erg s^-1. Therefore the emission pattern for the selected 19 pairs is most likely in the fan mode. In the fan mode, the accreting matter is shocked via the radiation pressure from the hot spot on the surface of the neutron star and forms a column structure, with the bulk velocity larger at the top of the column (Wang & Frank 1981; Basko & Sunyaev 1976; Becker et al. 2012). The seed photons from either the surface of the neutron star or the accretion disk are up-scattered via the inverse Compton process in this column region and escape in the direction perpendicular to the infalling flow. Above L_2, the accretion mode switches from a gas-dominated into a radiation-dominated disk, and hence the inner accretion disk inflates in the direction vertical to the disk plane.
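The scenario above leans on the magnetospheric radius shrinking with accretion rate. A minimal sketch of the standard Alfvén-type scaling (a textbook estimate; the constants, parameter values and function name are ours, not a formula from this paper):

```python
# Standard Alfven-type magnetospheric radius (textbook estimate, not from
# this paper): R_m = xi * (mu^4 / (2 G M Mdot^2))^(1/7), with magnetic
# moment mu = B * R^3 (cgs units throughout).
G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
MSUN = 1.989e33    # solar mass, g

def magnetospheric_radius(b_gauss, mdot_gs, m_ns=1.4 * MSUN, r_ns=1.0e6, xi=0.5):
    mu = b_gauss * r_ns ** 3
    return xi * (mu ** 4 / (2.0 * G * m_ns * mdot_gs ** 2)) ** (1.0 / 7.0)

# Higher accretion rate -> smaller magnetosphere (R_m ~ Mdot^(-2/7)),
# so the disk reaches further in and can feed a broader accretion column.
r_lo = magnetospheric_radius(1.0e13, 1.0e18)   # lower accretion rate
r_hi = magnetospheric_radius(1.0e13, 1.0e19)   # ten times higher rate
```

The weak Ṁ^(-2/7) dependence means a factor of ten in accretion rate shrinks the magnetosphere by only about a factor of two, consistent with the gradual evolution of the column width invoked above.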
As the simple definition of the pulse fraction, (F_max − F_min)/(F_max + F_min), shows, the pulse fraction is mostly determined by the maximum and minimum fluxes (F_max and F_min) of the pulse profile. The pulse fraction increases with larger F_max or smaller F_min. In the fan mode the pulse fraction of Swift J0243.6+6124 is observed to be proportional to energy and luminosity. The maximum flux in the fan mode can be estimated as L** = L(R_m = R) ≈ 1.8 × 10^39 (l_0/d_0) erg s^-1, where l_0 and d_0 are the annular arc length and thickness of the accretion channel (Mushtukov et al. 2015b). One sees that the flux increases with a broader column (e.g. with larger l_0). Along with increasing luminosity, the accretion disk moves inward and can cover more magnetic field lines, which can result in a broadened accretion column at the magnetic pole of the neutron star.

The energy dependence of the pulse fraction may be related to the beaming effect. In the accretion column, the accreting material passing through the shock still has a relativistic speed at the top of the column, where the seed photons are up-scattered to higher energies on their way of sinking toward the neutron star surface. Accordingly, the emitted harder X-rays undergo a larger beaming effect. One has a minimum flux when the orientation of the accretion column aligns with the line of sight. Such a beaming effect may result in a smaller minimum flux for harder X-rays.

During the outburst of the HMXB V 0332+53, it was observed that the magnetospheric radius of the neutron star tends to be smaller (for the same field strength of the NS) in the rising phase than in the fading phase (Doroshenko et al. 2017). If a similar case holds for Swift J0243.6+6124, we speculate that the following scenario may be relevant to the observed transition of the pulse fraction between the rising and fading phases.
For the pulse fraction evolution at luminosities below L_2, with a smaller magnetospheric radius in the rising phase the accretion disk can move to an inner region, and hence form a broader accretion column at the magnetic pole. Above L_2, the accretion disk is radiation pressure dominated. Part of the accreting material will be blown away, but some of it may be bound by gravity and return along a wide range of magnetic field lines to form a broader accretion column during the fading phase. So the observed transition luminosity of ∼ 7.71 × 10^38 erg s^-1 may be the result of the balance between these two effects, which evolves as the outburst luminosity goes beyond L_1 (Figure 10).

In summary, Insight-HXMT observed a peculiar evolution of the pulse fractions of the first Galactic ULX Swift J0243.6+6124. The balance of the pulse fraction between the rising and fading phases evolves with luminosity: the pulse fraction in the rising phase is larger below a critical luminosity of 7.71 × 10^38 erg s^-1 but smaller above it. Such a phenomenon is first confirmed by the Insight-HXMT observations, but a thorough scenario is still missing, although we speculate it may be related to the transition of the accretion modes between a gas- and a radiation-pressure-dominated disk.

Figure A6. Same as Figure A1 but in 38.6-107.9 keV.

See details about Insight-HXMT in Zhang et al. (2019); Cao et al. (2019); Liu C. Z., et al. (2019); Chen Y., et al. (2019). Insight-HXMT observations of Swift J0243.6+6124 cover a period from MJD 58030 to MJD 58180 (Figure 1).

Figure 1. The luminosity evolution of Swift J0243.6+6124 as observed by Insight-HXMT during the outburst. The green dashed line marks the luminosity at which the f_rms transition is discovered. The blue dotted line and red dash-dot line represent L_1 and L_2 as reported by Doroshenko et al. (2019), respectively.

A distance of 6.8 kpc is assumed (Bailer-Jones et al. 2018). A systematic error of 1% is added in the spectral fitting with the XSPEC software package version 12.10.0c.
The resulting luminosities are compared and those pairs with luminosity consistency within 97-103% are selected. Finally, a sample of 19 pairs is obtained, consisting of 14 observations in the rising part and 19 observations in the fading part, as shown in Table 1.

Figure 2. Energy spectra of the RMS pulse fraction derived for two pair observations. The data with circle and triangle (No. 1) have larger luminosity than those with square and diamond (No. 15). The rising phases are denoted by square and circle, and the fading phases by diamond and triangle.

Figure 3. Energy spectra of the pulse fraction ratio between the rising and fading phases for 17 observational pairs. The vertical line with arrow shows the direction of increasing luminosity. Two observational pairs with the highest luminosity are not included because they were carried out closely in time.

Figure 5. f_rms evolution against luminosity for each energy bin. The green dashed line represents the cross luminosity at which f_rms of the rising and fading phases are comparable. The rising and fading phases are denoted by blue squares and red diamonds, respectively.

Figure 6. Same as Figure 5 but with different energy bins.

Figure 7. Same as Figure 5 but with different energy bins.

Figure 9. Upper panel: the distribution of the cross luminosity obtained by quadratic polynomial fitting in the 12 energy bins. Lower panel: the cross luminosity versus energy. The mean cross luminosity is 7.71 × 10^38 erg s^-1.

Figure A1. Pulse profiles for the 19 pairs in 0.8-3.4 keV. The blue line and red dash-dot line represent the rising and fading phases, respectively. For each plot the pulse profile is normalized to its mean value and co-aligned with respect to the minimum of the pulse profile.

Figure A2. Same as Figure A1 but in 3.4-5.6 keV.

Figure A3. Same as Figure A1 but in 6.5-12.9 keV.

Figure A4. Same as Figure A1 but in 12.9-28.2 keV.

Figure A5. Same as Figure A1 but in 27.4-38.6 keV.

Table 1. Observational pairs adopted in the pulsed fraction analysis.

No.   Insight-HXMT ObsID   L_2-150 keV (10^38 erg s^-1)   Time (MJD)
1     P011457700201        3.97                           58043.1
      P011457705502        3.92                           58100.3
2     P011457700301        4.05                           58044.1
      P011457705402        4.10                           58099.3
3     P011457700401        4.29                           58045.1
      P011457705401        4.29                           58099.1
4     P011457700501        4.49                           58046.1
      P011457705301        4.47                           58098.4
5     P011457700502*       4.64                           58046.3
      P011457704901        4.81                           58094.2
6     P011457700502*       4.64                           58046.2
      P011457705101        4.71                           58096.6
7     P011457700701        5.59                           58049.0
      P011457704501        5.49                           58090.0
8     P011457700801*       5.77                           58050.0
      P011457704301        5.89                           58088.2
9     P011457700801*       5.77                           58050.0
      P011457704401        5.77                           58089.3
10    P011457701105*       7.56                           58052.7
      P011457703501        7.71                           58080.0
11    P011457701105*       7.56                           58052.7
      P011457703601        7.40                           58081.1
12    P011457701107        8.10                           58053.0
      P011457703401        8.30                           58079.2
13    P011457701201        9.02                           58055.0
      P011457703301        9.08                           58078.1
14    P011457701301        9.64                           58056.0
      P011457703201        9.45                           58077.4
15    P011457701401        10.69                          58057.0
      P011457703001        11.01                          58075.2
16    P011457701401        10.69                          58057.0
      P011457703101        10.42                          58076.2
17    P011457701501        11.77                          58058.0
      P011457702901        11.97                          58074.2
18    P011457701801*       19.51                          58061.3
      P011457702101        20.65                          58065.6
19    P011457701801*       19.51                          58061.3
      P011457702102        20.52                          58065.8
* Duplicate observations appearing in different pairs.

Table 2. The energy bins of each Insight-HXMT telescope used for the pulse fraction analysis.

Telescope   Bin 1           Bin 2           Bin 3            Bin 4
LE          0.8-2.4 keV     2.4-3.4 keV     3.4-4.4 keV      4.4-5.6 keV
ME          6.5-9.8 keV     9.8-12.9 keV    12.9-17.7 keV    17.7-28.2 keV
HE          27.4-32.4 keV   32.4-38.6 keV   38.6-51.6 keV    51.6-107.9 keV

MNRAS 000, 1-7 (2015)

1 http://www.hxmt.org/index.php/usersp/dataan/fxwd
2 https://gammaray.msfc.nasa.gov/gbm/science/pulsars.html
3 https://stingray.readthedocs.io/en/latest/

ACKNOWLEDGEMENTS

This work is supported by the National Key R&D Program of China (2016YFA0400800) and the National Natural Science Foundation of China under grants U1838201, U1838202, U1938101 and 11733009.
This work made use of data from the Insight-HXMT mission, a project funded by the China National Space Administration (CNSA) and the Chinese Academy of Sciences (CAS).

DATA AVAILABILITY

The data underlying this article will be shared on reasonable request to the corresponding author.

APPENDIX A: PULSE PROFILES IN SIX ENERGY BANDS

This paper has been typeset from a TeX/LaTeX file prepared by the author.
[]
[ "A Large Distance Expansion for Quantum Field Theory", "A Large Distance Expansion for Quantum Field Theory" ]
[ "Paul Mansfield [email protected] \nDepartment of Mathematical Sciences\nUniversity of Durham South Road Durham\nDH1 3LEEngland\n" ]
[ "Department of Mathematical Sciences\nUniversity of Durham South Road Durham\nDH1 3LEEngland" ]
[]
Using analyticity of the vacuum wave-functional under complex scalings, the vacuum of a quantum field theory may be reconstructed from a derivative expansion valid for slowly varying fields. This enables the eigenvalue problem for the Hamiltonian to be reduced to algebraic equations. Applied to Yang-Mills theory this expansion leads to a confining force between quarks.
null
[ "https://export.arxiv.org/pdf/hep-th/9608097v1.pdf" ]
17,985,391
hep-th/9608097
778fc3e3d31ef6e415bc6cb9979386e86352b922
A Large Distance Expansion for Quantum Field Theory

Paul Mansfield ([email protected])
Department of Mathematical Sciences, University of Durham, South Road, Durham DH1 3LE, England

arXiv:hep-th/9608097v1, 15 Aug 1996. Invited talk at the Second International Sakharov Conference on Physics.

Using analyticity of the vacuum wave-functional under complex scalings, the vacuum of a quantum field theory may be reconstructed from a derivative expansion valid for slowly varying fields. This enables the eigenvalue problem for the Hamiltonian to be reduced to algebraic equations. Applied to Yang-Mills theory this expansion leads to a confining force between quarks.

Introduction

I will describe an approach to the eigenvalue problem for the Hamiltonian of a quantum field theory, $\hat H\,|E\rangle = E\,|E\rangle$, in which states are constructed from their simple large distance behaviour. [1] This is in contrast to the usual approach to, say, Yang-Mills theory, which is built up from simple short-distance behaviour. For simplicity, I will concentrate on scalar field theory, although the results also apply to Yang-Mills theory, where the leading order in the expansion which I will describe leads to an area law for the Wilson loop [2] via a kind of dimensional reduction. [3] In the Schrödinger representation the field operator, $\hat\varphi$, is diagonal and its conjugate momentum is represented by functional differentiation,
$$\langle\varphi|\,\hat\varphi(x) = \varphi(x)\,\langle\varphi|, \qquad \langle\varphi|\,\hat\pi(x) = -i\,\frac{\delta}{\delta\varphi(x)}\,\langle\varphi|, \eqno(1)$$
so that the ground state is represented by the wave-functional $\langle\varphi|E_0\rangle = \Psi[\varphi] = \exp W[\varphi]$. In general $W[\varphi]$ is non-local, but when $\varphi(x)$ varies very slowly on length-scales that are large in comparison to the inverse of the mass of the lightest particle it has a derivative expansion in terms of local functions, e.g. $W = \int dx\,(a_1\varphi^2 + a_2\,\nabla\varphi\cdot\nabla\varphi + a_3\varphi^4 + \cdots)$. This expansion is the basis of our method.
At first glance it would appear to be completely useless, because the internal structure of particles is characterised by much shorter scales, although there is one physically interesting phenomenon that takes place at arbitrarily large distances, the confinement of quarks. I will claim, however, that this large distance behaviour is relevant not just to confinement but to understanding physics on all length scales, because I will show how it may be used to reconstruct the wave-functional $\Psi[\varphi]$ for arbitrary $\varphi(x)$.

The Local Expansion

Consider $\langle\varphi|e^{-T\hat H}|\bar\varphi\rangle$. According to Feynman this is given by an integral over fields $\phi(x,t)$ that live in a Euclidean space-time bounded by the surfaces $t=0$ and $t=-T$, on which $\phi$ is equal to $\varphi$ and $\bar\varphi$ respectively. As $T\to\infty$ this matrix element is dominated by the contribution from the ground-state, so
$$\langle\varphi|\,e^{-T\hat H}\,|\bar\varphi\rangle = \int\mathcal{D}\phi\;e^{-S_E} \sim \Psi[\varphi]\,e^{-TE_0}\,\Psi^*[\bar\varphi] = e^{W[\varphi]+W[\bar\varphi]-E_0T}, \eqno(2)$$
where $S_E$ is the Euclidean action. From this we can extract $\Psi[\varphi]$. A different formulation makes the dependence on $\varphi$ more explicit. [4] Define the bra $\langle D|$ so as to be annihilated by $\hat\varphi$; then the canonical commutation relations imply that $\langle\varphi| = \langle D|\exp(i\int dx\,\hat\pi\varphi)$. So now
$$\langle\varphi|\,e^{-T\hat H}\,|\bar\varphi\rangle = \langle D|\;e^{i\int dx\,\hat\pi\varphi}\;e^{-T\hat H}\;e^{-i\int dx\,\hat\pi\bar\varphi}\;|D\rangle, \eqno(3)$$
which can be written as the functional integral
$$\int\mathcal{D}\phi\;e^{-S_E+\int dx\,\dot\phi(x,0)\,\varphi(x)-\int dx\,\dot\phi(x,-T)\,\bar\varphi(x)}. \eqno(4)$$
The boundary condition on the integration variable, $\phi$, implied by $\langle D|$ is that it should vanish on the boundary surfaces $t=0$ and $t=-T$. (In replacing $\hat\pi$ by $\dot\phi$, the time derivative of $\phi$, we should also include delta functions in time, coming from the time-ordering.) So $W[\varphi]$ is the sum of connected Euclidean Feynman diagrams in which $\varphi$ is a source for $\dot\phi$ on the boundary. The only major difference from the usual Feynman diagrams encountered in field theory is that the propagator vanishes when either of its arguments lies on the boundary. Using this, Symanzik discovered the remarkable result that in 3+1 dimensional $\varphi^4$ theory $W[\varphi]$ is finite as the cut-off is removed.
For a free scalar field with mass $m$ this gives $W = -\frac{1}{2}\int dx\,\varphi\sqrt{-\nabla^2+m^2}\,\varphi$, so that if the Fourier transform of $\varphi$ vanishes for momenta with magnitude greater than the mass, $W$ can be expanded in the convergent series
$$-\int dx\left(\frac{m}{2}\varphi^2+\frac{1}{4m}(\nabla\varphi)^2-\frac{1}{16m^3}(\nabla^2\varphi)^2+\cdots\right).$$
The terms of this expansion are local in the sense that they involve the field and a finite number of its derivatives at the same spatial point. The same is true for an interacting theory in which the lightest particle has non-zero mass, because massive propagators are exponentially damped at large distances, so that configuration-space Feynman diagrams are negligible except when all their points are within a distance $\approx 1/m$ of each other. Integrating these against slowly varying sources, $\varphi(x)$, leads to local functions.

Reconstructing the Vacuum

For 1+1-dimensional scalar theory define the scaled field $\varphi_s(x)=\varphi(x/\sqrt{s})$, where $s$ is real and greater than zero. I will now show that $W[\varphi_s]$ extends to an analytic function of $s$ with singularities only on the negative real axis (at least within an expansion in powers of $\varphi$), from which $W[\varphi]$ can be obtained using Cauchy's theorem. As $T\to\infty$ in (4), $\Psi$ becomes a functional integral on the Euclidean space-time $t\le 0$. By rotating the coordinates we can view this instead as a functional integral over the Euclidean space-time $x\ge 0$, so
$$e^{W[\varphi_s]}=\int\mathcal{D}\phi\;e^{-S^r_E+\int dt\,\phi'(0,t)\,\varphi_s(t)}, \eqno(5)$$
where $\phi'=\partial\phi/\partial x$, and $S^r_E$ is the action for the rotated space-time. This can be reinterpreted as the time-ordered expectation value of $\exp\int dt\,\bigl(\varphi_s(t)\,\hat\phi'(0,t)-\hat H_r\bigr)$ in the ground-state, $|E_r\rangle$, of the rotated Hamiltonian, $\hat H_r$. The time integrals can be done if this is expanded in powers of $\varphi_s$, and the sources Fourier analysed using $\tilde\varphi_s(k)=\sqrt{s}\,\tilde\varphi(k\sqrt{s})$. This yields
$$\Psi[\varphi_s]=\sum_{n=0}^{\infty}\int dk_n\cdots dk_1\,\tilde\varphi(k_n)\cdots\tilde\varphi(k_1)\,\delta\Bigl(\sum_1^n k_i\Bigr)\times\sqrt{s}^{\,n}\,\langle E^r_0|\,\hat\phi'(0)\,\frac{1}{\sqrt{s}\,\hat H_r+i\bigl(\sum_1^{n-1}k_i\bigr)}\,\hat\phi'(0)\cdots\hat\phi'(0)\,\frac{1}{\sqrt{s}\,\hat H_r+ik_1}\,\hat\phi'(0)\,|E^r_0\rangle. \eqno(6)$$
This can now be extended to the complex $s$-plane.
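The coefficients of the series quoted above can be checked directly from the binomial expansion of the kernel in momentum space; a short sketch, valid under the stated assumption that the Fourier transform of $\varphi$ is supported on $|k|<m$:

```latex
\sqrt{k^{2}+m^{2}}
  \;=\; m\sqrt{1+\frac{k^{2}}{m^{2}}}
  \;=\; m+\frac{k^{2}}{2m}-\frac{k^{4}}{8m^{3}}+\cdots ,
\qquad |k|<m,
\]
\[
-\frac{1}{2}\int dx\,\varphi\sqrt{-\nabla^{2}+m^{2}}\,\varphi
  \;=\; -\int dx\left(\frac{m}{2}\varphi^{2}
  +\frac{1}{4m}(\nabla\varphi)^{2}
  -\frac{1}{16m^{3}}(\nabla^{2}\varphi)^{2}+\cdots\right),
```

where the second line follows by replacing $k^{2}\to-\nabla^{2}$ and integrating by parts, reproducing the coefficients of the local expansion term by term.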
Since the eigenvalues of $\hat H_r$ are real, the singularities occur for $s$ on the negative real axis. This must also hold for $W[\varphi_s]$, which is the connected part of $\Psi[\varphi_s]$, since any additional singularities could not cancel between connected and disconnected pieces. Now define
$$I(\lambda)\equiv\frac{1}{2\pi i}\int_C\frac{ds}{s-1}\,e^{\lambda(s-1)}\,W[\varphi_s], \eqno(7)$$
where $C$ is a very large circle centred on the origin, beginning just below the negative real axis and ending just above. On $C$, $\varphi_s(x)=\varphi(x/\sqrt{s})\approx\varphi(0)$ and so varies only very slowly with $x$, so here we can use our local expansion. Now collapse the contour to a small circle around $s=1$, which contributes $W[\varphi]$, and a contour, $C'$, surrounding the negative real axis. When $\Re(\lambda)>0$ the latter is exponentially suppressed (to check this note that for large $|s|$ we can use the local expansion, and elsewhere on $C'$ the integrand is bounded). Hence $W[\varphi]=\lim_{\Re(\lambda)\to\infty}I$, which is expressed in terms of the local expansion only. In practice we can truncate the series to a finite number of terms and work with a large value of $\lambda$ to get a good approximation. In the Schrödinger representation for $\varphi^4$ theory the term in the Hamiltonian that needs to be regulated is $\int dx\,\hat\pi^2$. If we introduce a momentum cut-off, $1/\epsilon$, and define $H_\epsilon$ as
$$-\frac{1}{2}\int_{k^2<1/\epsilon}dk\,\frac{\delta^2}{\delta\varphi(k)\,\delta\varphi(-k)}+\int dx\left(\frac{1}{2}\bigl(\varphi'^2+M^2(\epsilon)\varphi^2\bigr)+\frac{g}{4!}\varphi^4-E(\epsilon)\right), \eqno(8)$$
where $M^2(\epsilon)$ and $E(\epsilon)$ are known functions that diverge as $\epsilon\downarrow 0$, and $g$ and $E$ are finite, then the Schrödinger equation is $\lim_{\epsilon\downarrow 0}(H_\epsilon-E)\Psi=0$. This cannot be applied directly to the local expansion, since the cut-off refers to short distances, whereas the local expansion is only valid at large distances. However, using the same technique as above, it may be shown that $(H_{s\epsilon}\Psi)[\varphi_s]$ is analytic in the $s$-plane with the negative real axis removed. The small-$s$ and large-$s$ behaviour are related by Cauchy's theorem, so imposing the regulated Schrödinger equation within the local expansion leads to a separate equation for the coefficient of each independent local function of $\varphi$.
A good approximation results from working to a finite order in $\lambda$, and taking $\lambda$ large, but finite. Expanding in powers of $g$ reproduces standard perturbative results for short-distance phenomena, but these equations may also be solved without resorting to perturbation theory; the equation to be solved is
$$\lim_{\Re(\lambda)\to\infty}\frac{1}{2\pi i}\int_C\frac{ds}{s-1}\,e^{\lambda(s-1)}\,\bigl(\{H_{s\epsilon}-E\}\Psi\bigr)[\varphi_s]=0. \eqno(9)$$

Acknowledgments

I would like to thank the Royal Society for a conference grant.

References

[1] P. Mansfield, Phys. Lett. B 358, 287 (1995); Phys. Lett. B 365, 207 (1996).
[2] P. Mansfield, Nucl. Phys. B 418, 113 (1994).
[3] J. Greensite, Nucl. Phys. B 158, 469 (1979); Nucl. Phys. B 166, 113 (1980); Phys. Lett. B 191, 431 (1987); J. Greensite and J. Iwasaki, Phys. Lett. B 223, 207 (1989); H. Arisue, Phys. Lett. B 280, 85 (1992); Q.Z. Chen, X.Q. Luo, and S.H. Guo, Phys. Lett. B 341, 349 (1995); M. Halpern, Phys. Rev. D 19, 517 (1979); J. Ambjorn, P. Olesen, and C. Petersen, Nucl. Phys. B 240, 189 (1984).
[4] K. Symanzik, Nucl. Phys. B 190, 1 (1983).
[]
[ "Rethink Decision Tree Traversal", "Rethink Decision Tree Traversal" ]
[ "Jinxiong Zhang [email protected] ", "I D " ]
[]
[]
We will show how to implement binary decision tree traversal in the language of matrix computation. Our main contribution is to propose equivalent algorithms for binary tree traversal based on a novel matrix representation of the hierarchical structure of the decision tree. Our key idea is to traverse the binary decision tree by maximum inner product search. We not only implement decision tree methods without the recursive traversal but also delve into the partitioning nature of tree-based methods.

arXiv:2209.04825v2 [cs.LG] 6 Oct 2022

Given a tree T(N, L), the traversal of the input x is described in Algorithm 1 and its correctness is proved in [12]. Here we follow the description of the tree model in [12]. A decision tree T(N, L) is composed of a set of internal nodes N = {n_0, n_1, ..., n_t} and a set of leaves L = {l_0, l_1, ..., l_{t+1}}, where each n ∈ N is associated with a Boolean test and each leaf l ∈ L stores the prediction l.val ∈ R. All the nodes whose Boolean conditions evaluate to False are called false nodes, and true nodes otherwise. If a visited node in N is a false one, then the right branch is taken, and the left branch otherwise.
10.48550/arxiv.2209.04825
[ "https://export.arxiv.org/pdf/2209.04825v2.pdf" ]
252,198,914
2209.04825
f2f5af78c971295093b3c023f616b07f933ee093
Rethink Decision Tree Traversal

Jinxiong Zhang ([email protected])

We will show how to implement binary decision tree traversal in the language of matrix computation. Our main contribution is to propose equivalent algorithms for binary tree traversal based on a novel matrix representation of the hierarchical structure of the decision tree. Our key idea is to traverse the binary decision tree by maximum inner product search. We not only implement decision tree methods without the recursive traversal but also delve into the partitioning nature of tree-based methods.

Introduction

QuickScorer [12] and RapidScorer [21] were proposed, based on bit-vectors of the false nodes, in order to speed up the additive ensemble of regression trees in learning to rank. Inspired by [12], further works, such as [2; 11; 13; 15], focus on the application and acceleration of additive tree models, while we pay attention to the theory of these algorithms, specially the representation of the binary decision tree in the language of matrix computation. Based on a so-called Tree Supervision Loss, a hierarchical classifier is built from the weights of the softmax layer in convolutional neural networks in [18]. In [20; 19], tree regularization is used to enhance the interpretability of deep neural networks.
A generalized tree representation termed TART, based on a transition matrix, is shown in [22]. We introduce matrices to represent the structure of a binary decision tree, which shows that a matrix can encode hierarchy. And we will show how to generate equivalent algorithms for binary tree traversal, that is, how to translate the binary decision tree into matrix computation. We begin with Algorithm 1 and provide some illustration. Then we recast Algorithm 1 as matrix computation, introducing a bit-matrix and a sign-matrix to represent the decision tree structure. And we consider binary decision tree evaluation as error-correcting output codes. Later we discuss the interpretable representation. Finally we give a summary.

QuickScorer

QuickScorer is a tree-based ranking model, which has reduced control hazard, smaller branch mis-prediction rate and better memory access patterns in experiments [12]. The core of QuickScorer is to represent the candidate exit leaves with a bitvector [12], as shown in Algorithm 1. Based on the bit-vectors of false nodes, tree traversal proceeds via bitwise logical AND operations once all the results of the associated test evaluations are known, as shown in Algorithm 1. Its correctness is originally proved in [12]. More implementations and extensions of QuickScorer can be found in [2; 11; 13; 21; 15]. Hereinafter, we assume that the nodes of T are numbered in breadth-first order and the leaves from left to right.
Algorithm 1: Scorer of QuickScorer

Input:
- input feature vector x
- a binary decision tree T(N, L) with
  - internal nodes N: {n_0, ..., n_{t-1}}, 1 ≤ t < ∞
  - a set of leaves L: {l_0, ..., l_t}, 1 ≤ t < ∞
  - the prediction of leaf l ∈ L: l.val ∈ R
  - the node bitvector associated with each n ∈ N: n.bitvector
Output: tree traversal output value

1: procedure Score(x, T)
2:   Initialize the result bitvector v ← (1, 1, ..., 1)
3:   Find the false nodes U(x, T) given x, T
4:   for node u ∈ U do            ▷ iterate over the false nodes
5:     v ← v ∧ u.bitvector        ▷ update by logical AND
6:   j ← index of the leftmost bit set to 1 of v
7:   return l_j.val

It is best to read [12] for more details on how to evaluate a single binary decision tree stored as a set of precomputed bitvectors. Here we adopt the illustration from [16] in Figures 1 and 2.

Figure 1: The bitvector in QuickScorer.

Construction of Bit-vectors and Bit-matrix

In this section, we introduce the construction of the bit-matrix based on Algorithm 1. In [12], the idea of tree traversal using bitvectors is based on the node bitvector, described as follows: every internal node n is associated with a node bitvector n.bitvector (of the same length), acting as a bitmask that encodes (with 0's) the set of leaves to be removed from L whenever n is a false node. In [21], QuickScorer is described as follows: QuickScorer maintains a bitvector, composed of one bit per leaf, to indicate the possible exit leaf candidates, with the corresponding bits equal to 1.
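To make the listing concrete, here is a minimal Python sketch of Algorithm 1. The tree is a small hypothetical example (five internal nodes n0-n4 in breadth-first order, six leaves), with false-node bitmasks following Definition 1; names such as `FALSE_NODE_BITVECTORS` and `quickscorer` are ours, not from any QuickScorer implementation.

```python
# Bitmasks of the false nodes of a small example tree (1 = leaf still a
# candidate exit, 0 = leaf removed when this node tests false).
FALSE_NODE_BITVECTORS = {
    0: [0, 0, 1, 1, 1, 1],  # n0 false -> drop the left subtree {l0, l1}
    1: [0, 1, 1, 1, 1, 1],  # n1 false -> drop l0
    2: [1, 1, 0, 0, 1, 1],  # n2 false -> drop {l2, l3}
    3: [1, 1, 0, 1, 1, 1],  # n3 false -> drop l2
    4: [1, 1, 1, 1, 0, 1],  # n4 false -> drop l4
}

def quickscorer(false_nodes, leaf_values):
    """Algorithm 1: AND the bitvectors of the false nodes, then exit at
    the leftmost bit still set to 1."""
    v = [1] * len(leaf_values)             # result bitvector, all ones
    for u in false_nodes:                  # iterate over false nodes only
        v = [a & b for a, b in zip(v, FALSE_NODE_BITVECTORS[u])]
    j = v.index(1)                         # leftmost bit set to 1
    return leaf_values[j]
```

For an input that makes n0, n1 and n4 false, the surviving mask is 001101 and the exit leaf is l2; with no false nodes the leftmost leaf exits, and with all nodes false the rightmost one does.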
Now we consider the simplest case : (1) all internal nodes are true; (2) all internal nodes are false. In the first case, there is no false nodes and the result bitvector v is only initialized in algorithm 1. Thus, the first bit must correspond to the leftmost leaf. In the second case, there is no true nodes and the result bitvector v is bit-wise logical AND of all bitvectors in the tree, where the rightmost leaf must be indicated by 1. Definition 2 The bit-matrix B of a decision tree is a binary matrix where each column is the bitvector of the tree.         . The next is to show the relation between the structure of decision tree and its bit-matrix. Some Properties of Bit-matrice By the definition 2, the bit-matrix is binary matrix. Here we want to search some properties of the bit-matrix of a binary decision tree specially its rank. Definition 3 The complement of the bitvector b denoted by c is defined by b + c = 1. For example, the complement of 0 1 T is 1 0 T . The complement bit-vectors of a node is a binary vector where the digit 1 corresponds to the possible exit leaf candidates over its subtree when it is a true node. The bitvector b and its complement bitvector 1 − b cannot exist in the same bit-matrix of a decision tree. We can calculate a leaf bitvector as the product of bit-vectors of false nodes and complement bit-vectors of true nodes. For example, the bitvectors of the false nodes are 111111, 001111, 011111, 111101, 001101 and complement bit-vectors of true nodes are 001100 and 001000. And their product is 001000 which indicates the exit leaf directly Lemma 1 The bitvector b is linearly independent of its complement bitvector 1 − b. Lemma 2 The bitvectors of the right subtree are linearly independent of the bitvectors of the left subtree. 
Proof 2.1 By the definition, the bits corresponding to the exit leaf candidates of the left subtree are always set to 1 in the bitvectors of the right subtree while there is at least one 0-bit corresponding to the exit leaf candidates of the left subtree in the bitvectors of the left subtree. Thus this lemma is completed. Based on the above lemma, we can obtain the following theorem 1. Theorem 1 The bit-matrix B is invertible for any decision tree. Proof 2.2 We use induction. The induction hypothesis is that (B n×(n−1) ) is invertible for all integer n ≥ 2. When n = 2, we have B = 0 1 T so it holds. Assume that (B n×(n−1) ) is invertible if t ≤ N , we will prove that (B (N +1)×N ) is invert- ible. By the assumption, it holds for the right subtree and left subtree. By the above lemma, it holds for the whole tree. As above, it holds for all integer n ≥ 2 by induction. From 1 we can find that the augmented matrix (B n×(n−1) 1 n ) invertible. The rank is an invariant of bit-matrices if their size is determined. The column is unique and distinct in the matrix B, where the digit 1/0 corresponds to the possible/impossible exit leaf when the associated Boolean test is false. The augmented matrix (B 1) of the tree in 1 is symmetric but the coding scheme of bitvectors is based on the false nodes. This symmetry is emergent. The structure of a decision tree is determined by its bit-matrix and the structure of the subtree is determined by a submatrix of the bit-matrix. And we can observe that recursive patterns emerge in the bit-matrix as below.         0 0 1 1 1 0 1 1 1 1 1 1 0 0 1 1 1 0 1 1 1 1 1 1 0 1 1 1 1 1         ,     0 0 1 0 1 1 1 1 0 1 1 1     . The hierarchy of the tree is encoded in the position order of bitvectors. Beyond Bitvector: Diverse Tree Evaluation We will convert the bitwise logical AND operation to multiplication of binary matrix and vector. 
Specially, we focus on how to implement binary decision tree in the language of matrix computation. The Computational Perspective There are two key steps of QuickScorer :(1) the first is to find the false nodes given the input and tree; (2) the second is to find the terminal node based on the bitvectors of false nodes. Here, we will implement these steps in the language of matrix computation. From the computational perspective, the of logical AND of bitvectors is equivalent to their element-wise multiplication n algorithm (1) S F = arg max( ∧ u∈U u.bitvector) = arg max( u∈U u.bitvector).(1) So the index of the bit set to 1 of logical AND in algorithm (1) requires that all operand must be 1 in the index corresponding to the bit set to 1 of v. And it is equivalent to the intersection ∩ u∈U arg max(u.bitvector) by combining the fact that 1 is the maximum of the bitvector in algorithm (1). Then we can replace the logical AND operation in the algorithm (1) with the matrix computation operation based on the following relation S F = ∩ u∈U arg max(u.bitvector) = arg max( u∈U u.bitvector).(2) The step to find the false nodes in (1) is to evaluate all the Boolean tests of internal nodes and we use a binary vector to represent such procedure as defined in (4). Definition 4 The test vector t of the input x is a binary column vector where the digit 1 corresponds to the false node and the digit 0 corresponds to the true node for a given tree T (N, L). Based on the test vector t and bit-matrix B, we replace the logical AND over the bitvectors of false nodes by the matrix computation as below: v = B t + 1(3) where 1 is a bitvector with all bits set to 1. The matrix-vector multiplication B t is the sum of bitvectors of all false nodes. The key observation is that the maximum of formula (3) corresponds to bit set to 1 of the result bitvector in algorithm (1). In another word, arg max of formula (3) And we can obtain the theorem (2). 
return l j .val The matrix-vector multiplication is the linear combination of column vector, so B t = i t i B i , where B i is the i-th bitvector. And we can obtain the following relation: S F = arg max(B t + 1) = arg max( i t i B i ) = arg max( i p i t i B i ) if each p i is positive. Thus, the non-negative number is sufficient for the test vector t and bitmatrix B. In another word, we extend the algorithm 1 from binary value to non-negative value. Another observation of B t is that arg max B t = ∩ i arg max(t i B i ). LetB i replace the 0 in B i by −1, then we can find the following equation arg max B t = ∩ i arg max(t i B i ) = ∩ i arg max(t iBi ) = arg max( i t iBi ),(4) so bit-matrix is not necessary for the algorithm (1). In contrast to [12], we can define the bitvector for the true node acting as a bitmask that encodes (with 0's) the set of leaves to be removed whenever the internal node is a true node. The bitwise logical AND between 1 and the node bitvector of a true node n corresponds to the removal of the leaves in the right subtree of n from the candidate exit leaves. And we can obtain the following equation S T = arg max( t∈T t.bitvector) = ∩ t∈T arg max(t.bitvector) where T is the true node of a decision tree and t.bitvector is the bitvector for the internal node t. The exit leaf cannot removed by the logical AND of these bitvectors but leftmost bit set to 1 does not correspond to the exit leaf. In another word, we have the following relation S T \ S F = ∅ and min(S T ∩ S F ) = min S F where S F is defined in 2. The inner product between the i-th column of the B and the test vector t is the i-th value of B t, which is the number of false nodes that select the i-th candidate leaf. And the maximum of B t is the number of all false nodes, which is equal to t, t or the 1 norm of t and independent of the norm of column vectors of B. However, computation procedures can be equivalent in form but different in efficiency. 
The arg max and intersection operation may take more time to execute. And these diverse equivalent form will help us understand the algorithm further. In algorithm 2, we replace the procedure to find the false nodes and the iteration over the false nodes in algorithm 1 with the test vector and matrix-vector multiplication. These observation and facts imply the possibility to design more efficient algorithms equivalent to the algorithm (1). We can find more substitute for the bit-matrix encoding schemes. In contrast to [18], we find that the hierarchy of each node is embedding in the column vectors of the bit-matrix. In algorithm (1), it is the index of leftmost bit set to 1 in the result vector that we only require, which is the index of the exit leaf that we really need during tree traversal when the input x and the decision tree T are given. Next we will introduce how to construct an algorithm based on the unique and constant maximum to traversal the binary tree. The Algorithmic Perspective The algorithm QuickScorer in [12] looks like magicians' trick and its lack of intuitions makes it difficult to understand the deeper insight behind it. The bitvector is position-sensitive in algorithm 1, where the nodes of are required to be numbered in breadth-first order and leaves from left to right. Since the reachability of a leaf node is only determined by the internal nodes over the path from its root to itself, it is these bitvectors in algorithm (1) that play the lead role during traversal rather than the bitvectors of other false nodes. Thus we should take it into consideration when designing new test vector s. Definition 5 The signed test vector s of the input x is a binary vector where the digit 1 corresponds to the false node and the digit −1 corresponds to the true node for a given decision tree. By comparing definition 1 and 5, we can find the equation s = 2 t − 1. 
We only consider the matrix-vector B t in 3 as the linear combination of bitvectors while it is the inner products of the column vectors of B and test vector t that determine the maximum of B t. The bitvectors as rows in matrix B is designed for the internal nodes rather than the leaf nodes. We need to design vectors to represent the leaf node as the column vector and ensure that the inner product of the leaf vector and the test vector is equal to the number of the internal nodes over the path from the root node to itself if the input reaches this leaf node. Definition 6 The representation vector of a leaf node is a ternary vector determined by the following rules: • the digit +1 corresponds to the true nodes over its path from the root node; • the digit −1 corresponds to the false nodes over its path from the root node; • and digit 0 corresponds to the rest of internal nodes. The representation vector of a leaf node actually describes the path from the root to this leaf node, which contains the encoding character: −1 represents an edge to a left child, and +1 represents an edge to a right child. And we defined the signed matrix to describe the stricture of the binary decision tree as 7. Definition 7 The signed matrix S of a decision tree consists of its leaf representation vectors as its row vector. It is easy to find the upper bound of the inner product of a leaf vector p = (p 1 , · · · , p d ) T ∈ {+1, 0, −1} d and the signed test vector s = (s 1 , · · · , s d ) T ∈ {1, −1} d as shown below p, s = d i=1 p i s i ≤ d i=1 |p i | = p 1 .(5) If p i s i > 0, we say that the leaf node and the input reach a consensus on the i-th test. Definition 8 The depth of a leaf node is the path length measured as the number of nonterminal nodes between the root node and itself. The depth vector d of a decision tree consists of all leaf nodes depth and depth matrix D is defined as diag( d). 
If the number of consensuses reached between the input and a leaf node equals the depth of that leaf node, then the leaf is the destination of the input in the decision tree. We propose the algorithm termed SignQuickScorer as (3).

Algorithm 3 SignQuickScorer
Input:
• a feature vector x
• a binary decision tree T
Output: tree traversal output value
1: procedure SignQuickScorer(x, T)
2:   Find the signed test vector s given x, T as defined in 5
3:   Compute the result vector: v ← D^{−1}Ss [D^{−1} and S are precomputed as in 7, 8]
4:   j ← arg max v [the unique maximum 1 marks the exit leaf]
5:   return l_j.val

Proof. Write t for the signed test vector of the input and s* for the representation vector of the exit leaf. First, we prove that ⟨s*, t⟩ / ⟨s*, s*⟩ = max{D^{−1}St}. By inequality (5), every leaf vector s satisfies ⟨s, t⟩ ≤ ‖s‖_1 = ⟨s, s⟩, so each entry of D^{−1}St is at most 1, with equality if and only if ⟨s, t − s⟩ = 0. According to definitions 5 and 6, the exit leaf satisfies ⟨s*, t − s*⟩ = 0, which implies ⟨s*, t⟩ = ⟨s*, s*⟩, so max{D^{−1}St} = 1. For every other leaf vector s of T we have ⟨s, t⟩ < ⟨s, s⟩, because there is at least one test on the path from the root to that leaf which the input does not match; hence ⟨s, t − s⟩ ≠ 0. This completes the proof.

In other words, dividing the result vector element-wise by the number of internal nodes on each path yields a unique maximum that indicates the exit leaf. For the example in figure 2 we obtain

s = (1, 1, −1, −1, +1)^T,

S =
[ −1 −1  0  0  0 ]
[ −1 +1  0  0  0 ]
[ +1  0 −1 −1  0 ]
[ +1  0 −1 +1  0 ]
[ +1  0 +1  0 −1 ]
[ +1  0 +1  0 +1 ],

D = diag(2, 2, 3, 3, 3, 3),

D^{−1}Ss = (−1, 0, 1, 1/3, −1/3, 1/3)^T.

We can regard algorithm (3) as the decoding step of error-correcting output coding [4; 5], shown in (4), which was designed for multi-classification by combining binary classifiers. Both methods rely on the cluster assumption, which is based on similarity in essence.
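The worked example above can be reproduced directly; the sketch below (NumPy, our own code) computes v = D^{−1}Ss and reads off the exit leaf from the unique maximum 1:

```python
import numpy as np

s = np.array([1, 1, -1, -1, 1])          # signed test vector of the input
S = np.array([[-1, -1,  0,  0,  0],      # leaf representation vectors as rows
              [-1,  1,  0,  0,  0],
              [ 1,  0, -1, -1,  0],
              [ 1,  0, -1,  1,  0],
              [ 1,  0,  1,  0, -1],
              [ 1,  0,  1,  0,  1]])
depth = np.abs(S).sum(axis=1)            # depth vector d = (2, 2, 3, 3, 3, 3)
v = (S @ s) / depth                      # result vector v = D^{-1} S s

print(v)                                 # values (-1, 0, 1, 1/3, -1/3, 1/3)
exit_leaf = int(np.argmax(v))            # the unique maximum 1 marks the exit leaf
print(exit_leaf)                         # 2
```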
Each class is attached to a unique code in error-correcting output coding (ECOC) [3; 10; 1], while a leaf vector only indicates a specific region of the feature space, and each class may be attached to more than one leaf in (3). The key step in (4) is to find the maximum similarity, while the key step in (3) is to find the maximum scaled inner product. The value attached to a leaf node is obtained by majority vote or averaging, which differs from other mixed regression methods [6]. Next we show how to take advantage of the unique maximizer of D^{−1}Ss.

Algorithm 4 SignQuickScorer as template matching
Input:
• a feature vector x
• a binary decision tree T
Output: tree traversal output value
1: procedure SignQuickScorer(x, T)
2:   Find the test vector t given x, T as defined in 5
3:   Find the most likely template: j ← arg max_i ⟨S_i, t⟩ / d_i [S_i is the i-th row of S]
4:   return l_j.val

The Representation Perspective

We define the δ function to replace the max operator in (3) as follows:

δ(x) = 1 if x = 0, and δ(x) = 0 if x ≠ 0, for all x ∈ R.

According to the proof of (3), we can find the index of the leftmost maximum of v by the equation

j = Σ_i δ(v_i − 1) · i,

because max v = 1 and arg max v is unique, where v = D^{−1}St. This yields a succinct expression of (3), shown in algorithm 5; the δ function acts as a selector that picks out the most likely prediction value.

Algorithm 5 SignQuickScorer as attention
Input:
• a feature vector x
• a binary decision tree T
Output: tree traversal output value
1: procedure SignQuickScorer(x, T)
2:   Find the test vector t given x, T as defined in 5.
3:   Compute the result vector: v ← D^{−1}St as computed in 3.
4:   return Σ_i δ(v_i − 1) l_i.val

Note that v_i = 1 ⟺ ⟨S_i, t⟩ / d_i = 1 ⟺ ⟨S_i, t⟩ = d_i, so we can obtain a variant of the algorithm (3) as shown in (6).
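The selector equation j = Σ_i δ(v_i − 1)·i can be checked on the result vector of the earlier worked example (a sketch with our own helper; a tolerance handles floating point):

```python
import numpy as np

def delta(x, tol=1e-12):
    # delta(x) = 1 if x == 0 else 0, applied element-wise
    return (np.abs(np.asarray(x, dtype=float)) < tol).astype(int)

v = np.array([-1.0, 0.0, 1.0, 1/3, -1/3, 1/3])   # v = D^{-1} S s from the example
idx = np.arange(len(v))
j = int((delta(v - 1.0) * idx).sum())             # picks the unique entry equal to 1
print(j)  # 2
```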
In terms of the attention mechanism [17; 14; 9], the decision tree takes the following form:

T(x) = Σ_i δ((D^{−1}St)_i − 1) l_i.val = Attention(t, S, l), (6)

where the weight assigned to each value is computed by the following function:

w_i = δ((D^{−1}St)_i − 1) = { 1 if (D^{−1}St)_i = 1; 0 otherwise }.

Algorithm 6 SignQuickScorer
Input:
• a feature vector x
• a binary decision tree T
Output: tree traversal output value
1: procedure SignQuickScorer(x, T)
2:   Find the test vector t given x, T as defined in 5.
3:   Compute the result vector: v ← St − d. [S and d are defined in (7) and (8), respectively]
4:   return Σ_i δ(v_i) l_i.val

In fact, this is hard attention. We can output a soft attention distribution by applying a scaled dot-product scoring function instead of the δ function:

w_i = exp((D^{−1}St)_i) / Σ_j exp((D^{−1}St)_j).

Similarly, we can see that the soft decision tree is a form of soft attention. The same Boolean expression may occur in different paths, so we can rewrite the signed test vector s as a selection of the raw Boolean expressions, s = Me, where the matrix M describes the contribution of each Boolean expression and the importance of each attribute. It is a general consensus that decision trees are interpretable. Interpretability is partially dependent on model complexity, and for some models sparsity is a proxy for interpretability. For decision trees, however, interpretability is independent of model complexity because of their explicit decision boundaries. Thus, algorithm 5 provides an interpretable explanation of hard attention. In [24; 23], we restricted the input vector to numerical data; the difficulty lies in handling hybrid data. Method 5 is a general scheme for converting rule-based methods to the attention mechanism.
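The hard weighting of equation (6) and its softmax relaxation can be sketched as follows (the leaf prediction values here are hypothetical placeholders):

```python
import numpy as np

v = np.array([-1.0, 0.0, 1.0, 1/3, -1/3, 1/3])        # v = D^{-1} S t from the example
leaf_vals = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])  # hypothetical leaf predictions

# Hard attention: weight 1 on the exit leaf (where v_i = 1), 0 elsewhere.
w_hard = (np.abs(v - 1.0) < 1e-12).astype(float)
print(w_hard @ leaf_vals)   # value of leaf 2, i.e. 0.3

# Soft attention: pass the scores through a softmax instead of the delta function.
w_soft = np.exp(v) / np.exp(v).sum()
print(w_soft @ leaf_vals)   # a smooth mixture of all leaf values
```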
The test vector t is a feature vector in {+1, −1}^d extracted from the raw feature space, and we only need d + 1 patterns determined by the leaf vectors in {+1, 0, −1}^d, where different patterns may share the same value. Each path in a decision tree consists of Boolean interactions of features [? ]. The lack of feature transformation in (3), (4), (5) may restrict their predictability and stability. The most expensive computation in (3), (4), (5) and (6) is maximum inner product search, which belongs to the nearest neighbor search problem. Binary decision tree traversal is a router/gate based on Boolean tests, which selects a destination leaf node. For example, the centroid of the i-th leaf node minimizes Σ_{x∈X} δ(v_i − 1) d(x, c_i), and the value of the i-th leaf node minimizes Σ_{(x,y)∈X×Y} δ(v_i − 1) d(y, l_i.val), where v_i depends on x as computed in algorithm (5), and d(·, ·) is a quasi-distance function. An ensemble and mixture essence is inherent in the decision tree. We would like to find the prototype sample of each leaf node and convert the decision problem directly into nearest neighbor search, but this is beyond our scope here.

Discussion

We clarify some properties of the bit-matrix of a decision tree in QuickScorer. We introduce the emergent recursive patterns in the bit-matrix and several algorithms equivalent to QuickScorer, which evaluate the binary decision tree in the language of matrix computation. Our main contribution is to propose some novel methods that perform decision tree traversal from different perspectives. The simplest bit-matrix is 0 1^T. The bit-matrix of the decision tree in figure (

Figure 2: The scorer in QuickScorer is equal to arg max of the result vector in algorithm (1), where the operator arg max returns all indices of the maximum in the vector. Additionally, finding the index of the leftmost maximum of formula (3) amounts to finding the minimum index in arg max of formula (3).
Theorem 2 The algorithm (1) is equivalent to the algorithm (2).

Algorithm 2 Scorer of QuickScorer in the language of matrix computation
Input:
• a feature vector x
• a binary decision tree T(N, L) with
  - internal nodes N: {n_0, · · · , n_{t−1}}, 1 ≤ t < ∞
  - a set of leaves L: {l_0, · · · , l_t}, 1 ≤ t < ∞
  - the prediction of leaf l ∈ L: l.val ∈ R
  - the bit-matrix of T(N, L): B
Output: tree traversal output value
1: procedure Scorer(x, T)
2:   Find the test vector t given x, T as defined in 4
3:   Compute the result vector: v ← Bt + 1 [tree traversal via matrix operations]
4:   j ← min arg max v [index of the leftmost maximum of v]
5:   return l_j.val

We only need to prove that the inner product of the normalized exit leaf vector s*/⟨s*, s*⟩ and the test vector t is the unique maximum of D^{−1}St.

References

Erin L. Allwein, Robert E. Schapire, and Yoram Singer. Reducing multiclass to binary: a unifying approach for margin classifiers. Journal of Machine Learning Research, 1(2):113-141, 2001.
Domenico Dato, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Nicola Tonellotto, and Rossano Venturini. Fast ranking with additive ensembles of oblivious and non-oblivious regression trees. ACM Transactions on Information Systems (TOIS), 35(2):1-31, 2016.
Thomas G. Dietterich and Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2(1):263-286, 1994.
Sergio Escalera, Oriol Pujol, and Petia Radeva. Decoding of ternary error correcting output codes. pages 753-763, 2006.
Sergio Escalera, Oriol Pujol, and Petia Radeva. On the decoding process in ternary error-correcting output codes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010.
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, and Zhe Wang. Learning piece-wise linear models from large scale data for ad click prediction. arXiv: Machine Learning, 2017.
Jiun Tian Hoe, Kam Woh Ng, Tianyu Zhang, Chee Seng Chan, Yi-Zhe Song, and Tao Xiang. One loss for all: Deep hashing with a single cosine similarity based learning objective. In Advances in Neural Information Processing Systems, volume 34, pages 24286-24298. Curran Associates, Inc., 2021.
Rong Kang, Yue Cao, Mingsheng Long, Jianmin Wang, and Philip S. Yu. Maximum-margin hamming hashing. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 8251-8260, 2019.
A. Katharopoulos, A. Vyas, N. Pappas, and F. Fleuret. Transformers are RNNs: Fast autoregressive transformers with linear attention. In Proceedings of the International Conference on Machine Learning (ICML), 2020.
Eun Bae Kong and Thomas G. Dietterich. Error-correcting output coding corrects bias and variance. In Proceedings of the Twelfth International Conference on Machine Learning, pages 313-321. Morgan Kaufmann, 1995.
Francesco Lettich, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Nicola Tonellotto, and Rossano Venturini. GPU-based parallelization of QuickScorer to speed-up document ranking with tree ensembles. In 7th Italian Information Retrieval Workshop, IIR 2016, volume 1653. CEUR-WS, 2016.
Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Nicola Tonellotto, and Rossano Venturini. QuickScorer: A fast algorithm to rank documents with additive ensembles of regression trees. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 73-82, 2015.
Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Nicola Tonellotto, and Rossano Venturini. QuickScorer: Efficient traversal of large ensembles of decision trees. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 383-387. Springer, 2017.
Hideya Mino, Masao Utiyama, Eiichiro Sumita, and Takenobu Tokunaga. Key-value attention mechanism for neural machine translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 290-295, Taipei, Taiwan, November 2017. Asian Federation of Natural Language Processing.
Romina Molina, Fernando Loor, Veronica Gil-Costa, Franco Maria Nardini, Raffaele Perego, and Salvatore Trani. Efficient traversal of decision tree ensembles with FPGAs. Journal of Parallel and Distributed Computing, 155:38-49, 2021.
Shitoumu. On quick scorer predication of the additive ensembles of regression trees model. Website. Accessed March 22, 2022.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc., 2017.
Alvin Wan, Lisa Dunlap, Daniel Ho, Jihan Yin, Scott Lee, Henry Jin, Suzanne Petryk, Sarah Adel Bargal, and Joseph E. Gonzalez. NBDT: Neural-backed decision trees, 2020.
Mike Wu, Michael C. Hughes, Sonali Parbhoo, Maurizio Zazzi, Volker Roth, and Finale Doshi-Velez. Beyond sparsity: Tree regularization of deep models for interpretability. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI'18/IAAI'18/EAAI'18). AAAI Press, 2018.
Mike Wu, Sonali Parbhoo, Michael C. Hughes, Volker Roth, and Finale Doshi-Velez. Optimizing for interpretability in deep neural networks with tree regularization. Journal of Artificial Intelligence Research, 72:1-37, 2022.
Ting Ye, Hucheng Zhou, Will Y. Zou, Bin Gao, and Ruofei Zhang. RapidScorer: fast tree ensemble evaluation by maximizing compactness in data level parallelization. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 941-950, 2018.
Jaemin Yoo and Lee Sael. Transition matrix representation of trees with transposed convolutions. In Proceedings of the 2022 SIAM International Conference on Data Mining (SDM). SIAM, 2022.
Jinxiong Zhang. Decision machines: Interpreting decision tree as a model combination method. ArXiv, abs/2101.11347, 2021.
Jinxiong Zhang. Yet another representation of binary decision trees: A mathematical demonstration. ArXiv, abs/2101.07077, 2021.
[]
[ "Simple and Effective Gradient-Based Tuning of Sequence-to-Sequence Models", "Simple and Effective Gradient-Based Tuning of Sequence-to-Sequence Models" ]
[ "Jared Lichtarge \nGoogle Research\n\n", "Chris Alberti \nGoogle Research\n\n", "Shankar Kumar \nGoogle Research\n\n" ]
[ "Google Research\n", "Google Research\n", "Google Research\n" ]
[]
Recent trends towards training ever-larger language models have substantially improved machine learning performance across linguistic tasks. However, the huge cost of training larger models can make tuning them prohibitively expensive, motivating the study of more efficient methods. Gradient-based hyper-parameter optimization offers the capacity to tune hyperparameters during training, yet has not previously been studied in a sequence-to-sequence setting. We apply a simple and general gradient-based hyperparameter optimization method to sequence-to-sequence tasks for the first time, demonstrating both efficiency and performance gains over strong baselines for both Neural Machine Translation and Natural Language Understanding (NLU) tasks (via T5 pretraining). For translation, we show the method generalizes across language pairs, is more efficient than Bayesian hyper-parameter optimization, and that learned schedules for some hyper-parameters can out-perform even optimal constant-valued tuning. For T5, we show that learning hyper-parameters during pretraining can improve performance across downstream NLU tasks. When learning multiple hyper-parameters concurrently, we show that the global learning rate can follow a schedule over training that improves performance and is not explainable by the 'short-horizon bias' of greedy methods (Wu et al., 2018). We release the code used to facilitate further research.
10.48550/arxiv.2209.04683
[ "https://export.arxiv.org/pdf/2209.04683v1.pdf" ]
252,200,056
2209.04683
9237b9e65eb3ca60bc92b086e78e29280bd6fda4
Simple and Effective Gradient-Based Tuning of Sequence-to-Sequence Models Jared Lichtarge Google Research Chris Alberti Google Research Shankar Kumar Google Research Simple and Effective Gradient-Based Tuning of Sequence-to-Sequence Models Recent trends towards training ever-larger language models have substantially improved machine learning performance across linguistic tasks. However, the huge cost of training larger models can make tuning them prohibitively expensive, motivating the study of more efficient methods. Gradient-based hyper-parameter optimization offers the capacity to tune hyperparameters during training, yet has not previously been studied in a sequence-to-sequence setting. We apply a simple and general gradient-based hyperparameter optimization method to sequence-to-sequence tasks for the first time, demonstrating both efficiency and performance gains over strong baselines for both Neural Machine Translation and Natural Language Understanding (NLU) tasks (via T5 pretraining). For translation, we show the method generalizes across language pairs, is more efficient than Bayesian hyper-parameter optimization, and that learned schedules for some hyper-parameters can out-perform even optimal constant-valued tuning. For T5, we show that learning hyper-parameters during pretraining can improve performance across downstream NLU tasks. When learning multiple hyper-parameters concurrently, we show that the global learning rate can follow a schedule over training that improves performance and is not explainable by the 'short-horizon bias' of greedy methods (Wu et al., 2018). We release the code used to facilitate further research.

Introduction

Finding good hyper-parameter values is critical to achieving good performance across machine learning domains; this has inspired much work into hyper-parameter optimization (HPO) (see Feurer & Hutter (2019)).
Traditionally popular HPO methods require running many trials of hyperparameter sets in parallel or sequential training runs (Bengio, 2012; Snoek et al., 2012; Li et al., 2016). These methods become infeasible as the cost of individual runs increases. This difficulty is exacerbated by recent trends towards larger models (Devlin et al., 2019; Brown et al., 2020; Adiwardana et al., 2020; Chowdhery et al., 2022), which have come to dominate progress on linguistic tasks, yet are only sparsely or indirectly tuned. The growing field of gradient-based HPO methods offers an alternative to conventional HPO by allowing hyper-parameters to be learned based on a loss function, which can greatly improve over the efficiency of comparing constant values tuned across multiple runs (Maclaurin et al., 2015; Pedregosa, 2016; Franceschi et al., 2018) 1 . Many gradient-based methods additionally allow hyper-parameters to dynamically vary in value over a training run as opposed to only taking static values 2 . However, most prior work on gradient-based HPO methods has not focused on text-processing, with notable exceptions in Hu et al. (2019) and Lorraine et al. (2020). This domain mismatch makes it unclear how well these methods may work for the large language model setting. We present the first study of gradient-based hyper-parameter learning on sequence-to-sequence tasks (Sutskever et al., 2014). We extend a greedy gradient-based approach that has been applied previously to image classification tasks (Luketina et al., 2016; Wu et al., 2018; Baydin et al., 2017), as it is simple, generalizable, and easily extensible. This allows us to apply greedy hyper-parameter learning to a) multiple hyper-parameters simultaneously and b) experiment across models and tasks. We learn hyper-parameters for momentum and learning rate scaling for Transformer (Vaswani et al., 2017) sequence-to-sequence models for neural machine translation (NMT) and T5 model pretraining (Raffel et al., 2019).
For NMT, we show that hyper-parameter schedules can be learned greedily with minimal tuning across language pairs, and that those learned schedules can be more efficient than Bayesian-optimized tuning and more performant than optimal constant-valued tuning. We demonstrate the absence of 'short-horizon bias' while learning momentum, and the benefit of treating momentum as a dynamic hyper-parameter. For T5, we show that learning a learning rate scalar alongside momentum changes the behavior of that scalar, improving both the convergence speed and performance of T5 pretraining, gains which are reflected in performance on downstream NLU tasks.

Method

We use a method that allows hyper-parameters to be learned greedily by gradient descent over the course of training. Per training step, we perform a bi-level optimization to learn both the model parameters via the training loss, and learned hyperparameters via the guidance loss. The guidance set is held out from the training data to provide the loss by which the hyperparameters are learned. Let T denote a training dataset and Ω be a general optimizer function for training a model θ on T, with hyperparameters λ and loss function L_T. Our training method can be summarized as:

g_t = ∇_θ L_T(θ_t)
θ_{t+1} = Ω(θ_t, g_t, λ_t)
ĝ_t = ∇_λ L_G(θ_{t+1})
λ_{t+1} = Ω̂(λ_t, ĝ_t, λ̂),

where at each time step t, the updated model parameters θ_{t+1} are first computed based on the gradient (g_t) of the training loss L_T. To compute the guidance loss gradients (ĝ_t) for the hyperparameters, we calculate the loss L_G of the new model θ_{t+1} on the guidance set. Finally, the updated hyperparameter values λ_{t+1} are obtained based on a meta-optimizer Ω̂ with corresponding meta-hyperparameters λ̂. Thus in every training step, we update both the model parameters and the hyperparameters. The process is formalized in Algorithm 1 in Appendix B. This method is greedy; the horizon of the guidance objective is limited to a single step. Wu et al.
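To make the per-step bi-level update concrete, here is a toy NumPy sketch (our own quadratic problem, not the paper's Transformer setup) in which SGD trains the model while the learning rate is learned by a hypergradient step on the loss of the updated model; for simplicity the guidance loss here equals the training loss:

```python
import numpy as np

theta = np.array([5.0, -3.0])
log_eta = np.log(0.01)      # eta kept positive via an exponential activation
meta_lr = 0.1

def grad_loss(th):
    return 2.0 * th         # gradient of L(theta) = ||theta||^2

for _ in range(100):
    g = grad_loss(theta)                     # g_t: gradient of the training loss
    eta = np.exp(log_eta)
    theta_next = theta - eta * g             # model update with current eta
    # Hypergradient of the guidance loss w.r.t. log(eta), by the chain rule:
    # dL_G(theta_next)/d(log eta) = grad L_G(theta_next) . (-eta * g)
    hyper_g = grad_loss(theta_next) @ (-eta * g)
    log_eta -= meta_lr * hyper_g             # meta-optimizer step on the hyper-parameter
    theta = theta_next

# The learned eta grows from its tiny initial value toward an effective step
# size, and the model parameters converge toward the optimum.
```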
(2018) showed that greedy methods applied to learning the learning rate can have a bias towards tiny learning rates, which prevents them from learning and achieving good performance over longer horizons (short-horizon bias). We will explore the practical consequences of this phenomenon by using this method to learn a learning rate scalar η and momentum β₁.

Experiments

For NMT, we use Transformer models with 121M parameters and the LAMB optimizer (You et al., 2019) 3 . We train on NMT datasets from the WMT19 machine translation task (Barrault et al., 2019). For evaluation, we decode using beam search and report BLEU (Papineni et al., 2002) scores. For T5, we use the small configuration (60M parameters) and the Adafactor optimizer (Shazeer & Stern, 2018). We use the C4 dataset (Raffel et al., 2019). We report loss on the C4 development set and the same evaluation criteria as the original T5 paper for downstream tasks. For all hyper-parameter learning experiments, we use the Adam optimizer (Kingma & Ba, 2014) with default settings as meta-optimizer, tuning only the meta-learning-rate η̂. For the guidance set, we use a single held-out training batch 4 . As the hyper-parameters must vary within constrained ranges, η is kept positive by an exponential activation function, and β₁ is constrained between 0 and 1 by a sigmoid.

Neural Machine Translation

In Figure 1, we compare the evolution of the learned learning rate scalar (η) over training for runs with differing meta-learning rates (η̂) for the German-English language pair. In Figure 2 we do the same for learning β₁, varying the initialization values in addition to η̂. For learning η, the guidance optimization drives all runs to as low a learning rate as is allowed by the meta-learning rate, demonstrating the 'short horizon bias'. Note that some guided runs do outperform the baseline, but require tuning of η̂ to prevent convergence on the guidance objective.
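The range constraints described above (an exponential activation for the learning rate scalar, a sigmoid for momentum) can be sketched as follows (variable names are ours):

```python
import numpy as np

# Unconstrained meta-parameters, updated freely by the meta-optimizer.
raw_eta = -2.0
raw_beta = 1.5

eta = np.exp(raw_eta)                     # exponential keeps eta > 0
beta1 = 1.0 / (1.0 + np.exp(-raw_beta))  # sigmoid keeps 0 < beta1 < 1

print(eta, beta1)  # ~0.135, ~0.818
```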
In contrast, the runs with learned β₁ (Figure 2) converge to a similar schedule given a sufficiently high η̂, decaying from high to low momentum over the course of training, regardless of the initialization value. All runs with guided β₁ outperform the baseline. To evaluate how well these gains generalize, we guide η and β₁ alone and together for 6 language pairs, setting η̂ to 3e-5 for all runs (Table 1). In order to evaluate the practical applicability of guiding these hyperparameters, we compare the guided runs to a typical hyperparameter optimization scheme, against which we can evaluate both performance and efficiency. We tune both the baseline runs (via the hyper-parameters directly) and the guided runs (via η̂) with Bayesian optimization (BO) 5 for 100 trials. In Table 2, we find that across guided-parameter settings, the non-BO-optimized guided run outperforms the best BO-tuned baseline model, with some slight gains for the guided run with further BO-tuning 6 . Note the guided β₁ runs do not require η̂ tuning to reach best performance. For all setups, the learned hyper-parameters achieve better performance than Bayesian optimization in fewer training runs and less time. Though the 'short-horizon bias' requires tuning η̂ while learning η, doing so still yields performance and efficiency gains over BO-tuning. For β₁ alone, there seems to be no equivalent bias, as any sufficiently high η̂ converges to roughly the same useful schedule. The BO-tuned optimal static β₁ value (0.73) approximates the average β₁ of the converged runs in Figure 2, suggesting that the remaining 0.4 BLEU points are only attainable with a β₁ value that changes over the course of training. Learning both hyper-parameters together does not change their evolution but yields a small additional boost. We run similar experiments for T5 models, learning η, β₁, and both.
For η, we see the learning rate scalar decrease prematurely, similarly to the NMT setting, demonstrating again the 'short-horizon bias' (Appendix, Figure 2), but no guided run outperforms the baseline, even with low η̂ values 7 . For β₁ alone, we replicate a similar converged schedule as in the NMT setting, but see only minor changes in development set loss across all models, including those varying β₁ without learning during training (Appendix, Figure 2). This suggests that tuning β₁ in general is less useful in this setting. Interestingly, when we tune both hyper-parameters together, the evolution of the η parameter changes character (Figure 3), and we find a 3X improvement in speed of convergence relative to baseline and increases in final performance for multiple different settings of η̂.

T5 pretraining

We finetune the baseline and η̂ = 1e-5 models on each of the downstream NLU tasks drawn from the GLUE (Wang et al., 2018) and super-GLUE (Wang et al., 2019) benchmarks, as well as SQuAD (Rajpurkar et al., 2016), using the same finetuning settings as the original T5 paper 8 .

Discussion and Limitations

Our results shed light on multiple facets of hyper-parameter optimization (HPO). For Neural Machine Translation, we show that although learning the learning rate scalar decays the learning rate prematurely when allowed to converge to the guidance objective (exhibiting the 'short-horizon bias' (Wu et al., 2018)), tuning the meta-learning-rate produces better results with less tuning than Bayesian-optimized static tuning. In learning momentum, we demonstrate the absence of short-horizon bias; for momentum, and potentially other hyper-parameters, greedy gradient-based HPO can learn over a single run a schedule which out-performs optimal static tuning. For hyperparameters such as momentum whose optimal values change over training, methods which allow for dynamic hyper-parameters will always have an edge over static tuning methods.
In our T5 experiments, we show that the 'recipe' which yielded good results in NMT produced, with minimal tuning, a pretrained model which outperforms the baseline after finetuning on downstream NLU tasks. We discovered that learning hyper-parameters in conjunction can alter their evolution over training. When learned alongside momentum, the initial growth of the learning-rate scalar followed by gradual decay is a result that is not explicable by the short-horizon bias, which would predict monotonic and premature decay to zero. This raises the possibility that learning certain hyper-parameters dynamically may be constrained by the static values of non-learned hyper-parameters, and that learning multiple hyper-parameters together may be necessary in some settings to make learning any of them useful. Characterizing the phenomenon of interaction between hyper-parameters is a direction for future work.

Our experiments are limited to two global hyper-parameters which are typically tuned. Future work should explore a wider set of hyper-parameters and at a varying granularity (e.g., a distinct hyper-parameter value per parameter (Lorraine et al., 2020)). We show that learning hyper-parameters together can alter their dynamics but leave to future work the characterization of the mechanism and mapping of interactions between learned hyper-parameters. We have shown that greedily learning the learning-rate scalar can produce behavior unexplained by the short-horizon bias, but have left to future work the characterization of this phenomenon. The method we explore is limited to differentiable hyper-parameters, and is greedy, so may be improved upon by more complex methods which can take into account either non-differentiable hyper-parameters (MacKay et al., 2019) and/or longer horizons (Micaelli & Storkey, 2021).

Broader Impact

Since Wu et al.
(2018) described short-horizon bias for greedy methods, work in the gradient-based HPO community has progressed towards more complex methods which seek to address short-horizon bias with longer horizons (Micaelli & Storkey, 2021) or by other means. Our result showing the absence of bias for learning momentum, and easy performance gains for NMT when doing so, should encourage further evaluation of the behavior of diverse learnable hyper-parameters under greedy meta-optimization. Additionally, we have shown that intuitions about the short-horizon bias do not fully explain the behavior of the learning-rate scalar, which increases at the start of training when learned alongside momentum. These observations, taken together, should encourage further exploration of greedy gradient-based methods.

We do not anticipate this work having potential negative societal impacts beyond those posed by automated methods in machine learning in general. Rather we hope that it may contribute towards the realization of efficient and general gradient-based HPO, which will help improve the efficiency of training models, reduce energy consumption, and democratize access to machine learning. We hope that our encouraging results and release of the code we used to produce them9 will facilitate future work within the research community and give practitioners the tools to apply gradient-based HPO in diverse settings.

(c) Did you discuss any potential negative societal impacts of your work? [N/A] We anticipate no specific potential negative impacts beyond those of improving automated machine learning methods in general. We state this in Section 5.

3. If you ran experiments...

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a URL)?
[N/A] We will release the code prior to the publication of the work. While this is clearly not the same as releasing it now (at submission time), we intend to do so, as open-sourcing the code is a main aspect of the intended impact of the work.

(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [N/A] Close analogues of the figures in this paper will be automatically generated by the training code.

(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] The code, which will be released prior to publication, will be well documented.

(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [Yes] See Appendix C.

(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [Yes] We took care to ensure our experiments comparing methods were fair, including in these mentioned categories.

(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] We vary the meta-learning rate and the combination of hyper-parameters, and in comparing NMT to T5 pretraining, we vary optimizer, model, and task.

(j) Did you perform multiple runs of your experiments and report random seeds? [No] We did perform multiple runs of the experiments but do not report random seeds.

(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No] We do not report error bars; we report the max and average metric values over repeated runs.

(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [N/A] We do not employ NAS approaches.
(m) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix D.

(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a NAS approach; and also hyperparameters of your own method)? [Yes] See Section 3.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...

(c) Did you include any new assets either in the supplemental material or as a URL? [No] We will include a link to the code at publication time.

(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] Our experiments were performed on publicly available datasets.

C.1 NMT Experiments

C.1.1 Model. For all experiments, we use the JAX framework (Bradbury et al., 2018), building off of models from the flax library (Heek et al., 2020). We use Transformer models (Vaswani et al., 2017) and the LAMB optimizer (You et al., 2019), with a 32k sentence-piece vocabulary (Kudo & Richardson, 2018) for each language pair. Our Transformers have 8 heads and 6 layers with a total of 121M parameters, and for the LAMB optimizer we use the default values of β1, β2, and ε as 0.9, 0.999, and 1e-6, respectively.

C.1.2 Data. We train on 6 different language pairs, with training, development, and test sets drawn from the WMT19 machine translation task (Barrault et al., 2019). We tokenize the language pairs into joint 32K subword vocabularies with SentencePiece models (Kudo & Richardson, 2018). After filtering the datasets slightly by language ID and with length-based heuristics, we remove a single batch of the remaining data to set aside as a guidance set for each language pair.
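As a concrete reference for the optimizer defaults above, here is a simplified, single-tensor sketch of a LAMB-style update step (You et al., 2019) with β1 = 0.9, β2 = 0.999, ε = 1e-6. This is a hedged illustration, not the optimizer as implemented: the real LAMB operates layer-wise across many tensors and includes clipping and weight-decay details omitted here.

```python
import math

def lamb_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-6):
    """One simplified LAMB-style step for a weight vector w with gradient g."""
    m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]       # first moment
    v = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, g)]  # second moment
    m_hat = [mi / (1 - b1 ** t) for mi in m]                    # bias correction
    v_hat = [vi / (1 - b2 ** t) for vi in v]
    u = [mh / (math.sqrt(vh) + eps) for mh, vh in zip(m_hat, v_hat)]
    # Trust ratio: scale the Adam-style direction by ||w|| / ||u||
    # (applied per layer in the real optimizer).
    w_norm = math.sqrt(sum(wi * wi for wi in w))
    u_norm = math.sqrt(sum(ui * ui for ui in u))
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    w = [wi - lr * trust * ui for wi, ui in zip(w, u)]
    return w, m, v

# Toy usage: one step on a 2-element weight vector with a positive gradient.
w, m, v = [1.0, -2.0], [0.0, 0.0], [0.0, 0.0]
w, m, v = lamb_step(w, [0.1, 0.2], m, v, t=1)
```

The trust-ratio scaling is what distinguishes LAMB from plain Adam: it normalizes the step size per layer, which is what makes the large-batch training used here stable.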
This is based on our preliminary experiments, where we found no change in performance between holding out 1% of the training data for guidance (iterated through repeatedly over training) or holding out a single batch (applied at every step), so throughout this work we hold out only a single batch for the guidance set10. The resulting dataset sizes are shown in Table 4.

C.2 Training

We train with dropout and attention dropout both set to 0.1, and without label smoothing or weight decay regularization. The default learning rate is set to 0.4, which follows a square-root decay schedule after a linear warmup of 4000 steps. We use a training batch size of ∼2,300 examples on average. In the experiments where we compare learned hyperparameters to Bayesian HPO (Snoek et al., 2012), the objective for the BO is to minimize the loss on the development set, and we select the best of 100 trials for each BO run.

C.2.1 Evaluation. We decode with beam search decoding with a beam size of 4, and report BLEU (Papineni et al., 2002) scores calculated using the sacreBLEU tool (Post, 2018).

C.3 T5 Experiments

C.3.1 Model. For the T5 experiments, we pretrain T5 models (Raffel et al., 2019) using the Adafactor optimizer (Shazeer & Stern, 2018). We train a T5 model in the small configuration, with 8 layers and 6 attention heads per layer and a total of 60M parameters. For Adafactor we use a default learning rate of 1e-3 and a decay_rate of 0.8.

C.3.2 Data and Training. For pretraining, we train for 1M steps on the C4 dataset (Raffel et al., 2019), using a 32k sentence-piece vocabulary, the same as in the original T5 paper. We use a batch size of 256 packed examples and a maximum input length of 512 sentence-pieces, with dropout set to 0.0. We use a learning rate of 0.01 with 10000 steps of constant value followed by reciprocal square root decay. The unsupervised objective is the same masked language modeling objective that was proposed in the original T5 paper: 15% of tokens are masked in the input sequence, replacing each masked span with a sentinel token.
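The span-corruption objective just described can be sketched as follows. This is a hedged toy version: the real T5 pipeline operates on SentencePiece ids with carefully sampled span lengths, whereas here span placement is uniformly random, and the sentinel names (`<extra_id_N>`, as in the public T5 vocabulary) are the only assumed detail.

```python
import random

def span_corrupt(tokens, mask_rate=0.15, mean_span=3, seed=0):
    """Toy T5-style span corruption: mask ~mask_rate of tokens in short spans,
    replace each masked span in the input with a sentinel, and pair each
    sentinel in the target with the text it replaced."""
    rng = random.Random(seed)
    n = len(tokens)
    to_mask = max(1, round(n * mask_rate))
    masked = set()
    while len(masked) < to_mask:
        start = rng.randrange(n)
        length = min(mean_span, to_mask - len(masked))
        masked.update(range(start, min(start + length, n)))
    inputs, targets, sentinel = [], [], 0
    i = 0
    while i < n:
        if i in masked:
            tok = f"<extra_id_{sentinel}>"
            inputs.append(tok)       # span collapsed to one sentinel in the input
            targets.append(tok)      # target: sentinel followed by the masked span
            while i < n and i in masked:
                targets.append(tokens[i])
                i += 1
            sentinel += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets
```

Every original token ends up exactly once in either the corrupted input or the target, which is what lets the model be trained to predict the missing text for each sentinel.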
The model is then trained to predict the missing text for each sentinel token.

C.3.3 Evaluation. In pretraining, we report the loss on the C4 development set. For finetuning, we evaluate the appropriate metrics for each of the GLUE and superGLUE tasks. To arrive at the final average for each set of tasks, we follow the T5 paper in averaging the metrics within each task (to get the met values shown in Table 3) and then simply averaging those scores across the tasks of the super-task.

C.4 Finetuning on Downstream NLU Tasks

We finetune on downstream NLU tasks from the GLUE and superGLUE meta-tasks. We initialize from the 1M step pretraining checkpoints and train for an additional 250,000 steps with a batch size of 8, mirroring the T5 paper finetuning scheme (Raffel et al., 2019).

C.5 Meta-Optimization

In our hyperparameter learning experiments, we meta-optimize with Adam and its default hyperparameters: β1, β2, and ε are set to 0.9, 0.999, and 1e-8, respectively. For both NMT and T5 experiments, we use a guidance batch size mirroring the size of the training batch in each setting. While model parameters may be allowed to take positive or negative values, the hyperparameters we study must be bound to a range of appropriate values; the learning rate must be positive and momentum must be between 0 and 1. To achieve this, we pass the learned hyperparameters through an activation function: exponential for the learning rate and sigmoid for momentum (Table 5). Unlike other hyperparameters, the learning rate is frequently set on a pre-determined schedule. In order to not override the pre-existing schedule, we learn a scalar on the schedule which is initialized at 1.

D Hardware

For all experiments, we use TPUv3 with 16 cores. NMT training runs took ∼7 hours to train. T5 training took ∼48 hours for pretraining and ∼3-6 hours for finetuning depending on the task.
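The per-step meta-optimization described in C.5 can be sketched end to end on a toy 1-D problem. This is a hedged pure-Python illustration rather than our JAX implementation: plain SGD stands in for LAMB/Adafactor, plain gradient descent for the Adam meta-optimizer, and the hypergradient of the guidance loss is chained by hand through a single parameter update. The exponential activation keeps the learned learning-rate scalar positive, initialized at 1 as described above.

```python
import math

def train_grad(theta):   # gradient of a toy training loss (theta - 2.0)^2
    return 2.0 * (theta - 2.0)

def guide_grad(theta):   # gradient of a toy guidance loss (theta - 2.1)^2
    return 2.0 * (theta - 2.1)

theta = 0.0      # model parameter
raw = 0.0        # unconstrained hyperparameter; scalar = exp(raw) = 1 at init
base_lr, meta_lr = 0.05, 0.01

for _ in range(200):
    scalar = math.exp(raw)                     # activation: scalar stays positive
    g = train_grad(theta)
    theta_next = theta - base_lr * scalar * g  # inner step: parameter update
    # Greedy hypergradient of the guidance loss wrt the raw hyperparameter:
    #   d(guide loss)/d(raw) = guide_grad(theta_next) * d(theta_next)/d(raw),
    #   with d(theta_next)/d(raw) = -base_lr * exp(raw) * g.
    hyper_g = guide_grad(theta_next) * (-base_lr * scalar * g)
    raw -= meta_lr * hyper_g                   # outer step: hyperparameter update
    theta = theta_next
```

Because the guidance loss is evaluated at the post-update parameters, the scalar is pushed in whatever direction makes the next step more useful on the guidance batch; over a single run this traces out a schedule rather than a fixed value.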
In both setups, training runs that guided hyper-parameters took approximately 1.5X as long in terms of wall-clock time as baseline runs. The memory requirements of the guided and unguided runs were similar.

Figure 2 (Appendix): Learning the learning-rate scalar alone for T5 pretraining; comparison of a sweep over color-coded meta-learning rates to the baseline (black). The lowest meta-learning-rate setting (1e-6, in blue) does outperform the baseline, but only very slightly. The short-horizon bias is evident; note that all learning-rate scalars only decrease relative to the baseline learning rate schedule.

Figure 3 (Appendix): Learning β1 alone for T5 pretraining; comparison of learned runs (meta-learning rate 3e-3) vs. baseline for initializations [0.1, 0.5, 0.9], and default baseline 0.0. Note that learned β1 values converge to the same gradually decaying schedule, similar to that of the NMT models in Figure 2. The runs on that schedule do very slightly out-perform the non-learned hyperparameter runs. However, unlike in the NMT case, none of the changes in β1, dynamic or static, have a significant impact upon the accuracy of the model at any point in training. This suggests that this setup is simply insensitive to the value of β1.

F.1 Downstream NLU Task Full Results

G Licensing of Data

The WMT19 licensing statement (from the statmt.org website) states that the data can be used for research purposes: "The data released for the WMT19 news translation task can be freely used for research purposes, we just ask that you cite the WMT19 shared task overview paper, and respect any additional citation requirements on the individual data sets. For other uses of the data, you should consult with original owners of the data sets."

The C4 dataset is released by Google, available at https://www.tensorflow.org/datasets/catalog/c4, licensed under the Creative Commons Attribution 4.0 License.

Figure 1: Learning the learning-rate scalar.
Figure 2: Learning β1, varying the meta-learning rate and initialization values. German-English NMT.
Figure 3: Learning the learning-rate scalar + β1 for T5, varying the meta-learning rate.
9 https://www.github.com/google-research/google-research/tree/master/gradient_based_tuning

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] See Sections 3 and 4.
(b) Did you describe the limitations of your work? [Yes] See latter portion of Section 4.
(b) Did you include the raw results of running the given instructions on the given code and data? [N/A] See above.
(h) Did you use the same evaluation protocol for the methods being compared? [Yes] See Section 3 and Appendix C.
(i) Did you compare performance over time? [Yes] See Figures in Section 3.
(a) If your work uses existing assets, did you cite the creators? [Yes] See Section 3.
(b) Did you mention the license of the assets? [Yes] See Appendix G.
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [No] None of our datasets contains personally identifiable information or offensive content.
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

Figure 1: Learning β1 for NMT models; comparison of untuned learned and baseline runs (solid lines) to BO-tuned learned and baseline runs (dotted lines). These runs correspond to those reported in the β1 column in Table 2.

Table 1: BLEU scores of baseline vs guided runs across language pairs. The meta-learning rate is set to 3e-5.
          de-en   en-de   fi-en   en-fi   lt-en   en-lt
base       38.6    37.4    27.2    18.4    27.3    11.3
lr         39.6    39.4    27.6    19.7    27.7    11.7
β1         39.8    39.4    28.4    19.4    28.0    12.1
lr + β1    39.9    38.8    27.5    19.6    27.8    12.2
(lr denotes the guided learning-rate scalar.)

Table 2: BLEU scores of 100 BO-tuned runs vs untuned for baseline and guided runs, on de-en. Time is summed runtime in hours.
          time   # runs   lr     β1     lr + β1
base       7.1      1     38.6   38.6   38.6
 + BO      708    100     39.4   39.4   39.5
guided    43.2      4     39.6   39.8   39.9
 + BO     1.1k    100     39.7   39.8   39.9

We find improvements across 15 of 18 downstream NLU tasks, with average improvements of 0.4 points on GLUE and 1.4 points on superGLUE. See Appendix F for full details on isolated learning-rate-scalar and β1 experiments.

8 For details on the setup of finetuning, see Appendix C.4. For full results, including on SQuAD, see Appendix F.1.

Table 3: Fine-tuning baseline and lr + β1 models on the GLUE and superGLUE (sGLUE) NLU tasks. met denotes the mean of the two metrics typically reported for that task, and avg takes the average across tasks. Max value of 5 runs shown; see Appendix F.1 for average results.

GLUE      CoLA   SST    MRPC   STS    QQP    MNLI   QNLI   RTE    WNLI   avg
          corr   acc    met    met    met    met    acc    acc    acc
base      47.9   92.3   88.5   84.1   87.4   84.1   90.2   67.9   57.7   77.8
lr + β1   44.5   92.6   90.0   85.1   87.7   84.9   90.5   69.0   59.2   78.2

sGLUE     BoolQ  CB     COPA   MultiRC  ReCoRD  RTE    WiC    WSC    avg
          acc    met    acc    met      met     acc    acc    acc
base      73.6   98.4   58.0   43.7     63.8    64.6   67.2   67.3   67.1
lr + β1   73.6   95.3   61.0   46.0     64.0    66.4   67.6   74.0   68.5

Table 4: Comparing dataset sentence counts across language pairs. The acronyms de, en, fi, and lt refer to German, English, Finnish, and Lithuanian, respectively.
        de-en   en-de   fi-en   en-fi   lt-en   en-lt
train    32M     32M    5.5M    5.5M    1.9M    1.9M
guide   2165    2227    2363    2337    2339    2305

Table 5: Learned hyperparameters and their activation functions.
hparam      activation fn        domain    init
lr scalar   e^x                  (0, ∞)    1
β1          (1 + e^{-x})^{-1}    (0, 1)    0.9
F T5 Experiments

(Figure: learning-rate scalar and development loss over 0-1M training steps, for the baseline and meta-learning rates 3e-3, 3e-4, 3e-5, 1e-5, 3e-6, and 1e-6.)

Table 6: Fine-tuning baseline and learned lr + β1 models on SQuAD, plus the 9 GLUE and 8 superGLUE (sGLUE) downstream NLU tasks. All values are the max of 5 separate runs.

GLUE      avg    CoLA   SST    MRPC   MRPC   STS       STS        QQP
                 corr   acc    F1     acc    Pearson   Spearman   F1
base      79.8   47.3   92.2   89.9   86.0   83.6      84.0       87.2
lr + β1   80.1   43.6   92.5   90.8   87.0   84.6      85.0       87.2

          QQP    MNLI-m  MNLI-mm  QNLI   RTE    WNLI   SQuAD   SQuAD
          acc    acc     acc      acc    acc    acc    EM      F1
base      90.5   83.8    84.4     89.1   64.8   57.5   88.1    80.0
lr + β1   90.5   84.5    85.1     90.3   66.0   57.2   88.2    80.5

sGLUE     avg    BoolQ  CB     CB     COPA   MultiRC  MultiRC  ReCoRD
                 acc    F1     acc    acc    F1a      EM       F1
base      64.8   73.1   97.9   98.4   56.4   6.4      62.1     63.1
lr + β1   67.3   72.7   93.2   93.1   57.2   21.0     69.4     63.3

          ReCoRD  RTE    WiC    WSC
          acc     acc    acc    acc
base      64.1    63.8   66.1   63.5
lr + β1   64.3    65.9   66.7   73.5

Table 7: Fine-tuning baseline and learned lr + β1 models on SQuAD, plus the 9 GLUE and 8 superGLUE (sGLUE) downstream NLU tasks. All values are the average of 5 separate runs.

WMT (Workshop on Machine Translation) 2019: http://www.statmt.org/wmt19/translation-task.html, downloaded from https://www.tensorflow.org/datasets/catalog/wmt19_translate

Footnotes:
1 See Appendix A for a full description of related works.
2 We refer to hyper-parameters which vary over a training run as dynamic, and those which are constant as static.
3 For complete experiment setup details, see Appendix C.
4 In preliminary experiments, we found no benefit to a larger guidance set.
5 The specific algorithm we use is Gaussian Process Bandits (Frazier, 2018; Golovin et al., 2017).
6 We count the 4 different values of the meta-learning rate we tried in Figure 1 as tuning runs for the non-BO-tuned guided setup.
7 We likely see no difference because at most we learn two hyperparameters. With higher-dimensional learned hyperparameterizations, overfitting on the guidance set may become a concern that can be addressed by iterating through a larger guidance dataset.

Acknowledgements.
The authors would like to thank Andrew Chou, Felix Stahlberg, Ji Ma, and the anonymous reviewers for their helpful comments.

A Related Work

The field of hyperparameter optimization (HPO) is well summarized in Feurer & Hutter (2019). Here we review related work in gradient-based HPO, of which our method is one approach. Online gradient-based HPO was proposed by Almeida et al. (1998). Bengio (2000) formulated hyperparameter search in terms of optimization. Domke (2012) described a strategy to compute the gradient of loss with respect to hyperparameters in a CRF model. The use of validation-loss gradients to update continuous hyperparameters by backpropagating through the entire training procedure was demonstrated by Maclaurin et al. (2015). To reduce the time-complexity of tracing back through the entire training procedure, subsequent work explored approaches where the parameter and hyperparameter updates are performed in an alternating fashion (Luketina et al., 2016; Franceschi et al., 2017, 2018; Baydin et al., 2017; Majumder et al., 2019). Luketina et al. (2016) proposed greedy per-step validation loss gradient updates, applied to regularization hyperparameters that are trained alongside the elementary parameters of the model. Baydin et al. (2017) described an application of the greedy approach to optimize learning rates using the training set loss. Wu et al. (2018) highlighted the short-horizon biases arising from the greedy strategy. More recent work (e.g., Micaelli & Storkey, 2021) presented approaches that overcome some of the limitations of the greedy strategy while being more efficient than the full-trajectory approach of Maclaurin et al. (2015). The above methods considered either forward- or reverse-mode differentiation to compute the hyper-gradients. Alternative approaches, using the Implicit Function Theorem to approximate the gradients, were explored in Pedregosa (2016). A method for learning a hyperparameter schedule that works for non-differentiable hyperparameters has also been presented (MacKay et al., 2019).
Some works focus on using gradients to learn data weighting or augmentation schemes, such as Hu et al. (2019). Raghu et al. (2020) leverage gradient methods to learn various 'commentaries': example-level parameters that can improve performance via example weighting and data manipulation, and also provide insights into model training. MAML (Finn et al., 2017) and subsequent works (Antoniou et al., 2018; Bansal et al., 2020) employ a bi-level, gradient-based training procedure using a distribution over tasks that improves generalization performance and can be utilized to learn hyperparameters. Raghu et al. (2021) apply a gradient-based method to meta-learn hyperparameters for multi-task pretraining on protein-protein interaction networks.

B Algorithm

Algorithm 1 Guided Learning
Require: initial parameter vector
Require: initial hyperparameter vector
Require: hyperparameter vector of the meta-optimizer
t ← 0                                                   ⊲ Initialization
while not converged do
    (x_train, x_guide) ← GetNewMiniBatch()              ⊲ New training/guidance mini-batch
    g ← ∇ ComputeLoss(x_train, params_t)                ⊲ Gradient of train loss wrt parameters
    params_{t+1} ← Optimizer(params_t, g, hparams_t)    ⊲ Parameter update
    ĝ ← ∇ ComputeLoss(x_guide, params_{t+1})            ⊲ Gradient of guidance loss wrt hyperparameters
    hparams_{t+1} ← MetaOptimizer(hparams_t, ĝ, meta-hparams)  ⊲ Hyperparameter update
    t ← t + 1
end while

References

Adiwardana, D., Luong, M., So, D. R., Hall, J., Fiedel, N., Thoppilan, R., Yang, Z., Kulshreshtha, A., Nemade, G., Lu, Y., and Le, Q. V. Towards a human-like open-domain chatbot. CoRR, abs/2001.09977, 2020. URL https://arxiv.org/abs/2001.09977.

Almeida, L. B., Langlois, T., Amaral, J. D., and Plakhov, A. Parameter adaptation in stochastic optimization.
On-Line Learning in Neural Networks, Publications of the Newton Institute, pp. 111-134, 1998.

Antoniou, A., Edwards, H., and Storkey, A. How to train your MAML. arXiv preprint arXiv:1810.09502, 2018.

Bansal, T., Jha, R., Munkhdalai, T., and McCallum, A. Self-supervised meta-learning for few-shot natural language classification tasks. arXiv preprint arXiv:2009.08445, 2020.

Barrault, L., Bojar, O., Costa-jussà, M. R., Federmann, C., Fishel, M., Graham, Y., Haddow, B., Huck, M., Koehn, P., Malmasi, S., Monz, C., Müller, M., Pal, S., Post, M., and Zampieri, M. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pp. 1-61, Florence, Italy, August 2019. Association for Computational Linguistics. doi: 10.18653/v1/W19-5301. URL https://aclanthology.org/W19-5301.

Baydin, A. G., Cornish, R., Rubio, D. M., Schmidt, M., and Wood, F. Online learning rate adaptation with hypergradient descent. arXiv preprint arXiv:1703.04782, 2017.

Bengio, Y. Gradient-based optimization of hyperparameters.
Neural Comput., 12(8):1889-1900, Aug 2000. ISSN 0899-7667. doi: 10.1162/089976600300015187. URL https://doi.org/10.1162/089976600300015187.

Bengio, Y. Practical Recommendations for Gradient-Based Training of Deep Architectures, pp. 437-478. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012. ISBN 978-3-642-35289-8. doi: 10.1007/978-3-642-35289-8_26. URL https://doi.org/10.1007/978-3-642-35289-8_26.

Bradbury, J., Frostig, R., Hawkins, P., Johnson, M. J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., and Zhang, Q. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners.
In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. F., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877-1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.

Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., Schuh, P., Shi, K., Tsvyashchenko, S., Maynez, J., Rao, A., Barnes, P., Tay, Y., Shazeer, N., Prabhakaran, V., Reif, E., Du, N., Hutchinson, B., Pope, R., Bradbury, J., Austin, J., Isard, M., Gur-Ari, G., Yin, P., Duke, T., Levskaya, A., Ghemawat, S., Dev, S., Michalewski, H., Garcia, X., Misra, V., Robinson, K., Fedus, L., Zhou, D., Ippolito, D., Luan, D., Lim, H., Zoph, B., Spiridonov, A., Sepassi, R., Dohan, D., Agrawal, S., Omernick, M., Dai, A. M., Pillai, T. S., Pellat, M., Lewkowycz, A., Moreira, E., Child, R., Polozov, O., Lee, K., Zhou, Z., Wang, X., Saeta, B., Diaz, M., Firat, O., Catasta, M., Wei, J., Meier-Hellstern, K., Eck, D., Dean, J., Petrov, S., and Fiedel, N. PaLM: Scaling language modeling with pathways. CoRR, 2022. URL https://arxiv.org/abs/2204.02311.
Clarke, R. M., Oldewage, E. T., and Hernández-Lobato, J. M. Scalable one-pass optimisation of high-dimensional weight-update hyperparameters by implicit differentiation. CoRR, abs/2110.10461, 2021. URL https://arxiv.org/abs/2110.10461.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.

Domke, J. Generic methods for optimization-based modeling. In Lawrence, N. D. and Girolami, M. (eds.), Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, volume 22 of Proceedings of Machine Learning Research, pp.
318-326, La Palma, Canary Islands, 21-23 Apr 2012. PMLR. URL https://proceedings.mlr.press/v22/domke12.html.

Donini, M., Franceschi, L., Pontil, M., Majumder, O., and Frasconi, P. Scheduling the learning rate via hypergradients: New insights and a new algorithm. CoRR, abs/1910.08525, 2019. URL http://arxiv.org/abs/1910.08525.

Feurer, M. and Hutter, F. Hyperparameter Optimization, pp. 3-33. Springer International Publishing, Cham, 2019. ISBN 978-3-030-05318-5. doi: 10.1007/978-3-030-05318-5_1. URL https://doi.org/10.1007/978-3-030-05318-5_1.

Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pp. 1126-1135. PMLR, 2017.

Franceschi, L., Donini, M., Frasconi, P., and Pontil, M. Forward and reverse gradient-based hyperparameter optimization. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1165-1173. PMLR, 06-11 Aug 2017. URL https://proceedings.mlr.press/v70/franceschi17a.html.

Franceschi, L., Frasconi, P., Salzo, S., Grazzi, R., and Pontil, M. Bilevel programming for hyperparameter optimization and meta-learning. In Dy, J.
and Krause, A.the 35th International Conference on Machine Learning80Franceschi, L., Frasconi, P., Salzo, S., Grazzi, R., and Pontil, M. Bilevel programming for hyperpa- rameter optimization and meta-learning. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1568-1577. PMLR, 10-15 Jul 2018. URL https://proceedings.m r.press/v80/ franceschi18a.htm . P I Frazier, arXiv:1807.02811A tutorial on bayesian optimization. arXiv preprintFrazier, P. I. A tutorial on bayesian optimization. arXiv preprint arXiv:1807.02811, 2018. Drmad: Distilling reverse-mode automatic di erentiation for optimizing hyperparameters of deep neural networks. J Fu, H Luo, J Feng, K H Low, T Chua, abs/1601.00917CoRRFu, J., Luo, H., Feng, J., Low, K. H., and Chua, T. Drmad: Distilling reverse-mode automatic di erentiation for optimizing hyperparameters of deep neural networks. CoRR, abs/1601.00917, 2016. URL http://arxiv.org/abs/1601.00917. Google vizier: A service for black-box optimization. D Golovin, B Solnik, S Moitra, G Kochanski, J Karro, D Sculley, Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data MiningACMGolovin, D., Solnik, B., Moitra, S., Kochanski, G., Karro, J., and Sculley, D. Google vizier: A service for black-box optimization. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1487-1495. ACM, 2017. Convergence properties of stochastic hypergradients. R Grazzi, M Pontil, S Salzo, PMLRThe 24th International Conference on Arti cial Intelligence and Statistics, AISTATS 2021. Banerjee, A. and Fukumizu, K.130Grazzi, R., Pontil, M., and Salzo, S. Convergence properties of stochastic hypergradients. In Banerjee, A. and Fukumizu, K. 
(eds.), The 24th International Conference on Arti cial Intelligence and Statistics, AISTATS 2021, April 13-15, 2021, Virtual Event, volume 130 of Proceedings of Machine Learning Research, pp. 3826-3834. PMLR, 2021. URL http://proceedings.m r.press/v130/grazzi21a. htm . Flax: A neural network library and ecosystem for JAX. J Heek, A Levskaya, A Oliver, M Ritter, B Rondepierre, A Steiner, M Van Zee, Heek, J., Levskaya, A., Oliver, A., Ritter, M., Rondepierre, B., Steiner, A., and van Zee, M. Flax: A neural network library and ecosystem for JAX, 2020. URL http://github.com/goog e/f ax. Learning data manipulation for augmentation and weighting. Z Hu, B Tan, R R Salakhutdinov, T M Mitchell, E P Xing, Advances in Neural Information Processing Systems. 32Hu, Z., Tan, B., Salakhutdinov, R. R., Mitchell, T. M., and Xing, E. P. Learning data manipulation for augmentation and weighting. Advances in Neural Information Processing Systems, 32, 2019. D P Kingma, J Ba, Adam, arXiv:1412.6980A method for stochastic optimization. arXiv preprintKingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. A simple and language independent subword tokenizer and detokenizer for neural text processing. T Kudo, J Richardson, Sentencepiece, 10.18653/v1/D18-2012Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2018 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsBrussels, BelgiumAssociation for Computational LinguisticsKudo, T. and Richardson, J. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 66-71, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-2012. URL https://ac antho ogy.org/D18-2012. 
E cient hyperparameter optimization and in nitely many armed bandits. L Li, K G Jamieson, G Desalvo, A Rostamizadeh, A Talwalkar, abs/1603.06560CoRRLi, L., Jamieson, K. G., DeSalvo, G., Rostamizadeh, A., and Talwalkar, A. E cient hyperparameter optimization and in nitely many armed bandits. CoRR, abs/1603.06560, 2016. URL http://arxiv. org/abs/1603.06560. Optimizing millions of hyperparameters by implicit di erentiation. J Lorraine, P Vicol, D Duvenaud, PMLRProceedings of the Twenty Third International Conference on Arti cial Intelligence and Statistics. Chiappa, S. and Calandra, R.the Twenty Third International Conference on Arti cial Intelligence and Statistics108Lorraine, J., Vicol, P., and Duvenaud, D. Optimizing millions of hyperparameters by implicit di erentiation. In Chiappa, S. and Calandra, R. (eds.), Proceedings of the Twenty Third International Conference on Arti cial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pp. 1540-1552. PMLR, 26-28 Aug 2020. URL https://proceedings.m r.press/v108/ orraine20a.htm . Scalable gradient-based tuning of continuous regularization hyperparameters. J Luketina, M Berglund, K Gre, T Raiko, International conference on machine learning. PMLRLuketina, J., Berglund, M., Gre , K., and Raiko, T. Scalable gradient-based tuning of continuous regularization hyperparameters. In International conference on machine learning, pp. 2952-2960. PMLR, 2016. Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions. CoRR, abs/1903.03088. M Mackay, P Vicol, J Lorraine, D Duvenaud, R B Grosse, MacKay, M., Vicol, P., Lorraine, J., Duvenaud, D., and Grosse, R. B. Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions. CoRR, abs/1903.03088, 2019. URL http://arxiv.org/abs/1903.03088. Gradient-based hyperparameter optimization through reversible learning. 
D Maclaurin, D Duvenaud, Adams , R , Proceedings of the 32nd International Conference on Machine Learning. Bach, F. and Blei, D.the 32nd International Conference on Machine LearningLille, France37Maclaurin, D., Duvenaud, D., and Adams, R. Gradient-based hyperparameter optimization through reversible learning. In Bach, F. and Blei, D. (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 2113-2122, Lille, France, 07-09 Jul 2015. PMLR. URL http://proceedings.m r.press/v37/mac aurin15.htm . Learning the learning rate for gradient descent by gradient descent. O Majumder, M Donini, P Chaudhari, Proceedings of the AutoML workshop. the AutoML workshopMajumder, O., Donini, M., and Chaudhari, P. Learning the learning rate for gradient descent by gradient descent. In Proceedings of the AutoML workshop, 2019. Gradient-based hyperparameter optimization over long horizons. P Micaelli, A J Storkey, Advances in Neural Information Processing Systems. Curran Associates, Inc34Micaelli, P. and Storkey, A. J. Gradient-based hyperparameter optimization over long horizons. In Advances in Neural Information Processing Systems, volume 34. Curran Associates, Inc., 2021. Bleu: a method for automatic evaluation of machine translation. K Papineni, S Roukos, T Ward, W.-J Zhu, 10.3115/1073083.1073135Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. the 40th Annual Meeting of the Association for Computational LinguisticsPhiladelphia, Pennsylvania, USAAssociation for Computational LinguisticsPapineni, K., Roukos, S., Ward, T., and Zhu, W.-J. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Lin- guistics, pp. 311-318, Philadelphia, Pennsylvania, USA, July 2002. Association for Computational Linguistics. doi: 10.3115/1073083.1073135. URL https://ac antho ogy.org/P02-1040. 
Hyperparameter optimization with approximate gradient. F Pedregosa, International conference on machine learning. Pedregosa, F. Hyperparameter optimization with approximate gradient. In International conference on machine learning, pp. 737 -746, 2016. A call for clarity in reporting BLEU scores. M Post, 10.18653/v1/W18-6319Proceedings of the Third Conference on Machine Translation: Research Papers. the Third Conference on Machine Translation: Research PapersBrussels, BelgiumAssociation for Computational LinguisticsPost, M. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 186-191, Brussels, Belgium, October 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-6319. URL https://ac antho ogy.org/ W18-6319. Exploring the limits of transfer learning with a uni ed text-to-text transformer. C Ra El, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, P J Liu, arXiv:1910.10683arXiv preprintRa el, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a uni ed text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019. . A Raghu, M Raghu, S Kornblith, D Duvenaud, G Hinton, arXiv:2011.03037Teaching with commentaries. arXiv preprintRaghu, A., Raghu, M., Kornblith, S., Duvenaud, D., and Hinton, G. Teaching with commentaries. arXiv preprint arXiv:2011.03037, 2020. Meta-learning to improve pre-training. A Raghu, J Lorraine, S Kornblith, M Mcdermott, D K Duvenaud, Advances in Neural Information Processing Systems. 34Raghu, A., Lorraine, J., Kornblith, S., McDermott, M., and Duvenaud, D. K. Meta-learning to improve pre-training. Advances in Neural Information Processing Systems, 34:23231-23244, 2021. Squad: 100,000+ questions for machine comprehension of text. P Rajpurkar, J Zhang, K Lopyrev, P Liang, arXiv:1606.05250arXiv preprintRajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. 
Squad: 100,000+ questions for machine compre- hension of text. arXiv preprint arXiv:1606.05250, 2016. Truncated back-propagation for bilevel optimization. A Shaban, C.-A Cheng, N Hatch, B Boots, The 22nd International Conference on Arti cial Intelligence and Statistics. PMLRShaban, A., Cheng, C.-A., Hatch, N., and Boots, B. Truncated back-propagation for bilevel optimiza- tion. In The 22nd International Conference on Arti cial Intelligence and Statistics, pp. 1723-1732. PMLR, 2019. Adafactor: Adaptive learning rates with sublinear memory cost. N Shazeer, M Stern, International Conference on Machine Learning. PMLRShazeer, N. and Stern, M. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pp. 4596-4604. PMLR, 2018. Practical bayesian optimization of machine learning algorithms. J Snoek, H Larochelle, R P Adams, Advances in Neural Information Processing Systems. Pereira, F., Burges, C. J. C., Bottou, L., and Weinberger, K. Q.Curran Associates, Inc25Snoek, J., Larochelle, H., and Adams, R. P. Practical bayesian optimization of ma- chine learning algorithms. In Pereira, F., Burges, C. J. C., Bottou, L., and Wein- berger, K. Q. (eds.), Advances in Neural Information Processing Systems, volume 25. Cur- ran Associates, Inc., 2012. URL https://proceedings.neurips.cc/paper/2012/fi e/ 05311655a15b75fab86956663e1819cd-Paper.pdf. Sequence to sequence learning with neural networks. I Sutskever, O Vinyals, Q V. ; Z Le, M Welling, C Cortes, N Lawrence, Advances in Neural Information Processing Systems. Weinberger, K. Q.Curran Associates, Inc27Ghahramani,Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to sequence learning with neu- ral networks. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., and Wein- berger, K. Q. (eds.), Advances in Neural Information Processing Systems, volume 27. Cur- ran Associates, Inc., 2014. 
URL https://proceedings.neurips.cc/paper/2014/fi e/ a14ac55a4f27472c5d894ec1c3c743d2-Paper.pdf. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L Kaiser, I Polosukhin, arXiv:1706.03762Attention is all you need. arXiv preprintVaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017. A Wang, A Singh, J Michael, F Hill, O Levy, S R Bowman, Glue, arXiv:1804.07461A multi-task benchmark and analysis platform for natural language understanding. arXiv preprintWang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018. Superglue: A stickier benchmark for general-purpose language understanding systems. A Wang, Y Pruksachatkun, N Nangia, A Singh, J Michael, F Hill, O Levy, S Bowman, Advances in neural information processing systems. 32Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32, 2019. Understanding short-horizon bias in stochastic metaoptimization. Y Wu, M Ren, R Liao, R B Grosse, abs/1803.02021CoRRWu, Y., Ren, M., Liao, R., and Grosse, R. B. Understanding short-horizon bias in stochastic meta- optimization. CoRR, abs/1803.02021, 2018. URL http://arxiv.org/abs/1803.02021. Y You, J Li, S Reddi, J Hseu, S Kumar, S Bhojanapalli, X Song, J Demmel, K Keutzer, C.-J Hsieh, arXiv:1904.00962Large batch optimization for deep learning: Training bert in 76 minutes. arXiv preprintYou, Y., Li, J., Reddi, S., Hseu, J., Kumar, S., Bhojanapalli, S., Song, X., Demmel, J., Keutzer, K., and Hsieh, C.-J. Large batch optimization for deep learning: Training bert in 76 minutes. arXiv preprint arXiv:1904.00962, 2019.
[ "https://www.github.com/goog", "http://github.com/goog", "http://github.com/goog" ]
An exploration of the effectiveness of artificial mini-magnetospheres as a potential Solar Storm shelter for long term human space missions

R. A. Bamford, B. Kellett, J. Bradford, C. Collingwood, R. Bingham (RAL Space, STFC, Rutherford Appleton Laboratory, Harwell Oxford, Didcot, OX11 0QX, U.K.); T. N. Todd, R. Stafford-Allen (Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, Oxfordshire, OX14 3DB, U.K.); M. G. Benton, Sr. (The Boeing Company, El Segundo, CA 90009-2919, USA); E. P. Alves, L. Silva (GoLP/Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, 1049-001 Lisboa, Portugal); I. A. Crawford (Dept of Earth and Planetary Sciences, Birkbeck College, London, U.K.); R. Bingham also at University of Strathclyde, Glasgow, Scotland, UK.

Acta Astronautica, 2014. doi: 10.1016/j.actaastro.2014.10.012. arXiv: 1406.1159.

Keywords: plasma; radiation protection; shielding; manned missions; cosmic rays

Abstract. In this paper we explore the effectiveness of an artificial mini-magnetosphere as a potential radiation shelter for long term human space missions. Our study includes the differences that the plasma environment makes to the efficiency of the shielding from the high energy charged particle component of solar and cosmic rays, which radically alters the power requirements. The incoming electrostatic charges are shielded by fields supported by the self-captured environmental plasma of the solar wind, potentially augmented with additional density.
The artificial magnetic field generated on board acts as the means of confinement and control. Evidence for similar behaviour of electromagnetic fields and ionised particles in interplanetary space comes from the example of the enhanced shielding effectiveness of naturally occurring "mini-magnetospheres" on the Moon. The shielding effect of surface magnetic fields of the order of ∼100s of nanoTesla is sufficient to provide effective shielding from the solar proton bombardment that culminates in the visible discolouration of the lunar regolith known as "lunar swirls". Supporting evidence comes from theory, laboratory experiments and computer simulations on this topic. The result of this work is, hopefully, to provide the tools for a more realistic estimation of the resources versus effectiveness and risk that spacecraft engineers need to work with in designing radiation protection for long-duration human space missions.

Introduction

The world's space agencies are actively drawing up plans for human space missions beyond Low Earth Orbit [1], and the scientific benefits resulting from the human exploration of the Moon, Mars and asteroids may be considerable [2,3,4]. However, the risk posed by the radiation in space is one of the major obstacles to long term human space exploration [30,5,6], which means that careful consideration must be given to radiation protection. The US National Research Council Committee on the Evaluation of Radiation Shielding for Space Exploration [6] recently stated:

"Materials used as shielding serve no purpose except to provide their atomic and nuclear constituents as targets to interact with the incident radiation projectiles, and so either remove them from the radiation stream to which individuals are exposed or change the particles' characteristics-energy, charge, and mass-in ways that reduce their damaging effects."
This paper outlines one possible way to achieve this: by radically reducing the number of particles reaching the spacecraft. The technology concerns the use of 'Active' or electromagnetic shielding. This is far from a new idea (for reviews see [33,12]), but one that has, up until now, been analysed with some crucial factors missing; specifically, the plasma environment of interplanetary space. So, presented here are the results of asking three questions:

1. What difference does the fact that the environment of interplanetary space contains a low density (∼10 cm^-3) plasma of positive and negative charges make to how a potential artificial electromagnetic radiation shield would work on a manned spacecraft?

2. How differently does a plasma behave at the small scales of a spacecraft compared to, say, the magnetosphere barrier of a planet?

3. How does this change the task of balancing the costs and benefits of countermeasures for the engineers designing an interplanetary or long-duration manned mission?

Initiatives such as the Earth-Moon-Mars Radiation Environment Module (EMMREM) [11] aim to provide frameworks to overcome the mission safety challenges posed by Solar Proton/Particle Events (SPEs). But accurate prediction is only of any use if the means to protect the craft and crew actually exist. In this paper we discuss the principles and optimisation specifically of miniature magnetospheres. The upper panel in Figure 1 shows a photograph of a mini-magnetosphere formed in the laboratory [20] from the principles outlined in this paper. The application of these principles to the space environment has been shown by comparison with in-situ spacecraft observations of the naturally occurring lunar mini-magnetospheres [38]. Below, in Figure 1, is an illustration showing a mini-magnetosphere around a conceptual manned interplanetary spacecraft.
Mini-magnetospheres and plasmas

In space the charged particles (protons, electrons and other trace ions) mostly originate from the Sun, and a magnetosphere is a particular type of 'diamagnetic cavity' formed within the plasma of the solar wind.

Figure 1: A mini-magnetosphere in the laboratory [20] (upper panel) and conceptually around a spacecraft (lower panel). Above: the supersonic hydrogen plasma (pink glow) from the Solar Wind Tunnel comes in from the left hand side and encounters a magnetic field (whose source is inside the protective casing visible in the photograph). The self-captured plasma forms a thin sheath barrier that redirects the incoming hazard. A cavity in the density is created within, confirmed by probe measurements [20]. This photograph is taken from above, looking through the sheath onto the south pole. The graphic below shows the various zones whose characteristics are discussed within the paper and their relationship to a conceptual spacecraft [21]. Importantly, in the laboratory experiment the overall dimensions are of the order of, or less than, the ion skin depth c/ω_pi, which is essential if the system is to be of practical size. The width L of the current sheath is approximately the electron skin depth, L ∼ c/ω_pe. The pressure balance between incoming and defensive forces occurs at a distance r_s from the source of the magnetic field. Together these parameters define the effectiveness of the active shield. (Conceptual spacecraft design © Mark Benton, Sr.)

A plasma is a state of matter in which a diffuse conglomeration of approximately equal numbers of positive and negative charges is sufficiently hot that they do not recombine significantly into neutral particles. Rather, the charges remain in a dynamical state of quasi-neutrality, interacting and self-organising in a fashion dependent upon the interaction of internal and external electromagnetic forces. It is these attributes that are to be exploited here as a means to protect vulnerable manned spacecraft and bases.
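The two skin depths quoted in the Figure 1 caption set the physical scales of the structure: the sheath width is of order the electron skin depth c/ω_pe, and the whole mini-magnetosphere is of order the ion skin depth c/ω_pi or less. As an illustrative sketch (not from the paper; a typical quiet solar wind density of n ≈ 10 cm^-3 is assumed), these scales can be evaluated as:

```python
import math

# Physical constants (SI)
c = 2.998e8        # speed of light, m/s
e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
m_e = 9.109e-31    # electron mass, kg
m_p = 1.673e-27    # proton mass, kg

def skin_depth(n, mass):
    """Collisionless skin depth c/omega_p for a species of number
    density n (m^-3) and particle mass (kg)."""
    omega_p = math.sqrt(n * e**2 / (eps0 * mass))  # plasma frequency, rad/s
    return c / omega_p

n_sw = 10e6  # assumed solar wind density: 10 cm^-3 converted to m^-3

d_e = skin_depth(n_sw, m_e)  # electron skin depth ~ sheath width L
d_i = skin_depth(n_sw, m_p)  # ion skin depth ~ upper size of the structure

print(f"electron skin depth ~ {d_e / 1e3:.1f} km")  # roughly 1.7 km
print(f"ion skin depth      ~ {d_i / 1e3:.0f} km")  # roughly 70 km
```

The kilometre-scale electron skin depth is consistent with a thin sheath around a spacecraft-sized object, while the ~70 km ion skin depth bounds the overall size of such a structure in the solar wind.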
In interplanetary space the high energy component of the solar particles is what forms the 'hazard' itself, because of the highly penetrating capability of energetic ions in particular. These are the Solar Cosmic Rays (SCR). There is also a small fraction (about six orders of magnitude less) of super-energetic particles at GeV energies that have been accelerated by exotic events like supernovae. These form the Galactic Cosmic Ray (GCR) component. Both high fluxes of SCR during storms and the long term exposure to GCR are a concern for astronaut health [10].

Space plasmas are very diffuse indeed, with about 10 particles occupying the volume of the end of an average human thumb, and are considered ultra-high vacuum by terrestrial standards. The mean free path between physical collisions between the particles is far longer than the system (in the solar wind the mean free path is about 1 A.U., one Astronomical Unit). This means the particles 'collide' through their electrostatic charges and collective motions (such as currents) that are guided by, or result in, magnetic or electric fields. Because of the large dimensions of space, even a very low density is important: the electrostatic force between two charges is 10^39 times more intense than their gravitational attraction [13].

Because a plasma is a rapidly responding (owing to the freely moving charges), conducting medium, it creates a magnetic field in opposition to an externally applied magnetic field, making it diamagnetic, and this can result in local cavities. Diamagnetic cavities are a general phenomenon in plasmas, not just in space plasmas, and can be formed with or without magnetic fields [40]. Magnetospheres are more generally associated with planetary magnetic fields, such as the Earth's, interacting with the solar wind plasma [15]. Miniature magnetospheres are fully formed magnetospheres, with collisionless shocks and diamagnetic cavities, but the whole structure is very much smaller, of the order of 10s to 1000s of km across.
Mini-magnetospheres have been observed associated with the anomalous patches of surface magnetic field on the Moon [16], Mars [18] and Mercury [19], and also with asteroids like Gaspra and Ida [17]. It has also been demonstrated that mini-magnetospheres can form without magnetic fields, as with natural comets [22] and artificial comets such as AMPTE [23]. In these cases the term "magneto" can still be used because the currents induced in the sheath region include magnetic fields.

Mini-magnetospheres are determined by the plasma physics of the very small scale, which in general has been neglected in analyses of electromagnetic deflection as a means of spacecraft protection. The entire structure is smaller than the bending radius of an energetic ion about the magnetic field in a vacuum, so this is not a conventional 'magnetic shield'. Presented here is a 'block diagram' of the characteristics and parameters needed to implement a mini-magnetosphere deflector shield for a manned spacecraft. The real physics of the interaction is immensely complicated and largely intractable analytically due to non-linearities, so what follows are 'rules of thumb' intended as a guide only. A full detailed analysis requires the use of complex plasma physics and simulation codes, best conducted on a specific case owing to the resources needed.

The hazard

At the radius of the Earth's orbit the level of ultraviolet radiation from the Sun is sufficiently high that photo-ionisation leaves very little matter in free space un-ionised. The medium of space is therefore a plasma, albeit of very low density. Solar eruptions consist of electromagnetic waves, but also protons and electrons with a small percentage of higher mass ions.
The radiation encountered in space (see Figure 2) is a composite of a small percentage of extremely high energy galactic particles and a higher density but much lower energy continuous outflow of particles from the Sun (the solar wind), interspersed with intermittent, high density eruptions of very energetic particles originating from a variety of violent events on the Sun. Events on or near the Sun that result in shockwaves can accelerate ions and electrons to extremely high energies [27,28]. An example showing the temporal and energy spectra of large Solar Energetic Particle (SEP) events is given in Figure 3 [29].

Figure 2: A different type of "radiation" in space. The radiation hazard on Earth is generally related to the radioactive decay of heavy elements like uranium and to electromagnetic waves like gamma and x-rays (left). The radiation in space also has a broadband electromagnetic component, but in addition a form of radiation not seen on Earth except in particle accelerators and as cosmic rays. Violent processes in space (stars, supernovae, etc.) accelerate abundant light elements like hydrogen to MeV to 100s of GeV energies. At these energies the particles are predominantly the nuclei of atoms and electrons separately, constituting a high energy plasma.

Figure 3: (a) The time evolution of proton fluence for a large storm. High-energy > 0.5 GeV protons arrived at Earth, peaking within 5 minutes, followed 10 minutes later by the peak > 100 MeV fluence, ∼10,000 times the background level. Over the next 12 hours directional, lower energy particles continued to arrive at densities elevated above quiet times. (b) The energy fluence spectra of some of the largest SEP events of the last 50 years [29]. Under normal conditions the number of particles with energies > 10 MeV is negligible, but increases of 10 to 1000 times during an SEP are typical, rising to as much as 10^6 for more extreme events [29].

One of the more recent large SEP events illustrates the magnitude of the problem [29]. The temporal plot of particle flux from [29] is shown in Figure 3(a). The x-ray flare on the Sun provided only a few minutes warning before high-energy > 0.5 GeV protons arrived at Earth, peaking within 5 minutes. Approximately 10 minutes later the > 100 MeV protons arrived. At its peak the particle flux rate was ∼10,000 times the background level. A second peak occurred about 90 minutes later. Over the next 12 hours directional, lower energy particles continued to arrive at densities elevated above quiet times.

For a spacecraft in interplanetary space, this results in intense bursts of deeply penetrating radiation capable of passing through the hull to the crew inside. The result is a significant increase of dose-rates above 0.05 Gy/h [31]. The variable shape of the energy spectrum of each SPE is an extremely important factor for the total exposure calculation, not just the total fluence. For instance, protons with energies > 30 MeV can pass through space suits; above 70-100 MeV, hull walls of 5-10 g cm^-2 of aluminium can be penetrated, with the added consequence of secondary particles. The vulnerability of different organs and systems (such as the blood forming organs or the nervous system) varies considerably [7,9,32]. Thus it becomes difficult to quantify the potential mission disruption caused by solar events based purely on the predicted size of the event. Current estimates [8] suggest that there is a ∼20% chance of exceeding the current NASA 30-day limit for a future SPE with Φ_30 = 2 × 10^9 protons cm^-2 on an interplanetary journey. The probability of multiple events increases with mission duration.
Protection against extremely large solar energetic particle events (SPE), which occur sporadically with very little warning, is a mission critical issue for long term, interplanetary manned missions [30,5].

Particle description of the incoming pressure

The characteristics of the instantaneous plasma (a quasi-neutral collection of approximately equal numbers of positive and negative charges) and particle distributions impacting the spacecraft define how the plasma shield will function at any one instant. The pressure from the environmental plasma, P_in, can consist of more than one component. Considering a thermal part, a bulk flow ram pressure, and the pressure of the magnetic field in the solar wind:

P_in = P_th,sw + P_ram,sw + P_B,IMF + P_ram,++    (2.1)

The component terms are P_th = n_th k T_th, P_ram = n_sw m_sw v_sw^2 and P_B,IMF = |B_IMF|^2 / 2mu_0. Here n_sw represents the density of particles flowing at velocity v_sw and B_IMF is the interplanetary magnetic field (IMF). The final term is the ram pressure from the high energy particles, P_ram,++, which has been separated from the main distribution for this analysis. As can be seen from Figure 3(b), the density of particles in the high energy tail can be a significant fraction of the bulk density, but in general contributes a negligible fraction of the pressure.

As will be seen in the following sections, the ever present, though variable, background solar wind plasma is what is used to initially create the barrier. As will be shown later, it can then be augmented artificially if necessary to increase the deflection of the hazardous high energy part of the particle spectrum.

Mini-magnetospheres

Pressure balance

The principle of 'Active Shielding' requires electromagnetic forces to balance the incoming pressure. (Many authors have reviewed the general principles of Active Shielding, for example [33].)
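As a rough check of the relative sizes of the terms in equation (2.1), the following sketch evaluates the thermal, ram and magnetic pressures of the ambient solar wind. The input numbers (n ~ 5 cm^-3, v ~ 400 km/s, T ~ 10^5 K, |B_IMF| ~ 5 nT) are illustrative textbook quiet-time values and are not taken from this paper:

```python
# Rough sizes of the solar wind terms in equation (2.1).
# Illustrative quiet-time values (assumptions, not from the paper):
# n ~ 5 cm^-3, v ~ 400 km/s, T ~ 1e5 K, B_IMF ~ 5 nT.

MU0 = 4e-7 * 3.141592653589793   # vacuum permeability [H/m]
K_B = 1.380649e-23               # Boltzmann constant [J/K]
M_P = 1.67262192e-27             # proton mass [kg]

def solar_wind_pressures(n_per_cm3=5.0, v_km_s=400.0, T_K=1e5, B_nT=5.0):
    """Return (P_th, P_ram, P_B) in pascals for the solar wind terms of eq (2.1)."""
    n = n_per_cm3 * 1e6          # -> m^-3
    v = v_km_s * 1e3             # -> m/s
    B = B_nT * 1e-9              # -> T
    p_th = n * K_B * T_K         # thermal pressure, n k T
    p_ram = n * M_P * v**2       # ram pressure, n m v^2
    p_B = B**2 / (2.0 * MU0)     # magnetic pressure, B^2 / 2 mu0
    return p_th, p_ram, p_B

p_th, p_ram, p_B = solar_wind_pressures()
print(p_th, p_ram, p_B)  # ram (~1.3 nPa) dominates thermal and magnetic (~0.01 nPa)
```

For these values the ram term is roughly two orders of magnitude above the thermal and magnetic terms, which is why the pressure balance of equation (3.2) keeps only P_ram to first order.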
We start with a generic expression for the required pressure balance:

P_mm = P_th,mm + P_ram,mm + P_B,mm    (3.1)

Here the subscripts represent the pressures from within the mini-magnetosphere. In practice the ram component, although it exists due to the motion of the spacecraft, is just a frame-of-reference issue and is insignificant. In many cases the thermal outward pressure is also insignificant. So to first order the magnetosphere pressure balance gives:

P_B,mm ~ P_ram,sw    (3.2)

As will be shown later, there is an electric field generated by the formation of a mini-magnetosphere, so this term will remain. If we create an artificial mini-magnetosphere on board the spacecraft we can define the initial P_B,mm.

Creating an artificial mini-magnetosphere

An on-board mini-mag system would most likely comprise a superconducting coil [35]. In a non-conductive medium, the magnetic field intensity of a dipole diminishes rapidly with range. Higher-order structures, such as quadrupoles and octupoles, have fields which fall off much more rapidly still with radius from the coils that create them.

Figure 4: How the vacuum magnetic field intensity varies with range from the spacecraft, based on solenoid characteristics. A flat round coil of major radius a, with N turns of current-carrying windings at current I_c per turn, gives total amp-turns I = N I_c and a magnetic field intensity B_o at the centre of the coil. The highest magnetic field, B_o(b), is at the surface of the winding pack, where b is the minor radius of the winding pack. As can be seen from the figure, the magnetic field intensity in the centre can be less than at the outer edge of the winding pack; the wider the radius the more this is the case, and the greater the range of the field beyond the spacecraft. For a central dipole magnet or multiple magnets (multipole field) the magnetic field intensity drops off much more rapidly than for a wide diameter loop.

Figure 5: A sketch illustrating the difference the plasma environment makes to the vacuum magnetic fields. The foot region is caused by ion reflection and is of the order of the ion inertial length, whereas the current or barrier layer width, L, is associated with the electron inertial length. Ideally r_s >> L. Because the interaction is a collisionless shock, the initial pile-up of density and magnetic field is accompanied by turbulence, a reduction in the velocity of the ions, and changes in the temperature of both the ions and electrons. Inside the barrier region is the cavity, where the population of energetic particles is reduced. To optimise logistics this need only be as wide as to afford the protection required.

Figure 4 shows that at a distance (far field) where r >> a (in any direction) |B_vac(r)| ~ |B_o| (a/r)^3, but only when no plasma is present. In terms of the current in the coil:

B(r) ~ (mu_0 I / 2a) (a/r)^3    (4.1)

Here I is the total loop current of the solenoid, I = N I_c, where N is the total number of turns carrying current I_c at radius a.

The presence of the plasma changes this profile, as illustrated in Figure 5. The prohibitively high power estimates of a magnetic shield are based on the vacuum profile (the near-field generation and profile with distance shown in Figure 4). The vacuum field power estimates do not allow for the alteration in the profile and the additional force illustrated in Figure 5. The effect of the plasma environment is not just to extend the range of the magnetic field intensity. The magnetic 'pile-up' comes with cross-field currents in a narrow barrier region (or shell in 3D) some distance from the spacecraft. These currents and the accompanying electric fields alter how the incoming plasma is deflected.
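The cubic fall-off of equation (4.1) can be illustrated numerically. The coil numbers used here (a = 3.0 m, N I_c = 5.6 MAt) are the conceptual-vehicle values quoted later in the paper; any other loop scales the same way:

```python
# Far-field vacuum dipole intensity of a current loop, equation (4.1):
#   B(r) ~ (mu0 * I / 2a) * (a/r)^3   for r >> a, with I = N * I_c total amp-turns.
# Coil parameters below are the conceptual-vehicle values quoted later in
# the paper (a = 3 m, N I_c = 5.6 MAt); they are assumptions of this sketch.
import math

MU0 = 4e-7 * math.pi

def loop_far_field(r_m, a_m=3.0, amp_turns=5.6e6):
    """Vacuum |B| [T] at range r >> a from a loop of radius a carrying I = N*I_c."""
    B0 = MU0 * amp_turns / (2.0 * a_m)   # field at the loop centre
    return B0 * (a_m / r_m) ** 3

# The cubic fall-off: doubling the range cuts the vacuum field by a factor 8.
print(loop_far_field(100.0), loop_far_field(50.0) / loop_far_field(100.0))
```

At only 100 m from such a coil the vacuum field has already dropped to tens of microtesla, which is the origin of the prohibitive power estimates mentioned above when the plasma response is ignored.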
The efficiency of the shielding is therefore found to be much greater than the initial vacuum calculation would have predicted. Evidence that this is the case will be shown in Section 7. Quantifying the level of the enhancement, and the effectiveness at deflecting higher energy particles, is nontrivial. In the following sections we shall derive some estimates that can be used to determine the value of an artificial mini-magnetosphere shield for astronaut protection.

Characterising the effectiveness of an artificial mini-magnetosphere

Figure 1 shows a two dimensional sketch of the morphology of a mini-magnetosphere about a spacecraft. The size of the mini-magnetosphere depends on two parameters. The first is r_s, the 'stagnation' or 'stand-off' distance of the magnetopause, where the pressure of the incoming plasma, P_in, is balanced by the combined pressure of the mini-magnetosphere. The second is L, the width of the magnetopause boundary. Clearly, to be within the safety of the mini-magnetosphere diamagnetic cavity one must be further away than the thickness of the boundary. In kinetic studies of mini-magnetospheres we find that L ~ the electron skin depth.

Calculating the stand-off distance, r_s

To first order, the pressure balance of equation (2.1) can be taken to be met by the vacuum magnetic field, P_in ~ P_B; this occurs at a distance r_s from the source of the magnetic field. For a planetary magnetosphere r_s would be the Chapman-Ferraro distance. The calculation is the same here, but the artificial source of the magnetic field can be included explicitly, to better relate the effectiveness to the power requirements on board. For a dipole field produced by a solenoid, such as that shown in Figure 4, the magnetic field at the centre of the solenoid is B_o = mu_0 N I_c / 2a, where a is the radius of the solenoid loop carrying total amp-turns I = N I_c (N turns of current I_c each).
This provides:

r_s^6 ~ mu_0 (N I_c)^2 a^4 / (8 P_in)    (5.1)

Here P_in is obtained from equation (2.1). The same calculation for the Earth's magnetosphere leads to a consistent under-estimation of the true stand-off distance, indicating the importance of the other terms in equations (2.1) and (3.1). Interestingly, equation (5.1) reveals that the largest achievable stand-off distance is obtained with the largest possible coil radius. This is intuitively reasonable because the long-range field strength goes like B_o a^3, so a small change in the radius of the coil has a large effect.

Calculating the barrier width, L

The plasma physics of the interaction [40,41] tells us that the width of the magnetopause boundary, L, is of the order of the electron skin depth lambda_e:

L ~ lambda_e = c / omega_pe    (5.2)

Here omega_pe is the electron plasma frequency, omega_pe = (n_e e^2 / epsilon_0 m_e)^(1/2), and c is the speed of light. The classical skin depth is a rapid decay of electromagnetic fields with depth inside a conductor, caused by eddy currents in the conductor. High frequencies and high conductivity shorten the skin depth, as does an increase in the number of current carriers (plasma density). The same is true here, with some differences; for instance, the collisionless-shock conditions of the mini-magnetosphere mean the attenuation profile is closer to a linear approximation than to the 1/e attenuation in metals.

The normalised linear attenuation factor, alpha

We can now introduce a geometric parameter alpha as a quasi-linear attenuation factor, to provide an indication of relative effectiveness. The number of skin depths required for complete ambient plasma exclusion is not generally known, but values of 4-6 have been calculated up to relativistic energies [42]. This is similar to the exponential form of the electromagnetic skin depth in metals, though the plasma physics here provides a more linear rather than exponential drop-off.
However, given the level of the other approximations being made in these formulations, we shall take a normalised parameter in which the number of required skin depths is taken to be 1:

alpha := r_s / L    (5.3)

The plasma for this calculation can be a combination of the incoming plasma density and any additional density added from the spacecraft to enhance the shield effectiveness. A value of r_s = L is good, r_s > L with multiple skin depths (r_s ~ 4-6 x L) would be better, and r_s < L is less than optimum.

The origin of the electric field

The expressions above provide an estimation of how the bulk pressures balance. For a practical shield to reduce the penetrating high energy component we need to determine the value of the electric field within the barrier, and how it can effect the exclusion of the higher energy particles. Figure 6 is a close-up view of Figure 1 with the force vectors overlaid. The trajectory of a representative high energy ion of velocity v_++ and mass M is also shown. The forces on a charged particle are determined by the Lorentz force:

F = q(E + v x B)    (5.4)

Unlike in a vacuum, the presence of the plasma means that E cannot be neglected and is related to B. The electric field component comes from the formation of the currents that are induced to exclude the interplanetary magnetic field and create the cavity. The physics of collisionless shocks provides us with an expression for the instantaneous electric potential component, phi, responsible for slowing and deflecting the ions:

phi(r) ~ -(kappa / n_mm) |Delta B_mm|^2 / Delta r    (5.5)

Here kappa is a constant, kappa = 1/(2 mu_0 e). If |Delta B_mm| ~ |B_mm| is the intensity of the magnetic field orthogonal to r and v at distance (r_s - L), then Delta r ~ L. Equation (5.5) shows that the potential is related to the gradient in the magnetic field intensity, i.e. the ponderomotive force. This is a much more effective and short-range force than calculations of the magnetic bending alone would suggest.
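The barrier width of equation (5.2) is easy to evaluate, and doing so makes the n^(-1/2) scaling explicit. The densities used below are illustrative assumptions (quiet solar wind ~5 cm^-3, and a x10^4 mass-loaded case, as discussed later):

```python
# Electron skin depth, equation (5.2): L ~ lambda_e = c / omega_pe, with
# omega_pe = (n_e e^2 / (eps0 m_e))^(1/2).
# Densities below are illustrative assumptions, not measurements.
import math

C = 2.99792458e8        # speed of light [m/s]
E_CH = 1.602176634e-19  # elementary charge [C]
EPS0 = 8.8541878128e-12 # vacuum permittivity [F/m]
M_E = 9.1093837015e-31  # electron mass [kg]

def electron_skin_depth(n_e_m3):
    """lambda_e = c / omega_pe in metres for electron density n_e [m^-3]."""
    omega_pe = math.sqrt(n_e_m3 * E_CH**2 / (EPS0 * M_E))
    return C / omega_pe

L_sw = electron_skin_depth(5e6)          # quiet solar wind, n ~ 5 cm^-3
L_loaded = electron_skin_depth(5e6 * 1e4)  # mass-loaded by x1e4
print(L_sw, L_loaded, L_sw / L_loaded)   # ratio is sqrt(1e4) = 100
```

For quiet solar wind density the skin depth is of order a couple of kilometres, comparable to or larger than the achievable stand-off distance, which is why raising alpha = r_s/L by adding density (mass loading, Section 5.6) matters.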
Because the density within the mini-magnetosphere, n_mm, enters both the prefactor and L (from equation (5.2)), the magnitude of phi is proportional to sqrt(n_mm). This offers a means to boost the effectiveness of the deflector shield by adding additional density, as discussed in Section 5.6.

High energy particle deflection

The electric field that is created is responsible for changing the energy and trajectory of the energetic particles. Although the electric field values from equation (5.5), even with augmented plasma density, are not going to be sufficient to stop a > 100 MeV or GeV ion, this is not required. The particles need only be refracted sufficiently away from the central safe zone. Much like defending against a charging rugby footballer, rather than standing in his way to protect the goal line, a better policy is to deflect the player sideways using a small amount of force, so that he is pushed into touch and out of the field of play.

The geometry for our case is illustrated in Figure 6. In Cartesian coordinates, the deflection component of the electric field, E_perp, required to just miss the spacecraft is acquired across the whole barrier width, L. In one plane, |E_perp| = |E_r| tan(theta), where theta is the angle of the charged particle to the radial direction (the scattering angle). The needed deflection velocity v_perp becomes:

v_perp^2 ~ (kappa / n_mm M) |Delta B_mm^2| / L    (5.6)

As already mentioned in Section 5.4, in 3D the physics is such that the electric field will always point outwards, away from the spacecraft. This results in a 3D safe zone, effective against both directional and omni-directional threats. Thus we must determine the effectiveness of the high energy scattering process. Because the electric field is formed self-consistently by the plasma itself, and the high energy particles are scattered by a lower electric field, the problem of generating a secondary population of ions accelerated towards the spacecraft by the deflector shield itself does not occur.

Figure 6: The deflection of a high energy ion (green) by the electric field E_r (red) created by the low energy plasma captured and retained by the magnetic field from the spacecraft. Augmenting the natural density, by releasing readily ionised gas from the craft, can enable protection against >> 100 MeV/amu ions.

The mini-magnetosphere barrier interaction with high energy particles is far from simple. The incoming high energy particle not only sees the electric field set up by the interaction of the solar wind and the spacecraft field, it also experiences the usual convective electric field seen by a charged particle moving relative to a magnetic field. This convective field (E_perp) is perpendicular to the magnetic field. The result is that the particle is deflected by a series of fields in a complex manner [41].

Computer simulation of high energy scattering

Quantifying the shield performance for a specific spectrum of high energy particles (like an SEP) requires a full 3D recreation using a computer simulation, or an experiment either in space or in the laboratory. Figure 7 shows a simulation of high energy scattering from a dipole magnetic field (centre of box). The 3000 'SEP' incoming ions have 100,000 times the energy of the thermal background plasma (~5 eV) contained within the box (the 'solar wind'). These simulations showed that 100% of particles were excluded from the "safe zone", whilst 95% were excluded for particles of ~ x10^6 the background energy. This indicates that a narrow electric field is responsible for the deflection, and not gradual bending due to a magnetic field.

Figure 7: A simulation of particle tracks (red) scattered from a thin electrostatic "shell" (green) surrounding a magnetic dipole (centre, with B field intensity projected onto the faces of the cube). The particles are not being deflected by the magnetic field but by the electric field created by the interaction of the background plasma (not shown for clarity) and the magnetic field.
The energy of the red 'protons' is 100,000 times that of the background plasma; the simulation is in dimensionless units.

Boosting the shield effectiveness: mass loading

The very biggest storms could be mitigated by adding additional plasma density around the spacecraft, similar to creating an artificial cometary halo cloud. Increasing the density within the mini-magnetosphere reduces the thickness of the skin depth (equation (5.2)). This could be done either to reduce the power required from the spacecraft to achieve the same deflection efficiency, or to boost the shield effectiveness during the severe parts of an SEP or CME event. Practically, it could be achieved by releasing easily ionised material from the spacecraft. EUV ionisation, charge exchange and collisional ionisation lead to the generation of ions and electrons which are incorporated into the mini-magnetosphere barrier. The mass loading leads to an enhancement in the currents.

For manned spacecraft utilising Nuclear Thermal Propulsion (NTP) or Nuclear Electric Propulsion (NEP) as primary propulsion, this would already be achieved by the release of propellant from the thrusters; typical propellants for these systems are hydrogen and other volatiles and, for NEP systems, also inert gases such as argon and xenon [44,45,46]. If more localised injection of plasma is required towards the shield region, ion or plasma sources, as already used for spacecraft propulsion [46], could provide directed ion or plasma beams from multiple locations on the spacecraft.

During the transit to Mars it might be necessary to use the augmented storm shield on 0 ~ 2 occasions [8]. Increasing n_mm by x10^4 would increase the potential phi by ~ x100. This could be achieved with approximately 1 mole of Xe (in a volume of ~ (4/3) pi r_s^3). Given that the atomic mass of Xe is ~131, this means 131 g of Xe would be needed per occasion of use.
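The quoted gain follows directly from the sqrt(n_mm) scaling of the potential stated earlier. A one-line check of that scaling, together with the propellant arithmetic (this takes the paper's scaling claim and its ~1 mol Xe per event figure as given, rather than deriving absolute values):

```python
# Scaling check for mass loading: the text states phi is proportional to
# sqrt(n_mm), so a x1e4 density boost should give a x100 potential boost.
# The "1 mol (131 g) of Xe per event" figure is taken from the text as given.
import math

def potential_gain(density_factor):
    """Relative increase in barrier potential for a given density boost,
    using the phi ~ sqrt(n_mm) scaling quoted in the text."""
    return math.sqrt(density_factor)

xe_mass_per_event_kg = 0.131          # ~1 mol of Xe (atomic mass ~131)
print(potential_gain(1e4))            # x100, as quoted
print(3 * xe_mass_per_event_kg)       # ~0.4 kg for three events
```

Three such releases over an 18 month transit total well under half a kilogram of gas, consistent with the estimate in the next paragraph.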
Exactly how much Xe would be needed on a mission would depend upon the frequency of use. Allowing for approximately 3 SEP events to encompass the spacecraft in an 18 month period, this would require less than half a kilogram of Xe. The enhancement would also need to be sustained for 2-6 hours. How much resource is required then depends upon the rate of plasma loss from the mini-magnetosphere, discussed in the next section.

Retaining the shield

To function as a shield, enough density within the cavity barrier must be retained for long enough to ensure the cavity will not be overwhelmed by an intense storm during the hours that the peak fluence may last (Figure 3(a)). The plasma parameter beta, defined as the ratio of the plasma pressure to the magnetic pressure, does not provide a useful guide in this instance, because the profiles of plasma density and temperature vary on spatial scales below the ion gyration radius. Furthermore, beta does not allow for the electric fields which we know are fundamental to the mini-magnetosphere barrier.

Since an analytical approach is not available as a guide, we can take an observational example from comets [50,22], and in particular the AMPTE artificial comet [23]. The data recorded by the spacecraft monitoring the Active Magnetospheric Particle Tracer Explorers (AMPTE) mission provide us with a lower limit on retention in the absence of a magnetic field: a ~1 kg mass of barium (amu = 137) exhibited an ionisation time of ~20 minutes over a volume of ~100 km [47]. In the cometary case the particle pick-up means the confinement structure is essentially open ended and the matter is lost rapidly.
The addition of a magnetic field would undoubtedly extend the plasma retention (as is the case with magnetically confined plasma fusion experiments such as JET [52]), but by precisely how much, particularly on the scale size of a mini-magnetosphere, could only be determined experimentally in space.

Estimating the requirements of the on-board hardware

Having outlined the principles behind the mini-magnetosphere shield operation, and assembled some performance parameters, we can now compute some figures of merit. A conceptual deep space vehicle for human exploration described in [21] included a mini-magnetosphere radiation shield. The purpose was to present a candidate vehicle concept for a potential manned near-Earth object asteroid exploration mission. The power (including cryoplant), physical dimensions, magnetic field intensities and density augmentation capabilities used here are those presented in [21,48]. The maximum feasible coil radius is a = 3.0 m (set by the launch rocket cowling I/D), with I_c = 700 A and N = 8000, giving N I_c = 5.6 MAt in equation (4.1). This produces a peak magnetic field, B_o, of ~6.4 T. Inserting all the practical parameters into equation (5.1) provides r_s = 0.86 / P_in^(1/6), in units of km and nPa. The total mini-magnetosphere power demand limit is 16 kW, with 5 kW for the cryoplant and the control system. The total mass is ~ 1.5 x 10^3 kg.

Figure 8: A conceptual design for a manned interplanetary vehicle was presented by Benton [49] using currently established technology and incorporating a mini-mag system.

Evidence for the processes from other fields

The experimental and observational evidence for the formation of mini-magnetospheres has been established in the laboratory using solar wind plasma tunnels [20,37] and from spacecraft observations of natural mini-magnetospheres on the Moon [16,38]. A photograph of a laboratory sized mini-magnetosphere is shown in Figure 9 [20]. A vacuum or MHD (magnetohydrodynamic) description of the laboratory experiment would have predicted that the plasma stream would not be deflected and would hit the magnet.
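The figures of merit quoted in the hardware section above can be checked numerically. Evaluating equation (5.1) with the conceptual-vehicle coil parameters (a = 3.0 m, N I_c = 5.6 MAt) reproduces the quoted rule of thumb r_s = 0.86 / P_in^(1/6) km (P_in in nPa):

```python
# Stand-off distance from equation (5.1):
#   r_s^6 ~ mu0 * (N I_c)^2 * a^4 / (8 * P_in)
# evaluated with the conceptual-vehicle coil (a = 3.0 m, N I_c = 5.6 MAt).
import math

MU0 = 4e-7 * math.pi

def stand_off_distance(P_in_nPa, a_m=3.0, amp_turns=5.6e6):
    """r_s in metres for incoming pressure P_in [nPa], from eq (5.1)."""
    P_in = P_in_nPa * 1e-9   # -> Pa
    r6 = MU0 * amp_turns**2 * a_m**4 / (8.0 * P_in)
    return r6 ** (1.0 / 6.0)

# Quiet solar wind pressures of ~1-2 nPa give a cavity of order 0.8-0.9 km.
print(stand_off_distance(1.0), stand_off_distance(2.0))
```

Note the weak P_in^(-1/6) dependence: even a x64 pressure increase during a storm only halves the stand-off distance, which is one reason a modestly powered coil remains useful.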
Summary

The equations provided above can only give "ball-park" values, as the complexity of the interaction is highly variable, with multiple parameters interdependent in time and orientation. This is a typical description of a non-linear system. We know that mini-magnetospheres work because of the example on the Moon [16,38]. We know that the same principles used here apply to both natural and artificial comets [50,25]. The addition of extra cold plasma, such as xenon gas which can be easily ionised by the UV radiation from the Sun, can potentially greatly increase the effectiveness of the shield.

Figure 9: A photograph of a mini-magnetosphere diamagnetic cavity formed in a laboratory solar wind tunnel [20]. The 'light' areas show where the plasma is present. The beam comes in from the left hand side and is redirected into a thin layer around the target. The width of the layer, L, and the stand-off distance, r_s, agree very well with those expected from equations (5.1) and (5.2).

The concept of having a plasma around the spacecraft may at first sound familiar to those looking at active shield systems. [35], amongst others, have proposed various 'plasma shield' schemes using flowing currents in plasmas around the spacecraft as a means to extend the magnetic field, or as a source of electrons to counter the incoming protons. The difficulty with these schemes has been the omission of what the environmental plasma would do by way of short circuiting, reactive screening or 'blowing away' of the mini-magnetosphere plasma by the solar wind. The scheme suggested here does not try to control the plasma too much, but confines it just enough to allow its own nature to work for us.

Regardless of whether some of the details contained in this paper can be improved or adapted, this paper has aimed to emphasise the importance of including the plasma environment when considering any means of active or electromagnetic shielding to protect spacecraft from ionising radiation.
This paper has also aimed to demonstrate the importance of using the appropriate plasma physics dominant at the "human" rather than "celestial" scale size. We have tried to provide ball-park expressions for a realistic, credible prediction of effectiveness, without underplaying the complexity and the research still to be done. The analysis shown here is for a modestly powered mini-magnetosphere system that may function as a permanent means to increase the operating time in interplanetary space for crew and systems, much like the Earth's magnetosphere does. Such a shield could also be bolstered to deal with extreme storms, for which it may be the only means of providing an effective storm shelter.

Conclusions

Proposals for electromagnetic shields generally come with amazing predictions of effectiveness, and yet no such system has ever been tried even as a prototype in space. The reason is that the claims of effectiveness appear unbelievable, and quite rightly so. What has been presented in this paper is an indication of the true complexity involved in active shielding. Simple back-of-the-envelope calculations for particle deflection in a vacuum are vastly over-simplistic. Inevitably, the role of the plasma environment has either been overlooked completely or analysed on the wrong scale size. Much has yet to be determined quantifiably before a full engineering standard of precision is available. It may be that an active shield system is not practical until on-board power systems are comparable to those envisioned in science fiction, but the concept should not be dismissed on the basis of an incorrect analysis. An active deflector shield system would never be a replacement for passive shielding or biological advances. But it can offer options, particularly for EVAs, extending the longevity of hardware, preventing secondary activation of the ship's hull and systems, and the only theoretical means of deflecting even GeV particles.
The evidence that mini-magnetospheres do work on the bulk plasma in space comes from magnetic anomalies on the Moon [38,39], around asteroids [17], and from comets both natural [22] and artificial [47]. This, combined with laboratory experiments [24] and simulations, suggests that the high energy distribution can be sufficiently affected to enable optimism, because even the ability to predict the occurrence of storms is of little use if there is nowhere to hide.

Figure 1: A magnetically held plasma barrier creating an artificial mini-magnetosphere in the laboratory.

Acknowledgements

The authors would like to thank the Science and Technology Facilities Research Council's Center for Fundamental Physics.

References

International Space Exploration Coordination Group (ISECG), The Global Exploration Roadmap (2013). See http://www.globalspaceexploration.org/web/isecg/news/2013-08-20.

Spudis, P.D., "An argument for human exploration of the Moon and Mars", American Scientist, 80, 269-277 (1992).

Crawford, I.A., "The scientific case for renewed human activities on the Moon", Space Policy, 20, 91-97 (2004).

Ehrenfreund, P., et al., "Toward a global space exploration program: a stepping stone approach", Advances in Space Research, 49, 2-48 (2012).

Lockwood, M. and Hapgood, M., "The rough guide to the Moon and Mars", Astronomy and Geophysics, 48, 6.11-6.17 (2007).
Committee on the Evaluation of Radiation Shielding for Space Exploration, National Research Council, Managing Space Radiation Risk in the New Era of Space Exploration, The National Academies Press, ISBN 9780309113830 (2008). http://www.nap.edu/openbook.php?record_id=12045.

Cucinotta, F.A., et al., "Space Radiation Cancer Risks and Uncertainties for Mars Missions", Radiation Research, 156(5), 682-688 (2001).

Kim, M.H.Y., et al., "Prediction of frequency and exposure level of solar particle events", Health Physics, 97(1), 68-81 (2009).

Cucinotta, F.A., et al., "Space radiation risk limits and Earth-Moon-Mars environmental models", Space Weather, 8(12) (2010).

Cucinotta, F.A. and Durante, M., "Cancer risk from exposure to galactic cosmic rays: implications for space exploration by human beings", The Lancet Oncology, 7(5), 431-435 (2006).

Adams Jr., J.H., et al., "Revolutionary concepts of radiation shielding for human exploration of space", NASA TM 213688 (2005).

Alfvén, H., "Plasma in laboratory and space", Le Journal de Physique Colloques, 40(C7), C7-1 (1979).

Alfvén, H., "Paradigm transition in cosmic plasma physics", Physica Scripta, T2A, 10 (1982).

Ratcliffe, J.A., An Introduction to the Ionosphere and Magnetosphere, CUP Archive (1972).

Lin, R.P., et al., "Lunar Surface Magnetic Fields and Their Interaction with the Solar Wind: Results from Lunar Prospector", Science, 281, 1480 (1998).

Kivelson, M.G., et al., "Magnetic field signatures near Galileo's closest approach to Gaspra", Science, 261, 331 (1993).

Halekas, J.S., et al., "Density cavity observed over a strong lunar crustal magnetic anomaly in the solar wind: A mini-magnetosphere?", Planetary and Space Science, 56(7), 941-946 (2008).

Anderson, B.J., et al., "The global magnetic field of Mercury from MESSENGER orbital observations", Science, 333, 1859-1862 (2011).

Bamford, R., et al., "The interaction of a flowing plasma with a dipole magnetic field: measurements and modelling of a diamagnetic cavity relevant to spacecraft protection", Plasma Physics and Controlled Fusion, 50(12), 124025 (2008).

Benton Sr., M.G., et al., "Concept for Human Exploration of NEO Asteroids using MPCV, Deep Space Vehicle, Artificial Gravity Module, and Mini-Magnetosphere Radiation Shield", AIAA-2011-7138, 1-45, doi:10.2514/6.2012-5114 (2012).

Coates, A.J., et al., "Plasma parameters near the comet Halley bow shock", Journal of Geophysical Research: Space Physics, 95(A12), 20701-20716 (1990).

Bryant, D.A., Krimigis, S.M., and Haerendel, G., "Outline of the active magnetospheric particle tracer explorers (AMPTE) mission", IEEE Transactions on Geoscience and Remote Sensing, 3, 177-181 (1985).

Muggli, P., et al., "Collective refraction of a beam of electrons at a plasma-gas interface", Physical Review Special Topics - Accelerators and Beams, 4(9), 091301 (2001).

Bingham, R., et al., "Theory of wave activity occurring in the AMPTE artificial comet", Physics of Fluids B: Plasma Physics, 3, 1728 (1991).

Allen, J.E., "Some researches on double layers", Plasma Physics and Controlled Fusion, 27(12A), 1343 (1985).

Cane, H.V., Reames, D.V., and von Rosenvinge, T.T., "The role of interplanetary shocks in the longitude distribution of solar energetic particles", Journal of Geophysical Research: Space Physics, 93(A9), 9555-9567 (1988).

McClements, K.G., et al., "Acceleration of cosmic ray electrons by ion-excited waves at quasiperpendicular shocks", Monthly Notices of the Royal Astronomical Society, 291(1), 241-249 (1997).

Mewaldt, R.A., et al., "Solar-Particle Energy Spectra during the Large Events of October-November 2003 and January 2005", 29th International Cosmic Ray Conference, Pune, 101-104 (2005).

Townsend, L.W., "Implications of the space radiation environment for human exploration in deep space", Radiation Protection Dosimetry, 115(1-4), 44-50 (2005).

Wu, H., et al., "Risk of Acute Radiation Syndromes due to Solar Particle Events", NASA HHC (2008).

Borak, T.B., Heilbronn, L.H., Townsend, L.W., McBeth, R.A., and de Wet, W., "Quality factors for space radiation: A new approach", Life Sciences in Space Research, ISSN 2214-5524, http://dx.doi.org/10.1016/j.lssr.2014.02.005 (2014).

Parker, E.N., "Shielding space travelers", Scientific American, 294(3), 40-47 (2006).

Spillantini, P., et al., "Shielding from cosmic radiation for interplanetary missions: active and passive methods", Radiation Measurements, 42(1), 14-23 (2007).

French, F.W. and Levy, R.H., "Plasma radiation shield: Concept and applications to space vehicles", Journal of Spacecraft and Rockets, 5(5), 570-577 (1968).

Durante, M., "Space radiation protection: Destination Mars", Life Sciences in Space Research, ISSN 2214-5524, http://dx.doi.org/10.1016/j.lssr.2014.01.002 (2014).

Gargaté, L., et al., "Hybrid simulations of mini-magnetospheres in the laboratory", Plasma Physics and Controlled Fusion, 50(7), 074017 (2008).

Bamford, R.A., et al., "Mini-magnetospheres above the Lunar Surface and the Formation of Lunar Swirls", Physical Review Letters, 109, 081101 (2012).

Bamford, R.A., et al., "PIC code simulations of mini-magnetospheres above the lunar surface", paper in preparation (2014).

Tidman, D. and Krall, N.A., Shock Waves in Collisionless Plasmas, ed. S.C. Brown, John Wiley and Sons, New York (1971).

Woods, L.C., Principles of Magnetoplasmas.
Clarendon, Oxford397L. C. Woods, Principles of Magnetoplasmas (Clarendon, Ox- ford, 1987), p. 397. Interactions of plasmas with magnetic field boundaries. A D R Phelps, Planetary and Space Science. 21Phelps, A. D. R. "Interactions of plasmas with magnetic field boundaries." Planetary and Space Science 21.9 (1973): 1497- 1509. Propagation of a wide ion beam into a magnetic barrier. William Peter, Ron , Amiram Rostoker, Norman , 10.1063/1.862757Physics of Fluids. 22Peter, William and Ron, Amiram and Rostoker, Norman, "Propagation of a wide ion beam into a magnetic bar- rier ",Physics of Fluids (1958-1988), 22, 1471-1477 (1979), DOI:http://dx.doi.org/10.1063/1.862757 Nuclear Propulsion Techniques for Spacecraft: Utilization of Nuclear Reactors in Spacecraft for Space Propulsion and Space Power in a Microgravity Environment. U Guyen, LAP Lambert Academic PublishingGuyen, U., Nuclear Propulsion Techniques for Spacecraft: Uti- lization of Nuclear Reactors in Spacecraft for Space Propulsion and Space Power in a Microgravity Environment, LAP Lambert Academic Publishing (2011). Deep Space Propulsion. K F Long, SpringerLong. K. F., Deep Space Propulsion, Springer (2012) Fundamentals of Electric Propulsion. D Goebel, I Katz, Wiley-BlackwellGoebel, D., Katz, I., Fundamentals of Electric Propulsion, Wiley-Blackwell (2008). In situ magnetic field observations of the AMPTE artificial comet. H Lühr, Nature. 320Lühr, H., et al. "In situ magnetic field observations of the AMPTE artificial comet." Nature 320 (1986): 708-711. Modular space vehicle architecture for human exploration of Mars using artificial gravity and minimagnetosphere crew radiation shield. Snr Benton, M G , 10.2514/6.2011-7138pg 1-53Benton, Snr., M. G. et al., "Modular space vehicle architecture for human exploration of Mars using artificial gravity and mini- magnetosphere crew radiation shield", pg 1-53, AIAA-2012- 0633, doi:10.2514/6.2011-7138, 2012. 
Conceptual Common Modular Design for Crew and Cargo Landers and Deep Space Vehicles for Human Exploration of the Solar System. Benton Sr, G Mark, Benton Sr, Mark G. "Conceptual Common Modular Design for Crew and Cargo Landers and Deep Space Vehicles for Human Exploration of the Solar System." (2013). Solar wind interaction with comet Halley. A A Galeev, 5Advances in Space ResearchGaleev, A. A. "Solar wind interaction with comet Halley." Ad- vances in Space Research 5.12 (1985): 155-163. Solar wind effects on atmosphere evolution at Venus and Mars. J G Luhmann, S J Bauer, Ionospheres, and Solar Wind InteractionsVenus and Mars: AtmospheresLuhmann, J. G., and S. J. Bauer. "Solar wind effects on atmo- sphere evolution at Venus and Mars." Venus and Mars: Atmo- spheres, Ionospheres, and Solar Wind Interactions (1992): 417- 430.
[]
[ "Exploring rotational resonance in elastic metamaterial plates to realize doubly negative property", "Exploring rotational resonance in elastic metamaterial plates to realize doubly negative property" ]
[ "Wei Wang \nINSP-UMR CNRS 7588)\nSorbonne Université\nUPMC Université Paris 06\n4, place Jussieu75005ParisFrance\n", "Bernard Bonello \nINSP-UMR CNRS 7588)\nSorbonne Université\nUPMC Université Paris 06\n4, place Jussieu75005ParisFrance\n", "Bahram Djafari-Rouhani \nInstitut d'Electronique, de Micro-électronique et de Nanotechnologie (IEMN-UMR CNRS 8520)\nUniversité de Lille Sciences et Technologies\nCité Scientifique\n59652Villeneuve d'Ascq CedexFrance\n", "Yan Pennec \nInstitut d'Electronique, de Micro-électronique et de Nanotechnologie (IEMN-UMR CNRS 8520)\nUniversité de Lille Sciences et Technologies\nCité Scientifique\n59652Villeneuve d'Ascq CedexFrance\n", "Jinfeng Zhao \nSchool of Aerospace Engineering and Applied Mechanics\nTongji University\n100 Zhangwu Road200092ShanghaiChina\n" ]
[ "INSP-UMR CNRS 7588)\nSorbonne Université\nUPMC Université Paris 06\n4, place Jussieu75005ParisFrance", "INSP-UMR CNRS 7588)\nSorbonne Université\nUPMC Université Paris 06\n4, place Jussieu75005ParisFrance", "Institut d'Electronique, de Micro-électronique et de Nanotechnologie (IEMN-UMR CNRS 8520)\nUniversité de Lille Sciences et Technologies\nCité Scientifique\n59652Villeneuve d'Ascq CedexFrance", "Institut d'Electronique, de Micro-électronique et de Nanotechnologie (IEMN-UMR CNRS 8520)\nUniversité de Lille Sciences et Technologies\nCité Scientifique\n59652Villeneuve d'Ascq CedexFrance", "School of Aerospace Engineering and Applied Mechanics\nTongji University\n100 Zhangwu Road200092ShanghaiChina" ]
[]
We report the realization of simultaneously negative effective mass density and shear modulus in a single-phase asymmetric double-sided pillared metamaterial. The negative effective mass density is achieved by the combination of bending and compressional resonances of one pillar whereas the rotational resonance of the other pillar leads to the negative effective shear modulus. The coupling between these two pillars is investigated to describe the formation of the doubly negative property. Then, a pillared system featuring chirality is designed in order to make efficient the excitation of the rotational vibration, the occurrence of which is demonstrated by the transmission spectrum. Finally, numerical simulations of the zero-index refraction are carried out to prove the occurrence of the doubly negative property.
null
[ "https://arxiv.org/pdf/1809.06771v1.pdf" ]
91,183,757
1809.06771
2171ec809243c36c209cd5a7cb235eededae0414
Exploring rotational resonance in elastic metamaterial plates to realize doubly negative property

Wei Wang (INSP-UMR CNRS 7588, Sorbonne Université, UPMC Université Paris 06, 4 place Jussieu, 75005 Paris, France); Bernard Bonello (INSP-UMR CNRS 7588, Sorbonne Université, UPMC Université Paris 06, 4 place Jussieu, 75005 Paris, France); Bahram Djafari-Rouhani (Institut d'Electronique, de Micro-électronique et de Nanotechnologie, IEMN-UMR CNRS 8520, Université de Lille Sciences et Technologies, Cité Scientifique, 59652 Villeneuve d'Ascq Cedex, France); Yan Pennec (Institut d'Electronique, de Micro-électronique et de Nanotechnologie, IEMN-UMR CNRS 8520, Université de Lille Sciences et Technologies, Cité Scientifique, 59652 Villeneuve d'Ascq Cedex, France); Jinfeng Zhao (School of Aerospace Engineering and Applied Mechanics, Tongji University, 100 Zhangwu Road, 200092 Shanghai, China)

We report the realization of simultaneously negative effective mass density and shear modulus in a single-phase asymmetric double-sided pillared metamaterial. The negative effective mass density is achieved by the combination of bending and compressional resonances of one pillar whereas the rotational resonance of the other pillar leads to the negative effective shear modulus. The coupling between these two pillars is investigated to describe the formation of the doubly negative property. Then, a pillared system featuring chirality is designed in order to make efficient the excitation of the rotational vibration, the occurrence of which is demonstrated by the transmission spectrum. Finally, numerical simulations of the zero-index refraction are carried out to prove the occurrence of the doubly negative property.
The advent of locally resonant metamaterials almost two decades ago, 1 and the great deal of research that ensued, [2][3][4][5][6][7] have significantly contributed to the possibilities we have now for controlling the propagation and the dispersion of acoustic/elastic waves. Some effective elastic properties of these artificial structures may exhibit abnormal behaviors in narrow frequency bands where they may be infinite positive, null or even negative. 8,9 In the frequency intervals where only one effective parameter is negative, either the mass density or the Young's modulus (shear modulus), the propagation of waves is forbidden. In contrast, if the structure is engineered to support frequency intervals where the doubly negative property occurs, i.e. simultaneously negative effective mass density and modulus, phenomena not present in nature may arise, as for instance the negative refraction or the cloaking effect. In the past decade, a couple of configurations allowing for the double negativity have been reported in theoretical [10][11][12][13][14][15][16][17][18][19] as well as in experimental 11,17 works. However, most of these studies were focusing on bulk waves whereas the control over other types of elastic waves, such as the Lamb waves, is a prerequisite to the development of planar double-negative elastic metamaterials. 20 Among the most suitable candidates in that respect are probably the pillared metamaterials [21][22][23][24][25][26] which could be described as phononic stubbed plates constructed by depositing cylindrical dots on a thin homogeneous membrane. 27 Their peculiar elastic properties ensue from the vibration of the pillars at resonance that couples with the wave propagating in the plate. Although to different extents, three kinds of resonances may be involved in the dynamic properties of these systems, namely the bending, the compressional and the rotational modes.
26 In contrast to the bending and the compressional resonances, the combination of which has been reported to turn mass density negative in single-sided pillared metamaterials, 31 less attention has been paid to the rotational resonance to date. Interestingly, it has been theoretically demonstrated that rotational resonance of a core mass in a mass-spring system can lead to negative effective stiffness 12,16,28 and designs involving rotational inertia have been proposed to demonstrate both numerically and experimentally the occurrence of the double negativity, 15,17,29 which in turn widens the scope of applications. Remembering that the double negativity can be achieved either by combining two different substructures, each supporting a different resonant mode, 3,11,13 or by constructing a single structure where two resonances occur at a single frequency, 12,[15][16][17]26 we propose in this letter a new path to achieve the doubly negative property that consists in combining the bending, compressional and rotational modes into one pillared system. We first describe and analyze the dynamic behavior of a single-phase asymmetric double-sided pillared metamaterial (DPM) whose unit cell is shown in Fig. 1(a).

Fig. 1(h) shows that µeff is negative from 5.29MHz to 5.36MHz, in very good agreement with the frequency interval in between points F (5.29MHz) and E (5.35MHz). Therefore, a locally resonant band gap should be expected in this region. However, because of the dispersion of the compressional resonance labelled as G in Fig. 1(f), no complete band gap opens up in this interval.

To validate this approach, we have computed the band structure of DPM. As expected, an isolated negative-slope branch, highlighted in red in Fig. 1(d), appears in between 5.28MHz and 5.35MHz. Additionally, we show in Fig. 2(b) that the deformation of pillar B is very small at the compressional resonance or even null at the bending resonance.
This suggests that pillar B acts as an inert mass attached to the plate that simply shifts the resonant frequencies of pillar A. Accordingly, the frequency interval of NMD generated by resonances C' and D' of pillar A in DPM also shifts and appears now in between 5.21MHz and 5.48MHz instead of 5.19MHz to 5.47MHz in SPMA, but the overall mechanism leading to NMD is the same for both structures. For the sake of coherency in the notations, these points are labelled as E' and F' in Figs. 1(d) and 2(a). As mentioned before, the effective shear modulus turns negative in the frequency interval between these two points. Therefore, both NMD and NSM are achieved in this interval, which perfectly explains the occurrence of the negative-slope propagative branch in Fig. 1(d). More generally, the preceding analysis demonstrates that the double negativity can be obtained if the bending, compressional and rotational resonances of the pillared system are well designed to occur within the same frequency interval, but it says nothing on how to excite these resonances.

The chiral pillar design is shown in Fig. 3(a). The corresponding band structure along ΓX direction is displayed as red lines in Fig. 3(b). The double-negative branch goes from 5.37MHz to 5.41MHz. To illustrate the efficiency of chirality in exciting the rotational vibration, the transmission spectrum of an antisymmetric Lamb wave impinging at normal incidence on a structure made of nine unit cells along the x-axis and infinite along the y-direction is displayed as red lines in Fig. 3(c). For comparison, the black solid lines in Figs. 3(b) and 3(c) correspond to DPM shown in Fig. 1(a). In the absence of chirality, the transmission coefficient at a frequency in the double-negative branch is null since the rotational resonance is not excited with such a design.
In contrast, the chiral pillars allow for a transmission coefficient of about 0.25 thanks to the combination of the bending and compressional vibrations of pillar A and the rotational vibration of pillar B, which is the key point for the occurrence of the doubly negative property.

The resulting out-of-plane displacement field is displayed in the middle panel of Fig. 4. It can be seen that the wave front remains plane upon transmission through the sample, except around the void where scattering effects are observable. As a consequence of the infinite effective shear modulus and finite effective mass density in the metamaterial, the phase velocity gets nearly infinite and there is no phase change of the antisymmetric Lamb wave propagating in the metamaterial, allowing for a cloaking effect in this system. In contrast, when the working frequency is tuned to 6MHz, i.e. a frequency where the effective shear modulus is positive, the incident antisymmetric Lamb wave undergoes strong scattering on the void, giving rise to the distorted wave front observable in the bottom panel of Fig. 4. This simple analysis of the transmission through the pillared system unambiguously shows that the shielding of substructures at specific frequencies may be achieved with this geometry.

To conclude, we have realized the doubly negative property in an asymmetric double-sided pillared metamaterial. The mechanism responsible for the negative effective mass density is described as being the combination of the bending and compressional resonances of one pillar, whereas the negative effective shear modulus results from the rotational resonance of the other. This design contributes to broaden the field of applications of the pillared metamaterials that includes the negative refraction and over-diffraction-limit imaging of Lamb waves.
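As a back-of-the-envelope illustration of the zero-index transmission discussed above: the phase a wave accumulates across a slab is φ = ωL/v_p, so a diverging phase velocity means the slab adds essentially no phase and the wave front exits undistorted. The numbers below are illustrative, not taken from the paper's FEM simulations.

```python
import numpy as np

def slab_phase(freq_hz, length_m, v_phase):
    """Phase accumulated by a plane wave crossing a slab: phi = omega * L / v_p."""
    return 2 * np.pi * freq_hz * length_m / v_phase

# Nine unit cells of lattice constant a = 200 um, probed at 5.4 MHz.
L = 9 * 200e-6
phi_ordinary = slab_phase(5.4e6, L, 3.0e3)    # ordinary plate-wave speed, a few km/s
phi_zero_index = slab_phase(5.4e6, L, 3.0e6)  # near-divergent phase velocity in the
                                              # double-negative (zero-index) branch
print(phi_ordinary, phi_zero_index)
```

The three-orders-of-magnitude drop in accumulated phase is the zero-index mechanism behind the undistorted wave front in the middle panel of Fig. 4.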
We show that the negative effective mass density (NMD) results from the combination of the bending and compressional resonances of one pillar whereas the rotational resonance of the other pillar leads to the negative effective shear modulus (NSM). Lastly, numerical simulations of the zero-index refraction are carried out to put into evidence the doubly negative property. Two distinct pillars (labelled as A and B respectively) are concentrically assembled over a thin matrix plate. Their dimensions were laid down for the resonances to occur in the MHz range: the diameter and height of pillar A (resp. pillar B) were dA = 80μm (dB = 110μm) and hA = 200μm (hB = 130μm); the lattice constant and the thickness of the plate were a = 200μm and e = 100μm respectively. Both the matrix plate and pillars were made of steel whose Young's modulus, Poisson's ratio, and mass density are E = 200GPa, v = 0.3 and ρ = 7850 kg.m-3 respectively. Because of the asymmetry with respect to the mid-plane of the matrix plate, it can be anticipated that the symmetric and antisymmetric Lamb waves cannot be decoupled. Before investigating the double-sided pillared system, we have studied separately the two single-sided pillared metamaterials depicted in Figs. 1(b) and 1(c). Each of them was built with pillar A or pillar B erected in the center of a square unit cell of side a, on a plate having a thickness e; we refer hereafter to these systems as SPMA and SPMB. Their band structures computed using a finite element method are displayed in Figs. 1(e) and 1(f) respectively. The band structure of SPMA comprises a low frequency band gap that opens up in between 5.19MHz and 5.47MHz. The flatness of the dispersion curves around the lower limit of this band gap suggests that it results from a local resonance of the pillar. This is further evidenced by the eigenmodes at point M of the first irreducible Brillouin zone (BZ), labelled as C and D in Figs. 1(e) and 2(a). The result displayed in Fig.
2(b) unambiguously shows that these eigenmodes are the second-order bending resonance and the first-order compressional resonance of the pillar. The next step was to evaluate the 3×3 dynamic effective mass density matrix [ρeff]. The method consists of applying an external displacement field U on the four lateral boundaries of the unit cell while leaving the other two faces free. The induced force F is then derived by evaluating the stress average over the four boundaries. 10,15,30 In the harmonic regime at frequency ω/2π, F and U are related by F = −ω²V[ρeff]U, where V denotes the volume of the unit cell. Both the normalized ρ11 (equal to ρ22 because of the square symmetry of the unit cell) and ρ33 against the excitation frequency are shown in Fig. 1(g). Both components turn negative from 5.32MHz to 5.49MHz, in good agreement with the stop band shown in Fig. 1(e) that goes from 5.19MHz to 5.47MHz. The small discrepancy of about 2.5% at the lower edge of the band gap may be readily ascribed to the phase change across the unit cell, not accounted for in the calculation since this numerical method is only valid in the long wavelength limit. It can be stated from this analysis that the low frequency band gap relates to NMD caused by the combination of the bending and compressional resonances of pillar A. Regarding the band structure of SPMB displayed in Fig. 1(f), no complete band gap arises in the investigated frequency range from 0MHz to 7MHz. The eigenmodes at points Γ and M, labelled as E and F in Figs. 1(f) and 2(a), show that pillar B undergoes an alternative rotational motion around its center axis. One may suspect that this rotational motion can couple with the local shear deformation of the matrix plate allowing in turn the effective shear modulus µeff to turn negative. To verify this assumption, we have calculated µeff using the numerical method described in Refs. 10, 29, and 31. In the calculation, we have considered a simple shear strain field applied along two parallel lateral boundaries.
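To build intuition for why a local resonance drives the extracted mass density negative, consider the textbook mass-in-mass resonator (a sketch of the generic mechanism, not the paper's FEM extraction; the masses and the resonance frequency below are made-up illustrative values, with the resonance placed near the paper's ~5.3 MHz band):

```python
import numpy as np

def effective_mass(omega, m_shell=1.0, m_res=0.5, omega_res=2 * np.pi * 5.3e6):
    """Dynamic effective mass of a toy mass-in-mass resonator.

    An internal mass m_res attached by a spring (resonance omega_res)
    to an outer mass m_shell yields the dispersive effective mass
        m_eff(w) = m_shell + m_res * w_r**2 / (w_r**2 - w**2),
    which diverges at resonance and is negative just above it.
    """
    return m_shell + m_res * omega_res**2 / (omega_res**2 - omega**2)

# Probe well below, just above, and well above the resonance.
f = np.array([4.0e6, 5.35e6, 7.0e6])
m = effective_mass(2 * np.pi * f)
print(m)  # positive, negative, positive
```

The sign flip just above the resonance is the generic origin of the NMD band; in the paper this role is played by the bending and compressional resonances of pillar A.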
This sets the local displacement field and excites the rotational vibration of pillar B. The behavior of µeff against the excitation frequency is then deduced from the equivalence between the energy of the induced force vector on the lateral boundaries of the unit cell and the strain energy of the effective medium. The relationship between them can be expressed in terms of the induced forces on the lateral boundaries of the unit cell and γ, the applied simple shear strain. The result displayed in Fig. 2(b) is the displacement field at some characteristic points, labelled from C' to G' in Figs. 1(d) and 2(a). Comparing the band structures of these three pillared metamaterials allows understanding the formation of this branch. They are drawn in Fig. 2(a) where the black, red and blue dotted lines represent the dispersion along ΓM direction of DPM, SPMA and SPMB respectively. At point M of the BZ, both the bending (point C) and the compressional modes (point D) slightly shift to points C' and D' upon attachment of pillar B to the plate. For both these resonances, the displacement fields of DPM are displayed in Fig. 2(b). The situation is totally different when comparing the band structures of DPM and SPMB. In this case appending pillar A to SPMB does not reduce to a simple shift of the resonant frequency of pillar B: at point M, the compressional resonance labelled as G' in Fig. 2(a) affects both pillars in DPM (see Fig. 2(b), panel G') and therefore the shift from G (compressional resonance of SPMB) to G' cannot be ascribed to an inert mass attached to the plate like before. At the same time, the eigenfrequencies at points labelled as E and F in Figs. 1(f) and 2(a) remain unchanged because there is no coupling between the rotational vibration of pillar B and the bending and compressional vibrations of pillar A.

Figure 1: (a)-(c) Representative square lattice unit cells of DPM, SPMA and SPMB, respectively, and (d)-(f) their corresponding band structures.
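The energy equivalence used above to extract µeff can be written out explicitly; the following is our reconstruction under standard homogenization assumptions (notation ours, not verbatim from the paper):

```latex
\frac{1}{2}\oint_{\partial V}\mathbf{F}\cdot\mathbf{u}\,\mathrm{d}S
\;=\;\frac{1}{2}\,\mu_{\mathrm{eff}}\,\gamma^{2}\,V
\qquad\Longrightarrow\qquad
\mu_{\mathrm{eff}}
\;=\;\frac{1}{\gamma^{2}V}\oint_{\partial V}\mathbf{F}\cdot\mathbf{u}\,\mathrm{d}S ,
```

where F is the induced traction on the lateral boundaries of the unit cell, u the imposed simple-shear displacement, γ the applied shear strain and V the unit-cell volume. µeff turns negative when the rotational resonance of pillar B drives the boundary forces out of phase with the applied strain.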
(g) Normalized effective mass density components ρ33 (black line) and ρ11 (red line) of SPMA. (h) Normalized effective shear modulus of SPMB.

Figure 2: (a) Comparison of the band structures of DPM (black dotted lines), SPMA (red dotted lines) and SPMB (blue dotted lines) along ΓM direction. (b) Normalized total displacement and deformation of the unit cell corresponding to the points indicated in panel (a) and Figs. 1(d)-1(f).

One might take advantage of the in-plane polarization of an SH Lamb wave to trigger the rotational vibration of the pillar and actually we have verified that an incident wave with the frequency inside the double-negative branch can propagate across the metamaterial (not shown here). However, the other two types of Lamb waves, i.e. the symmetric and the antisymmetric Lamb modes, are polarized in the sagittal plane and therefore they cannot excite the rotational resonance because of the mirror symmetry in the unit cell. To overcome this difficulty, chirality may be introduced in the pillar so that the waves propagating in the plate can create an asymmetric deformation. Both the cross section and the side view of pillar B fulfilling this requirement are shown in Fig. 3(a). Eight flanks equally spaced in azimuth with a length l = 60μm and a width w = 10μm are inserted along a solid cylinder with a diameter d = 100μm. Pillar B is formed by stretching the cross section along the negative z direction with height h = 105μm and a twist angle θ = 45º in the anti-clockwise direction as shown in the bottom panel of Fig. 3(a).

Figure 3: (a) Representative profile of the chiral pillar B. (b) Band structure along ΓX direction of DPM involving the chiral pillar B (red lines) or without chirality (black lines). (c) Transmission spectrum of an antisymmetric Lamb wave impinging at normal incidence on the phononic crystal with (red line) and without (black line) the chiral pillar.
One of the most amazing properties that ensues from the double negativity is the cloaking effect. At a frequency in the negative-slope branch, this effect results from the fact that both phase velocity and wavelength become infinite and in turn we find the case of a zero refractive index material. 10,20,32-34 We have investigated this effect at the frequency of 5.4MHz, where the effective shear modulus tends toward infinity. The FEM model is shown in the top panel of Fig. 4. It consists of 132 unit cells and features a 7a×3a rectangular void in its center. A zero-order antisymmetric Lamb wave is excited at a distance of 1mm from the left edge of the metamaterial and perfectly matched layers are implemented on each side of the sample to eliminate any reflection from the boundaries. Periodic boundary conditions are applied on the other two edges. The out-of-plane component of the displacement field at 5.4MHz is shown in Fig. 4.

Figure 4: FEM model implemented to verify the cloaking effect (top panel); out-of-plane component of the displacement field upon antisymmetric excitation at frequencies 5.4MHz (middle panel) and 6MHz (bottom panel).

No complete band gap opens up in this interval. It should be pointed out that pillar B was specifically designed in such a way that its rotational resonance falls inside the frequency range of NMD achieved in SPMA. In this case, the double negativity can be expected when combining both pillar A and pillar B to form DPM.

1. Z. Liu, X. Zhang, Y. Mao, Y. Y. Zhu, Z. Yang, C. T. Chan, and P. Sheng, Science 289, 1734 (2000).
2. J. Li and C. T. Chan, Phys. Rev. E 70, 055602(R) (2004).
3. Y. Ding, Z. Liu, C. Qiu, and J. Shi, Phys. Rev. Lett. 99, 093904 (2007).
4. S. H. Lee, C. M. Park, Y. M. Seo, Z. G. Wang, and C. K. Kim, Phys. Rev. Lett. 104, 054301 (2010).
5. L. Fok and X. Zhang, Phys. Rev. B 83, 214304 (2011).
6. C. Ding, L. Hao, and X. Zhao, J. Appl. Phys. 108, 074911 (2010).
7. Z. Liang, T. Feng, S. Lok, F. Liu, K. B. Ng, C. H. Chan, J. Wang, S. Han, S. Lee, and J. Li, Sci. Rep. 3, 1614 (2013).
8. Y. Wu, Y. Lai, and Z. Zhang, Phys. Rev. B 76, 205313 (2007).
9. X. Zhou and G. Hu, Phys. Rev. B 79, 195109 (2009).
10. H.-W. Dong, S.-D. Zhao, Y.-S. Wang, and C. Zhang, J. Mech. Phys. Solids 105, 54 (2017).
11. J. H. Oh, Y. E. Kwon, H. J. Lee, and Y. Y. Kim, Sci. Rep. 6, 23630 (2016).
12. X. Wang, Int. J. Solids Struct. 51, 1534 (2014).
13. H. H. Huang and C. T. Sun, J. Acoust. Soc. Am. 132, 2887 (2012).
14. Y. Lai, Y. Wu, P. Sheng, and Z.-Q. Zhang, Nat. Mater. 10, 620 (2011).
15. X. N. Liu, G. K. Hu, G. L. Huang, and C. T. Sun, Appl. Phys. Lett. 98, 251907 (2011).
16. Z. Li and X. Wang, Int. J. Solids Struct. 78, 174 (2016).
17. R. Zhu, X. N. Liu, G. K. Hu, C. T. Sun, and G. L. Huang, Nat. Commun. 5, 5510 (2014).
18. V. E. Gusev and O. B. Wright, New J. Phys. 16, 123053 (2014).
19. Y. Chen, G. Hu, and G. Huang, J. Mech. Phys. Solids 105, 179 (2017).
20. H. Zhu and F. Semperlotti, Phys. Rev. Appl. 8, 064031 (2017).
21. M. B. Assouar and M. Oudich, Appl. Phys. Lett. 100, 123506 (2012).
22. O. R. Bilal and M. I. Hussein, Appl. Phys. Lett. 103, 111901 (2013).
23. O. R. Bilal, A. Foehr, and C. Daraio, Extrem. Mech. Lett. 15, 103 (2017).
24. M. Oudich, Y. Li, B. M. Assouar, and Z. Hou, New J. Phys. 12, 083049 (2010).
25. M. Oudich, M. Senesi, M. B. Assouar, M. Ruzenne, J. H. Sun, B. Vincent, Z. Hou, and T. T. Wu, Phys. Rev. B 84, 165136 (2011).
26. Y. Jin, B. Bonello, R. P. Moiseyenko, Y. Pennec, O. Boyko, and B. Djafari-Rouhani, Phys. Rev. B 96, 104311 (2017).
27. Y. Pennec, B. Djafari-Rouhani, H. Larabi, J. O. Vasseur, and A. C. Hladky-Hennion, Phys. Rev. B 78, 104105 (2008).
28. J. H. Oh and B. Assouar, Sci. Rep. 6, 33410 (2016).
29. Y.-F. Wang, Y.-S. Wang, and C. Zhang, J. Acoust. Soc. Am. 139, 3311 (2016).
30. M. Oudich, B. Djafari-Rouhani, Y. Pennec, M. B. Assouar, and B. Bonello, J. Appl. Phys. 116, 184504 (2014).
31. X. N. Liu, G. K. Hu, C. T. Sun, and G. L. Huang, J. Sound Vib. 330, 2536 (2011).
32. Q. Wei, Y. Cheng, and X. J. Liu, Appl. Phys. Lett. 102, 174104 (2013).
33. F. Liu, X. Huang, and C. T. Chan, Appl. Phys. Lett. 100, 071911 (2012).
34. Y. Li, B. Liang, Z. M. Gu, X. Y. Zou, and J. C. Cheng, Appl. Phys. Lett. 103, 053505 (2013).
[]
[ "Chiral Polaritonics: Analytic Solutions, Intuition and its Use", "Chiral Polaritonics: Analytic Solutions, Intuition and its Use" ]
[ "Christian Schäfer [email protected] ", "Denis G Baranov [email protected] ", "\nDepartment\n‡Center for Photonics and 2D Materials\nChalmers University of Technology\nSweden\n", "\nMoscow Institute of Physics and Technology\n141700DolgoprudnyRussia\n" ]
[ "Department\n‡Center for Photonics and 2D Materials\nChalmers University of Technology\nSweden", "Moscow Institute of Physics and Technology\n141700DolgoprudnyRussia" ]
[]
Preferential selection of a given enantiomer over its chiral counterpart becomes increasingly relevant in the advent of the next era of medical drug design. In parallel, cavity quantum electrodynamics has grown into a solid framework to control energy transfer and chemical reactivity. In this work, we derive an analytical solution to a system of many chiral emitters interacting with a chiral cavity, in analogy to the widely used Tavis-Cummings and Hopfield models of quantum optics. We are able to estimate the discriminating strength of chiral polaritonics, discuss possible future development directions and exciting applications such as elucidating homochirality, and deliver much needed intuition to foster the freshly flourishing field of chiral polaritonics.

The Supplemental Information provides an extended derivation of the chiral Hopfield and Tavis-Cummings models, and an alternative derivation of the chiral Hopfield model in which the self-polarization terms are partially cancelled, which results in an instability of the chiral system.
10.1021/acs.jpclett.3c00286
[ "https://export.arxiv.org/pdf/2209.07177v2.pdf" ]
252,280,517
2209.07177
5117a1ba8a2ce9c58c5618f411c4a268894abee8
Chiral Polaritonics: Analytic Solutions, Intuition and its Use

Christian Schäfer ([email protected]) and Denis G. Baranov ([email protected])
Chalmers University of Technology, Sweden; ‡Center for Photonics and 2D Materials, Moscow Institute of Physics and Technology, 141700 Dolgoprudny, Russia

Acknowledgement: We thank Göran Johansson, Maxim Gorkunov, and Timur Shegai for stimulating discussions. C.S. acknowledges support from the Swedish Research Council (VR) through Grant No. 2016-06059. D.G.B. acknowledges support from the Russian Science Foundation (21-72-00051) and the BASIS foundation (Grant No. 22-1-3-2-1).

Keywords: Chiral Polaritonics, Strong Coupling, Polaritons, Optical Cavities, Resonances, Chirality, Handedness
Coupling between two harmonic oscillators, either of classical or quantum origin, leads to a hybridization and the creation of new quasiparticle states if the coupling strength exceeds all the decay and decoherence rates in the combined system. A common representative for such a system is the interaction between an energetically isolated electromagnetic mode and a set of quantum emitters such as molecules. The associated quasi-particle states are referred to as polaritons and possess mixed light and matter characteristics, which opens a toolbox with enormous versatility. [1-15] Polaritons of various flavours have been used or proposed as a path to enhanced charge and excitation transfer, [16-22] to modify chemical reactivity, [23-38] and to alter a system's state and response to external stimuli, [39-47] to name only a few. So far, most experimental and theoretical efforts in this field have focused on coupling optical cavities with either linearly- or circularly-polarized electronic transitions of various quantum emitters. This is perfectly justified by the fact that in the visible and infrared ranges the interaction of light with electronic and vibrational transitions is dominated by the electric dipole term of the Hamiltonian. Nevertheless, there are examples of media that exhibit resonances with a non-negligible magnetic transition dipole moment. One such practically relevant example is presented by the class of chiral media. [48-50] A geometrical shape in three-dimensional space is called chiral if it cannot be aligned with its mirror image by a series of rotations and translations. [51] Chirality occurs at various scales, ranging from the shapes of galaxies down to drug and bio-molecules. Especially the latter receives a steady stream of attention in the ongoing quest for new, safer and affordable ways to design chemical complexes and drugs.
[52,53] It is then intuitively pivotal for its success to develop a solid understanding of the relevant processes and a wide range of readily usable techniques that allow a separation or discrimination of the two enantiomers of a chiral structure. While recent years showed major progress in this field, [54,55] sometimes referred to as chiral recognition, widely used chemical strategies such as (re)crystallization [56] can be cumbersome and often require highly specialized approaches for each individual compound. The interaction of chiral matter with circularly polarized electromagnetic fields leads to the effect of circular dichroism, which underlies numerous methods for distinguishing molecular enantiomers. [57] However, those interactions are usually weak and can be well understood without the need to consider a correlated motion between light and matter. If and how strong light-matter interaction can aid those challenging tasks has remained largely unclear thus far. While chiral polaritonics is still in its infancy, recent theoretical work is beginning to explore this question. Mauro et al. investigated the optical features of a single-handedness cavity loaded with a Pasteur medium using classical electromagnetism. [58] Riso et al. studied changes in the correlated ground state of single (or few) realistic molecules minimally coupled to an amplified chiral mode. [59] In this letter, we provide an analytic solution and much needed intuition that will be of good use for the future development of chiral polaritonics. Starting from non-relativistic quantum electrodynamics (QED), we derive the quantized electromagnetic fields supported by a single-handedness chiral cavity [60] and couple them to a large set of chiral emitters as illustrated in Fig. 1. The resulting Hamiltonian can serve as a starting point for any kind of ab initio QED.
[13,61-65] Here, we focus on a simplified model that allows for an analytic solution which illustrates the physical playground, potential, and forthcoming challenges of chiral polaritonics. In contrast to previous approaches, [54,55,66-68] the strong coupling to a chiral cavity allows one to reach a sizeable interaction strength even in the absence of any pumping field. The model derived here paves the way to invigorate an entirely new research domain.

We start our derivation from the non-relativistic and spin-less limit of QED in Coulomb gauge. Using the Power-Zienau-Woolley transformation and expanding the multi-polar light-matter interaction to second order introduces magnetic dipolar couplings and electric quadrupole terms [69-72] according to $\hat{H} = \hat{H}_M + \hat{H}_L + \hat{H}_{LM}$, where we differentiate between the $N_M$ electronic ($q_i = -e$) and nuclear ($q_i = eZ_i$) charges plus the longitudinal Coulomb interaction among them constituting the molecule, and inter-molecular Coulomb interactions,

$$\hat{H}_M = \sum_{n=1}^{N} \sum_{i=1}^{N_M} \frac{1}{2m_i}\hat{p}_i^2, \qquad \hat{H}_L = \frac{1}{2}\int d^3r \left[ \frac{\hat{D}_\perp^2(r)}{\varepsilon_0} + \varepsilon_0 c^2 \hat{B}^2(r) \right].$$

The interaction $\hat{H}_{LM}$ up to magnetic order takes the form

$$\hat{H}_{LM} = -\frac{1}{\varepsilon_0}\sum_n \hat{\mu}_n \cdot \hat{D}_\perp(r_n) - \sum_n \hat{m}_n \cdot \hat{B}(r_n) - \frac{1}{\varepsilon_0}\sum_n \sum_{a,b\in\{x,y,z\}} \hat{Q}_{ab,n}\, \nabla_{a,n}\hat{D}_{\perp,b}(r_n) + \sum_{n,i} \frac{1}{8 m_{n,i}}\big( q_{n,i}\,\hat{r}_{n,i} \times \hat{B}(r_n) \big)^2 + \frac{1}{2\varepsilon_0}\int d^3r\, \hat{P}_\perp^2, \qquad (1)$$

with the canonical particle momentum $\hat{p}_i = m_i \dot{\hat{r}}_i - \frac{q_i}{2}\, \hat{r}_i \times \hat{B}(r)$, the total transverse polarization $\hat{P}_\perp = \sum_n \hat{P}_{\perp,n}$, the electric dipole moment $\hat{\mu} = \sum_i q_i \hat{r}_i$ and quadrupole $\hat{Q}_{ab}$, as well as the magnetic dipole $\hat{m} = \frac{1}{2}\sum_i \frac{q_i}{m_i}\, \hat{r}_i \times \hat{p}_i$. All positions are defined relative to each individual molecular center of mass. Importantly, the multi-polar form introduces the displacement field $\hat{D}_\perp = \varepsilon_0 \hat{E} + \hat{P}$ as the canonical momentum to the vector potential. The last term in Eq. (1) can be written in the tensorial form $\sum_n \hat{\chi}^m_{n,ij}\, \hat{B}_i(r_n)\hat{B}_j(r_n)$ with $\hat{\chi}^m_{n,ij} = \sum_a \frac{q_a^2}{8 m_a}\big( \hat{r}_{n,a}\cdot\hat{r}_{n,a}\, \delta_{ij} - \hat{r}^{(i)}_{n,a}\hat{r}^{(j)}_{n,a} \big)$. Let us briefly comment on the relevance of the appearing self-interaction contributions.
The magnetic $\sum_n \hat{\chi}^m_{n,ij}\hat{B}_i(r_n)\hat{B}_j(r_n)$ and electric $\frac{1}{2\varepsilon_0}\int d^3r\, \hat{P}_\perp^2$ self-polarization terms ensure gauge invariance and guarantee the stability of the correlated system. [70,73] A common simplification is to assume the molecules to be well separated. In this dilute limit, and when all photonic modes are considered, the inter-molecular Coulomb interactions cancel perturbatively with the inter-molecular contributions arising from $\frac{1}{2\varepsilon_0}\int d^3r \sum_{n,n'} \hat{P}_{\perp,n}\hat{P}_{\perp,n'}$, such that only retarded intermolecular interactions via the photonic fields remain. [70,74] However, when the number of photonic modes is truncated, as commonly done for polaritonic systems, this intuitive result no longer holds. If the intermolecular contributions are neglected nevertheless, one arrives at the widely used Dicke model that falsely predicts a transition into a superradiant phase, while the associated model in the Coulomb gauge does not exhibit such a transition. [75] Under which conditions a phase transition for more realistic systems could appear, and what characterizes such a transition, is still a matter of active debate. [42,76-80] We provide a derivation in analogy to the common Dicke model in the SI and focus in the following on the development of a non-perturbative chiral model.

Figure 1: (a) Illustration of the system under study. N identical chiral (quantum) emitters interact with the electromagnetic field of a chiral standing wave. A chiral standing wave is formed between two handedness-preserving metasurface mirrors. (b) A model of a single chiral emitter as a generic N-level system.

Chiral emitters, which represent molecules, biological structures or plasmonic meta-atoms, are modeled as simplified multi-level systems whose transitions are quantified by collinear electric and magnetic dipole moments. The electromagnetic fields follow the generic mode expansion

$$\hat{D}_\perp(r) = i\sum_{k,\lambda}\sqrt{\frac{\hbar c k \varepsilon_0}{2V}}\left[ \epsilon_{k\lambda}\, e^{ik\cdot r}\hat{a}_{k\lambda} - \epsilon^*_{k\lambda}\, e^{-ik\cdot r}\hat{a}^\dagger_{k\lambda} \right], \qquad \hat{B}(r) = \frac{i}{c}\sum_{k,\lambda}\sqrt{\frac{\hbar c k}{2\varepsilon_0 V}}\left[ \beta_{k\lambda}\, e^{ik\cdot r}\hat{a}_{k\lambda} - \beta^*_{k\lambda}\, e^{-ik\cdot r}\hat{a}^\dagger_{k\lambda} \right],$$
where V is the cavity mode volume, $\epsilon_{k,\lambda}$ and $\beta_{k,\lambda}$ are the electric and magnetic field unit polarization vectors, $k$ labels wave vectors, and $\lambda$ labels polarization states. Although Maxwell's equations in free space admit solutions in the form of chiral photons, in this case both handednesses coexist at the same time. Contrary to one's intuition, illuminating an ordinary Fabry-Pérot cavity with circularly polarized light does not address this problem. [13,81] However, using cleverly designed asymmetric "single-handedness" cavities, [60,82] it is possible to engineer pure chiral electromagnetic fields with only one handedness. In a right-handed (left-handed) monochromatic wave the magnetic field is π/2 behind (ahead of) the electric field everywhere in space, $Z H(r;\omega) = -i\lambda E(r;\omega)$, where $Z = \sqrt{\mu_0/\varepsilon_0}$ is the impedance of free space and λ is the eigenvalue of the helicity operator, [83] which takes values +1 and −1 for LH and RH fields, respectively. Only a subset of modes will adhere to the conditions imposed by the boundary conditions of the chiral cavity; it should be noted that the electromagnetic fields are not zero at the mirror surfaces. A planar optical cavity, such as the one described in Ref. [60], supports a continuous spectrum of resonant states that can be labeled by their in-plane momenta $k_\parallel$. Cavity fields maintain their single-handedness quality for a substantial range of in-plane wave vectors (incident angles). [60] For simplicity, we will illustrate only the coupling of a single standing wave with $k_\parallel = 0$ to the chiral emitters, and refer the interested reader to the SI for a generalized discussion. The chiral standing wave is the superposition of two counter-propagating circularly polarized plane waves of the same handedness.
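This superposition can be checked numerically. The following minimal NumPy sketch (unit amplitude, arbitrary k; all names are illustrative, not from the paper) verifies that adding two counter-propagating circularly polarized plane waves of the same handedness yields the real standing-wave profile (cos kz, −λ sin kz, 0) used in the text.

```python
import numpy as np

def cp_wave(z, k, lam, direction):
    """Circularly polarized plane wave of handedness lam = ±1 travelling
    along +z (direction=+1) or -z (direction=-1), with unit polarization
    (1, ±i*lam, 0)/sqrt(2)."""
    eps = np.array([1.0, 1j * lam * direction, 0.0]) / np.sqrt(2.0)
    # Broadcast the scalar phase factor onto the 3-component polarization.
    return np.exp(1j * direction * k * np.asarray(z, dtype=float))[..., None] * eps

def standing_profile(z, k, lam):
    """Symmetric superposition (up + down)/sqrt(2) of two same-handedness waves."""
    return (cp_wave(z, k, lam, +1) + cp_wave(z, k, lam, -1)) / np.sqrt(2.0)
```

For either handedness the result is purely real and matches (cos kz, −λ sin kz, 0) at every z, i.e., the imaginary parts of the two travelling waves cancel exactly.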
Assuming the axis of the cavity to be pointed along the z direction and considering a vertical standing wave with $k = \pm k e_z$, the displacement field $\hat{D}^\lambda_\perp(r) = \sum_{k,\lambda}\hat{D}^\lambda_{k,\perp}(r)$ of a LH/RH standing wave is $\hat{D}^\lambda_\perp(r) = (\hat{D}^\lambda_{+k,\perp}(r) + \hat{D}^\lambda_{-k,\perp}(r))/\sqrt{2}$ with $\hat{a}_{+k,\lambda} = \hat{a}_{-k,\lambda}$ and $\epsilon_{\pm k,\lambda} = \frac{1}{\sqrt{2}}(1, \pm i\lambda, 0)^T$. With these simplifications, the displacement field of a chiral standing wave takes the form

$$\hat{D}^\lambda_\perp(r) = -\sqrt{\frac{\varepsilon_0}{V}}\, \vec{\varepsilon}^{\,\lambda}_k(z)\, \hat{p}_{k,\lambda},$$

where $\vec{\varepsilon}^{\,\lambda}_k(z) = (\cos(kz), -\lambda\sin(kz), 0)^T$ is the z-dependent polarization vector of the chiral standing wave, and the canonical coordinates are $\hat{p}_{k,\lambda} = -i\sqrt{\hbar ck/2}\,(\hat{a}_{k,\lambda} - \hat{a}^\dagger_{k,\lambda})$ and $\hat{q}_{k,\lambda} = \sqrt{\hbar/2ck}\,(\hat{a}_{k,\lambda} + \hat{a}^\dagger_{k,\lambda})$. Notice that the left- and right-handed polarization vectors are orthogonal only in the spatially averaged sense. Recalling the relation between the electric and magnetic field of a chiral field, $\beta_{k,\lambda} = -i\lambda\,\epsilon_{k,\lambda}$, we obtain the magnetic field of a chiral standing wave as

$$\hat{B}^\lambda(r) = \sqrt{\frac{k^2}{\varepsilon_0 V}}\, \lambda\, \vec{\varepsilon}^{\,\lambda}_k(z)\, \hat{q}_{k,\lambda}.$$

The standing chiral fields satisfy Maxwell's equations and contribute, with $ck = \omega_k$, a photonic energy of $\hat{H}_L = (\hat{p}_k^2 + \omega_k^2\hat{q}_k^2)/2 = \hbar\omega_k(\hat{a}^\dagger_k\hat{a}_k + \frac{1}{2})$ for a given handedness. The extraordinary consequence is now that the standing field in an empty cavity will feature chiral quantum fluctuations, i.e., for each Fock state $|n\rangle$ the optical chirality density $C_n(r,\omega) = \frac{\varepsilon_0\omega}{2}\,\mathrm{Im}\,\langle n|\hat{E}\cdot\hat{B}^*|n\rangle$ reduces to $C_n(r,\omega) = \lambda\,\hbar\omega_k k/4V$. Also a dark chiral cavity will influence the ground and excited states of matter located within it. In the following, we will introduce a series of simplifications and derive an analytic solution to the combined system of many chiral molecules coupled to the chiral cavity. Let us here briefly describe the derivation of the analytical solutions and their underlying models; a detailed version can be found in the SI.
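As a consistency check, the polarization profile of the chiral standing wave can be verified symbolically: for ω = ck, the helicity condition ZH = −iλE is equivalent to the curl relation ∇ × ε = λk ε quoted in the SI. A short SymPy sketch (function names are illustrative):

```python
import sympy as sp

z = sp.Symbol('z', real=True)
k = sp.Symbol('k', positive=True)

def eps(lam):
    # z-dependent polarization vector of the chiral standing wave
    return sp.Matrix([sp.cos(k * z), -lam * sp.sin(k * z), 0])

def curl_z(F):
    # curl of a field that depends on z only: (-dF_y/dz, dF_x/dz, 0)
    return sp.Matrix([-sp.diff(F[1], z), sp.diff(F[0], z), 0])

# Verify curl(eps) = lam * k * eps for both handednesses.
ok = all(sp.simplify(curl_z(eps(lam)) - lam * k * eps(lam)) == sp.zeros(3, 1)
         for lam in (1, -1))
```

Since λ² = 1, the y-component of the curl, −k sin(kz), equals −λ²k sin(kz), so the relation holds exactly for both λ = ±1.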
For negligible intermolecular Coulombic interactions, i.e., $\hat{V}_{n,n'} \approx 0$, the molecular component of the Hamiltonian becomes diagonal, $\hat{H}_M = \sum_{n=1}^{N}\sum_{k=1}^{\infty} E^n_k\, |k_n\rangle\langle k_n|$, in the many-body eigenstates $|k\rangle$. Expanding all transition elements in this eigenbasis would, in principle, allow for a numerical solution of the (ultra-)strongly coupled system. The interested reader might refer to the area of ab initio QED. [84] Here, we focus on simplified models that provide analytical solutions. The self-magnetization term mediated via $\hat{\chi}^m_{n,ij}$ will be assumed to be purely parametric, $\hat{\chi}^m_{n,ij} \approx \chi^m_{n,ij}$, as it otherwise obstructs the Hopfield diagonalization scheme. The self-magnetization ensures gauge invariance and should be expected to play an important role for more sophisticated ab initio approaches. We define the dressed photonic frequency $\tilde{\omega}_k^2 = \omega_k^2\big[1 + 2\sum_n \sum_{i,j=1}^{3} \chi^m_{n,ij}\, \vec{\varepsilon}^{\,\lambda}_{k,i}(z)\, \vec{\varepsilon}^{\,\lambda}_{k,j}(z)/(c^2\varepsilon_0 V)\big]$, which is related via the sum rule $\tilde{\omega}_k^2 = \sum_n 2(E_n - E_m)/\hbar^2\, |\langle m|\hat{p}_k|n\rangle|^2\;\forall m$ to the eigenvalues $E_m$ and eigenstates $|m\rangle$. In order to provide analytical solutions, we will limit ourselves in the following to either two-level systems, $\hat{\mu}_n \to (\mu^n_{10}\hat{\sigma}_+ + \mu^n_{01}\hat{\sigma}_-)$, or harmonic oscillators, $\hat{\mu}_n \to (\mu^{n,*}\hat{b}^\dagger + \mu^n\hat{b})$. Both approaches are widely used, but we will ultimately focus on the harmonic representation as it allows access to non-perturbative correlation in the analytic solution. Our molecules are neutral such that we disregard permanent dipole moments for brevity. The relationship between the transition dipole moments has the generic form $m^n_{01} = -ic\,\overset{\leftrightarrow}{\xi}\,\mu^n_{01}$, where $\overset{\leftrightarrow}{\xi}$ is the product of a 3D rotation and a scaling (see SI). For brevity, we are going to limit our analysis to molecules with collinear transition dipole moments and refer the reader to the SI for a generalization. In this scalar case, ξ = +1 and ξ = −1 describe an ideal LH and RH emitter, respectively.
[50] This allows us to combine electric dipole, electric quadrupole and magnetic dipole into a single compact expression when $\mu^n_{10} = \mu^n_{01}$ and $Q^{10}_{ab,n} = Q^{01}_{ab,n}$, with $Q_n = Q^{10}_{ab,n}\,\nabla_a e_b$. From here on, we are left to follow two different but equally popular directions that we detail in the following.

A convenient and widely used approximation in quantum optics is to assume that the molecular basis consists of merely two states, motivated by the anharmonicity of excitonic transitions. For very large coupling strengths, there is little reason to believe that the complicated multi-level structure of chiral molecules is well captured by only a single excitation. Let us assume for a moment that we limit ourselves to this parameter regime. The excitation spectrum is then approximated by a single excitation of energy $\hbar\omega_m$, which is commonly transferred into a Pauli spin basis, $|1\rangle\langle 0| \to \hat{\sigma}_+$. If we further discard the self-polarization term and the counter-rotating terms ($\hat{a}\hat{\sigma}_-$, $\hat{a}^\dagger\hat{\sigma}_+$), we obtain the strongly simplified chiral Tavis-Cummings Hamiltonian

$$\hat{H}_{CTC} = \sum_n \hbar\omega_m\, \hat{\sigma}^+_n\hat{\sigma}^-_n + \hbar\omega_k\big(\hat{a}^\dagger\hat{a} + \tfrac{1}{2}\big) - i\sum_n \bar{g}_n\,(1 + \bar{\xi}_n\lambda)\,\big( \hat{\sigma}^+_n\hat{a} - \hat{\sigma}^-_n\hat{a}^\dagger \big).$$

The chiral coupling is encoded via the effective interaction strength proportional to $\bar{g}_n(1 + \bar{\xi}_n\lambda)$. A chiral emitter that features the same (opposite) handedness as the cavity will couple stronger (weaker) to the mode. In the extreme case that $\bar{\xi}_n = \pm 1$, the mismatched enantiomer will entirely decouple from the mode. The above chiral Tavis-Cummings model can be solved analogously to the standard Tavis-Cummings model, i.e., by limiting ourselves to the single-excitation subspace and introducing collective spin operators. Once we approach the ultra-strong coupling domain, many excitations of the chiral molecule will contribute to the renormalization of transitions. A possible alternative is the harmonic approximation, in analogy to Hopfield, [85] in which we identify the excitation structure with that of a harmonic oscillator.
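The single-excitation subspace of the chiral Tavis-Cummings model described above can be diagonalized directly; a minimal NumPy sketch (identical emitters assumed, all names illustrative) shows the chiral selectivity of the coupling:

```python
import numpy as np

def chiral_tc_single_excitation(N, omega_m, omega_k, g, xi, lam):
    """Eigenfrequencies of the chiral Tavis-Cummings Hamiltonian restricted
    to the single-excitation subspace, basis {|1_cav>, |e_1>, ..., |e_N>}.
    N identical emitters with effective coupling g*(1 + xi*lam)."""
    geff = g * (1.0 + xi * lam)
    H = np.diag([omega_k] + [omega_m] * N).astype(complex)
    # Off-diagonal elements from -i*geff*(sigma+_n a - sigma-_n a^dag):
    # <e_n|H|1_cav> = -i*geff, Hermitian conjugate on the other side.
    H[0, 1:] = 1j * geff
    H[1:, 0] = -1j * geff
    return np.linalg.eigvalsh(H)
```

On resonance the bright collective state produces eigenvalues ω ± √N·g·(1 + ξλ): an ideal same-handedness ensemble (ξλ = +1) couples with doubled strength, while the opposite enantiomer (ξλ = −1) fully decouples and all eigenvalues stay at the bare frequency.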
The resulting Hamiltonian takes the form of N + 1 coupled harmonic oscillators and can be solved via Hopfield diagonalization (detailed in the SI). The Hopfield solution is known to provide accurate predictions for effectively bosonic systems, such as vibrations [86] and intersubband transitions. [87] We find that the qualitative predictions of our Hopfield model are consistent with available ab initio calculations (see SI and following). It should be noted that both models provide consistent predictions for the first polariton manifold under strong coupling, as $\hat{\sigma}_+ \leftrightarrow \hat{b}^\dagger$ in the single-excitation space. Assuming identical (but distinguishable) molecules and a homogeneous in-plane distribution, it is convenient to introduce collective molecular operators

$$\hat{B}^\dagger_{k_\parallel} = \frac{1}{\sqrt{N}}\sum_n e^{i k_\parallel\cdot r_n}\, \hat{b}^\dagger_n, \qquad \hat{b}^\dagger_n = \frac{1}{\sqrt{N}}\sum_{k_\parallel} e^{-i k_\parallel\cdot r_n}\, \hat{B}^\dagger_{k_\parallel},$$

where $k_\parallel$ is the in-plane momentum of the matter excitation. The general feature of chiral selectivity is unchanged (see SI). Under those approximations,

$$\hat{H} \approx \hbar\tilde{\omega}_m\big( \hat{B}^\dagger_{k_\parallel=0}\hat{B}_{k_\parallel=0} + \tfrac{1}{2} \big) + \hbar\omega_k\big( \hat{a}^\dagger\hat{a} + \tfrac{1}{2} \big) - i\sqrt{N}\hbar g\Big[ \big(\hat{B}^\dagger_{k_\parallel=0} + \hat{B}_{k_\parallel=0}\big)\big(\hat{a} - \hat{a}^\dagger\big) + \tilde{\xi}\lambda\,\big(\hat{B}^\dagger_{k_\parallel=0} - \hat{B}_{k_\parallel=0}\big)\big(\hat{a} + \hat{a}^\dagger\big) \Big], \qquad (2)$$

where $\tilde{\omega}_m^2 = \omega_m^2 + N\,(2\omega_m/\hbar\varepsilon_0 V)\,\big(\vec{\varepsilon}^{\,\lambda}_k(z)\cdot\mu\big)^2$, $g = \sqrt{\omega_k\omega_m/(2\hbar\varepsilon_0 V\tilde{\omega}_m)}\,(\mu + Q)\cdot\vec{\varepsilon}^{\,\lambda}_k(z)$ and $\tilde{\xi} = \xi\,\omega_m\omega_k\,\mu\cdot\vec{\varepsilon}^{\,\lambda}_k(z)\,/\,\big(\tilde{\omega}_m\tilde{\omega}_k\,(\mu+Q)\cdot\vec{\varepsilon}^{\,\lambda}_k(z)\big)$ are the renormalized effective excitation energy, coupling strength, and chirality factor. Eq. (2) is diagonalized by following the standard Hopfield [85,87] procedure, i.e., defining the polaritonic operator $\hat{\Pi} = x\hat{a} + y\hat{a}^\dagger + z\hat{B}_{k_\parallel=0} + u\hat{B}^\dagger_{k_\parallel=0}$ that fulfills the eigenvalue equation $[\hat{H},\hat{\Pi}] = \hbar\Omega\,\hat{\Pi}$ with the normalization condition $|x|^2 - |y|^2 + |z|^2 - |u|^2 = 1$. We obtain the polaritonic frequencies as the real and positive solutions

$$\Omega_\pm = \frac{1}{\sqrt{2}}\left[ \omega_k^2 + \tilde{\omega}_m^2 + 8\tilde{\xi}\lambda N g^2 \pm \Big( \big(\omega_k^2 - \tilde{\omega}_m^2\big)^2 + 16 N g^2\,\big(\omega_k + \tilde{\omega}_m\tilde{\xi}\lambda\big)\big(\omega_k\tilde{\xi}\lambda + \tilde{\omega}_m\big) \Big)^{1/2} \right]^{1/2}, \qquad (3)$$

with eigenvalues $E_\pm = \hbar\Omega_\pm + E_{vac}$, $E_{vac} = \hbar(\Omega_+ + \Omega_-)/2$, up to an arbitrary constant that is independent of the handedness of the emitter. As illustrated in Fig.
2(a), an ensemble of ideal chiral emitters featuring the opposite handedness ($\tilde{\xi}\lambda = -1$) compared to the LH cavity mode effectively decouples from the cavity. Switching the cavity handedness would therefore allow one to open and close the avoided crossing and control associated conical intersections. This is an intuitively expected result: an ensemble of LH molecules couples to the LH photonic mode, whereas an ensemble of equivalent RH molecules becomes transparent for the same optical mode. The cross-section of the full plot at ξ = 0 yields the familiar picture of "traditional" polaritons with the electric-dipole-mediated coupling. [87] Furthermore, the photonic and matter polariton fractions exhibit a gradual transition from the regime of hybridized eigenstates at $\tilde{\xi}\lambda = 1$ to the uncoupled regime at $\tilde{\xi}\lambda = -1$, when the two eigenstates represent bare optical and matter excitations. Our simple Hopfield solution therefore recovers the promising feature that the vacuum coupling in chiral cavities can be used to discriminate between two enantiomers. Importantly, the cavity distinguishes only the chiral component, i.e., the parallel projection of µ. This can be easily seen by extending our discussion to consider general bi-isotropic media and performing an angular average (see SI). The vast majority of widely used molecular systems exhibits extremely weak chirality factors ξ ≪ 1, rendering it challenging to separate left- and right-handed enantiomers to high fidelity. Chiral polaritonics can serve this purpose, as the collective interaction results in a √N scaling of the coupling strength, thus increasing the selectivity. Fig. 3 illustrates the difference of the upper and lower polaritonic eigenfrequencies (blue) between left- and right-handed enantiomers for typical dye molecules [57] with a small and conservative estimate of ξ ≈ 3.712 · 10⁻⁵ for the chirality factor.
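The dispersion of Eq. (3) is easy to evaluate numerically; the sketch below (illustrative names, hypothetical parameters) confirms the decoupling behaviour: at resonance and $\tilde{\xi}\lambda = +1$ it reduces exactly to Ω± = ω ± 2√N g, while for $\tilde{\xi}\lambda = -1$ the square root vanishes and the polariton splitting closes.

```python
import numpy as np

def polariton_freqs(omega_k, omega_m, g, N, xilam):
    """Upper/lower polariton frequencies of the chiral Hopfield model, Eq. (3);
    omega_m, g and xilam stand for the renormalized (tilde) quantities."""
    a = omega_k**2 + omega_m**2 + 8.0 * xilam * N * g**2
    root = np.sqrt((omega_k**2 - omega_m**2)**2
                   + 16.0 * N * g**2 * (omega_k + omega_m * xilam)
                                     * (omega_k * xilam + omega_m))
    return np.sqrt((a + root) / 2.0), np.sqrt((a - root) / 2.0)
```

Note that for $\tilde{\xi}\lambda = -1$ the two eigenfrequencies become degenerate at $\sqrt{\omega^2 - 4Ng^2}$ rather than exactly bare, a residual effect of the surviving counter-rotating terms in Eq. (2); within the rotating-wave (Tavis-Cummings) treatment the decoupling is complete.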
[Figure 3: plot legend δΩ₊, δΩ₋, δE_vac; curves for (ξ = −1, λ = +1) and (ξ = +1, λ = +1).]

For large N, the interacting system enters the ultra-strong coupling domain, in which the combined light-matter ground state is no longer separable. The relative energy difference between LH/LH and RH/LH ground states is shown in Fig. 3 (orange-dotted) as a function of the number of molecules N. Clearly, the correlated ground and excited states illustrate a quick increase of the discriminating effect, the center-point of chiral polaritonics. The ground-state discrimination scales linearly in N for moderate coupling and continues to scale as √N in the deep ultra-strong coupling domain (see SI). Both limits are consistent with the observation by Riso et al. [59] The small value of ξ for typical molecules translates into an overall small eigenvalue difference that scales on resonance approximately with √N g ξ. While √N g can reach a sizeable fraction of the excitation energy, a major limitation to be fought here is the commonly weak chirality. In order to leverage chiral polaritonics for enantiomer selectivity, it thus seems essential to either magnify the magnetic components or establish a protocol that can exploit the small energetic differences. The latter follows closely the still open question of the origin of homochirality, i.e., how could a minute energetic imbalance between the enantiomers result in the real-world dominance of a given handedness. [88] Among the frequently discussed options are autocatalytic processes that turn a small imbalance into a substantial excess. [89] Chiral polaritonics could not just explore a similar path but serve as a sensitive framework to further elucidate the origin of homochirality. The former approach, on the other hand, would propose the design of a cavity that would compensate for the small ξ. The discriminating factor 1 + ξλ is, in our purely transversal cavity, bound by the small size of ξ, as |λ| = 1 is fixed by ZH = −iλE.
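The growth of the ground-state discrimination with N can be illustrated directly from E_vac = ħ(Ω₊ + Ω₋)/2. The sketch below uses hypothetical units (ω = 1, single-emitter coupling g = 10⁻³) and the conservative chirality factor quoted above; in the moderate-coupling regime the difference behaves approximately as 2ξNg²/ω, i.e., linear in N, consistent with the text.

```python
import numpy as np

def vacuum_energy(omega_k, omega_m, g, N, xilam):
    """E_vac/hbar = (Omega_+ + Omega_-)/2 from the chiral Hopfield dispersion, Eq. (3)."""
    a = omega_k**2 + omega_m**2 + 8.0 * xilam * N * g**2
    root = np.sqrt((omega_k**2 - omega_m**2)**2
                   + 16.0 * N * g**2 * (omega_k + omega_m * xilam)
                                     * (omega_k * xilam + omega_m))
    return (np.sqrt((a + root) / 2.0) + np.sqrt((a - root) / 2.0)) / 2.0

xi = 3.7e-5            # conservative chirality factor (order of magnitude from the text)
w, g = 1.0, 1e-3       # hypothetical units
Ns = [100, 1000, 10000]
# Ground-state energy difference between matched (+xi) and mismatched (-xi) enantiomers
delta = np.array([vacuum_energy(w, w, g, N, +xi) - vacuum_energy(w, w, g, N, -xi)
                  for N in Ns])
```

The computed differences are positive, grow monotonically with N, and increase roughly tenfold per decade of N in this regime.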
The latter, however, no longer holds in subwavelength cavities, where the field has a significant longitudinal component. In (non-chiral) plasmonic nanocavities, for example, |ZH| ≪ |E|. Thus, the challenge could be addressed by designing a compact chiral nanocavity whose quasi-normal mode is dominated by the longitudinal magnetic field, |ZH| ≫ |E|, and maintains a chiral character expressed by a non-zero local chirality density C(r).

We developed a new analytical model describing the non-perturbative interaction of an ensemble of chiral molecules with a common chiral optical mode, i.e., a resonator that supports only optical modes with a given handedness. The model illustrates that a chiral cavity can be used to selectively couple to molecules of a specific handedness and thus provides means to discriminate enantiomers from a racemate. Such a chiral discriminating effect can be observed in all eigenstates of the strongly hybridized light-matter system and exists in the absence of any pumping, i.e., in the dark cavity. How strongly left- and right-handed enantiomers can be distinguished is proportional to √N g ξ. While √N g can become sizeable, the degree of chirality typically satisfies ξ ≪ 1, which currently limits the capability for chiral recognition. Possible strategies to exploit field enhancement techniques [57] in combined optical and plasmonic systems [90] might pave a way to enhance the recognition capabilities. It should be noted that this simple, accurately controllable and easily realizable system breaks a discrete symmetry, with possibly wide and yet unforeseen ramifications. Chiral polaritonics contributes, even at this early stage, an exciting perspective to further elucidate homochirality. The intuitive analytical model and perspective put forward in this letter will foster this new domain at the intersection of cavity QED, chiral chemistry, biology and nanophotonics.
FIELDS OF A STANDING CHIRAL WAVE

In the following we obtain explicit expressions for the electric and magnetic fields of a chiral standing wave describing the optical mode of a single-handedness optical cavity. We begin with the case of a standing wave formed by two counter-propagating circularly polarized plane waves. The field of a monochromatic circularly polarized plane wave propagating through air in the positive direction of the z axis takes the form ($e^{-i\omega t}$ time dependence of the harmonic field is assumed):

$$E^\lambda_{+z}(r) = \frac{E}{\sqrt{2}}\begin{pmatrix} 1 \\ i\lambda \\ 0 \end{pmatrix} e^{ikz}, \qquad Z H^\lambda_{+z}(r) = -i\lambda E^\lambda_{+z}(r), \qquad (S1)$$

where λ = ±1 denotes the handedness of the wave, k = ω/c, and E has units of electric field. Correspondingly, the field of a wave travelling in the negative direction of the z axis takes the form:

$$E^\lambda_{-z}(r) = \frac{E}{\sqrt{2}}\begin{pmatrix} 1 \\ -i\lambda \\ 0 \end{pmatrix} e^{-ikz}, \qquad Z H^\lambda_{-z}(r) = -i\lambda E^\lambda_{-z}(r). \qquad (S2)$$

The fields of a "vertical" standing wave take the form:

$$E^\lambda_{k_\parallel=0} = \frac{E^\lambda_{+z} + E^\lambda_{-z}}{\sqrt{2}} = E\begin{pmatrix} \cos kz \\ -\lambda\sin kz \\ 0 \end{pmatrix}, \qquad Z H^\lambda_{k_\parallel=0} = -i\lambda E^\lambda_{k_\parallel=0}(r). \qquad (S3)$$

Now consider a pair of circularly polarized plane waves with a given handedness λ, both propagating with a fixed in-plane momentum $k_\parallel$ (in-plane with respect to the vertical axis of the cavity) and opposite vertical components of the wave vector $\pm k_z\hat{z}$. Without loss of generality let us assume $k_\parallel = k_x\hat{x}$ with $k_x = k\sin\theta$ and $k_z = \pm k\cos\theta$. The electric fields take the form:

$$E^\lambda_{+z,k_\parallel}(r) = \frac{E}{\sqrt{2}}\begin{pmatrix} \cos\theta \\ i\lambda \\ -\sin\theta \end{pmatrix} e^{ik_z z + ik_x x}, \qquad E^\lambda_{-z,k_\parallel}(r) = \frac{E}{\sqrt{2}}\begin{pmatrix} \cos\theta \\ -i\lambda \\ +\sin\theta \end{pmatrix} e^{-ik_z z + ik_x x}. \qquad (S4)$$

The z-dependent electric field of the combination of the two waves with the common $k_\parallel$ takes the form:

$$E^\lambda_{k_\parallel}(r) = E\begin{pmatrix} \cos\theta\cos k_z z \\ -\lambda\sin k_z z \\ -i\sin\theta\sin k_z z \end{pmatrix} e^{ik_x x}. \qquad (S5)$$

The magnetic field of the chiral standing wave with handedness λ follows from the electric field:

$$Z H^\lambda_{k_\parallel} = -i\lambda E^\lambda_{k_\parallel}. \qquad (S6)$$

FIG. S1. Geometry of a chiral standing wave.
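The oblique standing wave of Eq. (S5) can be verified symbolically to retain single handedness: with ω = ck, Faraday's law gives ZH = ∇×E/(ik), which should equal −iλE for both λ = ±1. A SymPy sketch (illustrative names, amplitude E set to 1):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
k, theta = sp.symbols('k theta', positive=True)
kx, kz = k * sp.sin(theta), k * sp.cos(theta)

def standing_E(lam):
    # Eq. (S5), amplitude E set to 1
    return sp.Matrix([sp.cos(theta) * sp.cos(kz * z),
                      -lam * sp.sin(kz * z),
                      -sp.I * sp.sin(theta) * sp.sin(kz * z)]) * sp.exp(sp.I * kx * x)

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def ZH(E):
    # Faraday's law for a monochromatic field with omega = c*k: Z*H = curl(E)/(i*k)
    return curl(E) / (sp.I * k)

checks = [sp.simplify(c) == 0
          for lam in (1, -1)
          for c in (ZH(standing_E(lam)) + sp.I * lam * standing_E(lam))]
```

The y-component relies on the identity $k_z\cos\theta + k_x\sin\theta = k$, which SymPy reduces via $\sin^2\theta + \cos^2\theta = 1$; the other components cancel termwise.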
DERIVING THE CHIRAL HOPFIELD AND TAVIS-CUMMINGS MODELS

We start with the chiral standing fields that are derived from the generic mode expansion

$$\hat{D}_\perp(r) = i\sum_{k,\lambda}\sqrt{\frac{\hbar ck\varepsilon_0}{2V}}\left[ \epsilon_{k\lambda}\, e^{ik\cdot r}\hat{a}_{k\lambda} - \epsilon^*_{k\lambda}\, e^{-ik\cdot r}\hat{a}^\dagger_{k\lambda} \right], \qquad \hat{B}(r) = \frac{i}{c}\sum_{k,\lambda}\sqrt{\frac{\hbar ck}{2\varepsilon_0 V}}\left[ \beta_{k\lambda}\, e^{ik\cdot r}\hat{a}_{k\lambda} - \beta^*_{k\lambda}\, e^{-ik\cdot r}\hat{a}^\dagger_{k\lambda} \right].$$

Following the explanations in the main text, 'upwards' and 'downwards' propagating fields are superimposed as $\hat{D}^\lambda_\perp(r) = \sum_{k,\lambda}\hat{D}^\lambda_{k,\perp}(r)$, $\hat{D}^\lambda_\perp(r) = \sum_k \big(\hat{D}^\lambda_{+k,\perp}(r) + \hat{D}^\lambda_{-k,\perp}(r)\big)/\sqrt{2}$, with $\hat{a}_{+k_z,\lambda} = \hat{a}_{-k_z,\lambda}$ and $\epsilon_{\pm k,\lambda} = \frac{1}{\sqrt{2}}(1, \pm i\lambda, 0)^T$. One obtains

$$\hat{D}^\lambda_\perp(r) = -\sum_{k>0}\sqrt{\frac{\varepsilon_0}{V}}\,\vec{\varepsilon}^{\,\lambda}_k(z)\,\hat{p}_{k,\lambda},$$

with $\vec{\varepsilon}^{\,\lambda}_k(z) = (\cos(kz), -\lambda\sin(kz), 0)^T$ and $\hat{p}_{k,\lambda} = -i\sqrt{\hbar ck/2}\,(\hat{a}_{k,\lambda} - \hat{a}^\dagger_{k,\lambda})$, $\hat{q}_{k,\lambda} = \sqrt{\hbar/2ck}\,(\hat{a}_{k,\lambda} + \hat{a}^\dagger_{k,\lambda})$. The same procedure is applied to the magnetic fields, where now however $\beta_{k,\lambda} = -i\lambda\,\epsilon_{k,\lambda}$, such that the chiral standing magnetic field features the additional λ:

$$\hat{B}^\lambda(r) = \sum_{k>0}\sqrt{\frac{k^2}{\varepsilon_0 V}}\,\lambda\,\vec{\varepsilon}^{\,\lambda}_k(z)\,\hat{q}_{k,\lambda}.$$

Considering that $\nabla\times\vec{\varepsilon}^{\,\lambda}_k(z) = \lambda k\,\vec{\varepsilon}^{\,\lambda}_k(z)$, it is easily validated that those fields fulfill Maxwell's equations of motion and contribute the common photonic energy $\sum_{k>0}\hbar\omega_k(\hat{a}^\dagger\hat{a} + \frac{1}{2})$ per handedness λ. We will assume in the following that only a single mode couples substantially to the ensemble of chiral molecules, a reasonable approximation for most cavity realizations that should however be relaxed if the inter-molecular distances become much larger than the wavelength of the electromagnetic standing field. Explicitly expressing the self-magnetization contribution in all its components, it should be noted that it scales linearly in the number of molecules. As $\chi^m_{n,ij}$ is rarely even mentioned and, to the best of our knowledge, never considered, it remains an open problem to specify its dynamic value. It is however possible to set the value for $\tilde{\omega}_k$ in relation to the system's characteristic solution. Using the sum rule $\langle m|[\hat{p}_k,[\hat{H},\hat{p}_k]]|m\rangle = \sum_n 2(E_n - E_m)\,|\langle m|\hat{p}_k|n\rangle|^2$ with eigenvalues $E_m$ and eigenstates $|m\rangle$ provides $\tilde{\omega}_k^2 = \sum_n 2(E_n - E_m)/\hbar^2\,|\langle m|\hat{p}_k|n\rangle|^2\;\forall m$.
We will formally retain the self-magnetization term but ultimately ignore its influence on the visualization. It should be noted that the following model features a magnetic instability for sizeable particle numbers and ξ if $\tilde{\omega}_k$ is not adjusted accordingly. We will limit our discussion to the stable domain, i.e., where phase transitions are absent. Importantly, any ab initio calculation that includes the flexibility to change the electronic/nuclear structure should include all components [1,2].

Transition dipole moments

We will illustrate the identification and relation of the transition moments for the two-level approximation in the following. The corresponding Hopfield model, featuring an identification with harmonic oscillators instead, is derived analogously. We disregard permanent dipole moments for brevity:

$$\hat{\mu}_n \to (\mu^n_{10}\hat{\sigma}_+ + \mu^n_{01}\hat{\sigma}_-), \qquad (S8)$$
$$\hat{m}_n \to (m^n_{10}\hat{\sigma}_+ + m^n_{01}\hat{\sigma}_-), \qquad (S9)$$

where the matrix elements of the transition dipole moment (TDM) operators are calculated according to

$$\mu_{01} = \langle 0|\hat{\mu}|1\rangle, \qquad m_{01} = \langle 0|\hat{m}|1\rangle, \qquad (S10)$$

and the lowering and raising operators of the TLS are given by the standard expressions

$$\hat{\sigma}_+ = |1\rangle\langle 0|, \qquad \hat{\sigma}_- = \hat{\sigma}^\dagger_+ = |0\rangle\langle 1|. \qquad (S11)$$

Without loss of generality, the matrix element of the electric TDM operator may be assumed real-valued, $\mu_{01} = \mu^*_{01}$. Let us establish the general relationship between the transition dipole moments of a two-level quantum emitter. For a bi-isotropic molecule with parallel electric and magnetic transition dipole moments this equation takes the simple form [3]:

$$m^n_{01} = -ic\,\xi\,\mu^n_{01}, \qquad (S12)$$

with ξ = ±1 corresponding to LH (+1) and RH (−1) emitters, respectively. Correspondingly, the magnetic dipole moment operator becomes $\hat{m} = ic\,\xi\,(\mu^*_{01}\hat{\sigma}_+ - \mu_{01}\hat{\sigma}_-)$. The above relationship between the TDMs of a chiral emitter is consistent with the chirality definition of a classical monochromatic dipolar source [4].
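The chiral TDM relation of Eq. (S12) can be checked numerically against the reciprocity constraint treated in the remainder of this section: with a real-valued electric TDM, the magneto-electric polarizability numerators satisfy $\alpha_{em} = -\alpha_{me}^T$ and the non-reciprocal part χ vanishes. A minimal NumPy sketch with a hypothetical, purely illustrative dipole vector (the common resonant prefactor drops out of the comparison):

```python
import numpy as np

c, xi = 299792458.0, 1.0                   # speed of light; ideal LH emitter
mu01 = np.array([0.3, -1.2, 0.7])          # hypothetical real-valued electric TDM

m01 = -1j * c * xi * mu01                  # chiral TDM relation, Eq. (S12)
alpha_em = np.outer(mu01, np.conj(m01))    # ~ mu01 (x) m01*, prefactor omitted
alpha_me = np.outer(m01, np.conj(mu01))    # ~ m01 (x) mu01*, prefactor omitted
chi = (alpha_em + alpha_me.T) / 2          # non-reciprocal part, cf. Eq. (S21)
```

Both tensors reduce to ±ic·ξ·(μ ⊗ μ), so the Onsager-Casimir condition holds exactly for any real μ₀₁.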
Let us now look for a more general tenso- (S13) We can further specify its form by incorporating two rotations into the spatial orientation of µ 01 . Choosing a linearly-polarized transition electric dipole moment µ 01 is equivalent to fixing two Euler angles of the molecular orientation, leaving an arbitrary rotation around µ 01 . For example, we can parameterize the electric TDM by the polar and azimuthal angles in the spherical coordinate system: µ 01 = |µ|       sin θ cos ϕ sin θ sin ϕ cos θ       . (S14) A given tensor ↔ ξ will map a given electric dipole moment µ 01 into another fixed vector. However, given a fixed µ 01 all allowed positions of m 01 occupy an entire circle in R 3 . Thus, mapping µ 01 → m 01 must be parameterized by an additional angle describing rotations of 7 the molecule around µ 01 (see Fig. S2). The sought for mapping thus can be written as: m 01 = −ic ↔ R µ (δ) ↔ ξ µ 01 ,(S15) where ↔ R µ (δ) is the rotation matrix that describes rotation of the molecule around µ 01 by an angle δ. The mapping parameterized by three angles (θ, ϕ, δ) in Eq. S15 encompasses all possible orientations of the molecule, which allows us to average any characteristic of the coupled system (such as the coupling constant) over molecular orientation. Let us now establish the constraints imposed on ↔ ξ by reciprocity. The linear relationship between induced electric and magnetic dipole moments of a polarizable subwavelength object and incident monochromatic field can be written as:    p m    =    ε 0 ↔ α e ↔ α em /c ε 0 c ↔ α me ↔ α m       E H    ,(S16) where ↔ ε , ↔ µ, ↔ α em , and ↔ α me are all rank-2 3 × 3 tensors with units of volume. We limit our treatment to the class of reciprocal media; polarizabilities of any reciprocal particle are subject to Onsager-Casimir relations [5]: ↔ α e = ↔ α T e , ↔ α m = ↔ α T m , ↔ α em = − ↔ α T me . 
(S17) This criterion allows us to decompose the magneto-electric coupling tensors into reciprocal ('R') and non-reciprocal ('NR') components: ↔ α em = ↔ α (N R) em + ↔ α (R) em ≡ ↔ χ + i ↔ κ,(S18)↔ α me = ↔ α (N R) me + ↔ α (R) me ≡ ↔ χ T − i ↔ κ T .(S19) where the reciprocal part is presented by ↔ κ: κ = ↔ α em − ↔ α T me 2i ,(S20) and the non-reciprocal part is presented by χ = ↔ α em + ↔ α T me 2 . (S21) 8 Obviously, if ↔ χ = 0, then ↔ α em = − ↔ α T me , and the reciprocity criterion is satisfied. It is self-explanatory that the reciprocal part of magneto-electric polarizability is responsible for effects that respect reciprocity. Atomic polarizabilities of the elementary two-level system with arbitrary transition dipole moments µ and m can be written as: ↔ α e = µ 01 ⊗ µ * 01 ε 0 1 ω 0 − ω − iγ/2 , ↔ α m = m 01 ⊗ m * 01 ε 0 c 2 1 ω 0 − ω − iγ/2 , (S22) ↔ α em = µ 01 ⊗ m * 01 ε 0 c 1 ω 0 − ω − iγ/2 , ↔ α me = m 01 ⊗ µ * 01 ε 0 c 1 ω 0 − ω − iγ/2 . (S23) Equations S22 suggest that for ↔ α e and ↔ α m to comply with Casimir-Onsager relations, Eq. S17, µ 01 and m 01 must be real-valued vectors (up to an arbitrary global phase e iφ ), thus describing linearly-polarized transitions. One can easily see that plugging m 01 = −icξµ 01 (Eq. S12) into the above expressions yields ↔ α em = − ↔ α T me and ↔ χ = 0. Now let us utilize Eq. S15 and for brevity work with the numerator of the full expression in Eq. 
S23: ↔ α em ∝ µ 01 ⊗ m * 01 ≡ µ 01 m † 01 = µ 01 −ic ↔ R µ ↔ ξ µ 01 † = µ 01 −icµ T 01 ( ↔ R µ ↔ ξ ) T * = icµ 01 µ † 01 ( ↔ R µ ↔ ξ ) † .(S24) Similarly, for α me we obtain: ↔ α me ∝ m 01 ⊗ µ * 01 ≡ m 01 µ † 01 = −ic ↔ R µ ↔ ξ µ 01 µ † 01 .(S25) Transposing the latter, assuming without the loss of generality the real-valued µ 01 and inserting into the Casimir-Onsager relation, we get: ↔ α em + ↔ α T me ∝ icµ 01 µ † 01 [( ↔ R µ ↔ ξ ) † − ( ↔ R µ ↔ ξ ) T ] = 0.(S26) Since ↔ R µ is a real-valued orthogonal matrix, the latter implies that for a reciprocal bianisotropic two-level emitter ↔ ξ must be real-valued: Im[ ↔ ξ ] = 0. (S27) In other words, the scaling factor s of the transformation is real-valued, Im[s] = 0. For now, let us assume that the transition dipole moments are related by the simple expression with a scalar ξ, Eq. S12. Using this compact relation between electric and magnetic moments allows us to combine electric dipole, electric quadrupole and magnetic dipole into a single compact expression (assuming µ n 10 = µ n, * 10 and Q 10 ab,n = Q 01 ab,n , Q n = Q 10 ab,n ∇ a e b ):Ĥ = nĤ M,n + 1 2ε 0 V n ε λ k (z) · µ n 01 (σ + n + σ − n ) 2 + ω k â †â + 1 2 − i nḡ n (σ + n + σ − n )(â −â † ) −ξ n λ(σ + n − σ − n )(â +â † ) (S28) whereḡ n = ω k 2ε 0 V (µ n 10 + Q n ) · ε λ k (z) (S29) andξ n = ω k ω k µ n 10 · ε λ k (z) (µ n 10 + Q n ) · ε λ k (z) ξ. (S30) Tavis-Cummings models -a hint at the effective coupling strength Let us here introduce the common matter-representation of two-level models, discard the self-polarization term and all counter-rotating terms (∝âσ − ≈ 0 and similar). We obtain the strongly simplified chiral Tavis-Cummings Hamiltonian: H CT C = n ω mσ + nσ − n + ω k (â †â + 1 2 ) − i nḡ n (1 +ξ n λ) σ + nâ −σ − nâ † (S31) that shows explicitly that the effective interaction strength is proportional toḡ n (1 +ξ n λ). A chiral emitter that features the same handedness as the cavity will couple stronger to the mode. 
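The enantioselectivity of the effective coupling $\bar{g}_n(1+\tilde{\xi}_n\lambda)$ appearing in Eq. (S31) can be made concrete with a two-line numeric sketch (values are illustrative, not taken from the paper):

```python
# effective light-matter coupling in the chiral Tavis-Cummings model (Eq. S31)
g_bar = 0.01  # bare coupling strength, arbitrary units (illustrative)

def g_eff(g_bar, xi, lam):
    """Effective coupling g_bar*(1 + xi*lam) of an enantiomer with chirality
    factor xi to a cavity mode of handedness lam (both +-1 in the ideal case)."""
    return g_bar * (1 + xi * lam)

# matched handedness couples at twice the bare strength,
# the mismatched enantiomer decouples entirely
assert g_eff(g_bar, +1, +1) == 2 * g_bar
assert g_eff(g_bar, -1, +1) == 0.0
```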
In the extreme case that $\tilde{\xi}_n = \pm 1$, the mismatched enantiomer will entirely decouple from the mode. The above chiral Tavis-Cummings model could be solved analytically in the same way as any Tavis-Cummings model, i.e., by limiting ourselves to the single-excitation subspace and introducing collective spin operators. Here, we will focus on the Hopfield model, which also includes the counter-rotating and self-polarization terms that can lead to sizeable renormalizations for large $N$.

Chiral Hopfield model

In contrast to the Dicke and Tavis-Cummings models with a two-level approximation, the Hopfield approach is based on representing the material in terms of harmonic oscillators, which allows for an analytic solution also in the ultra-strong coupling domain. The first polariton manifold is identical in the single-excitation + strong-coupling regime. Our chiral Hopfield Hamiltonian takes the form of $N+1$ coupled harmonic oscillators,
$$\hat{H} = \sum_n \omega_n(\hat{b}^{\dagger}_n\hat{b}_n + \tfrac{1}{2}) + \omega_k(\hat{a}^{\dagger}\hat{a} + \tfrac{1}{2}) - i\sum_n \bar{g}_n\left[(\hat{b}^{\dagger}_n+\hat{b}_n)(\hat{a}-\hat{a}^{\dagger}) + \tilde{\xi}_n\lambda(\hat{b}^{\dagger}_n-\hat{b}_n)(\hat{a}+\hat{a}^{\dagger})\right] + \frac{1}{2\varepsilon_0 V}\sum_n \left[\boldsymbol{\varepsilon}^{\lambda}_{k}(z)\cdot\boldsymbol{\mu}^{n}_{01}(\hat{b}^{\dagger}_n+\hat{b}_n)\right]^2. \quad (S32)$$
The transition dipole moments are related to the optical oscillator strength of the harmonic model.
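The statement that the first polariton manifold agrees in the single-excitation + strong-coupling regime can be checked directly: at matched resonance ($\xi\lambda = +1$, $\omega_k = \omega_m$) the chiral Hopfield spectrum following from Eq. (S37) reduces to $\omega \pm 2\sqrt{N}\bar{g}$, which coincides with the outer eigenvalues of the single-excitation Tavis-Cummings matrix with $g_\mathrm{eff} = \bar{g}(1+\xi\lambda)$. A minimal numerical sketch (parameters illustrative):

```python
import numpy as np

# single-excitation chiral Tavis-Cummings spectrum vs. the Hopfield result
N, wk, wm, g = 50, 1.0, 1.0, 0.005   # resonant toy parameters (a.u.)
lam = xi = +1                        # matched handedness: g_eff = g*(1 + xi*lam) = 2g
g_eff = g * (1 + xi * lam)

# H in the basis {1 photon, molecule 1 excited, ..., molecule N excited}
H = np.diag([wk] + [wm] * N).astype(float)
H[0, 1:] = H[1:, 0] = g_eff          # photon couples identically to every emitter
polaritons = np.sort(np.linalg.eigvalsh(H))[[0, -1]]

# Hopfield prediction at matched resonance: Omega_pm = w +- 2*sqrt(N)*g
expected = np.array([wk - 2 * np.sqrt(N) * g, wk + 2 * np.sqrt(N) * g])
assert np.allclose(polaritons, expected, atol=1e-10)
```

The remaining $N-1$ eigenvalues sit at the bare matter frequency: these are the dark states.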
Assuming identical molecules, it is convenient to introduce the Fourier representation for the molecular ensemble,
$$\hat{B}^{\dagger}_{\mathbf{k}} = \frac{1}{\sqrt{N}}\sum_n e^{i\mathbf{k}\cdot\mathbf{r}_n}\hat{b}^{\dagger}_n, \quad \hat{b}^{\dagger}_n = \frac{1}{\sqrt{N}}\sum_{\mathbf{k}} e^{-i\mathbf{k}\cdot\mathbf{r}_n}\hat{B}^{\dagger}_{\mathbf{k}}, \quad (S33)$$
such that the collective operators are represented by the bright state, $\sum_n \hat{b}^{\dagger}_n = \sqrt{N}\hat{B}^{\dagger}_{\mathbf{k}=0}$, and dark states,
$$\hat{H} \approx \omega_m(\hat{B}^{\dagger}_{\mathbf{k}=0}\hat{B}_{\mathbf{k}=0} + \tfrac{1}{2}) + \omega_m\sum_{\mathbf{k}\neq 0}\hat{B}^{\dagger}_{\mathbf{k}}\hat{B}_{\mathbf{k}} + (N-1)\frac{\omega_m}{2} + \omega_k(\hat{a}^{\dagger}\hat{a} + \tfrac{1}{2}) - i\sqrt{N}\,\bar{g}\left[(\hat{B}^{\dagger}_{\mathbf{k}=0}+\hat{B}_{\mathbf{k}=0})(\hat{a}-\hat{a}^{\dagger}) + \tilde{\xi}\lambda(\hat{B}^{\dagger}_{\mathbf{k}=0}-\hat{B}_{\mathbf{k}=0})(\hat{a}+\hat{a}^{\dagger})\right] + \frac{N}{2\varepsilon_0 V}\left[\boldsymbol{\varepsilon}^{\lambda}_{k}(z)\cdot\boldsymbol{\mu}_{01}(\hat{B}^{\dagger}_{\mathbf{k}=0}+\hat{B}_{\mathbf{k}=0})\right]^2. \quad (S34)$$
Disregarding the dark states $\omega_m\sum_{\mathbf{k}\neq 0}\hat{B}^{\dagger}_{\mathbf{k}}\hat{B}_{\mathbf{k}}$ and their vacuum fluctuations $(N-1)\frac{\omega_m}{2}$, the self-polarization term can be absorbed into an adjusted matter frequency of the bright states, $\tilde{\omega}^2_m = \omega^2_m + N\frac{2\omega_m}{\varepsilon_0 V}(\boldsymbol{\varepsilon}^{\lambda}_{k}(z)\cdot\boldsymbol{\mu}_{01})^2$, which results in
$$\hat{H} = \tilde{\omega}_m(\hat{B}^{\dagger}_{\mathbf{k}=0}\hat{B}_{\mathbf{k}=0} + \tfrac{1}{2}) + \omega_k(\hat{a}^{\dagger}\hat{a} + \tfrac{1}{2}) - i\sqrt{N}\,g\left[(\hat{B}^{\dagger}_{\mathbf{k}=0}+\hat{B}_{\mathbf{k}=0})(\hat{a}-\hat{a}^{\dagger}) + \xi\lambda(\hat{B}^{\dagger}_{\mathbf{k}=0}-\hat{B}_{\mathbf{k}=0})(\hat{a}+\hat{a}^{\dagger})\right], \quad (S35)$$
where $g = \sqrt{\frac{\omega_k\omega_m}{2\varepsilon_0 V\tilde{\omega}_m}}\,(\boldsymbol{\mu}_{01}+Q)\cdot\boldsymbol{\varepsilon}^{\lambda}_{k}(z)$ and $\xi = \frac{\omega_m\omega_k}{\tilde{\omega}_m\tilde{\omega}_k}\frac{\boldsymbol{\mu}_{01}\cdot\boldsymbol{\varepsilon}^{\lambda}_{k}(z)}{(\boldsymbol{\mu}_{01}+Q)\cdot\boldsymbol{\varepsilon}^{\lambda}_{k}(z)}\,\xi$ are the renormalized effective coupling strength and chirality factor. We can diagonalize Eq. (S35) by following the standard Hopfield [6,7] procedure, i.e., defining the polaritonic operator $\hat{\Pi} = x\hat{a} + y\hat{a}^{\dagger} + z\hat{B}_{\mathbf{k}=0} + u\hat{B}^{\dagger}_{\mathbf{k}=0}$ that fulfills the eigenvalue equation $[\hat{H},\hat{\Pi}] = \Omega\hat{\Pi}$ with the normalization condition $|x|^2 - |y|^2 + |z|^2 - |u|^2 = 1$. We obtain the polaritonic frequencies from
$$\begin{vmatrix} -\omega_k-\Omega & 0 & (1+\xi\lambda)i\sqrt{N}g & -(1-\xi\lambda)i\sqrt{N}g \\ 0 & \omega_k-\Omega & -(1-\xi\lambda)i\sqrt{N}g & (1+\xi\lambda)i\sqrt{N}g \\ -(1+\xi\lambda)i\sqrt{N}g & -(1-\xi\lambda)i\sqrt{N}g & -\tilde{\omega}_m-\Omega & 0 \\ -(1-\xi\lambda)i\sqrt{N}g & -(1+\xi\lambda)i\sqrt{N}g & 0 & \tilde{\omega}_m-\Omega \end{vmatrix} = 0 \quad (S36)$$
as real and positive solutions
$$\Omega_{\pm} = \frac{1}{\sqrt{2}}\sqrt{\omega^2_k + \tilde{\omega}^2_m + 8\xi\lambda N g^2 \pm \sqrt{\left[\omega^2_k - \tilde{\omega}^2_m\right]^2 + 16 N g^2(\omega_k + \tilde{\omega}_m\xi\lambda)(\omega_k\xi\lambda + \tilde{\omega}_m)}}. \quad (S37)$$
The corresponding eigenvectors encode in $|x|^2 - |y|^2$ the photonic contribution and in $|z|^2 - |u|^2$ the matter contribution to the polaritonic states.
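The determinant condition (S36) and the closed-form frequencies (S37) can be cross-checked against each other numerically by diagonalizing the 4x4 Hopfield matrix directly. A self-contained sketch (atomic units, illustrative parameters):

```python
import numpy as np

def hopfield_eigs(wk, wm, g, N, xi, lam):
    """Positive eigenfrequencies of the chiral Hopfield matrix (Eq. S36)."""
    s, G = xi * lam, np.sqrt(N) * g
    M = np.array([
        [-wk, 0, (1 + s) * 1j * G, -(1 - s) * 1j * G],
        [0, wk, -(1 - s) * 1j * G, (1 + s) * 1j * G],
        [-(1 + s) * 1j * G, -(1 - s) * 1j * G, -wm, 0],
        [-(1 - s) * 1j * G, -(1 + s) * 1j * G, 0, wm]])
    ev = np.linalg.eigvals(M)           # eigenvalues come in +-Omega pairs
    return np.sort(ev.real[ev.real > 0])

def closed_form(wk, wm, g, N, xi, lam):
    """Closed-form polariton frequencies Omega_pm (Eq. S37)."""
    s, G2 = xi * lam, N * g**2
    rad = np.sqrt((wk**2 - wm**2)**2 + 16 * G2 * (wk + wm * s) * (wk * s + wm))
    return np.sort([np.sqrt(0.5 * (wk**2 + wm**2 + 8 * s * G2 + sg * rad))
                    for sg in (-1, +1)])

# matched handedness at resonance: Omega_pm = w +- 2*sqrt(N)*g
assert np.allclose(hopfield_eigs(1.0, 1.0, 0.01, 100, +1, +1),
                   closed_form(1.0, 1.0, 0.01, 100, +1, +1), atol=1e-8)
# mismatched handedness: both polaritons degenerate at sqrt(w^2 - 4*N*g^2)
assert np.allclose(hopfield_eigs(1.0, 1.0, 0.01, 100, -1, +1),
                   closed_form(1.0, 1.0, 0.01, 100, -1, +1), atol=1e-8)
```

Here `wm` plays the role of the dressed matter frequency; the degenerate mismatched case makes the decoupling of the wrong enantiomer explicit.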
Generic alignment and its influence on chiral recognition Let us briefly examine the more general case where the electric and magnetic transition dipole moments are arbitrarily oriented. In this case, they are related by Eq. S22 with ↔ ξ = s ↔ U where s is real-valued and ↔ U is an orthogonal transformation. The orientation average is described by the energy-conserving squared coupling element: |g| 2 ∝ 1 4π 2π 0 dφ π 0 dθ sin(θ)| √ N ε · (1 + λ ↔ ξ )µ 01 | 2 (S38) proportional to the combined electric plus magnetic moments. Expanding the dot product | ε · (1 + λ ↔ ξ )µ 01 | 2 = cos 2 θ|(1 + λ ↔ ξ )µ 01 | 2 = cos 2 θ |µ 01 | 2 + ↔ ξ µ 01 | ↔ ξ µ 01 + 2λ µ 01 | ↔ ξ µ 01(S39) and utilizing ↔ ξ = s ↔ U , we obtain: |g| 2 = | ω k ω m 2ε 0 V ω m ω m ω k ω mωk √ N (1 + λ ↔ ξ ) · µ 01 | 2 1 2 π 0 dθ sin(θ) cos 2 (θ) = N 3 ω m ω 2 k 2ε 0 Vω k ω m |(1 + λ ↔ ξ ) · µ 01 | 2 = N 3 ω m ω 2 k 2ε 0 Vω k ω m (1 + s 2 )|µ 01 | 2 + 2λsRe µ 01 | ↔ U µ 01 .(S40) In addition to chiral features that arise from the parallel components of the transition dipole moments, the emitters now also feature Omega-type magneto-electric coupling originating from the orthogonal components of the dipole moments. However, as we can easily see from the angular average, only the chiral components are discriminated by the cavity. Eq. (S40) clarifies that the chiral cavity will only distinguish the chiral components of the emitters. Take for example pure chirality with λ = +1 and (anti-)alignment µ||m with ↔ ξ = ±1, then [(1 + s 2 )|µ 01 | 2 + 2λsRe µ 01 | ↔ U ξ µ 01 ] = (1 + 1 + 2(±1))|µ 01 | 2 , which is either 0 or 4|µ 01 | 2 . However, for Ω-coupling µ 01 | ↔ U µ 01 = 0 and we obtain always (1+s 2 )|µ 01 | 2 . The transition-dipole moments still contribute constructively to the chiral coupling but there is no handedness selectivity left. Influence of the self-magnetization Let us illustrate briefly how the self-magnetization can influence our conclusions. 
First of all, it should be noticed that the dressing of the photonic frequency via $\chi^m$ is a factor $1/c^2$ smaller than the self-polarization effect on the matter frequency. However, even if we choose enormously large values, the effect is small, as demonstrated in Fig. S3. Here,
$$\boldsymbol{\varepsilon}_{k\lambda}(\mathbf{r}) = \left(\tfrac{k_z}{k}\cos(k_z z),\,-\lambda\sin(k_z z),\,-i\tfrac{k_x}{k}\sin(k_z z)\right)^T e^{ik_x x}.$$
The projection of the matter degrees of freedom follows as before and the overall structure remains unchanged. As an example, we will derive the new Tavis-Cummings analogue; the Hopfield model can be obtained in analogy to the previous steps, but the many-mode coupling renders the process more verbose. As before, we perform the rotating-wave approximation. We introduce again a Fourier representation, $\hat{\sigma}^{+}_n = \frac{1}{\sqrt{N}}\sum_{\mathbf{K}} e^{-i\mathbf{K}\cdot\mathbf{r}_n}\hat{S}^{\dagger}_{\mathbf{K}}$, which implies a regular molecular distance along the x-axis, $\mathbf{r}_n = \mathbf{e}_x\frac{2\pi n}{N}$, and assume identical couplings and frequencies. As $\sum_n e^{ik_x x_n}e^{-iK_x x_n} = N\delta(k_x - K_x)$, the chiral Tavis-Cummings Hamiltonian simplifies to
$$\hat{H}^{\lambda}_{CTC} = \sum_{\mathbf{K}}\omega_m\hat{S}^{+}_{\mathbf{K}}\hat{S}^{-}_{\mathbf{K}} + \sum_{k_z>0,k_x}\omega_k(\hat{a}^{\dagger}_{\mathbf{k}\lambda}\hat{a}_{\mathbf{k}\lambda} + \tfrac{1}{2}) - \sum_{k_z>0,k_x}\left[i\sqrt{N}\sqrt{\frac{ck}{2\varepsilon_0 V}}\,\boldsymbol{\varepsilon}_{k\lambda}(z, x=0)(1+\lambda\overset{\leftrightarrow}{\xi})\cdot\boldsymbol{\mu}_{01}\,\hat{S}^{+}_{k_x}\hat{a}_{\mathbf{k}\lambda} - \mathrm{h.a.}\right]. \quad (S41)$$
Comparison with Eq. (S31) clarifies that the chiral effect, i.e., $(1+\lambda\overset{\leftrightarrow}{\xi})$, remains unchanged. An important difference is that for non-zero in-plane momentum, the coupling is not only to the mode $K_x = 0$ but also to higher momenta, similar to the standard Tavis-Cummings model as shown for example in Ref. [9].

CONSISTENCY OF THE CHIRAL HOPFIELD MODEL IN THE ULTRA-STRONG COUPLING DOMAIN WITH AB INITIO CALCULATIONS

The Hopfield model has been shown to provide excellent results for vibrational strong coupling [10] and intersubband transitions [7]. It is not obvious that the same qualitative accuracy can be expected for the electronic subspace in atomic/molecular structures. However, recent work by Riso et al. [11] utilized an adjusted version of QED coupled cluster to estimate the discriminating strength of chiral fields on single and few molecules.
They predict a $\sqrt{N}$ behavior of the discriminating strength in the correlated ground state (Fig. 5 of Ref. [11]) that is consistent with our observations shown in Fig. S5 if the coupling is deep within the ultra-strong coupling domain.

FIG. S5 (caption continued): ...dye molecules introduced in the paper (red-dashed). The green line following $6.5\cdot 10^{-8}\sqrt{N}$ serves as a guide to the eye. It is apparent that the large-$N$ limit, which is equivalent to increasing the fundamental coupling strength, is dominated by a $\sqrt{N}$ behavior that is consistent with the available literature. All parameters consistent with Fig. 3.

The overall trend of the polaritonic eigenvalues predicted by the Hopfield model is consistent with exact results for hydrogen, as illustrated in Fig. S6. The exact solution will naturally produce (avoided) crossings between higher excited states that are not included in our model but that do not influence the drawn conclusions.

Figure 2: (a) Polaritonic eigenvalues $\Omega_{\pm}$ of the chiral Hopfield model with a LH cavity mode ($\lambda = 1$) for $N = 100$ as a function of the cavity frequency $\omega_k$ and chiral factor $\xi$. The eigenvalues are calculated with typical values for optical transitions in dye molecules, $Q = \chi^{m}_{i,j} = z = 0$, and the fundamental coupling strength $1/\varepsilon_0 V = 0.001$ (a.u.). (b) Hopfield coefficients (light blue: photonic, red: matter) for the lower polariton for the same system as in panel (a).

Figure 3: Normalized difference $\delta\Omega_{\pm}$ between the polaritonic excitation energies of left- and right-handed chiral dye molecules inside a LH chiral cavity. We have chosen common values for dye molecules, the resonant condition $\omega_k = \omega_m$, and $1/\varepsilon_0 V = 0.001$ in atomic units for the chiral Hopfield model. For comparison, a molecular concentration of 1 mol/L is on the order of magnitude of $10^3$ molecules in the chosen volume.
The actual number of collectively coupled emitters under experimental conditions is usually unknown and is estimated based on simplified models, such as the one presented here.
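A rough numerical illustration of the collective discrimination (a toy estimate only: we take the ground-state energy as the half-sum of the two polariton frequencies of the two-mode model, Eq. (S37), in atomic units, not the correlated ab initio ground state of Ref. [11]):

```python
import numpy as np

def omega_pm(wk, wm, g, N, s):
    """Polariton frequencies of the chiral Hopfield model, Eq. (S37); s = xi*lambda."""
    G2 = N * g**2
    rad = np.sqrt((wk**2 - wm**2)**2 + 16 * G2 * (wk + wm * s) * (wk * s + wm))
    return (np.sqrt(0.5 * (wk**2 + wm**2 + 8 * s * G2 - rad)),
            np.sqrt(0.5 * (wk**2 + wm**2 + 8 * s * G2 + rad)))

def ground_state_discrimination(N, w=1.0, g=1e-3):
    """Magnitude of the zero-point-energy difference between matched (s=+1)
    and mismatched (s=-1) enantiomer ensembles in a resonant LH cavity."""
    e_plus = 0.5 * sum(omega_pm(w, w, g, N, +1))
    e_minus = 0.5 * sum(omega_pm(w, w, g, N, -1))
    return abs(e_plus - e_minus)

Ns = np.array([10, 100, 1000, 10000])
d = np.array([ground_state_discrimination(N) for N in Ns])
assert np.all(d > 0) and np.all(np.diff(d) > 0)  # grows with ensemble size
```

Within the stable domain ($4Ng^2 < \omega^2$) this toy discrimination is strictly positive and grows monotonically with $N$; its precise scaling law depends on the coupling regime.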
We now seek a tensorial expression that would relate the transition dipole moments of an anisotropic molecular emitter. Three Euler angles unambiguously describe the orientation of any rigid body (such as a molecule) in space. Similarly, a pair of non-collinear vectors $\boldsymbol{\mu}_{01}$ and $\mathbf{m}_{01}$ hard-wired to the molecule would work. But $\boldsymbol{\mu}_{01}$ and $\mathbf{m}_{01}$ are not independent themselves, and must satisfy a molecule-specific characteristic equation. First, let us fix the orientation of the electric dipole moment of the molecule, $\boldsymbol{\mu}_{01}$. This vector needs to be mapped into the magnetic transition dipole moment $\mathbf{m}_{01}$. Suppose this mapping is performed by a dyad $\overset{\leftrightarrow}{\xi}$, as illustrated in Fig. S2. This dyad cannot be an arbitrary linear transformation: the action of $\overset{\leftrightarrow}{\xi}$ on $\boldsymbol{\mu}_{01}$ must be invariant with respect to an arbitrary rotation of the molecule. Thus, $\overset{\leftrightarrow}{\xi}$ must be a composition of an orthogonal linear transformation $\overset{\leftrightarrow}{U}$ in $\mathbb{R}^3$ and scaling by a complex number $s$, $\overset{\leftrightarrow}{\xi} = s\overset{\leftrightarrow}{U}$.

FIG. S2. Illustration of the relationship between the electric and magnetic transition dipole moments in a generic molecular emitter. The orientation of the transition electric dipole moment $\boldsymbol{\mu}_{01}$ associated with the molecule can be described by two angles $\theta$, $\varphi$ in a local spherical coordinate system. $\overset{\leftrightarrow}{\xi}$ describes a unitary mapping $\boldsymbol{\mu}\to\mathbf{m}$.
For a given orientation of the electric dipole moment $\boldsymbol{\mu}_{01}$, all allowed positions of $\mathbf{m}_{01}$ occupy a circle denoted by the shaded area. This additional mapping is accomplished by the rotation of the molecule around $\boldsymbol{\mu}_{01}$, which is described by $R_{\mu}$.

FIG. S3. Left: including a quite large self-magnetization of $\chi^m = \mu^2$. Right: using a vastly enhanced self-magnetization of $\chi^m = c^2\mu^2$; so even if we compensate the $1/c^2$ factor, the effect of the self-magnetization is small. Interestingly, the large self-magnetization can even enhance the recognition capabilities in the lower polariton.

FIG. S5. $N$-scaling differences in the correlated ground state between left- and right-handed chiral dye molecules.

FIG. S6. Eigenvalues of two-dimensional soft-Coulomb hydrogen coupled in electric dipole approximation to a single cavity mode in resonance with the first matter excitation. The exact solution uses a grid representation with $151\times 151$ grid points for hydrogen and 30 Fock states for the cavity mode. We show the first 5 correlated eigenstates. In total, this amounts to a Hilbert space with 684030 states. Our simplified Hopfield model provides a similar trend (as long as the magnetic components do not dominate the coupling) and is therefore qualitatively consistent with exact solutions.

For the chiral Tavis-Cummings model we disregard all self-correction terms as well as quadrupole contributions to obtain
$$\hat{H}^{\lambda}_{CTC} = \sum_n \omega_m\hat{\sigma}^{+}_n\hat{\sigma}^{-}_n + \sum_{k_z>0,k_x}\omega_k(\hat{a}^{\dagger}_{\mathbf{k}\lambda}\hat{a}_{\mathbf{k}\lambda} + \tfrac{1}{2}) - \sum_{n,k_z>0,k_x}\left[i\sqrt{\frac{ck}{2\varepsilon_0 V}}\,\boldsymbol{\varepsilon}_{k\lambda}(\mathbf{r})(1+\lambda\overset{\leftrightarrow}{\xi}_n)\cdot\boldsymbol{\mu}^{n}_{01}\,\hat{\sigma}^{+}_n\hat{a}_{\mathbf{k}\lambda} - \mathrm{h.a.}\right],$$
and the combination $\sum^{3}_{i,j=1}\chi^{m}_{n,ij}\,\varepsilon^{\lambda}_{k,i}(z)\,\varepsilon^{\lambda}_{k,j}(z)/(c^2\varepsilon_0 V)$ characterizes the effective photonic frequency $\tilde{\omega}_k$.
The self-magnetization term mediated via $\hat{\chi}^{m}_{n,ij}$ represents our first obstacle since it combines operators to cubic order, thus going beyond the otherwise quadratic form. In combination with the self-polarization term, the self-magnetization ensures gauge invariance and the stability of the combined system, which renders it essential for any future developments of ab initio cavity QED. As we strive for a simple analytical solution, we will assume a parametric dependence $\hat{\chi}^{m}_{n,ij}\approx\chi^{m}_{n,ij}$ such that
$$\tilde{\omega}^{2}_{k} = \omega^{2}_{k}\left[1 + 2\sum_n\sum^{3}_{i,j=1}\chi^{m}_{n,ij}\,\varepsilon^{\lambda}_{k,i}(z)\,\varepsilon^{\lambda}_{k,j}(z)/(c^2\varepsilon_0 V)\right].$$
This correction is inherently small ($\propto 1/c^2$).

Extension to modes with non-zero in-plane momentum $k_x \neq 0$

Our previous derivations used the simplifying assumption of a cavity mode represented by a standing wave with $\mathbf{k} = \pm k\mathbf{e}_z$, which leads to compact and highly intuitive equations. A more generic description might allow for non-zero in-plane momentum $k_x \neq 0$, resulting in mixing of propagating waves and dark states. We will provide in the following a brief discussion of what such an extension would look like and what changes are to be expected. We would like to emphasize that such an extended model would go more naturally with a many-mode description and present a straightforward generalization of our work. A planar optical cavity, such as the one described in Ref. [8], supports a continuous spectrum of resonant states that can be labeled by their in-plane momenta $\mathbf{k}_{\parallel}$. More sophisticated cavities, such as micro-domes, support more complex modes with in-plane contributions and non-Gaussian spot distributions, but we remain here with the simplified Fabry-Pérot setup. A minimal representation for the field of such cavity modes is given in Section "Fields of a standing chiral wave".
Cavity modes with $\mathbf{k}_{\parallel}\neq 0$ do maintain their single-handedness quality in a substantial range of in-plane wave vectors (incident angles) according to the findings of Ref. [8], and thus are expected to feature similar energy spectra when coupled with chiral molecular emitters. As before, we can expand the fields in their eigenmodes. We again enforce the chiral standing wave in the z-direction by $\hat{\mathbf{D}}^{\lambda}_{\perp}(\mathbf{r}) = \frac{1}{\sqrt{2}}(\hat{\mathbf{D}}^{\lambda}_{\perp,k_z>0}(\mathbf{r}) + \hat{\mathbf{D}}^{\lambda}_{\perp,k_z<0}(\mathbf{r}))$, which results, with $\hat{a}_{k_x,k_z>0,\lambda}=\hat{a}_{k_x,k_z<0,\lambda}$, in the mode functions $\boldsymbol{\varepsilon}_{k\lambda}(\mathbf{r})$ given above.

CHIRAL HOPFIELD MODEL UNDER THE ASSUMPTION OF CANCELLING INSTANTANEOUS INTERMOLECULAR CONTRIBUTIONS

We provide here a derivation of the chiral Hopfield model under the assumption that the instantaneous intermolecular interactions cancel. Using the explicit form of the chiral fields, our starting point then has the same structure as Eq. (S32), where the only difference is that the self-polarization is local only. We can follow the same steps as before and absorb the self-polarization into adjusted local matter frequencies, $\tilde{\omega}^{2}_{n} = \omega^{2}_{n} + \frac{2\omega_n}{\varepsilon_0 V}(\boldsymbol{\varepsilon}^{\lambda}_{k}(z)\cdot\boldsymbol{\mu}^{n}_{10})^2$ (notice the missing $N$), where as before $g_n = \sqrt{\frac{\omega_k\omega_n}{2\varepsilon_0 V\tilde{\omega}_n}}\,(\boldsymbol{\mu}^{n}_{10}+Q_n)\cdot\boldsymbol{\varepsilon}^{\lambda}_{k}(z)$ and $\xi_n = \frac{\omega_n\omega_k}{\tilde{\omega}_n\tilde{\omega}_k}\frac{\boldsymbol{\mu}^{n}_{10}\cdot\boldsymbol{\varepsilon}^{\lambda}_{k}(z)}{(\boldsymbol{\mu}^{n}_{10}+Q_n)\cdot\boldsymbol{\varepsilon}^{\lambda}_{k}(z)}\,\xi$ are the renormalized effective coupling strength and chirality factor. Introducing the Fourier representation as before, importantly, also the dark states are now dressed by the self-polarization, and $\tilde{\omega}_m$ does not depend on the number of molecules $N$. The analytic solution has the same form but deviates in $\tilde{\omega}_m$, which results in instabilities for large $N$.
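The collective bright/dark-state constructions used throughout (e.g. Eq. (S33) and the $K_x$-selection leading to Eq. (S41)) rest on the discrete orthogonality $\sum_n e^{i(k_x-K_x)x_n} = N\delta_{k_x,K_x}$ on a regular chain; a quick numerical check:

```python
import numpy as np

# discrete orthogonality on a regular chain x_n = 2*pi*n/N:
# sum_n exp(i*(k - K)*x_n) = N if k == K (mod N), else 0
N = 64
n = np.arange(N)
x = 2 * np.pi * n / N
for k in (0, 1, 5):
    for K in (0, 1, 5):
        s = np.sum(np.exp(1j * (k - K) * x))
        expected = N if k == K else 0.0
        assert abs(s - expected) < 1e-9
```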
Spin-preserving chiral photonic crystal mirror. B Semnani, J Flannery, R Al Maruf, M Bajcsy, Light: Science and Applications. 202023Semnani, B.; Flannery, J.; Al Maruf, R.; Bajcsy, M. Spin-preserving chiral photonic crystal mirror. Light: Science and Appli- cations 2020, 9, 23. Objects of maximum electromagnetic chirality. I Fernandez-Corbaton, M Fruhnert, C Rockstuhl, Physical Review X. 631013Fernandez-Corbaton, I.; Fruhnert, M.; Rockstuhl, C. Objects of maximum elec- tromagnetic chirality. Physical Review X 2016, 6, 031013. . C Schäfer, M Ruggenthaler, V Rokaj, A Rubio, ACS Photonics. 7975C. Schäfer, M. Ruggenthaler, V. Rokaj, and A. Rubio, ACS Photonics 7, 975 (2020). . V Rokaj, D M Welakuh, M Ruggenthaler, A Rubio, J. Phys. B. 5134005V. Rokaj, D. M. Welakuh, M. Ruggenthaler, and A. Rubio, J. Phys. B 51, 034005 (2018). . E U Condon, Reviews of Modern Physics. 9432E. U. Condon, Reviews of Modern Physics 9, 432 (1937). Helicity and duality symmetry in light matter interactions: Theory and applications. I F Corbaton, Macquarie University, Faculty of Science and EngineeringPh.D. thesisI. F. Corbaton, Helicity and duality symmetry in light matter interactions: Theory and appli- cations, Ph.D. thesis, Macquarie University, Faculty of Science and Engineering (2014). . C Caloz, A Alu, S Tretyakov, D Sounas, K Achouri, Z.-L Deck-Léger, Physical Review Applied. 1047001C. Caloz, A. Alu, S. Tretyakov, D. Sounas, K. Achouri, and Z.-L. Deck-Léger, Physical Review Applied 10, 047001 (2018). . J J Hopfield, Phys. Rev. 1121555J. J. Hopfield, Phys. Rev. 112, 1555 (1958). . Y Todorov, C Sirtori, Phys. Rev. B. 8545304Y. Todorov and C. Sirtori, Phys. Rev. B 85, 045304 (2012). . K Voronin, A S Taradin, M V Gorkunov, D G Baranov, ACS Photonics. 92652K. Voronin, A. S. Taradin, M. V. Gorkunov, and D. G. Baranov, ACS Photonics 9, 2652 (2022). . R H Tichauer, J Feist, G Groenhof, The Journal of Chemical Physics. 154104112R. H. Tichauer, J. Feist, and G. 
Groenhof, The Journal of Chemical Physics 154, 104112 (2021). . J George, T Chervy, A Shalabney, E Devaux, H Hiura, C Genet, T W Ebbesen, Phys. Rev. Lett. 117153601J. George, T. Chervy, A. Shalabney, E. Devaux, H. Hiura, C. Genet, and T. W. Ebbesen, Phys. Rev. Lett. 117, 153601 (2016). . R R Riso, L Grazioli, E Ronca, T Giovannini, H Koch, arXiv:2209.01987arXiv preprintR. R. Riso, L. Grazioli, E. Ronca, T. Giovannini, and H. Koch, arXiv preprint arXiv:2209.01987 (2022).
[]
[ "A Decade of Code Comment Quality Assessment: A Systematic Literature Review", "A Decade of Code Comment Quality Assessment: A Systematic Literature Review" ]
[ "Pooja Rani \nSoftware Composition Group\nUniversity of Bern\nBernSwitzerland\n", "Arianna Blasi \nUniversità della Svizzera italiana\nLuganoSwitzerland\n", "Nataliia Stulova \nSoftware Composition Group\nUniversity of Bern\nBernSwitzerland\n", "Sebastiano Panichella \nZurich University of Applied Science\nZurichSwitzerland\n", "Alessandra Gorla \nIMDEA Software Institute\nMadridSpain\n", "Oscar Nierstrasz \nSoftware Composition Group\nUniversity of Bern\nBernSwitzerland\n" ]
[ "Software Composition Group\nUniversity of Bern\nBernSwitzerland", "Università della Svizzera italiana\nLuganoSwitzerland", "Software Composition Group\nUniversity of Bern\nBernSwitzerland", "Zurich University of Applied Science\nZurichSwitzerland", "IMDEA Software Institute\nMadridSpain", "Software Composition Group\nUniversity of Bern\nBernSwitzerland" ]
[]
Code comments are important artifacts in software systems and play a paramount role in many software engineering (SE) tasks related to maintenance and program comprehension. However, while it is widely accepted that high quality matters in code comments just as it matters in source code, assessing comment quality in practice is still an open problem. First and foremost, there is no unique definition of quality when it comes to evaluating code comments. The few existing studies on this topic rather focus on specific attributes of quality that can be easily quantified and measured. Existing techniques and corresponding tools may also focus on comments bound to a specific programming language, and may only deal with comments with specific scopes and clear goals (e.g., Javadoc comments at the method level, or in-body comments describing TODOs to be addressed).In this paper, we present a Systematic Literature Review (SLR) of the last decade of research in SE to answer the following research questions: (i) What types of comments do researchers focus on when assessing comment quality? (ii) What quality attributes (QAs) do they consider? (iii) Which tools and techniques do they use to assess comment quality?, and (iv) How do they evaluate their studies on comment quality assessment in general?Our evaluation, based on the analysis of 2353 papers and the actual review of 47 relevant ones, shows that (i) most studies and techniques focus on comments in Java code, thus may not be generalizable to other languages, and (ii) the analyzed studies focus on four main QAs of a total of 21 QAs identified in the literature, with a clear predominance of checking consistency between comments and the code. We observe that researchers rely on manual assessment and specific heuristics rather than the automated assessment of the comment quality attributes, with evaluations often involving surveys of students and the authors of the original studies but rarely professional developers.
10.1016/j.jss.2022.111515
[ "https://export.arxiv.org/pdf/2209.08165v1.pdf" ]
252,367,416
2209.08165
3c0d063d8f95788cccc363aec8fa1cde0dd97ddb
A Decade of Code Comment Quality Assessment: A Systematic Literature Review

Pooja Rani (Software Composition Group, University of Bern, Bern, Switzerland), Arianna Blasi (Università della Svizzera italiana, Lugano, Switzerland), Nataliia Stulova (Software Composition Group, University of Bern, Bern, Switzerland), Sebastiano Panichella (Zurich University of Applied Science, Zurich, Switzerland), Alessandra Gorla (IMDEA Software Institute, Madrid, Spain), Oscar Nierstrasz (Software Composition Group, University of Bern, Bern, Switzerland)

Replication package: 10.5281/zenodo.4729054. Keywords: code comments, documentation quality, systematic literature review

Code comments are important artifacts in software systems and play a paramount role in many software engineering (SE) tasks related to maintenance and program comprehension. However, while it is widely accepted that high quality matters in code comments just as it matters in source code, assessing comment quality in practice is still an open problem. First and foremost, there is no unique definition of quality when it comes to evaluating code comments. The few existing studies on this topic rather focus on specific attributes of quality that can be easily quantified and measured. Existing techniques and corresponding tools may also focus on comments bound to a specific programming language, and may only deal with comments with specific scopes and clear goals (e.g., Javadoc comments at the method level, or in-body comments describing TODOs to be addressed). In this paper, we present a Systematic Literature Review (SLR) of the last decade of research in SE to answer the following research questions: (i) What types of comments do researchers focus on when assessing comment quality? (ii) What quality attributes (QAs) do they consider?
(iii) Which tools and techniques do they use to assess comment quality? and (iv) How do they evaluate their studies on comment quality assessment in general? Our evaluation, based on the analysis of 2353 papers and the actual review of 47 relevant ones, shows that (i) most studies and techniques focus on comments in Java code, and thus may not be generalizable to other languages, and (ii) the analyzed studies focus on four main QAs of a total of 21 QAs identified in the literature, with a clear predominance of checking consistency between comments and the code. We observe that researchers rely on manual assessment and specific heuristics rather than the automated assessment of the comment quality attributes, with evaluations often involving surveys of students and the authors of the original studies but rarely professional developers. Therefore, writing high-quality comments and maintaining them in projects is a responsibility mostly left to developers [8,9]. The problem of assessing the quality of code comments has gained a lot of attention from researchers during the last decade [10,11,12,13,14]. Despite the research community's interest in this topic, there is no clear agreement on what quality means when referring to code comments. Having a general definition of quality when referring to code comments is hard, as comments are diverse in purpose and scope. Problem Statement. Maintaining high-quality code comments is vital for software evolution activities; however, assessing the overall quality of comments is not a trivial problem. As developers use various programming languages, adopt project-specific conventions to write comments, embed different kinds of information in a semi-structured or unstructured form [15,13], and lack quality assessment tools for comments, ensuring comment quality in practice is a complex task.
Even though specific comments follow all language-specific guidelines in terms of syntax, it is still challenging to determine automatically whether they satisfy other quality aspects, such as whether they are consistent or complete with respect to the code or not [16]. There are various such aspects, e.g., readability, content relevance, and correctness that should be considered when assessing comments, but tools do not support all of them. Therefore, a comprehensive study of the specific attributes that influence code comment quality and techniques proposed to assess them is essential for further improving comment quality tools. Previous mapping and literature review studies have collected numerous quality attributes (QAs) that are used to assess the quality of software documentation based on their importance and effect on the documentation quality. Ding et al. [17] focused specifically on software architecture and requirement documents, while Zhi et al. [18] analyzed code comments along with other types of documentation, such as requirement and design documents. They identified 16 QAs that influence the quality of software documentation. However, the identified QAs are extracted from a body of literature concerning relatively old studies (i.e., studies conducted prior to the year 2011) and are limited in the context of code comments. For instance, only 10% of the studies considered by Zhi et al. concern code comments. Given the increasing attention that researchers pay to comment quality assessment, it is essential to know which QAs, tools and techniques they propose to assess code comment quality. To achieve this objective, we perform an SLR on studies published in the last decade, i.e., 2011-2020. We review 2353 studies and find 47 to be relevant to assessing comment quality. From these we extract the programming language, the types of analyzed comments, QAs for comments, techniques to measure them, and the preferred evaluation type to validate their results. 
We observe that (i) most of the studies and techniques focus on comments in Java code, (ii) many techniques that are used to assess QAs are based on heuristics and thus may not be generalizable to other languages, (iii) a total of 21 QAs are used across studies, with a clear dominance of consistency, completeness, accuracy, and readability, and (iv) several QAs are often assessed manually rather than with the automated approaches. We find that the studies are typically evaluated by measuring performance metrics and surveying students rather than by performing validations with practitioners. This shows that there is much room for improvement in the state of the art of comment quality assessment. The contributions of this paper are: i) an SLR of a total of 2353 papers, of which we review the 47 most relevant ones, focusing on QAs mentioned and research solutions proposed to assess code comment quality, ii) a catalog of 21 QAs of which four QAs are often investigated, while the majority is rarely considered in the studies, and of which 10 are new with respect to the previous study by Zhi et al. [18], iii) a catalog of methods used to measure these 21 QAs in research studies, iv) an overview of the approaches and tools proposed to assess comment quality, taking into account the types of comments and the programming languages they consider, v) a discussion of the challenges and limitations of approaches and tools proposed to assess different and complementary comment QAs, and vi) a publicly available dataset including all validated data, and steps to reproduce the study in the replication package. 1 Paper structure. The rest of the paper is organized as follows. In section 2 we highlight our motivation and rationale behind each research question, and we present our methodology, including the different steps performed to answer our research questions. In section 3 we report the study results. 
We discuss the results in section 4 and their implications and future directions in section 5. We highlight the possible threats to validity for our study in section 6. Then section 7 summarizes the related work, in relation to the formulated research questions. Finally, section 8 concludes our study, outlining future directions. Study Design The main objective of our study is to present an overview of the state of the art in assessing the quality of code comments. Specifically, we aim to highlight the QAs mentioned in the literature, and the techniques used so far to assess comment quality. To this end, we carry out an SLR, following the widely accepted guidelines of Kitchenham et al. [19] and Keele [20]. The first step in this direction is to specify the research questions related to the topic of interest [19]. The following steps focus on finding a set of relevant studies that are related to the research questions based on an unbiased search strategy. Research questions Our goal is to foster research that aims at building code comment assessment tools. To achieve this goal, we conduct an SLR, investigating the literature of the last decade to identify comment-related QAs and solutions that address related challenges (replication package: https://doi.org/10.5281/zenodo.4729054). We formulate the following research questions: • RQ 1 : What types of comments do researchers focus on when assessing comment quality? Motivation: Comments are typically placed at the beginning of a file, usually to report licensing or author information, or placed preceding a class or function to document the overview of a class or function and its implementation details. Depending on the specific type of comment used in source code and the specific programming language, researchers may use different techniques to assess them. These techniques may not be generalizable to other languages.
For example, studies analyzing class comments in object-oriented programming languages may need extra effort to generalize the comment assessment approach to functional programming languages. We, therefore, investigate the comment types researchers target. • RQ 2 : What QAs do researchers consider in assessing comment quality? Motivation: QAs may solely concern syntactic aspects of comments (e.g., comment syntax), writing style (e.g., grammar), or content aspects (e.g., consistency with the code). Researchers may use different terminology for the same QA, and thus these terms must be mapped across studies to obtain a unifying view of them; for instance, whether the accuracy QA is defined consistently across studies or another terminology is used for it. We collect all the possible QAs that researchers refer to and map them, if necessary, following the methodology of Zhi et al. Future studies that aim to improve specific aspects of comment quality evaluation can use this information to design their tools and techniques. • RQ 3 : Which tools and techniques do researchers use to assess comment QAs? Motivation: Researchers may assess QAs manually, or may use sophisticated tools and techniques based on simple heuristics or complex machine learning (ML) to assess them automatically. We aim to identify whether there are clear winning techniques in this domain and to collect the various metrics and tools used for this purpose. • RQ 4 : What kinds of contribution do studies often make? Motivation: Engineering researchers usually motivate their research based on the utility of their results. Auyang clarifies that engineering aims to apply scientific methods to real-world problems [21]. However, software engineering currently lacks validation [22]. With this question, we want to understand what types of solutions researchers contribute to improving automatic comment quality assessment, such as metrics, methods, or tools.
This RQ can provide insight into specific kinds of solutions for future work. • RQ 5 : How do researchers evaluate their comment quality assessment studies? Motivation: Researchers may evaluate their comment assessment approaches, e.g., by surveying developers, or by using a dataset of case studies. However, how often they involve professional developers and industries in such studies is unknown. Search Strategy After formulating the research questions, the next steps focus on finding relevant studies that are related to the research questions. In these steps, we collect the main keywords. According to the definition, we identify the keywords comment, documentation, and specification and add them to the set K 1 . We further add frequently mentioned comment-related keywords, such as API, annotation, and summar, to the set K 1 . The interventions include terms that are related to software methodology, tools, technology, or procedures. With respect to quality assessment, we define the intervention keywords to be quality, assess, metric, measure, score, analy, practice, structur, study, or studied and add them to the set K 2 . Note that we add common variations of the words manually; for example, we add the "summar" keyword to the set to cover both "summary" and "summarization". We do not use any NLP libraries to stem words for two main reasons: (i) to reduce noisy matches, and (ii) the words from the title and abstract of the papers are not preprocessed (stemmed or lemmatized), so stemming the keywords might not find the exact or prefix matches. For example, using the Porter stemming approach, the word "study" would be stemmed to "studi" and we might miss papers containing the word "study". To avoid such cases, we add common variations of this word, study and studied, to our search keywords. The outcomes include terms that are related to factors of significance to developers (e.g., reduced cost, reduced time to assess quality).
Since it is not a required unit to restrict the search scope, and our focus is on all kinds of quality assessment approaches, we exclude the outcomes from our search keywords.

[Figure 1: SLR stages to collect relevant papers]

Hence, using the final set of keywords (also given in Table 1), we select a paper if its title and abstract match the keywords from K 1 and K 2 but not from K 3 , where a prefix function is used to match the keywords in the paper. Timeline We focus our SLR on the last decade (i.e., January 2011-December 2020), since Zhi et al. investigated the works on software documentation quality, including code comments, from 1971 to 2011 [18]. Our results can thus be used to observe the evolution of comment quality assessment, but, more importantly, they naturally complement the existing body of knowledge on the topic. We then proceed to the main steps, i.e., retrieving the paper data, selecting venues, and identifying the relevant papers for our comment context. Data collection Concretely, our data collection approach comprises three main steps, i.e., literature data collection, data selection, and data evaluation, which we sketch in Figure 1 and present in further detail as follows. We now describe how we automatically collect the data from the literature, explaining the rationale behind our selection of venues and our automatic keyword-based filtering to identify the likely relevant papers regarding comment quality assessment. We justify the need for another step of data gathering based on the snowball approach in Section 2. Data Retrieval We retrieve in step 2 the proceedings from January 2011 to December 2020 of the selected venues from the DBLP digital library. From each paper, we collect its metadata using the GitHub repository 5 , such as the title, authors, conference track (if present), page length, and Digital Object Identifier (DOI), directly from DBLP, for a total of 17554 publications.
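The selection rule stated in the text (keep a paper whose title and abstract match keywords from K 1 and K 2 but not from K 3, using prefix matching) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the keyword sets are abbreviated and the example records are invented.

```python
# Sketch of the keyword-based filter: a paper is kept if its title/abstract
# contain a word starting with a K1 keyword AND a word starting with a K2
# keyword, and no word starting with a K3 (exclusion) keyword.
K1 = {"comment", "documentation", "specification", "api", "annotation", "summar"}
K2 = {"quality", "assess", "metric", "measure", "analy", "study", "studied"}
K3 = set()  # exclusion keywords; empty in this sketch

def matches(text, keywords):
    words = text.lower().split()
    return any(w.startswith(k) for w in words for k in keywords)

def keep(title, abstract):
    text = title + " " + abstract
    return matches(text, K1) and matches(text, K2) and not matches(text, K3)

# A paper on code summarization quality is kept ("summar" + "quality")...
print(keep("A Human Study of Comprehension and Code Summarization",
           "We study the quality of generated summaries."))   # True
# ...while one matching only K1 keywords is filtered out.
print(keep("aComment: mining annotations from comments",
           "We detect interrupt related concurrency bugs."))  # False
```

Prefix matching (`startswith`) is what lets the truncated keywords such as "summar" and "analy" cover their inflected forms, mirroring the paper's choice of manual prefixes over automatic stemming.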
For each paper, the DOI is resolved and its abstract is collected from the publisher webpage. Keyword-based filtering. We apply in step 3 a keyword-based search (given in subsubsection 2.2.1) using a prefix function to the retrieved proceedings to select potentially relevant papers. We account for possible upper- and lowercase letters in the keywords, and sometimes use variations of keywords (e.g., singular and plural forms). Our filtering keeps papers (whose title and abstract include keywords from K 1 and K 2 but not from K 3 ) that explicitly mention concepts we are interested in; e.g., "A Human Study of Comprehension and Code Summarization" from ICPC 2020 [24] is matched by the keyword summar from K 1 in the title and quality from K 2 in the abstract. It excludes papers not sufficiently close to our research subject; e.g., "aComment: mining annotations from comments and code to detect interrupt related concurrency bugs" from ICSE 2011 has two keywords, comment and annotation, from K 1 but none from K 2 . The final set of keywords we use for filtering is the result of an iterative approach: we manually scan the full venue proceedings metadata to make sure the set of keywords does not prune relevant papers, and we refine the set of keywords during several iterative discussions. This iterative approach gives us confidence that our keyword-based filtering does not lead to false negatives for the selected venues. After applying the keyword-based filtering, we identify 2043 studies as potentially relevant papers from a total of 17554, which we review manually.

5 https://github.com/sbaltes/dblp-retriever

Data selection We analyze in step 4 the 2043 selected papers following a protocol where four authors (evaluators) manually evaluate the papers based on the inclusion and exclusion criteria below, to ensure that they indeed assess comment quality.

Inclusion criteria
I1 The topic of the paper is about code comment quality.
I2 The study presents a model/technique/approach to assess code comments or software documentation including code comments.

Exclusion criteria
E1 The paper is not in English.
E2 It does not assess any form of quality aspects of comments, e.g., content, style, or language used.
E3 It is not published in a technical track.
E4 It is a survey paper.
E5 It is not a peer-reviewed paper, or it is a pre-print.
E6 It covers other documentation artifacts, i.e., not comments.
E7 It is shorter than 5 pages.

Manual analysis. The selected papers were equally divided among four evaluators (i.e., two Ph.D. candidates and two faculty members) based on years of publication, so that each evaluator gets papers from all venues; e.g., the first author evaluates the proceedings from 2011 to 2013. We make sure that evaluators do not take decisions on papers they co-authored, to avoid conflicts of interest. Each evaluator has at least two years of experience in the domain of comment analysis. Each paper is reviewed by three evaluators. The evaluators follow a three-iteration process to evaluate the assigned papers. In the first iteration, the first evaluator independently assesses the relevance of a paper based on the criteria by inspecting each paper's title and abstract, to make an initial guess, and then inspecting its conclusion to reach the final decision. In the next iteration, another evaluator reviews the paper and validates the previous decision by adding the label "agrees/disagrees with the first evaluator". With this process, every publication selected in the final set is reviewed by at least two researchers. In case they do not agree, the third evaluator reviews it [25], and the final decision is taken based on a majority voting mechanism. We decide, for instance, to include the study by Hata et al. [26], even though it only talks about links in comments.
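The three-iteration review protocol reduces to a simple decision rule: a second evaluator validates the first verdict, and a third evaluator breaks ties by majority. A minimal sketch (the function name and votes are illustrative, not from the original study):

```python
# Sketch of the inclusion decision: the first evaluator proposes a verdict,
# the second agrees or disagrees, and on disagreement a third evaluator's
# vote decides by majority.
def include_paper(first, second, third=None):
    """Each argument is True (include) or False (exclude)."""
    if first == second:
        return first                      # two evaluators agree: done
    if third is None:
        raise ValueError("tie: a third evaluator's vote is required")
    votes = [first, second, third]
    return votes.count(True) > votes.count(False)  # majority of three

print(include_paper(True, True))          # agreement -> True (included)
print(include_paper(True, False, False))  # majority excludes -> False
```

The rule guarantees that every paper in the final set carries at least two concordant votes, matching the protocol described in the text.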
Though the study by Hata et al. does not explicitly describe any quality aspect of comments, it mentions the traceability of the links, which is a QA we consider in our study. All studies considered in our SLR, together with their evaluation (the agreement and disagreement for each study), are available in our replication package. The column Total reports the total number of references and citations collected. The Unique column reports the total number of unique items (since relevant papers cover similar topics, many references and citations are shared across our set of studies). Finally, the column Selected reports the total number of unique references and citations whose publication year falls within our time frame, i.e., 2011-2020. Data selection from snowballing. We repeat in step 7 the same keyword-based filtering on these 3704 papers, as described in subsubsection 2.2.3. As a result, 311 papers were added for manual analysis. We repeat in step 8 the three-iteration-based manual analysis process and find 39 additional candidate papers to analyze. After the second round of discussion in step 9, we keep 17 additional relevant papers. We find a total of 47 papers, shown in Table 3, published in the venues shown in Table 2. In Table 3, the column Study ID indicates the ID assigned to each paper, the column Title presents the title of the paper, and the column Year indicates the year in which the paper was published. To further ensure the relevance of our search strategy, we search our keywords on popular publication databases, such as ACM, IEEE Xplore, and Wiley. We search for our keywords in titles and abstracts. 6 We retrieve 13 144 results from IEEE Xplore and 10 567 from ACM for the same timeline (2011-2020). We inspect the first 200 results (sorted by the relevance criterion on the publisher webpage) from each of these databases. We apply our inclusion and exclusion criteria to find the extent to which our venue selection criteria might have missed relevant papers.
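The snowballing step amounts to pooling the references and citations of the relevant papers, de-duplicating items shared across papers, and keeping only publications in the 2011-2020 window before re-applying the keyword filter. A minimal sketch with invented records (field names and DOIs are illustrative):

```python
# Sketch of the snowballing candidate selection: pool references/citations
# of the relevant papers, drop duplicates shared across papers, and keep
# only publications from the 2011-2020 time frame.
def snowball_candidates(papers, start=2011, end=2020):
    pooled = {}
    for paper in papers:
        for item in paper["references"] + paper["citations"]:
            pooled[item["doi"]] = item       # DOI as key removes duplicates
    return [p for p in pooled.values() if start <= p["year"] <= end]

papers = [
    {"references": [{"doi": "10.1/a", "year": 2015}],
     "citations":  [{"doi": "10.1/b", "year": 2009}]},   # outside time frame
    {"references": [{"doi": "10.1/a", "year": 2015}],    # shared reference
     "citations":  [{"doi": "10.1/c", "year": 2019}]},
]
selected = snowball_candidates(papers)
print(sorted(p["doi"] for p in selected))  # ['10.1/a', '10.1/c']
```

This mirrors the Total/Unique/Selected columns described in the text: the pooled dictionary implements Unique, and the year filter yields Selected.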
Our results from ACM show that 19% of these papers are already covered by our search strategy, but only 5% of them fulfilled our inclusion criteria. Nearly 81% of the papers are excluded due to their non-SE venue. Among these papers, 80% are unrelated to the code comment quality aspect, while 1% of papers (two papers) that are related to code comments are missed due to two main reasons: (i) the venue not being indexed in CORE2020, and (ii) the paper being from a non-technical track. Similarly, the results from IEEE show that 30% of the papers are already covered by our search strategy, but only 5% of them fulfilled the inclusion criteria. Nearly 69% of the papers are excluded due to their non-SE venue or being unrelated to the code comment quality aspect. We also find 1% of papers that are relevant to our topic of interest but excluded due to the length criterion; specifically, one of the papers is a poster paper and another is a short paper.

6 It is not possible to search the keywords in abstracts in Wiley.

Data Evaluation We work in step 10 on the full versions of the 47 relevant papers to identify the QAs and the approaches to assess comments. In case we cannot retrieve the full PDF version of a paper, we use university resources to access it. This affects only one paper, by Sun et al., which requires payment to access the full version [41]. In case we cannot access a paper via any resource, we remove it from our list. We find no such inaccessible study. We report all papers in an online shared spreadsheet on Google Drive to facilitate their collaborative analysis. For each paper we extract common metadata, namely Publication year, Venue, Title, Authors, Authors' country, and Authors' affiliation. We then extract various dimensions (described in the following paragraphs) formulated to answer all research questions.
Data extraction for research questions To answer RQ1 (What types of comments do researchers focus on when assessing comment quality?), we record the Comment scope dimension. It lists the scope of comments under assessment, such as class, API, method (function), package, license, or inline comments. In case the comment type is not mentioned, we classify it as "code comments". Additionally, we identify the programming languages whose comments are analyzed, and record this in the Language analyzed dimension. To answer RQ2 (What QAs do researchers consider in assessing comment quality?), we identify the various QAs researchers mention to assess comment quality. This reflects the quality aspects researchers perceive as important for high-quality comments. Table 4 lists the QAs in the Quality attribute (QA) column and a brief summary of each in the Description column. Of these QAs, several are mentioned by only a few studies, for example (excerpt from Table 4):

Maintainability: the extent to which comments are maintainable (S15-S17, S20-S21)
Understandability: the extent to which comments contribute to understanding the system (S19, S23)
Usability (synonym: Usefulness): the extent to which the comment can be used by readers to achieve their objectives (S02, S16, S34, S35)

To capture this information, we formulate the following dimensions: • Research type. This identifies the nature of the research approach used in the studies, such as empirical, validation, evaluation, solution proposal, philosophical, opinion, or experience paper [69,70]. The dimension values are described in detail in Table 5. • Paper contribution. This dimension describes the type of contribution the study provides in terms of a method/technique, tool, process, model, metric, survey, or empirical results [70]. The dimension values are described in detail in Table 5. If we cannot categorize it into any of these, we mark it "Other". • Tool availability. This reflects whether the tool proposed in the study is accessible or not at the time of conducting our study.
González et al. identified the reproducibility aspects characterizing empirical software engineering studies [71] in which availability of the artifact (the tool proposed in the study, or the dataset used to conduct the study) is shown as an important aspect to facilitate the replication and extension of the study. Therefore, we record the availability of the proposed tool in this dimension and the availability of the dataset in the following dimension. • Evaluation purpose. It states the motivation of evaluation by authors such as evaluate the functionality, efficiency, applicability, usability, accuracy, comment quality in general, or importance of attributes. Results As mentioned in subsubsection 2.2.5, we analyze 47 relevant papers in total. Before answering our four RQs, we present a brief overview of the metadata (publishing venues) of the papers. Table 2 highlights the publication venues of these papers. Most studies were published in top-tier software engineering conferences (e.g., ICSE) and journals, especially the ones with a focus on empirical studies (e.g., EMSE). This means that the SE community agrees that assessing comment quality is an important topic deserving of research effort. Figure 2 shows the paper distribution over the past decade, indicating a clear trend of increasing interest of the SE research community in comment quality assessment. Figure 3 shows the author distribu-13 Solution Proposal The paper proposes a novel or a significant extension of an existing technique for a problem and describes its applicability, intended use, components, and how the components fit together using a small example or argumentation. Philosophical These papers present a new view to look at the existing problems by proposing a taxonomy or a conceptual framework, e.g., developing a new language or framework to describe the observations is a philosophical activity. 
Opinion: These papers describe the author's opinion about how things should be done, or whether a certain technique is good or bad. They do not rely on research methodologies and related work.
Experience: These papers explain the personal experience of a practitioner in using a certain technique, to show how something has been done in practice. They do not propose a new technique and are not scientific experiments.
Contribution type:
Empirical: The paper provides empirical results based on analyzing relevant projects to understand and highlight the problems related to comment quality.
Method/technique: The paper provides a novel or significant extension of an existing approach.
Model: Provides a taxonomy to describe observations, or an automated model based on machine/deep learning.
Metric: Provides a new metric to assess specific aspects of comments.
Survey: Conducts a survey to understand a specific problem and contribute insights from developers.
Tool: Develops a tool to analyze comments.

Stack Overflow's developer surveys show that Java stands fifth after JavaScript, HTML/CSS, SQL, and Python among the most commonly used programming languages. 8 We find only one study (S44) that seems to address the comment quality aspect in JavaScript. Given the emerging trend of studies leveraging natural-language information in JavaScript code [72,73], more research about comment quality may be needed in this environment. It indicates that researchers need to analyze comments of other languages to verify their proposed approaches and support developers of those languages.

We find that half of the studies (51%) focus on all types of comments, whereas the other half focus on specific types of comments, such as inline, method, or TODO comments. However, we also see in Figure 4 that studies frequently focus on method comments and API documentation. This shows the effort the research community is putting into improving API quality.
While some attention is given to often overlooked kinds of comments, such as license comments (S28, S33), TODO comments (S14), inline comments (S17), and deprecation comments (S45), no relevant paper seems to focus specifically on the quality of class or package comments [63,13]. Such trends also reflect the increasing use of polyglot environments in software development [77]. The "Other" label in Figure 4 comprises language-agnostic studies, e.g., S16, or studies considering less popular languages, e.g., S28, which focuses on COBOL. We find only one study (S44) that analyzes comments of six programming languages [26].

Among the QAs, consistency is by far the one that receives constant attention across the years, with several studies in 2017 (S07, S08, S09, S29) and 2018 (S10, S11, S39, S42, S43). Indeed, the problem of inconsistency has been studied from multiple points of view, such as inconsistency between code and comments that may emerge after code refactoring (S07), or the inconsistencies revealed by so-called linguistic antipatterns (S11, S37). Unsurprisingly, the plot shows that up-to-dateness has increasingly received attention in the last three years of the decade, given that comments that are not updated together with code are also a cause of inconsistency (S15, S16).

Another aspect to analyze is whether researchers perceive the QAs as being the same or not. For example, do all studies mean the same by consistency, conciseness, or accuracy of comments? We therefore collect the definition of each QA considered in the study. We find that for various QAs researchers refer to the same QA using different terminology. We map such cases to the Synonyms column presented in Table 4. From this analysis we find that not all studies precisely define the QAs, or they refer to existing definitions while evaluating comments using them.
For instance, several studies (S01, S04, S13, S17, S20, among others) use QAs without precisely defining them.

We see that machine learning-based approaches are used more often than deep learning-based approaches, but whether this is due to their high accuracy, easy interpretation, or need for a small dataset is unclear and requires further investigation. In addition to identifying general techniques, we collect which metrics and tools have been used to measure various QAs. Table 6 shows the various QAs in the column QAs, and the metrics and tools used for each QA in the columns Metrics and Tools, respectively. The description of the collected metrics is presented in Table 7. We can see that only 10 out of 21 QAs have metrics defined for them. A software metric is a function that takes some software data as input and provides a numerical value as an output. The output provides the degree to which the software possesses a certain attribute affecting its quality [84]. To limit the incorrect interpretation of a metric, threshold values are defined. However, the threshold value may change according to the type of comments analyzed, and the interpretation of the metric output may vary in turn. We report threshold values, if present, for the collected metrics.

For the readability QA, researchers were often found to be using the same metric (S08, S22, S39). As developers spend a significant amount of time reading code, including comments, having readable comments can help them understand code more easily. Yet readability remains a subjective concept. Several studies, such as S08, S22, and S39, identified various syntactic and textual features for source code and comments. However, in the context of code comments, they focus on the Flesch-Kincaid index method, which is typically used to assess the readability of natural language text.

Table 6: Metrics and tools used for various quality attributes (the description of each metric is given in Table 7). Excerpt: Accessibility — Metrics: S08: Accessibility_1, Accessibility_2; Tools: S12: Text2KnowledgeGraph.
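The Flesch reading-ease formula (the basis of the Flesch-Kincaid family used by S08, S22, and S39) can be sketched in a few lines. The constants below are the standard ones; the sentence splitter and vowel-group syllable counter are simplifying assumptions, not the studies' original tooling:

```python
import re

def count_syllables(word):
    """Naive syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch reading-ease score:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words).
    Higher scores mean easier text; the tokenization is a rough heuristic."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not words:
        return 0.0
    n_sentences = max(1, len(sentences))
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / n_sentences
            - 84.6 * syllables / len(words))
```

Applied to a method comment such as "Returns the index of the first match.", this yields a score in the high-readability range; since comments mix identifiers and prose, such scores should be read as rough indicators only.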
Since comments often consist of a mix of source code and natural language text, such methods can have disadvantages. For example, developers can refer to the same code concept differently in comments, and they can structure their information differently. Thus, formulating metrics that consider the special context of code comments can improve the assessment of comment readability.

Another popular metric is Consistency_1, used for assessing consistency between comments and code (S08, S22, S39). This metric measures the overlap between the terms of method comments and the method body. These studies assume that the higher the overlap, the better the readability of that code. Similarly, the metrics used for measuring the coherence QA (Coherence_1, Coherence_3, Coherence_4) suggest a higher overlap between comments and code. However, having too many overlapping words can defeat the purpose of comments and can lead to redundant comments.

Table 7: Description of each metric listed in Table 6.
A class comment should contain authorship; check the presence and absence of the @author tag with the following name. [S18]
Coherence_1: The similarity between words from method comments and method names, where similarity is computed using Levenshtein distance. The value should be between 0 and 0.5 to have a coherent comment. [S02]
Coherence_2: The length of comments should be between 2 words and 30 words. [S02]
Coherence_3: Percentage of the class or method's words contained in the class or method comments, divided by the total number of the class or method's words. The value should be above or equal to 0.5. [S18]
Coherence_4: There is coherence between the comment and the implementation of a method when they have a high lexical similarity, where lexical similarity is computed using cosine similarity. [S38]
Completeness_1: A class comment should contain a description and authorship.
A method should contain comments if it is complex (more than three method invocations) and has 30 LOC. [S18]
Completeness_2: How many of the public classes, types, and methods have a comment preceding them. [S02]
Completeness_3: Exceptions that are present in app programs, crashes, and API source code but not in the API reference documentation. [S43]
Consistency_1: The overlap between the terms used in a method comment and the terms used in the method body. A higher value is correlated with a higher readability level of that code. [S08, S22, S39]
Consistency_2: The Kullback-Leibler divergence, a measure of the difference between two probability distributions.
SpellGrammar_1: The sentence has no subject or predicate, or has incomplete punctuation (e.g., the right parenthesis is missing). [S13]
Understandability_1: Remove a sentence if it is incomplete, contains code elements, is a question, or mentions the concept in its subordinate clauses. [S13]

Figure 7: Types of contribution for each research type.

We find only one study performing a replication study (S19). Given the importance of research replicability in any field, future work needs to focus more on evaluating the proposed solutions and testing their replicability in this domain.
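To make the metric descriptions above concrete, the sketch below implements one plausible reading of Coherence_1 (Levenshtein-based similarity), Consistency_1 (term overlap), and the Kullback-Leibler divergence underlying Consistency_2. The normalizations and helper names are our assumptions, not the original studies' code:

```python
import math

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def coherence_1(method_name, comment):
    """Coherence_1 (S02): Levenshtein-based similarity between the method
    name and the comment text, normalized to [0, 1]. The exact
    normalization used by S02 is not spelled out; this is one reading."""
    if not method_name or not comment:
        return 0.0
    dist = levenshtein(method_name.lower(), comment.lower())
    return 1 - dist / max(len(method_name), len(comment))

def consistency_1(comment_terms, body_terms):
    """Consistency_1 (S08, S22, S39): overlap between comment terms and
    method-body terms, here the share of comment terms found in the body
    (the exact normalization is an assumption)."""
    comment, body = set(comment_terms), set(body_terms)
    return len(comment & body) / len(comment) if comment else 0.0

def kl_divergence(p, q, eps=1e-9):
    """Consistency_2 builds on the Kullback-Leibler divergence between two
    term distributions (dicts mapping term -> probability). Terms missing
    from q get a small floor to keep the sum finite."""
    return sum(pi * math.log(pi / q.get(t, eps))
               for t, pi in p.items() if pi > 0)
```

A comment whose terms all appear in the method body scores 1.0 on this reading of Consistency_1, which illustrates the redundancy concern raised above: maximal overlap is not necessarily a sign of a useful comment.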
Paper contribution types. By categorizing the papers according to the paper contribution definition, Figure 7 and Figure 8 show that over 44% of the papers propose an approach (method/technique) to assess code comments. A large part (75%) of them are heuristics-based approaches; e.g., Zhou et al. and Wang et al. present such NLP-based heuristics (S09, S13). A few approaches rely on manual assessments; as an example, consider how taxonomies assessing comment quality have emerged [14,39]. Models are the second most frequent contribution, which makes sense considering the increasing trend of leveraging machine learning during the considered decade: 60% of the relevant papers proposing models are based on such approaches. The label Empirical results comprises studies which mainly offer insights through the authors' observations (e.g., S11, S15, S16, S19, S26). Finally, given the important role that metrics have in software engineering [85,86], it is valuable to look into metrics that are proposed or used to assess code comment quality as well. For example, three studies (S18, S35, and S36) contribute metrics for completeness, accuracy, or coherence, whereas other studies use existing established metrics; e.g., S08, S22, and S39 compute the readability of comments using the Flesch-Kincaid index.

Tool availability. Previous work indicates the developers' effort in seeking tools to assess documentation quality, and highlights the lack of such tools [39]. In our study, we find that 32% of the studies propose tools to assess specific QAs, mainly for detecting inconsistencies between code and comments. Of the studies proposing tools, 60% provide a link to them. The lack of a direct link in the remaining 40% can hinder the reproducibility of such studies.

Dataset availability. In terms of dataset availability, 49% of the studies provide a link to a replication package.
Of the remaining papers, some provide a link to the case studies they analyze (typically open-source projects) [28], build on previously existing datasets [61], or mention the reasons why they could not provide a dataset. For instance, Garousi et al. indicated company policy as the reason for not sharing the analyzed documentation in their case study [48].

4. Discussion

Below we detail our observations about the state of the art in comment quality analysis, together with implications and suggestions for future research.

Comment types. The analysis of the comment quality assessment studies in the last decade shows that the trend of analyzing comments from multiple languages and systems is increasing compared to the previous decade, where a majority of the studies focused on one system [18]. It reflects the increasing use of polyglot environments in software development [77]. Rani et al. adapted the code comment taxonomies of Java and Python (S29, S41) for class comments [76]. They mapped the taxonomies to Smalltalk class comments and found that developers write similar kinds of information in class comments across languages. Such a mapping can encourage building language-independent approaches for other aspects of comment quality evaluation.

Implications for future studies. Besides the aspects discussed above, future studies on code comment assessment should be devoted to filling the gaps of the last decade of research, as well as coping with the needs of developers interested in leveraging comment assessment tools in different programming languages.

Investigating specific comment types (RQ1). Several works showed the importance of different types of comments for achieving specific development tasks and understanding code. Although the trend of analyzing specific comment types has increased over the last decade, there are still comment types (e.g., class and package comments) that need more attention.

Generalizing across languages (RQ1).
Given the preponderance of studies focusing on the Java language, and considering that statistics from various developer boards (Stack Overflow, GitHub) suggest that there are other popular languages as well (e.g., Python and JavaScript), more studies analyzing various types of comments in these languages are needed. Interesting questions in this direction could concern the comparison of practices (e.g., given that Python is often considered to be "self-explainable", do developers write fewer comments in it?). As automatic comment generation is becoming more and more popular due to such advanced techniques emerging, we envision that future work may study techniques and metrics to assess the quality of automatically generated code comments.

Research evaluation (RQ4 and RQ5). Scientific methods play a crucial role in the growth of engineering knowledge [88]. Several studies have indicated the weak validation in software engineering [22]. We also find that several studies propose solutions but do not evaluate them. Moreover, various approaches were validated only by the authors of the work or by surveying students. However, we need to do all the steps engineering science researchers do: empirically investigating the problems, proposing solutions, and validating those solutions.

Threats to validity. We now outline potential threats to the validity of our study. Threats to construct validity mainly concern the measurements used in the evaluation process. In this case, threats can be mainly due to (i) imprecision in the automated selection of relevant papers (i.e., the three-step search on the conference proceedings based on regular expressions), and (ii) the subjectivity and error-proneness of the subsequent manual classification and categorization of relevant papers. Moreover, we formulated a set of keywords to discard irrelevant studies that presented similar keywords (e.g., code review comments).
To verify the correctness of the final set of keywords, we manually scanned the full venue proceedings metadata to make sure the set of keywords did not prune relevant papers. This iterative approach allowed us to verify that our keyword-based filtering approach does not lead to false negatives for the selected venues. We mitigated the second threat by applying a multi-stage manual classification of conference proceedings, involving multiple evaluators and reviewers, as detailed in section 2.

Threats to internal validity concern confounding factors that could influence our results and findings. A possible source of bias might be related to the way we selected and analyzed the conference proceedings. To deal with potential threats regarding the actual regular expressions considered for the selection of relevant studies, we created regular expressions that tend to be very inclusive, i.e., that select papers that are marginally related to the topic of interest, and we took a final decision only after a manual assessment.

Threats to external validity concern the generalization and completeness of results and findings. Although the number of analyzed papers is large, since it involves studies spanning the last ten years of research, there is still the possibility that we missed some relevant studies. We mitigate this threat by applying various selection criteria to select relevant conference proceedings, considering the well-established venues and communities related to code comment studies, as detailed in section 2. It is important to mention that this paper intentionally limits its scope in two ways, which threatens the completeness of the study results and findings. First of all, we mainly focus on research work investigating code comment quality without integrating studies from industry tracks of conference venues (as was done in previous studies thematically close to ours [17,18]).
Second, we focus on those studies that involve manually written code comments, in order to avoid auto-generated comments (already investigated in recent related work [89,90]). To further limit potential threats concerning the completeness of our study, we use the snowball approach to reach potentially relevant studies that we could have missed with our venue selection. However, we support the argument of Garousi et al. [91], who report that a multivocal literature review, with further replications, is desirable to make the overall interpretation of code comment quality attributes more complete for future work.

Related Work. This section discusses the literature concerning (i) studies motivating the importance of quality attributes for software documentation, (ii) comment quality aspects, and (iii) recent SLRs discussing topics closely related to our investigation. The majority of the documentation quality attributes highlighted in [96] apply to code comments as well (as a type of software documentation). However, which specific quality attributes (e.g., outdated, complete, consistent, traceable) researchers consider important for assessing code comment quality, and how these quality attributes are measured, is yet to be studied.

Comment quality. Evaluating comment quality according to various aspects has gained a lot of attention from researchers, for instance, assessing their adequacy [97] and their content quality [10,11], analyzing the co-evolution of comments and code [98], or detecting inconsistent comments [12,14]. Several works have proposed tools and techniques for the automatic assessment of comment quality [10,11,99]. Related SLRs have also investigated agile software development aspects in open-source projects [100], the usage of ontologies in software process assessment [101], and improvement aspects in DevOps process and practices [102]. Previous SLRs in the field investigated code comments and software documentation [17,18], which are closely related to our work. Specifically, Ding et al.
conducted an SLR to explore the usage of knowledge-based approaches in software documentation [17]. They identified twelve QAs. They also highlighted the need to improve QAs, especially conciseness, credibility, and unambiguity.

Some QAs, such as conciseness, coherence, organization, and usefulness, are rarely investigated. As coherent and concise comments play an important role in program understanding, establishing approaches to assess these attributes requires more attention from the community. We also observe that the majority of the approaches appear to be based on heuristics rather than machine learning or other techniques and, in general, need better evaluation. Such approaches require validation on other languages and projects in order to generalize them. Though the trend of analyzing comments appearing in multiple projects and languages is increasing compared to the previous decade, as reported by Zhi et al., the approaches still need more thorough validation [18].

Figure 2: Relevant papers by years.

For the timeline 1971-2011, we rely on the geographical statistics data from the replication package of our reference study by Zhi et al. [18], while for the period 2011-2021 we collect these statistics as follows. For each paper, the primary affiliations of all authors are taken into account. If people from different countries co-authored a paper, we calculate the proportion of each country's contribution so that each paper gets a total score of one, to avoid over-representing papers. For example, if five authors of a paper belong to Switzerland and one belongs to Spain, we assign a 5/6 score to Switzerland and a 1/6 score to Spain for that paper. Comparison with the previous data allows us to see the evolution of the field, with a more even distribution of researchers nowadays and an (unsurprising) rise of contributions from southeast Asia, specifically from China.

Finding 1.
The trend of analyzing comment quality has increased in the last decade (2011-2020), in part due to more researchers from southeast Asia working on the topic.

3.1. RQ1: What types of comments do researchers focus on when assessing comment quality?

To describe the rationale behind code implementation, various programming languages use source code comments. Our results show that researchers focus more on some programming languages than on others, as shown in Figure 4. This plot highlights the types of comments on the y-axis; each stack in a bar shows the ratio of the studies belonging to a particular language. For instance, the majority (87%) of the studies focus on code comments from Java, whereas only 15% of the studies focus on code comments from Python, and 10% of them focus on C# and C++. These results are in contrast to the popular languages indicated by various developer boards, such as GitHub, Stack Overflow, or TIOBE. For instance, the TIOBE index shows the Python and C languages to be more popular than Java. 7 Similarly, so do the developer surveys of 2019 and 2020 by Stack Overflow.

Finding 2. 87% of the studies analyze comments from Java while other languages have not yet received enough attention from the research community.

As code comments play an important role in describing the rationale behind source code, various programming languages use different types of comments to describe code at various abstraction levels. For example, Java class comments should present high-level information about the class, while method comments should present implementation-level details [74].

Figure 3: Relevant papers by countries. (a) Zhi et al. [18], 1971-2011, all countries; (b) our work, 2011-2021, all countries; (c) Zhi et al. [18], 1971-2011, Europe only; (d) our work, 2011-2021, Europe only.
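The fractional per-country attribution behind Figure 3 (5/6 for Switzerland and 1/6 for Spain in the example) can be sketched as follows; the function name and data layout are illustrative, not from the replication package:

```python
from collections import defaultdict

def country_scores(papers):
    """Each paper contributes a total score of 1, split according to the
    share of authors per country. `papers` maps a paper id to the list of
    primary-affiliation countries of its authors (one entry per author)."""
    scores = defaultdict(float)
    for countries in papers.values():
        share = 1 / len(countries)  # equal weight per author
        for country in countries:
            scores[country] += share
    return dict(scores)

# The example from the text: five Swiss authors and one Spanish author.
scores = country_scores({"P1": ["CH"] * 5 + ["ES"]})
```

Because every paper sums to exactly one, totals across countries equal the number of papers, which is what prevents multi-author papers from being over-represented.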
studied the characteristics of class comments of Smalltalk in the Pharo environment 9 and highlighted the contexts they differ from Java and Python class comments, and why the existing approaches (based on Java, or Python) need heavy adaption for Smalltalk comments [75, 76]. This may encourage more research in that direction, possibly for other programming languages. 9 https://pharo.org/ Even though 50% of the studies analyze all types of code comments, the rest focus on studying a specific type of comments such as method comments, or API comments, indicating research interest in leveraging a particular type of comment for specific development tasks. Previous work by Zhi et al. showed that a majority of studies analyze just one type of system [18]. In contrast, our findings suggest that the trend of analyzing comments of multiple languages and systems is increasing. For example, 80% of the studies analyzing comments from Python and all studies analyzing comments from C++ also analyze comments from Java. Only Pascarella et al. (S42) and Zhang et al. (S41) focus solely on Python [64, 63]. However, Zhang et al. (S41) perform the comment analysis work in Python based on the Java study (S29) by Pascarella et al. Finding 4 . 4The trend of analyzing multiple software systems of a programming language, or of several languages, shows the increasing use of polyglot environments in software projects. 3.2. RQ 2 : Which QAs are used to assess code comments? To characterize the attention that the relevant studies reserve to each QA over the past decade, Figure 5 shows all the QAs on the y-axis and the corresponding years on the x-axis. Each bubble in the plot indicates both the number of papers by the size of the bubble and IDs of the studies. Comparing the y-axis with the QAs in Finding 6 . 6While QAs such as consistency and completeness are frequently used to assess comment quality, others are rarely investigated, such as conciseness and coherence. 
Figure 6: Types of techniques used to analyze various QAs.

Table 7 also lists the following metrics:
Similar source code files have different licenses: find the number of files in a group, the number of different licenses in the group, the number of files with an unknown license in the group, the number of files without any license in the group, and the number of licenses in the GPL family. [S28]
Readability_1: Flesch reading-ease test. [S08, S22, S39]
Usability_1: ADI, the number of words in the method comments. The threshold is decided based on the simple average of the ADI for all method declarations. [S35]

Using such metrics, a comment containing only rationale information about a method or class might be qualified as an incoherent or inconsistent comment, whereas such comments can be very helpful in providing additional important information. Although metrics can help developers easily estimate the quality of comments, their sensitivity towards various QAs can degrade comment quality overall. More research is required to know the implication of a given metric on various QAs or combinations of QAs.

Finding 10. Nearly 25% of the studies use metric-based methods to measure comment quality. However, metrics are defined or used for only 10 out of 21 QAs.

3.4. RQ4: What kinds of contribution do studies often make?

Research types. As a typical development cycle can contain various research tasks, such as the investigation of a problem or the validation of a solution, we collect which types of research are performed in the comment quality assessment domain, and what kinds of solutions researchers often contribute. We categorize the papers according to the research type dimension and show the results in Figure 7. The results show that the studies often conduct validation research (investigating the properties of a solution), followed by solution proposals (offering a proof-of-concept method or technique). However, very few
However, very few Figure 8 : 8Types of evaluation for each paper contribution type studies focus on evaluation research (investigating the problem or a technique implementation in practice) Finding 11 . 11Nearly 50% of the studies still are lacking on the replicability dimension, with their respective dataset or tool often not publicly accessible. 3.5. RQ 5 : How do researchers evaluate their comment quality assessment studies?Figure 8 shows how authors evaluate their contributions. We see that code comment assessment studies generally lack a systematic evaluation, surveying only students, or conducting case studies on specific projects only. Most of the time, an experiment is conducted without assessing the results through any kind of external expertise judgment. Hence, only 30% of the relevant studies survey practitioners to evaluate their approach. This tendency leads to several disadvantages. First, it is difficult to assess the extent to which a certain approach may overfit specific case studies while overlooking others. Second, approaches may be unaware of the real needs and interests of project developers. Finally, the approaches may tend to focus too little on real-world software projects (such as large software products evolving at a fast pace in industrial environments). Similarly, when a new method or technique or comment classification model is proposed, it is often assessed based on conventional performance metrics, such as Precision, Recall, or F1 (S02, S04, S07, S29, S41 etc.) and rarely are the results verified in an industry setting or with practitioners. Finding 12 . 12Many code comment assessment studies still lack systematic industrial evaluations for their proposed approaches, such as evaluating the metric, model, or method/technique with practitioners. 
Additionally, while in the past researchers focused on the quality of code comments in general terms, there is a new trend of studies that narrow their research investigation to particular comment types (method, TODO, deprecation, or inline comments), indicating the increasing interest of researchers in supporting developers with a particular type of information for program comprehension and maintenance tasks.

Emerging QAs. Our analysis of the last decade of studies on code comment assessment shows that new QAs (coherence, conciseness, maintainability, understandability, etc.), which were not identified in previous work [18], are now being investigated and explored by researchers. This change can be explained by the fact that while in the past researchers focused on the quality of code comments in general terms, in the last decade there has been a new trend of studies that narrow their research investigation to specific comment types (method, TODO, deprecation, or inline comments) and related QAs.

Mapping QAs. As a consequence of this shift of focus towards specific comment types, the same QAs used in prior studies can assume different definition nuances, depending on the kind of comments considered. For instance, let us consider how the QA up-to-dateness, referred to in studies on code-comment inconsistency, assumes a different interpretation in the context of TODO comments. A TODO comment that becomes outdated describes a feature that is not being implemented, which means that such a comment should be addressed within some deadline, and then removed from the code base (S14) when either the respective code is written and potentially documented with a different comment, or the feature is abandoned altogether. At the same time, more research is nowadays conducted to understand the relations between different QAs.

Mapping taxonomies.
In recent years, several taxonomies concerning code comments have been proposed; however, all of them are characterized by a rather different focus, such as the scope of the comments (S02), the information embedded in the comment (S29, S41), the issues related to specific comment types (S06, S33, S40), as well as the programming language they belong to. This suggests the need for a comprehensive code comment taxonomy or model that maps all these aspects and definitions in a more coherent manner, to have a better overview of developer commenting practices across languages.

We mitigated the first threat by manually classifying a sample of relevant papers from a set of conference proceedings and comparing this classification with the one recommended by the automated approach based on regular expressions. This allowed us to incrementally improve the initial set of regular expressions. To avoid any bias in the selection of the papers, we selected regular expressions in a deterministic way (as detailed in section 2): we first examined the definitions of documentation and comment in the IEEE Standard Glossary of Software Engineering Terminology (IEEE Standard 610.12) and identified the first set of keywords, namely comment, documentation, and specification; we further added comment-related keywords that are frequently mentioned in the context of code comments.

For instance, Khamis et al. assessed the quality of inline comments based on consistency and language quality using a heuristic-based approach [10]. Steidl et al. evaluated documentation comment quality based on four quality attributes, namely consistency, coherence, completeness, and usefulness of comments, using a machine learning-based model [11]. Zhou et al. proposed a heuristic and natural language processing-based technique to detect incomplete and incorrect comments [16].
These works have proposed various new quality attributes to assess comment quality, such as completeness, coherence, and language quality, that are not included in previous quality models. However, a unifying overview of comment QAs and their assessment approaches is still missing. Our paper complements these previous works by investigating comment QAs discussed in the last decade of research.

Previous SLRs on code comments and software documentation. In recent years, SLRs have been conducted to investigate agile software development aspects in open-source

1. construct search keywords in subsection 2.2.1,
2. choose the search timeline in subsection 2.2.2,
3. collect sources of information in subsection 2.2.3,
4. retrieve studies in subsection 2.2.4,
5. select studies based on the inclusion/exclusion criteria in subsection 2.2.5, and
6. evaluate the relevant studies to answer the research questions in subsection 2.2.6.

2.2.1. Search Keywords

Kitchenham et al. recommended formulating individual facets or search units based on the research questions [19]. These search units include abbreviations, synonyms, and other spellings, and they are combined using boolean operators. Pettricrew et al. suggested the PIO (population, interventions, and outcome) criterion to define such search units [23]. The populations include terms related to the standards. We first examine the definitions of documentation and comment in the IEEE Standard Glossary of Software Engineering Terminology (IEEE Standard 610.
However, to narrow down our search and exclude irrelevant papers, such as those about code reviews or testing, or non-technical papers, we formulate another set of keywords, K3. In this set, we include code review, test, keynote, invited, and poster, to exclude entries of non-technical papers that were not filtered out using the heuristics on the number of pages.

[Figure: SLR stages. Data collection: 195 venues, 26 selected venues, 332 proceedings, 3,704 unique papers from DBLP proceedings metadata, plus 30 initial papers with forward and backward snowballing via Semantic Scholar. Data selection: keyword-based publication filtering (2,043 and 311 potentially relevant papers), filtering by title and abstract (71 and 39 candidate papers), filtering by full text. Data evaluation: 47 papers (initial + snowballed, of which 17 snowballed), followed by paper analysis (QA catalog, tools and approaches overview, discussion).]

Table 1: Keywords selected according to the PIO criterion

  Criteria            Keywords
  Populations (K1)    comment, documentation, specification, API, annotation, and summar
  Interventions (K2)  quality, assess, metric, measure, score, analy, practice, structur, study, and studied

... 2.2.5. Finally, we present our criteria for the careful evaluation of the relevant papers in subsection 2.2.6.

Venue Selection. Code comment analysis, generation, usage, and maintenance are of primary interest to the SE research community. Thus, in order to systematically review the literature on comment quality assessment, we start by focusing on the SE venues. We use the latest 2020 updated version of the conference and journal database of the CORE ranking portal as a primary data source to identify all the potentially relevant SE venues.
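The keyword-based publication filtering described above (population keywords K1, intervention keywords K2, and the exclusion set K3) can be sketched as follows. This is an illustrative sketch: the keyword sets are taken from Table 1 and the surrounding text, but the paper's exact regular expressions and the helper names (`matches_any`, `is_potentially_relevant`) are not the authors' implementation.

```python
import re

# Keyword sets modeled on Table 1 and the exclusion set K3 described above;
# the study's exact regular expressions are not reproduced here.
K1 = ["comment", "documentation", "specification", "API", "annotation", "summar"]
K2 = ["quality", "assess", "metric", "measure", "score", "analy",
      "practice", "structur", "study", "studied"]
K3 = ["code review", "test", "keynote", "invited", "poster"]  # exclusion terms

def matches_any(keywords, text):
    """True if any keyword occurs in the text (case-insensitive substring)."""
    return any(re.search(re.escape(k), text, re.IGNORECASE) for k in keywords)

def is_potentially_relevant(title, abstract=""):
    """Keep a paper if it matches a population (K1) AND an intervention (K2)
    keyword, and none of the exclusion keywords (K3)."""
    text = f"{title} {abstract}"
    return (matches_any(K1, text) and matches_any(K2, text)
            and not matches_any(K3, text))
```

Using stemmed fragments such as "summar" or "analy" lets a single entry match several inflections ("summary", "summarization", "analysis", "analyzing"), which mirrors the truncated keywords listed in Table 1.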
The CORE portal (https://www.core.edu.au/conference-portal) provides assessments of major conferences and journals in the computing disciplines, and it is a well-established and regularly-validated registry maintained by the academic community. We extract all ranked journals in SE (search code 803) from the CORE portal and all top conferences and workshops in the SE field (search code 4612). This process gives us an initial list of 85 journal and 110 conference venues. In step 1 we select 26 software engineering (SE) conferences and journals from the 195 candidate venues based on the likelihood of finding relevant papers in their proceedings.

We focus on A* and A conferences and journals, and add conferences of rank B or C if they are co-located with previously selected A* and A conferences, so as to include venues, such as the IEEE/ACM International Conference on Program Comprehension (ICPC) or the IEEE International Workshop on Source Code Analysis and Manipulation (SCAM), that focus on source code comprehension and manipulation.

We prune venues that may not contain relevant contributions to source code comments. Specifically, we exclude a venue if its ten years of proceedings contain fewer than five occurrences of the words documentation or comment. This way, we exclude conferences, such as the IEEE International Conference on Engineering of Complex Computer Systems (ICECCS) and Foundations of Software Science and Computational Structures (FoSSaCS), and many others that primarily focus on other topics, such as verification or programming languages. Thus, we reduce our dataset to 20 conferences and six journals, as shown in Table 2.

In Table 2, the column Type specifies whether a venue is a conference (C) or a journal (J), and the column Rank denotes the corresponding CORE rank of the venue as of April 2021. The column Selection indicates the data collection phase in which the venue was first selected.
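The venue-pruning rule above (drop a venue whose ten years of proceedings mention "documentation" or "comment" fewer than five times) amounts to a simple counting heuristic. A minimal sketch, assuming the proceedings are available as plain text; the counting granularity and function name are assumptions, not the authors' code:

```python
def venue_is_relevant(proceedings_texts, threshold=5):
    """Keep a venue only if the words 'documentation' or 'comment' occur at
    least `threshold` times in total across its proceedings (here a list of
    plain-text documents, e.g. one per year of proceedings)."""
    total = sum(text.lower().count("documentation") + text.lower().count("comment")
                for text in proceedings_texts)
    return total >= threshold
```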
The column Papers per venue indicates the total number of papers selected from this venue, both during the direct search and the snowball search. We consider only full papers (published in a technical track and longer than five pages) since they are likely to be an extended or mature version of the papers published in other tracks, such as NIER, ERA, or Poster. (CORE journal search: http://portal.core.edu.au/jnl-ranks/?search=803&by=for&source=CORE2020&sort=arank&page=1; CORE conference search: http://portal.core.edu.au/conf-ranks/?search=4612&by=for&source=CORE2020&sort=arank&page=1; both accessed on 25 Mar 2021.)

Table 2: Included Journals, Conferences, and Workshops. (Columns: Venue, Type, Rank, Selection, Papers per venue.)

Table 3: Included studies (excerpt)

  Study ID  Title
  S20       CPC: Automatically Classifying and Propagating Natural Language Comments via Program Analysis
  S27       What Should Developers be Aware of? An Empirical Study on the Directives of API Documentation, 2012 [49]
  S28       Analysis of License Inconsistency in Large Collections of Open Source Projects
  S30       Augmenting Java Method Comments Generation with Context Information based on Neural Networks

Table 4: RQ2 QAs mentioned by Zhi et al. (highlighted in bold) and other works

QAs mentioned by Zhi et al.:
  Accessibility (availability, information hiding, easiness to find): whether comment content can be accessed or retrieved by developers or not
  Readability (clarity): the extent to which comments can be easily read by other readers
  Trustworthiness: the extent to which developers perceive the comment as trustworthy
  Author-related: identity of the author who wrote the comment
  Correctness: whether the information in the comment is correct or not
  Completeness (adequacy): how complete the comment content is to support development and maintenance tasks, or whether there is missing information in comments or not
  Similarity (uniqueness, duplication): how similar the comment is to other code documents or code
  Consistency (uniformity, integrity): the extent to which the comment content is consistent with other documents or code
  Traceability: the extent to which any modification in the comment can be traced, including who performed it
  Up-to-datedness: how the comment is kept up-to-date with software evolution
  Accuracy (preciseness): accuracy or preciseness of the comment content; if the documentation is too abstract or vague and does not present concrete examples, then it can seem imprecise
  Organization: how the information inside a comment is organized
  Format (including visual models, use of examples): quality of documents in terms of writing style, description perspective, use of diagrams or examples, spatial arrangement, etc.

QAs mentioned by other works:
  Coherence: how comment and code are related to each other, e.g., a method comment should be related to the method name (S02, S38)
  Conciseness: the extent to which comments are not verbose and do not contain unnecessary information
  Spelling and grammar (natural language quality): grammatical aspect of the comment content
  Documentation technology: whether the technology to write, generate, and store documentation is current or not
  Internationalization: the extent to which comments are correctly translated in other languages (S16)
  Other: the study does not mention any QA and cannot be mapped to any of the above attributes

The first group of QAs was mentioned by Zhi et al. in their work [18], and they are highlighted by the bold text compared to QAs mentioned in other works. As Zhi et al. considered various types of documentation, such as requirement and architectural documents, not all attributes fit exactly into our study. For instance, the category "Format" includes the format of the documentation (e.g., UML, flow chart) in addition to other aspects such as the writing style of the document, use of diagrams, etc. Although the format of the documentation is not applicable in our case due to our comment-specific interest, we keep other applicable aspects (writing style, use of diagrams) of this QA.

In addition to their QAs, we include any additional attribute mentioned in our set of relevant papers. If a study uses different terminology but a similar meaning to QAs in our list, we map such QAs to our list and update the list of possible synonyms, as shown in the column Synonyms in Table 4. In case we cannot map a study to the existing QAs, we map it to the Other category. For the cases where the studies do not mention any specific QA and mention comment quality analysis in general, we map the study to the list of existing QAs or classify it as Other based on their goal behind the quality analysis.
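The mapping procedure just described (direct match, then synonyms, then negated forms, otherwise Other) can be sketched as follows. The synonym and negation tables below are an illustrative subset distilled from Table 4 and the surrounding text, not the authors' actual tooling, and `map_qa` is a hypothetical helper name:

```python
# Illustrative subset of the QA catalog and its synonyms (cf. Table 4).
SYNONYMS = {
    "availability": "Accessibility",
    "clarity": "Readability",
    "adequacy": "Completeness",
    "uniqueness": "Similarity",
    "duplication": "Similarity",
    "uniformity": "Consistency",
    "integrity": "Consistency",
    "preciseness": "Accuracy",
}
# Negated forms are mapped to their antonyms to prevent duplication.
NEGATIONS = {
    "inconsistency": "Consistency",
    "incorrectness": "Correctness",
    "incompleteness": "Completeness",
}
CANONICAL = {"Accessibility", "Readability", "Completeness", "Similarity",
             "Consistency", "Correctness", "Accuracy"}

def map_qa(term):
    """Map a QA term found in a study to the catalog: direct match first,
    then synonyms, then negated forms; otherwise the 'Other' category."""
    t = term.strip().lower()
    for qa in CANONICAL:
        if t == qa.lower():
            return qa
    if t in SYNONYMS:
        return SYNONYMS[t]
    if t in NEGATIONS:
        return NEGATIONS[t]
    return "Other"
```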
For example, Pascarella et al. identify various information types in comments to support developers in easily finding relevant information for code comprehension tasks and to improve comment quality assessment [13]. They do not mention any specific QA, but based on their study goal of finding relevant information easily, we map their study to the content relevance QA. Similarly, we map other comment classification studies, such as S06, S29, S33, and S41, to the content relevance attribute. At the same time, the studies on linguistic anti-patterns (LAs) are mapped to the consistency attribute, given that LAs are practices that lead to lexical inconsistencies among code elements, or between code and associated comments [59, 34, 35]. Additionally, the studies that mention the negation of a QA, such as inconsistency, incorrectness, or incompleteness, are mapped to their antonyms, i.e., consistency, correctness, or completeness, respectively, to prevent duplication.

RQ3 (Which tools and techniques do researchers use to assess comment QAs?) concerns the various methods researchers use or propose to assess comment QAs, for instance, whether they use machine-learning based methods to assess comment quality or not.

• Technique type. This identifies whether the technique used to assess a QA is based on natural language processing (NLP), heuristics, static analysis, metrics, machine-learning (ML), or deep neural network (DNN) approaches. The rationale is to identify which QAs are often assessed manually or using a specific automated approach. For instance, if the study uses specific heuristics related to the programming environment to assess a QA, it is classified as a heuristic-based technique; if it uses abstract syntax tree (AST) based static analysis approaches, it is assigned to static analysis; and if it uses machine-learning or deep-learning based techniques (including any or both of the supervised or unsupervised learning algorithms), it is classified as ML-based or DNN-based, respectively. A study can use mixed techniques to assess a specific QA and thus can be assigned to multiple techniques for the corresponding QA. We often find cases where the studies do not use any automated technique to measure a QA and instead ask other developers to assess it manually, so we put such cases into the manual assessment category. In case the study mentions a different technique, we extend the dimension values.

• Metrics or tools. This further elaborates the specific metrics or tools the studies propose or use to assess a QA. A study can use an existing metric or can propose a new one. Similarly, one metric can be used to assess multiple QAs. We identify such metrics to highlight popular metrics amongst researchers.

RQ4 (What kinds of contribution do studies often make?) captures the nature of the study and the type of contribution researchers use or propose to assess comment quality. We first identify the nature of research of a study and then identify the type of contribution it provides. This can reflect the kind of research often conducted to assess comment quality and the kind of contribution studies make to support developers in assessing comment quality, for instance, what kind of solutions the Solution Proposal research often proposes, such as a method, metric, model, or tool.

• Dataset availability. This reflects whether the dataset used in the empirical study is accessible or not.

RQ5 (How do researchers evaluate their comment quality assessment studies?) concerns how various kinds of research (the Research type dimension described in the previous RQ) and various kinds of contribution (the Paper contribution dimension) are evaluated in the studies. For example, it helps us to observe whether, when a study proposes a new method/technique to assess comments, the authors also conduct an experiment on open-source projects to validate the contribution, consult the project developers, or both. We capture the type of evaluation in the Evaluation type dimension, and its purpose in Evaluation purpose. The rationale behind capturing this information is to identify the shortcomings in their evaluations, e.g., how often the studies proposing a tool are validated with practitioners.

• Evaluation type. It states the type of evaluation the studies conduct to validate their approaches, such as conducting an experiment on open-source projects (Experiment), or surveying students, practitioners, or both. For the automated approaches, we consider various performance metrics, also known as Information Retrieval (IR) metrics, that are used to assess the machine/deep learning-based models, such as Precision, Recall, F1 Measure, or Accuracy, under the performance metrics. In case the approach is validated by the authors of the work, we identify the evaluation type as Authors of the work.

Table 5: Type of research approach studies use and type of contributions studies make

  Dimension / Category: Description
  Research type / Empirical: This research task focuses on understanding and highlighting various problems by analyzing relevant projects or surveying developers. These papers often provide empirical insights rather than a concrete technique.
  Research type / Validation: This research task focuses on investigating the properties of a technique that is novel and is not yet implemented in practice, e.g., techniques used for mathematical analysis or lab experimentation.
  Research type / Evaluation: The paper investigates techniques that are implemented in practice; the evaluation is conducted to show the results of the implementation in terms of its pros and cons, and thus help researchers in improving the technique.

Table 4 demonstrates that our analysis finds new QAs with respect to the previous work of Zhi et al. The 10 additional QAs are: usefulness, use of examples, usability, references, preciseness, natural language quality, maintainability, visual models, internationalization, documentation technology, content relevance, conciseness, coherence, and availability. However, not all QAs reported by Zhi et al. for software documentation quality (highlighted in bold in Table 4) are used in comment quality assessment. In particular, we find no mention of the trustworthiness and similarity QAs, even though previous works have highlighted the importance of both QAs to have high-quality documentation [78, 79, 80]. Also, Maalej et al. showed in their study that developers trust code comments more than other kinds of software documentation [81], indicating the need to develop approaches to assess the trustworthiness of comments.

Finding 5. Compared to the previous work by Zhi et al., we find 10 additional QAs researchers use to assess code comment quality.

Although several QAs received attention in 2013, the detailed analysis shows that there were mainly two studies (S02, S03) covering several QAs. There is only one study published in 2014 (S05), while 2015 sees the first studies focusing on assessing comment quality. One in particular, S26, attempts to cover multiple QAs. The plot also shows which QAs receive the most attention. A few QAs, such as completeness, accuracy, content relevance, and readability, are often investigated.
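The per-year frequency analysis sketched above amounts to tallying which QAs each study investigates. A minimal sketch with hypothetical study records (the tuples below are illustrative, not the paper's dataset):

```python
from collections import Counter

# Hypothetical (study_id, year, QAs) records for illustration only.
STUDIES = [
    ("S02", 2013, ["completeness", "readability", "consistency"]),
    ("S03", 2013, ["completeness", "accuracy"]),
    ("S05", 2014, ["consistency"]),
    ("S26", 2015, ["completeness", "accuracy", "readability"]),
]

def qa_frequency(records):
    """Tally how often each QA is investigated across the studies."""
    return Counter(qa for _, _, qas in records for qa in qas)

def qa_by_year(records):
    """Tally (year, QA) pairs, the data behind a per-year frequency plot."""
    return Counter((year, qa) for _, year, qas in records for qa in qas)
```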
A few attributes are rarely investigated; for instance, the QAs investigated by at most two studies over the past decade are format, understandability, spelling & grammar, organization, internationalization, documentation technology, coherence, conciseness, author related, and accessibility. More research would be needed to assess whether such attributes are intrinsically less important than others for comments according to practitioners.

Some studies (S29, S41) do not mention the specific QAs or their definition. We put such studies, which classify comment content with the aim to improve comment quality, under content relevance. On the other hand, in some studies researchers mention the QAs but not their definition. For instance, S26 refers to various existing studies for the QA definitions, but which QA definition is extracted from which study is not very clear. Lack of precise definitions of QAs, or having different definitions for the same QAs, can create confusion among developers and researchers while assessing comment quality. Future work needs to pay attention to either refer to the existing standard definition of a QA or define it clearly in the study, to ensure consistency and awareness across the developer and scientific communities.

[Figure 5: Frequency of various comment quality QAs over the years.]

Finding 7. Many studies miss a clear definition of the QAs they use in their studies. This poses various challenges for developers and researchers, e.g., understanding what a specific QA means, mapping a QA to other similar QAs, and adapting the approaches to assess the QA to a certain programming environment.

In this study, we focus on identifying the mention of QAs and their definition if given, and not on comparing and standardizing their definitions. Such work would require not only the existing definitions available in the literature for QAs, but also collecting how researchers use them in practice, and what developers perceive from each QA for source code comments, which is out of scope for this work. However, we provide the list of QAs researchers use for comment quality assessment to facilitate future work in mapping their definitions and standardizing them for code comments.

Although each QA has its own importance and role in comment quality, they are not measured in a mutually exclusive way. We find cases where a specific QA is measured by measuring another QA. For example, accuracy is measured by measuring the correctness and completeness of a comment, such as "the documentation is incorrect or incomplete and therefore no longer accurate documentation of an API" (S24). Similarly, up-to-dateness is measured through the consistency of comments (S40), or consistency is evaluated and improved using traceability (S31). This indicates the dependency of various QAs on each other; improving one aspect of comments can automatically improve other related aspects. However, which techniques are used to measure which QAs is not yet known.

3.3. RQ3: Which tools and techniques do researchers use to assess comment QAs?

With respect to each QA, we first identify which techniques have been used to measure them. We use the dimension Technique type to capture the type of techniques. Figure 6 shows that the majority of the QAs are measured by asking developers to manually assess them (manual assessment). For instance, QAs such as coherence, format, organization, understandability, traceability, and usability are often assessed manually. This indicates the need and opportunities to automate the measurement of such QAs. A significant number of studies experimented with various automated approaches based on machine or deep learning, but they focus on specific QAs and miss other QAs such as natural language quality, conciseness, and correctness. Similarly, another significant portion of studies uses heuristic-based approaches to measure various QAs.
The limitation of such heuristic-based approaches is their applicability to other software systems and programming languages. More studies are required to verify the generalizability of such approaches.

Finding 8. Manual assessment is still the most frequently-used technique to measure various QAs. Machine learning based techniques are the preferred automated approach to assess QAs, but the majority of them focus on specific QAs, such as consistency, content relevance, and up-to-dateness, while ignoring other QAs.

We find that the majority of the machine learning-based approaches are supervised ML approaches. These approaches require labeling the data and are therefore expensive in terms of time and effort. To avoid the longer training time and memory consumption of ML strategies, Kallis et al. used fastText to classify the issue reports on GitHub [82]. The fastText tool uses linear models and has achieved classification results comparable to various deep-learning based approaches. A recent study by Minaee et al. shows that deep learning-based approaches surpassed common machine learning-based models in various text analysis areas, such as news categorization and sentiment analysis [83]. We also find some studies that use deep learning-based techniques partly (S06, S13, S20), along with machine learning techniques, for a few QAs, such as assessing conciseness, spelling and grammar, and completeness. However, there are still many QAs that are assessed manually and require considerable effort to support developers in automatically assessing comment quality.

Finding 9. In the case of automated approaches to assess various QAs of comments, we observe that deep-learning based approaches are not yet explored, even though various studies showed that they surpassed ML-based approaches in text analysis areas.

... Python?) and tools used to write code comments in different languages (e.g., the popularity of Javadoc vs. Pydoc).
Similarly, whether various programming language paradigms, such as functional versus object-oriented languages, or statically-typed versus dynamically-typed languages, play a role in the way developers embed information in comments, or the way they treat comments, needs further work in this direction.

Identifying QAs (RQ2). Our results show various QAs, e.g., consistency, completeness, and accuracy, that are frequently considered in assessing comment quality, along with various metrics, tools, and techniques that are proposed to assess them automatically. Indeed, some QAs are largely overlooked in the literature; e.g., there is not enough research on approaches and automated tools that ensure that comments are accessible, trustworthy, and understandable, despite numerous studies suggesting that having good code comments brings several benefits.

Standardizing QAs (RQ2). We identify various QAs that researchers consider to assess comment quality. Not all of these QAs are unique, i.e., they have conceptual overlap (based on their definitions in Table 4 and measurement techniques in Table 6). For example, the definitions of up-to-datedness and consistency both mention keeping comments updated. Similarly, the definitions of coherence and similarity focus on the relatedness between code and comments. In this study, we mainly focus on identifying various QAs from the literature and on extracting metrics, tools, and techniques to measure them. Standardizing their definition can be an essential next step in the direction of comment quality assessment research. Since not every study provides the definition of the mentioned QAs, such a work will require surveying the authors to understand how they perceive various QAs and where they refer to for QA definitions.

Comment smells (RQ2).
Although there is no standard definition of good or bad comments, many studies indicate bloated comments (or non-informative comments), redundant comments (containing the same information as the code), or inconsistent comments (e.g., containing conflicting information compared to the code) as code or comment smells. Arnaoudova et al. identified various LAs that developers perceive as poor practices and that should be avoided [59]. Still, what information is vital in comments is a subjective concept and can sometimes be contradictory. For instance, Oracle's coding style guideline suggests including author information in class comments, whereas the Apache style guideline suggests removing it, as it can be inferred from the version control system [87]. We find that researchers use the completeness QA to identify informative comments. They define various metrics to assess the completeness of comments, as shown in Table 7. These metrics check the presence of specific information, such as summary, author, or exception information in class or method comments. Future work can investigate the definition of good and bad comments by surveying various sources, such as documentation guidelines, researchers, and developers, and comparing the sources to improve the understanding of high-quality comments. Such work can inspire the development of more metrics and tools to ensure the adherence of comments to the standards.

Automated tools and techniques (RQ3). Finally, concerning techniques to assess comment quality, we observed that those based on AI, such as NLP and ML, were increasingly used in the past decade. On the other hand, deep learning techniques do not yet seem to have gained a foothold within the community for assessing comment quality. Since code comment generation
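Completeness metrics of the kind described above, which check the presence of specific information in a class or method comment, can be sketched with a few regular expressions over a Javadoc comment. The rules below are hypothetical illustrations in the spirit of the surveyed metrics, not the exact metrics of any study in Table 7:

```python
import re

def completeness_checks(javadoc):
    """Illustrative completeness checks on a Javadoc comment: presence of a
    summary sentence, author information (@author), and exception
    documentation (@throws/@exception). These rules are assumptions for
    demonstration, not the surveyed studies' metrics."""
    return {
        # A line starting with '*' followed by a capitalized sentence
        # ending in a period is taken as a summary sentence.
        "has_summary": bool(re.search(r"\*\s+[A-Z][\w ,]*\.", javadoc)),
        "has_author": "@author" in javadoc,
        "has_exception_doc": bool(re.search(r"@(throws|exception)\b", javadoc)),
    }
```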
For example, we do not find any philosophical, opinion, or experience papers for the comment quality assessment domain even though this domain is more than a decade old now. Philosophical papers sketch a new perspective of looking at things, conceptual frameworks, metrics etc. Opinion papers present good or bad opinions of authors about something, such as different approaches to assess quality, using particular frameworks etc. Similarly, experience papers often present insights about lessons learned or anecdotes by authors in using tools or techniques in practice. Such papers help tool designers better shape their future tools. Important quality attributes for software documentation. Various research works conducted surveys with developers to identify important quality attributes of good software documentation. Forward and Lethbridge surveyed 48 developers, and highlighted developer concerns about outdated documentation [92]. Chen and Huang surveyed 137 project managers and software engineers [93]. Their study highlighted the typical quality problems developers face in maintaining software documentation: adequacy, complete, traceability, consistency, and trustworthiness. Robillard et al. conducted personal interviews with 80 practitioners and presented the important attributes for good documentation, such as including examples and usage information, complete, organized, and better design [94]. Similarly, Plosch et al. surveyed 88 practitioners and identified consistency, clarity, accuracy, readability, organization, and understandability as the most important attributes [95]. They also indicated that developers do not consider documentation standards important (e.g., ISO 26514:2008, IEEE Std.1063:2001). Sohan et al. in their survey study highlighted the importance of examples in documentation et al. have explored various types of software documentation to see which QAs impact it [18]. Both of the studies considered the timeline until 2011. 
Additionally, they have not studied how the proposed comment quality assessment approaches are computed in practice for comments. Inspired by these related studies, we focused specifically on the code comment aspect. Song et al. conducted a literature review on code comment generation techniques, and indicated the need to design an objective comment quality assessment model [89]. Complementarily, Nazar et al. [90] presented a literature review in the field of summarizing software artifacts, which included source code comment generation as well as bug reports, mailing lists, and developer discussion artifacts. Our work complements these previous studies since we mainly focus on manually written comments.

8. Conclusion

In this work, we present the results of a systematic literature review on source code comment quality evaluation practices in the decade 2011-2020. We study 47 publications to understand the effort of Software Engineering researchers, in terms of what type of comments they focus their studies on, what QAs they consider relevant, what techniques they resort to in order to assess their QAs, and finally, how they evaluate their contributions. Our findings show that most studies consider only comments in Java source files, and thus may not generalize to comments of other languages, and that they focus on only a few QAs, especially on consistency between code and comments.

https://www.tiobe.com/tiobe-index/ (verified on Sep 2021)
https://insights.stackoverflow.com/survey/2020

Acknowledgement

We gratefully acknowledge the financial support of the Swiss National Science Foundation for the project "Agile Software

References

[1] M. Abidi, F. Khomh, Towards the definition of patterns and code smells for multi-language systems, in: EuroPLoP '20: European Conference on Pattern Languages of Programs 2020, Virtual Event, Germany, 1-4 July 2020, ACM, 2020, pp. 37:1-37:13. doi:10.1145/3424771.3424792.

[2] M. Lehman, D. Perry, J. Ramil, W. Turski, P. Wernick, Metrics and laws of software evolution-the nineties view, in: Proceedings IEEE International Software Metrics Symposium (METRICS'97), IEEE Computer Society Press, Los Alamitos, CA, 1997, pp. 20-32. doi:10.1109/METRIC.1997.637156.

[3] M. Törngren, U. Sellgren, Complexity Challenges in Development of Cyber-Physical Systems, Springer International Publishing, Cham, 2018, pp. 478-503. doi:10.1007/978-3-319-95246-8_27.

[4] S. C. B. de Souza, N. Anquetil, K. M. de Oliveira, A study of the documentation essential to software maintenance, in: Proceedings of the 23rd Annual International Conference on Design of Communication: Documenting & Designing for Pervasive Information, SIGDOC '05, ACM, New York, NY, USA, 2005, pp. 68-75. doi:10.1145/1085313.1085331.

[5] U. Dekel, J. D. Herbsleb, Reading the documentation of invoked API functions in program comprehension, in: 2009 IEEE 17th International Conference on Program Comprehension, IEEE, 2009, pp. 168-177.

[6] C. McMillan, D. Poshyvanyk, M. Grechanik, Recommending source code examples via API call usages and documentation, in: Proceedings of the 2nd International Workshop on Recommendation Systems for Software Engineering, 2010, pp. 21-25.

[7] L. Tan, D. Yuan, G. Krishna, Y. Zhou, /* iComment: Bugs or bad comments? */, in: Proceedings of Twenty-First ACM SIGOPS Symposium on Operating Systems Principles, 2007, pp. 145-158.

[8] M. Allamanis, E. T. Barr, C. Bird, C. Sutton, Learning natural coding conventions, in: Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014, ACM, New York, NY, USA, 2014, pp. 281-293. doi:10.1145/2635868.2635883.

[9] B. W. Kernighan, R. Pike, The Practice of Programming (Addison-Wesley Professional Computing Series), 1st Edition, Addison-Wesley, 1999.

[10] N. Khamis, R. Witte, J. Rilling, Automatic quality assessment of source code comments: the JavadocMiner, in: International Conference on Application of Natural Language to Information Systems, Springer, 2010, pp. 68-79.

[11] D. Steidl, B. Hummel, E. Juergens, Quality analysis of source code comments, in: Program Comprehension (ICPC), 2013 IEEE 21st International Conference on, IEEE, 2013, pp. 83-92.

[12] I. K. Ratol, M. P. Robillard, Detecting fragile comments, in: Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering, IEEE Press, 2017, pp. 112-122.

[13] L. Pascarella, A. Bacchelli, Classifying code comments in Java open-source software systems, in: Proceedings of the 14th International Conference on Mining Software Repositories, MSR '17,
the 14th International Conference on Mining Software Repositories, MSR '17IEEE PressL. Pascarella, A. Bacchelli, Classifying code comments in Java open- source software systems, in: Proceedings of the 14th International Con- ference on Mining Software Repositories, MSR '17, IEEE Press, 2017, pp. 227-237. doi:10.1109/MSR.2017.63. URL https://doi.org/10.1109/MSR.2017.63 A large-scale empirical study on code-comment inconsistencies. F Wen, C Nagy, G Bavota, M Lanza, Proceedings of the 27th International Conference on Program Comprehension. the 27th International Conference on Program ComprehensionIEEE PressF. Wen, C. Nagy, G. Bavota, M. Lanza, A large-scale empirical study on code-comment inconsistencies, in: Proceedings of the 27th International Conference on Program Comprehension, IEEE Press, 2019, pp. 53-64. Listening to programmers -taxonomies and characteristics of comments in operating system code. Y Padioleau, L Tan, Y Zhou, Proceedings of the 31st International Conference on Software Engineering. the 31st International Conference on Software EngineeringIEEE Computer SocietyY. Padioleau, L. Tan, Y. Zhou, Listening to programmers -taxonomies and characteristics of comments in operating system code, in: Pro- ceedings of the 31st International Conference on Software Engineering, IEEE Computer Society, 2009, pp. 331-341. Analyzing APIs documentation and code to detect directive defects. Y Zhou, R Gu, T Chen, Z Huang, S Panichella, H Gall, Proceedings of the 39th International Conference on Software Engineering. the 39th International Conference on Software EngineeringIEEE Press29Y. Zhou, R. Gu, T. Chen, Z. Huang, S. Panichella, H. Gall, Analyz- ing APIs documentation and code to detect directive defects, in: Pro- ceedings of the 39th International Conference on Software Engineering, 29 IEEE Press, 2017, pp. 27-37. Knowledge-based approaches in software documentation: A systematic literature review. 
W Ding, P Liang, A Tang, H Van, Vliet, Information and Software Technology. 566W. Ding, P. Liang, A. Tang, H. Van Vliet, Knowledge-based approaches in software documentation: A systematic literature review, Information and Software Technology 56 (6) (2014) 545-567. Cost, benefits and quality of software development documentation: A systematic mapping. J Zhi, V Garousi-Yusifoglu, B Sun, G Garousi, S Shahnewaz, G Ruhe, Journal of Systems and Software. 99J. Zhi, V. Garousi-Yusifoglu, B. Sun, G. Garousi, S. Shahnewaz, G. Ruhe, Cost, benefits and quality of software development documenta- tion: A systematic mapping, Journal of Systems and Software 99 (2015) 175-198. Guidelines for performing systematic literature reviews in software engineering. B Kitchenham, S Charters, B. Kitchenham, S. Charters, Guidelines for performing systematic liter- ature reviews in software engineering (2007). Guidelines for performing systematic literature reviews in software engineering. S Keele, EBSE-2007-01Technical ReportS. Keele, Guidelines for performing systematic literature reviews in soft- ware engineering, Tech. rep., Technical report, EBSE Technical Report EBSE-2007-01 (2007). Engineering-an endless frontier. S Y Auyang, Harvard University PressS. Y. Auyang, Engineering-an endless frontier, Harvard University Press, 2006. Experimental validation in software engineering. M V Zelkowitz, D Wallace, Information and Software Technology. 3911M. V. Zelkowitz, D. Wallace, Experimental validation in software engi- neering, Information and Software Technology 39 (11) (1997) 735-743. Systematic reviews in the social sciences: A practical guide. M Petticrew, H Roberts, John Wiley & SonsM. Petticrew, H. Roberts, Systematic reviews in the social sciences: A practical guide, John Wiley & Sons, 2008. A human study of comprehension and code summarization. 
S Stapleton, Y Gambhir, A Leclair, Z Eberhart, W Weimer, K Leach, Y Huang, 10.1145/3387904.3389258ICPC '20: 28th International Conference on Program Comprehension. Seoul, Republic of KoreaS. Stapleton, Y. Gambhir, A. LeClair, Z. Eberhart, W. Weimer, K. Leach, Y. Huang, A human study of comprehension and code summarization, in: ICPC '20: 28th International Conference on Program Comprehen- sion, Seoul, Republic of Korea, July 13-15, 2020, ACM, 2020, pp. 2-13. doi:10.1145/3387904.3389258. URL https://doi.org/10.1145/3387904.3389258 On the pragmatic design of literature studies in software engineering: An experience-based guideline. M Kuhrmann, D M Fernández, M Daneva, 10.1007/s10664-016-9492-yEmpirical Softw. Engg. 226M. Kuhrmann, D. M. Fernández, M. Daneva, On the pragmatic de- sign of literature studies in software engineering: An experience-based guideline, Empirical Softw. Engg. 22 (6) (2017) 2852-2891. doi: 10.1007/s10664-016-9492-y. . 10.1007/s10664-016-9492-yURL https://doi.org/10.1007/s10664-016-9492-y 6 million links in source code comments: Purpose, evolution, and decay. H Hata, C Treude, R G Kula, T Ishio, Proceedings of the 41st International Conference on Software Engineering. the 41st International Conference on Software EngineeringIEEE Press9H. Hata, C. Treude, R. G. Kula, T. Ishio, 9.6 million links in source code comments: Purpose, evolution, and decay, in: Proceedings of the 41st International Conference on Software Engineering, IEEE Press, 2019, pp. 1211-1221. Software documentation: the practitioners' perspective. E Aghajani, C Nagy, M Linares-Vásquez, L Moreno, G Bavota, M Lanza, D C Shepherd, 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE). IEEEE. Aghajani, C. Nagy, M. Linares-Vásquez, L. Moreno, G. Bavota, M. Lanza, D. C. Shepherd, Software documentation: the practition- ers' perspective, in: 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE), IEEE, 2020, pp. 590-601. How good is your comment? 
A study of comments in Java programs. D Haouari, H A Sahraoui, P Langlais, 10.1109/ESEM.2011.22Proceedings of the 5th International Symposium on Empirical Software Engineering and Measurement, ESEM 2011. the 5th International Symposium on Empirical Software Engineering and Measurement, ESEM 2011Banff, AB, CanadaIEEE Computer SocietyD. Haouari, H. A. Sahraoui, P. Langlais, How good is your comment? A study of comments in Java programs, in: Proceedings of the 5th Inter- national Symposium on Empirical Software Engineering and Measure- ment, ESEM 2011, Banff, AB, Canada, September 22-23, 2011, IEEE Computer Society, 2011, pp. 137-146. doi:10.1109/ESEM.2011.22. URL https://doi.org/10.1109/ESEM.2011.22 Evaluating usage and quality of technical software documentation: An empirical study. G Garousi, V Garousi, M Moussavi, G Ruhe, B Smith, 10.1145/2460999.246100317th International Conference on Evaluation and Assessment in Software Engineering, EASE '13. F. Q. B. da Silva, N. J. Juzgado, G. H. TravassosPorto de Galinhas, BrazilACMG. Garousi, V. Garousi, M. Moussavi, G. Ruhe, B. Smith, Evaluating usage and quality of technical software documentation: An empirical study, in: F. Q. B. da Silva, N. J. Juzgado, G. H. Travassos (Eds.), 17th International Conference on Evaluation and Assessment in Soft- ware Engineering, EASE '13, Porto de Galinhas, Brazil, April 14-16, 2013, ACM, 2013, pp. 24-35. doi:10.1145/2460999.2461003. URL https://doi.org/10.1145/2460999.2461003 Inferring method specifications from natural language API descriptions. R Pandita, X Xiao, H Zhong, T Xie, S Oney, A M Paradkar, 10.1109/ICSE.2012.6227137doi:10. 1109/ICSE.2012.622713734th International Conference on Software Engineering, ICSE 2012. M. Glinz, G. C. Murphy, M. PezzèZurich, SwitzerlandIEEE Computer SocietyR. Pandita, X. Xiao, H. Zhong, T. Xie, S. Oney, A. M. Paradkar, In- ferring method specifications from natural language API descriptions, in: M. Glinz, G. C. Murphy, M. 
Pezzè (Eds.), 34th International Con- ference on Software Engineering, ICSE 2012, June 2-9, 2012, Zurich, Switzerland, IEEE Computer Society, 2012, pp. 815-825. doi:10. 1109/ICSE.2012.6227137. URL https://doi.org/10.1109/ICSE.2012.6227137 Using traceability links to recommend adaptive changes for documentation evolution. B Dagenais, M P Robillard, 10.1109/TSE.2014.2347969IEEE Trans. Software Eng. 4011B. Dagenais, M. P. Robillard, Using traceability links to recommend adaptive changes for documentation evolution, IEEE Trans. Software Eng. 40 (11) (2014) 1126-1146. doi:10.1109/TSE.2014.2347969. URL https://doi.org/10.1109/TSE.2014.2347969 On using machine learning to identify knowledge in API reference documentation. D Fucci, A Mollaalizadehbahnemiri, W Maalej, Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software EngineeringD. Fucci, A. Mollaalizadehbahnemiri, W. Maalej, On using machine learning to identify knowledge in API reference documentation, in: Pro- ceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Soft- ware Engineering, 2019, pp. 109-119. Automatically assessing code understandability: how far are we?. S Scalabrino, G Bavota, C Vendome, M L Vásquez, D Poshyvanyk, R Oliveto, 10.1109/ASE.2017.8115654Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering. G. Rosu, M. D. Penta, T. N. Nguyenthe 32nd IEEE/ACM International Conference on Automated Software EngineeringUrbana, IL, USAIEEE Computer SocietyS. Scalabrino, G. Bavota, C. Vendome, M. L. Vásquez, D. Poshy- vanyk, R. Oliveto, Automatically assessing code understandability: how far are we?, in: G. Rosu, M. D. Penta, T. N. 
Nguyen (Eds.), Pro- ceedings of the 32nd IEEE/ACM International Conference on Auto- mated Software Engineering, ASE 2017, Urbana, IL, USA, October 30 -November 03, 2017, IEEE Computer Society, 2017, pp. 417-427. doi:10.1109/ASE.2017.8115654. URL https://doi.org/10.1109/ASE.2017.8115654 The effect of poor source code lexicon and readability on developers' cognitive load. S Fakhoury, Y Ma, V Arnaoudova, O O Adesope, 10.1145/3196321.3196347Proceedings of the 26th Conference on Program Comprehension, ICPC 2018. F. Khomh, C. K. Roy, J. Siegmundthe 26th Conference on Program Comprehension, ICPC 2018Gothenburg, SwedenACMS. Fakhoury, Y. Ma, V. Arnaoudova, O. O. Adesope, The effect of poor source code lexicon and readability on developers' cognitive load, in: F. Khomh, C. K. Roy, J. Siegmund (Eds.), Proceedings of the 26th Con- ference on Program Comprehension, ICPC 2018, Gothenburg, Sweden, May 27-28, 2018, ACM, 2018, pp. 286-296. doi:10.1145/3196321. 3196347. URL https://doi.org/10.1145/3196321.3196347 E Aghajani, C Nagy, G Bavota, M Lanza, 10.1109/ICSME.2018.000122018 IEEE International Conference on Software Maintenance and Evolution. Madrid, SpainA large-scale empirical study on linguistic antipatterns affecting APIsE. Aghajani, C. Nagy, G. Bavota, M. Lanza, A large-scale empirical study on linguistic antipatterns affecting APIs, in: 2018 IEEE Inter- national Conference on Software Maintenance and Evolution, ICSME 2018, Madrid, Spain, September 23-29, 2018, IEEE Computer Society, 2018, pp. 25-35. doi:10.1109/ICSME.2018.00012. URL https://doi.org/10.1109/ICSME.2018.00012 Improving API caveats accessibility by mining API caveats knowledge graph. H Li, S Li, J Sun, Z Xing, X Peng, M Liu, X Zhao, 10.1109/ICSME.2018.000282018 IEEE International Conference on Software Maintenance and Evolution. Madrid, SpainH. Li, S. Li, J. Sun, Z. Xing, X. Peng, M. Liu, X. 
Zhao, Improving API caveats accessibility by mining API caveats knowledge graph, in: 2018 IEEE International Conference on Software Maintenance and Evolution, ICSME 2018, Madrid, Spain, September 23-29, 2018, IEEE Computer Society, 2018, pp. 183-193. doi:10.1109/ICSME.2018.00028. URL https://doi.org/10.1109/ICSME.2018.00028 A learning-based approach for automatic construction of domain glossary 30 from source code and documentation. C Wang, X Peng, M Liu, Z Xing, X Bai, B Xie, T Wang, 10.1145/3338906.3338963doi:10.1145/ 3338906.3338963Proceedings of the ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2019. M. Dumas, D. Pfahl, S. Apel, A. Russothe ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2019Tallinn, EstoniaACMC. Wang, X. Peng, M. Liu, Z. Xing, X. Bai, B. Xie, T. Wang, A learning-based approach for automatic construction of domain glossary 30 from source code and documentation, in: M. Dumas, D. Pfahl, S. Apel, A. Russo (Eds.), Proceedings of the ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2019, Tallinn, Esto- nia, August 26-30, 2019, ACM, 2019, pp. 97-108. doi:10.1145/ 3338906.3338963. URL https://doi.org/10.1145/3338906.3338963 A framework for writing trigger-action todo comments in executable format. P Nie, R Rai, J J Li, S Khurshid, R J Mooney, M Gligoric, 10.1145/3338906.3338965Proceedings of the ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2019. M. Dumas, D. Pfahl, S. Apel, A. Russothe ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2019Tallinn, EstoniaACMP. Nie, R. Rai, J. J. Li, S. Khurshid, R. J. Mooney, M. 
Gligoric, A frame- work for writing trigger-action todo comments in executable format, in: M. Dumas, D. Pfahl, S. Apel, A. Russo (Eds.), Proceedings of the ACM Joint Meeting on European Software Engineering Conference and Sym- posium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2019, Tallinn, Estonia, August 26-30, 2019, ACM, 2019, pp. 385- 396. doi:10.1145/3338906.3338965. URL https://doi.org/10.1145/3338906.3338965 Software documentation issues unveiled. E Aghajani, C Nagy, O L Vega-Márquez, M Linares-Vásquez, L Moreno, G Bavota, M Lanza, 10.1109/ICSE.2019.00122Proceedings of the 41st International Conference on Software Engineering, ICSE 2019. J. M. Atlee, T. Bultan, J. Whittlethe 41st International Conference on Software Engineering, ICSE 2019Montreal, QC, CanadaE. Aghajani, C. Nagy, O. L. Vega-Márquez, M. Linares-Vásquez, L. Moreno, G. Bavota, M. Lanza, Software documentation issues un- veiled, in: J. M. Atlee, T. Bultan, J. Whittle (Eds.), Proceedings of the 41st International Conference on Software Engineering, ICSE 2019, Montreal, QC, Canada, May 25-31, 2019, IEEE / ACM, 2019, pp. 1199- 1210. doi:10.1109/ICSE.2019.00122. URL https://doi.org/10.1109/ICSE.2019.00122 The secret life of commented-out source code. T M T Pham, J Yang, 10.1145/3387904.3389259ICPC '20: 28th International Conference on Program Comprehension. Seoul, Republic of KoreaT. M. T. Pham, J. Yang, The secret life of commented-out source code, in: ICPC '20: 28th International Conference on Program Comprehen- sion, Seoul, Republic of Korea, July 13-15, 2020, ACM, 2020, pp. 308- 318. doi:10.1145/3387904.3389259. URL https://doi.org/10.1145/3387904.3389259 Code comment quality analysis and improvement recommendation: an automated approach. X Sun, Q Geng, D Lo, Y Duan, X Liu, B Li, International Journal of Software Engineering and Knowledge Engineering. 2606X. Sun, Q. Geng, D. Lo, Y. Duan, X. Liu, B. 
Li, Code comment qual- ity analysis and improvement recommendation: an automated approach, International Journal of Software Engineering and Knowledge Engineer- ing 26 (06) (2016) 981-1000. CPC: automatically classifying and propagating natural language comments via program analysis. J Zhai, X Xu, Y Shi, G Tao, M Pan, S Ma, L Xu, W Zhang, L Tan, X Zhang, 10.1145/3377811.3380427ICSE '20: 42nd International Conference on Software Engineering. G. Rothermel, D. BaeSeoul, South KoreaACMJ. Zhai, X. Xu, Y. Shi, G. Tao, M. Pan, S. Ma, L. Xu, W. Zhang, L. Tan, X. Zhang, CPC: automatically classifying and propagating natu- ral language comments via program analysis, in: G. Rothermel, D. Bae (Eds.), ICSE '20: 42nd International Conference on Software Engineer- ing, Seoul, South Korea, 27 June -19 July, 2020, ACM, 2020, pp. 1359- 1371. doi:10.1145/3377811.3380427. URL https://doi.org/10.1145/3377811.3380427 Recommending insightful comments for source code using crowdsourced knowledge. M M Rahman, C K Roy, I Keivanloo, 10.1109/SCAM.2015.733540415th IEEE International Working Conference on Source Code Analysis and Manipulation, SCAM 2015. M. W. Godfrey, D. Lo, F. KhomhBremen, GermanyIEEE Computer SocietyM. M. Rahman, C. K. Roy, I. Keivanloo, Recommending insightful com- ments for source code using crowdsourced knowledge, in: M. W. God- frey, D. Lo, F. Khomh (Eds.), 15th IEEE International Working Confer- ence on Source Code Analysis and Manipulation, SCAM 2015, Bremen, Germany, September 27-28, 2015, IEEE Computer Society, 2015, pp. 81-90. doi:10.1109/SCAM.2015.7335404. URL https://doi.org/10.1109/SCAM.2015.7335404 S Scalabrino, M Linares-Vasquez, D Poshyvanyk, R Oliveto, 2016 IEEE 24th International Conference on Program Comprehension (ICPC). IEEEImproving code readability models with textual featuresS. Scalabrino, M. Linares-Vasquez, D. Poshyvanyk, R. 
Oliveto, Improv- ing code readability models with textual features, in: 2016 IEEE 24th International Conference on Program Comprehension (ICPC), IEEE, 2016, pp. 1-10. Automatic source code summarization of context for Java methods. P W Mcburney, C Mcmillan, 10.1109/TSE.2015.2465386IEEE Trans. Software Eng. 422P. W. McBurney, C. McMillan, Automatic source code summarization of context for Java methods, IEEE Trans. Software Eng. 42 (2) (2016) 103-119. doi:10.1109/TSE.2015.2465386. URL https://doi.org/10.1109/TSE.2015.2465386 Automatic detection and repair recommendation of directive defects in Java API documentation. Y Zhou, C Wang, X Yan, T Chen, S Panichella, H C Gall, 10.1109/TSE.2018.2872971IEEE Trans. Software Eng. 469Y. Zhou, C. Wang, X. Yan, T. Chen, S. Panichella, H. C. Gall, Automatic detection and repair recommendation of directive defects in Java API documentation, IEEE Trans. Software Eng. 46 (9) (2020) 1004-1023. doi:10.1109/TSE.2018.2872971. URL https://doi.org/10.1109/TSE.2018.2872971 Measuring program comprehension: A large-scale field study with professionals. X Xia, L Bao, D Lo, Z Xing, A E Hassan, S Li, 10.1109/TSE.2017.2734091IEEE Trans. Software Eng. 4410X. Xia, L. Bao, D. Lo, Z. Xing, A. E. Hassan, S. Li, Measuring pro- gram comprehension: A large-scale field study with professionals, IEEE Trans. Software Eng. 44 (10) (2018) 951-976. doi:10.1109/TSE. 2017.2734091. URL https://doi.org/10.1109/TSE.2017.2734091 Usage and usefulness of technical software documentation: An industrial case study. G Garousi, V Garousi-Yusifoglu, G Ruhe, J Zhi, M Moussavi, B Smith, Information and Software Technology. 57G. Garousi, V. Garousi-Yusifoglu, G. Ruhe, J. Zhi, M. Moussavi, B. Smith, Usage and usefulness of technical software documentation: An industrial case study, Information and Software Technology 57 (2015) 664-682. What should developers be aware of? An empirical study on the directives of API documentation. 
M Monperrus, M Eichberg, E Tekes, M Mezini, 10.1007/s10664-011-9186-4doi:10.1007/ s10664-011-9186-4Empir. Softw. Eng. 176M. Monperrus, M. Eichberg, E. Tekes, M. Mezini, What should devel- opers be aware of? An empirical study on the directives of API docu- mentation, Empir. Softw. Eng. 17 (6) (2012) 703-737. doi:10.1007/ s10664-011-9186-4. . 10.1007/s10664-011-9186-4URL https://doi.org/10.1007/s10664-011-9186-4 Analysis of license inconsistency in large collections of open source projects. Y Wu, Y Manabe, T Kanda, D M Germán, K Inoue, 10.1007/s10664-016-9487-8doi:10.1007/ s10664-016-9487-8Empir. Softw. Eng. 223Y. Wu, Y. Manabe, T. Kanda, D. M. Germán, K. Inoue, Anal- ysis of license inconsistency in large collections of open source projects, Empir. Softw. Eng. 22 (3) (2017) 1194-1222. doi:10.1007/ s10664-016-9487-8. . 10.1007/s10664-016-9487-8URL https://doi.org/10.1007/s10664-016-9487-8 Classifying code comments in Java software systems. L Pascarella, M Bruntink, A Bacchelli, 10.1007/s10664-019-09694-wEmpir. Softw. Eng. 243L. Pascarella, M. Bruntink, A. Bacchelli, Classifying code comments in Java software systems, Empir. Softw. Eng. 24 (3) (2019) 1499-1537. doi:10.1007/s10664-019-09694-w. . 10.1007/s10664-019-09694-wURL https://doi.org/10.1007/s10664-019-09694-w Augmenting Java method comments generation with context information based on neural networks. Y Zhou, X Yan, W Yang, T Chen, Z Huang, 10.1016/j.jss.2019.07.087J. Syst. Softw. 156Y. Zhou, X. Yan, W. Yang, T. Chen, Z. Huang, Augmenting Java method comments generation with context information based on neural net- works, J. Syst. Softw. 156 (2019) 328-340. doi:10.1016/j.jss. 2019.07.087. URL https://doi.org/10.1016/j.jss.2019.07.087 Improving source code lexicon via traceability and information retrieval. A D Lucia, M D Penta, R Oliveto, 10.1109/TSE.2010.89IEEE Trans. Software Eng. 372A. D. Lucia, M. D. Penta, R. Oliveto, Improving source code lexicon via traceability and information retrieval, IEEE Trans. 
Software Eng. 37 (2) (2011) 205-227. doi:10.1109/TSE.2010.89. URL https://doi.org/10.1109/TSE.2010.89 Detecting API documentation errors. H Zhong, Z Su, 10.1145/2509136.2509523Proceedings of the 2013 ACM SIGPLAN International Conference on Object Oriented Programming Systems Languages & Applications, OOPSLA 2013, part of SPLASH 2013. A. L. Hosking, P. T. Eugster, C. V. Lopesthe 2013 ACM SIGPLAN International Conference on Object Oriented Programming Systems Languages & Applications, OOPSLA 2013, part of SPLASH 2013Indianapolis, IN, USAACMH. Zhong, Z. Su, Detecting API documentation errors, in: A. L. Hosk- ing, P. T. Eugster, C. V. Lopes (Eds.), Proceedings of the 2013 ACM SIGPLAN International Conference on Object Oriented Programming Systems Languages & Applications, OOPSLA 2013, part of SPLASH 2013, Indianapolis, IN, USA, October 26-31, 2013, ACM, 2013, pp. 803-816. doi:10.1145/2509136.2509523. . 10.1145/2509136.2509523URL https://doi.org/10.1145/2509136.2509523 Analyzing code comments to boost program comprehension. Y Shinyama, Y Arahori, K Gondow, 25th Asia-Pacific Software Engineering Conference (APSEC). IEEEY. Shinyama, Y. Arahori, K. Gondow, Analyzing code comments to boost program comprehension, in: 2018 25th Asia-Pacific Software En- gineering Conference (APSEC), IEEE, 2018, pp. 325-334. Recommending reference API documentation. M P Robillard, Y B Chhetri, 10.1007/s10664-014-9323-ydoi:10.1007/ s10664-014-9323-yEmpir. Softw. Eng. 206M. P. Robillard, Y. B. Chhetri, Recommending reference API documen- tation, Empir. Softw. Eng. 20 (6) (2015) 1558-1586. doi:10.1007/ s10664-014-9323-y. . 10.1007/s10664-014-9323-yURL https://doi.org/10.1007/s10664-014-9323-y Some structural measures of API usability. G M Rama, A C Kak, 10.1002/spe.2215Softw. Pract. Exp. 451G. M. Rama, A. C. Kak, Some structural measures of API usability, Softw. Pract. Exp. 45 (1) (2015) 75-110. doi:10.1002/spe.2215. 
URL https://doi.org/10.1002/spe.2215 An empirical study of the textual similarity between source code and source code summaries. P W Mcburney, C Mcmillan, 10.1007/s10664-014-9344-6Empir. SoftwP. W. McBurney, C. McMillan, An empirical study of the textual sim- ilarity between source code and source code summaries, Empir. Softw. . Eng, 10.1007/s10664-014-9344-621Eng. 21 (1) (2016) 17-42. doi:10.1007/s10664-014-9344-6. URL https://doi.org/10.1007/s10664-014-9344-6 Linguistic antipatterns: what they are and how developers perceive them. V Arnaoudova, M D Penta, G , 10.1007/s10664-014-9350-8Empir. Softw. Eng. 211V. Arnaoudova, M. D. Penta, G. Antoniol, Linguistic antipatterns: what they are and how developers perceive them, Empir. Softw. Eng. 21 (1) (2016) 104-158. doi:10.1007/s10664-014-9350-8. URL https://doi.org/10.1007/s10664-014-9350-8 Coherence of comments and method implementations: A dataset and an empirical investigation, Software. A Corazza, V Maggio, G Scanniello, Quality Journal. 262A. Corazza, V. Maggio, G. Scanniello, Coherence of comments and method implementations: A dataset and an empirical investigation, Soft- ware Quality Journal 26 (2) (2018) 751-777. A comprehensive model for code readability. S Scalabrino, M Linares-Vásquez, R Oliveto, D Poshyvanyk, Journal of Software: Evolution and Process. 3061958S. Scalabrino, M. Linares-Vásquez, R. Oliveto, D. Poshyvanyk, A com- prehensive model for code readability, Journal of Software: Evolution and Process 30 (6) (2018) e1958. Automatic detection of outdated comments during code changes. Z Liu, H Chen, X Chen, X Luo, F Zhou ; S. Reisman, S I Ahamed, C Demartini, T M Conte, L Liu, W R Claycomb, M Nakamura, E Tovar, S Cimato, C Lung, H Takakura, 10.1109/COMPSAC.2018.000282018 IEEE 42nd Annual Computer Software and Applications Conference. J. Yang, T. Akiyama, Z. Zhang, K. HasanTokyo, Japan1COMPSACZ. Liu, H. Chen, X. Chen, X. Luo, F. Zhou, Automatic detection of out- dated comments during code changes, in: S. 
Programmable reaction-diffusion fronts

Anton S. Zadorin, Yannick Rondelez, Jean-Christophe Galas, André Estevez-Torres

Laboratoire de photonique et de nanostructures, CNRS, route de Nozay, 91460 Marcoussis, France
LIMMS/CNRS-IIS, University of Tokyo, Komaba 4-6-2, Meguro-ku, Tokyo, Japan

arXiv:1407.4152

Abstract: Morphogenesis is central to biology but remains largely unexplored in chemistry. Reaction-diffusion (RD) mechanisms are, however, essential to understand how shape emerges in the living world. While numerical methods confirm the incredible potential of RD mechanisms to generate patterns, their experimental implementation, despite great efforts, has yet to surpass the paradigm of stationary Turing patterns achieved 25 years ago. The principal reason for our difficulty to synthesize arbitrary concentration patterns from scratch is the lack of fully programmable reaction-diffusion systems. To solve this problem we introduce here a DNA-based system where kinetics and diffusion can be individually tuned. We demonstrate the capability to precisely control reaction-diffusion properties with an autocatalytic network that propagates in a one-dimensional reactor with uniform velocity, typically 100 µm min⁻¹. The diffusion coefficient of the propagating species can be reduced up to a factor 2.7 using a species-specific strategy relying on self-assembled hydrodynamic drags. Our approach is modular, as we illustrate by designing three alternative front-generating systems, two of which can pass through each other with little interaction. Importantly, the strategies to control kinetics and diffusion are orthogonal to each other, resulting in simple programming rules. Our results can be quantitatively predicted from first-principle RD equations and are in excellent agreement with a generalized Fisher-Kolmogorov-Petrovskii-Piscunov analytical model. Together, these advances open the way for the rational engineering of far-from-equilibrium arbitrary patterns and could lead to the synthesis of self-organizing materials.

Significance Statement: How macroscopic spatiotemporal order arises in a system of chemical reactions is a longstanding question which has important implications in biological morphogenesis. Traveling waves of concentration and stationary Turing patterns, whose dynamics are ruled by an interplay between reaction and diffusion, are the archetypes of the emergence of such an order. However, these important phenomena have been interrogated so far with a class of chemical reactions for which reaction and diffusion are hardly tunable: redox or pH oscillators such as the Belousov-Zhabotinsky reaction. Here we report a programmable DNA-based biochemical system where both the reaction and the diffusion terms can be easily controlled. It opens the way to the synthesis of reconfigurable reaction-diffusion behaviors.
Corresponding author: André Estevez-Torres
Keywords: Fisher-KPP, chemical waves, self-organization, DNA nanotechnology

Introduction

Morphogenesis is an area that remains largely unexplored in chemistry. We know, however, that reaction-diffusion (RD) mechanisms are essential for the emergence of spatiotemporally ordered structures in living systems (1). Our capacity to generate concentration patterns from scratch hence bears the potential to increase our understanding of morphogenic processes in an unprecedented way. Turing (2) and later Gierer and Meinhardt (3) laid the theoretical foundations of chemical morphogenesis. At steady state, a large majority of chemical systems relax to a state of homogeneous concentration. Many excitable systems and oscillators develop fronts, waves and spirals (4-7), with well-defined velocity. But just a handful of reactions produce more complex behaviors such as stationary Turing patterns (8-10), replicating (11) and oscillating spots (12).
Nothing more complex than that has ever been observed experimentally in synthetic systems in the absence of external forcing (13). Computational methods suggest, however, a wealth of possible phenomena (13, 14), but these are difficult to test because experimental systems with tunable properties have remained elusive in the laboratory. In this work we introduce a programmable reaction-diffusion experimental system.

To generate arbitrary spatio-temporal patterns, the following properties need to be programmable: i) the topology of the chemical reaction network (CRN), ii) the reaction rates r_i, and iii) the diffusion coefficients of the individual species, D_i. The first two requirements guarantee a chemical system with non-trivial dynamics. The last two conditions allow one to probe experimentally different regions of the bifurcation diagram. In the last forty years, a strong effort has been dedicated to developing experimental systems where the aforementioned properties could be programmed. The majority of them are related to the Belousov-Zhabotinsky (BZ) reaction (15-17): they concern small inorganic or organic molecules and redox or acid-base reactions (we will call them BZ-related reactions). Our current understanding of chemical reactivity does not allow us to engineer CRNs with such chemistries in a rational way. Although semi-heuristic methods have been developed (18-20), they are neither general nor modular. They are nevertheless still the gold standard to experimentally test reaction-diffusion theories (21). An essential point in the quest to synthesize complex RD patterns is the ability to selectively reduce D_i for a given chemical species (1, 2). Particular solutions to reduce D_i have been devised for BZ-related reactions (8, 10), but no general strategy is available. DNA-based chemical reaction networks provide an interesting solution to the issues mentioned above.
Due to base complementarity, the reactivity of single-stranded DNA (ssDNA) towards hybridization can be predicted from the sequence (22, 23). Recent advances in DNA nanotechnology allow us to program the topology of quite complex CRNs. Enzyme-free DNA circuits have produced tunable cascading reactions (24, 25), and they have recently allowed to encode edge-detection algorithms of light-generated patterns (26). In combination with enzymatic reactions, non-equilibrium dissipative behaviors with DNA circuits have been obtained, such as non-linear oscillators (27-29) and memory switches (30). We have recently observed wave trains and spirals in a synthetic reaction network made of short DNA strands and three enzymes (31).

Here we introduce a general method to specifically control both the reaction rates and the diffusion coefficients of DNA species involved in such programmable reaction networks. We apply it to a far-from-equilibrium autocatalytic system that develops traveling fronts of uniform velocity. As a result, we demonstrate that the propagation velocity of the fronts can be controlled by either reaction or diffusion in a predictable manner. Importantly, a source of dNTPs maintains the system far from equilibrium for several hours in a closed reactor, which greatly facilitates its experimental implementation. The growing field of structural DNA nanotechnology (32) uses DNA encoding to self-assemble µm structures with nanometer resolution (33, 34). We show that the same DNA-based chemistry can be harnessed to generate order on the millimeter length scale, and hence suggest that both approaches could be bridged in future integrative models of the emergence of shapes in the living world.

Model

Throughout this work we consider an autocatalytic system based on the PEN DNA toolbox (27, 35), which is a modular approach to engineer complex chemical reaction networks.
Species A, an 11-mer ssDNA, may catalyze its own growth in the presence of a template T, a 22-mer carrying two contiguous domains complementary to A (Figure 1):

A → 2A  (rate r),    (1)

where the rate r(A) depends on A, the concentration of A (see SI Section 1). In a one-dimensional reactor the evolution of A depends on time t and position x, and is described by the reaction-diffusion equation

∂A/∂t = r(A) + ∂/∂x [ D_eff(A) ∂A/∂x ],    (2)

where we have made explicit that the diffusion coefficient D_eff(A) may depend on the concentration of A. The reason is that we are modeling a reaction-diffusion system that depends on the concentrations of at least 6 different species (Figure 1) with the single average species A. A is either free or bound to T, and thus the diffusion coefficient of the average species A depends on the molar fraction of free A, and hence on A. We show in the SI that in our case

D_eff(0) ≃ [K⁻¹ / (2T_0 + K⁻¹)] D_A + [2T_0 / (2T_0 + K⁻¹)] D_T,    (3)

where K⁻¹ is the dissociation constant of reaction {1} in Figure 1, T_0 is the total concentration of T, and D_A and D_T are the diffusion coefficients of species A and T, respectively. In the Fisher-KPP limit the front then propagates with velocity

v_mod = 2 [r'(0) D_eff(0)]^(1/2),    (4)

where r'(0) is the growth rate at low A (SI Figure S1). Note that the scaling in Eq. 4 is, in principle, valid regardless of the expression of r(A), assuming b) and e) hold, and the change of r'(0) is due to a multiplicative factor on r(A). If the rate law is unknown, r'(0) is simply the exponential growth rate at low A. However, when the function r(A) differs from the one of Fisher-KPP, a multiplicative constant greater than unity may appear in the expression of the velocity (for an exactly solvable, chemically relevant example we refer to (39)). To take this into account we introduce a corrective factor γ:

v_corr = γ v_mod,    (5)

where the indexes stand for "corrected" and "model", respectively, and v_mod is given by Eq. 4. Throughout this paper we will show that our programmable molecular system is in quantitative agreement with Eqs.
3-5, with γ = 1.3, and is thus a very good candidate to explore experimentally the emergence of complex patterns.

Results

A front of autocatalyst propagates with uniform velocity

We first studied the growth kinetics and the front-propagation dynamics of autocatalyst A (Figure 2). In both cases the concentration of A was indirectly monitored using the non-specific fluorescent DNA binder EvaGreen. The fluorescence quantum yield of EvaGreen increases 6-fold when bound to dsDNA, compared to ssDNA (SI Table S1). The fluorescence intensity measured in the following is thus proportional to a linear combination of the concentrations of the double-stranded species B_1, B_2, B_12 and F. We first introduced in a well-mixed reactor T, A, pol, nick and deoxyribonucleotides (dNTPs) at 38°C. The growth of the fluorescence intensity displays a sigmoidal shape (SI Figure S3). It is not a simple logistic growth: two growth rates were obtained by fitting with a biexponential function, k_1 = (8.0 ± 0.8)·10⁻² min⁻¹ and k_2 = 0.57 min⁻¹ (Figure 2A and SI Figure S3). After about 80 min, the fluorescence intensity stabilizes, most likely when all the free templates T are bound to A either as B_12, B_1, B_2 or F. The monoexponential growth at low A can be explained by a simple kinetic model where the polymerization reaction {5} is rate-limiting (SI Section 1.2). We did not attempt to interpret the second monoexponential time scale in this work, as it appears to be irrelevant for front-propagation dynamics.

When a channel is filled with the reaction buffer with all components except A and an initial condition is created by injecting 1 µM A into the left inlet, we observe a front of fluorescence that moves from left to right (Figure 2B, SI Video S1). The shape of the intensity profile along x is stable and propagates with uniform velocity (Figure 2C). The front lasts for about 150 min before reaching the right border of the channel.
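The uniform front velocity and its independence from the initial condition can be reproduced numerically. The sketch below (our own illustration, not the authors' code) integrates Eq. 2 with a constant diffusion coefficient and the simplest Fisher-KPP rate law, r(A) = kA(1 − A); the fitted front speed should approach v = 2√(kD) (Eq. 4), which is 2 in the nondimensional units used here:

```python
import numpy as np

# Nondimensional Fisher-KPP front: dA/dt = k*A*(1 - A) + D * d2A/dx2
D, k = 1.0, 1.0
dx, dt, L = 0.5, 0.05, 300.0          # dt < dx**2 / (2*D) for stability
x = np.arange(0.0, L, dx)
A = np.where(x < 10.0, 1.0, 0.0)      # step-like initial condition

def front_position(A):
    """Position where A crosses 1/2, linearly interpolated."""
    i = int(np.argmax(A < 0.5))
    return x[i - 1] + dx * (A[i - 1] - 0.5) / (A[i - 1] - A[i])

times, pos = [], []
t = 0.0
for _ in range(int(120.0 / dt)):
    lap = (np.roll(A, -1) - 2.0 * A + np.roll(A, 1)) / dx**2
    lap[0] = (A[1] - A[0]) / dx**2        # no-flux boundaries
    lap[-1] = (A[-2] - A[-1]) / dx**2
    A = A + dt * (k * A * (1.0 - A) + D * lap)
    t += dt
    if t > 60.0:                          # discard the initial transient
        times.append(t)
        pos.append(front_position(A))

v = np.polyfit(times, pos, 1)[0]          # front speed, expected ~ 2*sqrt(k*D)
```

In the PEN system r(A) is not logistic, which is precisely why the corrective factor γ of Eq. 5 is needed on top of this idealized picture.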
For 38°C, T = 200 nM, pol = 16 U/mL and nick = 300 U/mL, the velocity of the front is 68 µm min⁻¹. Importantly, and in agreement with the theory described above, the velocity and the shape of the front are independent of the shape and the amplitude of the initial condition (SI Figure S4). Under certain circumstances (higher temperature, higher pol), a leak reaction due to the unprimed polymerization of T triggers a homogeneous growth across the whole channel, preventing the front from finishing the run. However, in the conditions described above, it takes more than 900 min for the unprimed reaction to become evident (SI Figure S5). In an independent experiment we measured the diffusion coefficients of fluorescent analogues of A and T at 38°C from the relaxation of a sharp initial concentration profile (SI Section 8, SI Figure S6, Table 2).

The velocity of the front depends on the growth rate, which can be specifically tuned

We set out to check both the scaling v ~ (r'(0))^(1/2) (Eq. 4) and the capability of our system to tune the growth rate (Tables 1-2). This behavior may be due to saturation of the enzyme, which has already been described (31), or to an effect of the single-stranded DNA-binding protein present in the reaction mix. This fact was not investigated further because the front velocity is set by the growth rate at low A. Interestingly, the growth rate is proportional to T_0 in the range 0-100 nM, with r'(0) = (3.1·10⁻⁴ nM⁻¹ min⁻¹) T_0, indicating that, in this range, growth kinetics can be specifically tuned by changing the concentration of their templates, an important feature for modular programmability (Figure 3C).

Orthogonal autocatalysts with different sequences propagate with uniform but different velocity

To illustrate the versatility of our approach, we designed two more autocatalysts, A_1 and A_2, produced by templates T_1 and T_2, respectively. A_1 has 5 out of 11 bases different from A but functions with the same nicking enzyme (Nt.BstNBI).
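Eqs. 4-5 turn measured growth rates into velocity predictions. The sketch below uses the growth rates reported for A and A_2 (0.08 and 0.13 min⁻¹), γ = 1.3, and a common D_eff(0) of 1.1·10⁴ µm² min⁻¹ that we assume here purely for illustration; note that the ratio of the two predictions does not depend on that assumption:

```python
import math

def front_velocity(r0, D_eff, gamma=1.3):
    """Eqs. 4-5: v_corr = gamma * v_mod = gamma * 2 * sqrt(r'(0) * D_eff(0))."""
    return gamma * 2.0 * math.sqrt(r0 * D_eff)

D_EFF = 1.1e4                        # um^2 min^-1, assumed for illustration
v_A = front_velocity(0.08, D_EFF)    # ~77 um/min
v_A2 = front_velocity(0.13, D_EFF)   # ~98 um/min
```

These values are close to the corrected velocities of 77 and ≈97 µm min⁻¹ computed for A and A_2 in the paper, and the model-independent ratio is v_A2/v_A = √(0.13/0.08) ≈ 1.27.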
A_2 has 11 out of 11 bases different from A and depends on a different nicking enzyme (Nb.BsmI). All three species display sigmoidal growth curves with a clear exponential term at the onset of growth (SI Figure S8), with r'(0) = 0.08 min⁻¹ for A and 0.13 min⁻¹ for A_2. The growth of A_1 was too fast to allow a confident measurement of r'(0). All three autocatalysts developed propagating fronts with a stable shape (Figure 4A) and a uniform velocity (Figure 4B). The propagation velocities of A_1 and A_2 were 101 and 64 µm min⁻¹, respectively, compared with 70 µm min⁻¹ for A. Using the measured values of r'(0) and identical D_eff(0), the corrected calculated velocities with γ = 1.3 are 77 µm min⁻¹ and 97 µm min⁻¹ for A and A_2, respectively. The predicted value is in good agreement with experiment for A but not so much for A_2 (50% off), suggesting that the factor γ could depend on the template. Indeed, we hypothesize that γ depends on how the full autocatalytic mechanism (Figure 1) is reduced into the single-variable reaction-diffusion equation 2.

In a channel containing both T and T_2, two fronts propagating in opposite directions could be triggered by injecting A and A_2 into the left and right inlets, respectively (Figure 4C and D). For t < 74 min each front propagates in a fresh medium, and it is thus not surprising that we observe the same behavior as for independent fronts. After collision, A_2 maintains its velocity constant and equal to 66 µm min⁻¹, while the velocity of A is reduced 1.3-fold, from 58 to 46 µm min⁻¹. Considering that after collision A and A_2 propagate in a region that has, respectively, a high concentration of A_2 and A, and thus the potential to saturate the enzymes, the negligible interaction between the two fronts is particularly striking. Here we used, on purpose, two autocatalysts that rely on different nicking enzymes but depend on
Because the substrate concentration is well below the K_M of the polymerase (SI Figure S10), saturation of the shared polymerase is limited.

3.4. The diffusion coefficient of an autocatalyst can be selectively reduced with a self-assembled hydrodynamic drag

So far we have shown three independent strategies to modify the growth rate and thus the propagation velocity. We develop in the following a method to reduce D_eff(0) without modifying the growth rate, thus changing the front velocity through diffusion alone (Eq. 4). Controlling the diffusion coefficient, D, of a molecule is not a simple task. The strategy used here consists of attaching a hydrodynamic drag to the 3'-end of template T, which binds to the active species A (Figure 5A). We considered two types of drags: permanently and dynamically attached ones. Permanent drags may be bound to a DNA strand through an irreversible interaction, such as streptavidin-biotin (this work), acrydite-acrylamide (42), or amide coupling (43). They require, however, a preliminary and cumbersome coupling process. In contrast, dynamic drags reversibly bind to a DNA strand, for instance through hydrophobic interactions. They do not need a coupling step because they self-assemble in solution. We have tested two permanent drags, streptavidin and streptavidin-coated beads, and one dynamic drag, micelles. The micelles were made of triton X-100, a neutral surfactant. In the first case, T was linked in 3' to a biotin through a 5-thymine spacer, which is noted T-5-bt, and further coupled to streptavidin to obtain T-5-bt:str. In the second case, T was linked in 3' to a cholesteryl without thymine spacer, noted T-ch, and used in a 10 g/L triton solution, yielding T-ch:trit. At this concentration triton self-organizes into micelles about 5.5 nm in radius and the cholesteryl group dynamically attaches to them through hydrophobic interactions. Tables 1 and 2 provide sizes for these drags and for species A, T, T-5-bt, and T-ch, as well as their corresponding diffusion coefficients at 44°C or 38°C.
Triton micelles worked best and are described in the following. Streptavidin worked well (SI Figure S12) but resulted in a reduction of D_eff(0) of only 1.6-fold. Beads strongly reduced the diffusion coefficient of T but they did not provide a control strategy orthogonal to growth kinetics (SI Figure S13).

(Table 1 footnotes: ¹ from (40); ² size of streptavidin alone, from (44); ³ recalculated from measurements at 38°C; ⁴ recalculated from measurements at 20°C.)

Table 2: Estimated hydrodynamic radius, R_h, measured diffusion coefficient, D, growth rate, r'(0), front velocity, v, and an inferred diffusion coefficient associated with the front propagation, D_eff(0). Template concentration 200 nM, pol = 16 U/mL, nick = 300 U/mL, 10 g/L triton X-100. All measurements were performed at 38°C. Where applicable, values are accompanied by the confidence interval with the confidence probability of 0.95. The intervals are calculated for samples of n = 5 both for r'(0) and v; D_eff(0) was treated as a function of these variables. (Table 2 footnotes: ¹ from (40); ² size of triton micelles alone at 30°C, from (45); ³ value for T in a triton-free buffer; ⁴ compatible with the value 2.5·10³ µm² min⁻¹ measured for triton micelles alone at 30°C (45).)

Species | R_h (nm) | D (10³ µm² min⁻¹) | r'(0) (10⁻² min⁻¹) | v (µm min⁻¹) | D_eff(0) (10³ µm² min⁻¹)
A | - | 16 ± 3 | - | - | -

We first checked the influence of the triton drag on the growth kinetics. Figure 5B displays the growth curves for T and T-ch in the presence of 10 g/L triton X-100. Both curves display a biexponential and remarkably similar shape, with equal growth rates within experimental error (Table 2). They differ only by a shift of 18 min in the onset of growth, which is negligible taking into account the experiment-to-experiment variation (SI Figure S9). Furthermore, control experiments demonstrate that for both templates the polymerization rates were identical, while nicking rates differed by a factor 3 (SI Figures S10 and S11).
Considering that polymerization is the rate-limiting step, these data demonstrate that the triton drag strategy has a negligible influence on the growth kinetics. Figure 5C shows the propagation of a front of A growing on either T or T-ch in a triton solution with the same reaction conditions. The second front advances 1.6 ± 0.2 times slower, the velocities being 65 ± 5 and 40 ± 4 µm min⁻¹, respectively (confidence 0.95). These values, together with the growth rate, can be substituted into Eq. 4, yielding a (2.7 ± 0.8)-fold reduction in D_eff(0). To compare these with the prediction given by Eqs. 3-5, we independently measured the diffusion coefficient of T-ch:trit, D_T-ch:trit = (4.0 ± 0.3)·10³ µm² min⁻¹. Supposing that the hybridization constant is not affected by the presence of triton and taking thus K⁻¹ = 3 nM, together with γ = 1.30, the predicted velocity for the front growing on T-ch is 46 ± 2 µm min⁻¹. Moreover, the ratio

D_eff^(no drag)(0) / D_eff^(drag)(0) = (K⁻¹ D_A + 2T₀ D_T) / (K⁻¹ D_A + 2T₀ D_(T:drag))

can be used to estimate the theoretical expectation of the change of D_eff(0) in the presence of a drag. For T-5-bt and T-5-bt:str at 44°C from Table 1 we obtain a (1.5 ± 0.1)-fold change, while for T and T-ch:trit at 38°C from Table 2 this ratio is 2.6 ± 0.2. All the predicted values are thus in excellent agreement with the experimental figures, indicating that the diffusion coefficient of a propagating autocatalyst can be tuned in a quantitative manner. We further demonstrated that this diffusion control strategy worked well when fronts of A and A₂ collided in channels containing either T or T-ch, and T₂ (Figure 5D). The velocity of A growing on either T or T-ch was not influenced by the front of A₂ and again a velocity reduction factor of 1.7 was measured. The presence of the drag on another template did not influence the velocity of A₂ at all.
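The ratio above can be evaluated directly from the measured diffusion coefficients; a short sketch reproduces the 2.6-fold and 1.5-fold changes quoted in the text (values from Tables 1-2; D in 10³ µm² min⁻¹, concentrations in nM; the function name is illustrative):

```python
def deff_ratio(k_inv, d_A, d_T, d_T_drag, T0):
    """D_eff^no-drag(0) / D_eff^drag(0) from the ratio expression in the text."""
    return (k_inv * d_A + 2 * T0 * d_T) / (k_inv * d_A + 2 * T0 * d_T_drag)

# 38 °C, triton drag: K^-1 = 3 nM, T0 = 200 nM
r_trit = deff_ratio(3, 16, 10.7, 4.0, 200)    # ≈ 2.6, as quoted
# 44 °C, streptavidin drag: K^-1 = 100 nM
r_str = deff_ratio(100, 18, 13.6, 7.6, 200)   # ≈ 1.5, as quoted
print(round(r_trit, 1), round(r_str, 1))
```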
This shows that diffusion control can be performed selectively on a single node of a chemical reaction network and suggests that it may be scaled to larger networks. Finally, in a channel containing both T and T-ch, we achieved fine tuning of the velocity of a front of A by varying the molar fraction of T-ch while keeping the total concentration (T+T-ch) constant (Figure 5E). This fine tuning was exclusively due to diffusion control. The analytical prediction with γ = 1.3 is, once again, in excellent agreement with the experimental data, without fitting.

Discussion

The first to suggest a connection between a reaction-diffusion process and the morphology of an organism was Alan Turing in 1952 (2). He demonstrated that two chemicals that react and diffuse may create an inhomogeneous stationary spatial pattern of well-defined wavelength from a homogeneous initial condition. A key constraint for this to happen is that the diffusion coefficient of the activator species, the autocatalyst, needs to be significantly smaller than that of the inhibitor (46). Although this was a foundational work with great impact, it took nearly forty years for chemists to obtain experimental evidence of Turing patterns (8,9). The reason is that the chemical systems used so far to investigate dissipative structures have involved small inorganic and organic molecules, for which the reaction rates, the mechanism and the diffusion coefficients can hardly be modified in a rational manner. Although Epstein (15) and others have made extraordinary experimental and theoretical contributions to the understanding of dissipative chemical structures, the field has reached an impasse from the experimental point of view because of the lack of powerful control tools. Recently, purified biochemical models have been used to study striking spatiotemporal phenomena, in particular the Min system (47). Such systems have the advantage of being biologically relevant, but remain hard to reprogram.
We argue here that DNA-based biochemical systems are experimental models of choice to study the emergence of spatiotemporal order in chemistry, with important implications both in biological morphogenesis and in the synthesis of self-organizing materials. We think that they will advantageously replace Belousov-Zhabotinsky-related systems. In a previous work we have shown that relatively complex chemical reaction networks (CRNs) can be designed from the bottom up into DNA-based biochemical systems (29) and that they display traveling waves and spirals in a non-stirred reactor (31). Here we further demonstrate that the velocity of traveling fronts in a related autocatalytic system can be finely and quantitatively controlled. This control arises from the modularity of the DNA toolbox and from the specificity of the biochemical reactions involved. Our system has three types of chemical species: active species, A, templates, T, and enzymes, pol and nick. The total concentration of active species changes over space and time; they can be generated or degraded, and they diffuse driven by large gradients. In contrast, the total concentration of templates and enzymes does not change over time or space. The rate constants of a given CRN depend mainly on the total concentrations of templates and enzymes and only to second order (when saturation arises) on the concentration of free species. As a result, the rate constant for each reaction can be set independently by changing the concentration of a polymerase, the concentration of a template or its sequence. This is impossible to do for BZ-related systems in a closed reactor. To overcome this problem, cumbersome open reactors in contact with the top and bottom of a thin-sheet reactor were needed to observe complex spatio-temporal structures in BZ-related systems (8,20,48). Such reactors guarantee that the concentration of a given chemical (and thus the rate of each reaction) is constant over time.
They are not needed in systems designed on the framework of the PEN DNA toolbox (27,35). We have also shown that the programmability of DNA and its chemical versatility (many chemical modifications are commercially available) make it straightforward to design selective strategies to control the diffusion coefficient of an active species. Two strategies to reduce the diffusion coefficient have been used in the past in BZ-related systems. The medium was supplemented with starch, which formed a complex with iodide (8), or the reaction was carried out in a water-in-oil emulsion (10). In this last case bromide could diffuse rapidly from one water droplet to another, as it is soluble in oil, but the hydrophilic activating species could only move from one droplet to another through droplet merging, which is a slow process. These two implementations depend on the intrinsic properties of the reactants and are neither general nor modular, in contrast with the strategy shown here. Moreover, we have demonstrated that our strategies to control kinetics and diffusion are orthogonal to each other; when diffusion is modified, kinetics is not, making our system easily programmable. It has, of course, limitations. The presence of triton in the solution facilitates the formation of bubbles, especially at 38°C, which may cause trouble. Moreover, the diffusion control needs a low value of the dissociation constant between A and T (Eq. 3), while the selective control of the growth rate using the template concentration needs the opposite. Finally, while the presence of cholesteryl and triton had a marginal effect on the overall growth rate, it did influence the rate of the nicking reaction. Importantly, the biochemical system presented here is commercially available, relatively cheap, very robust to the variability of enzymatic activity inherent to commercial preparations, simple to carry out and compatible with widely available reactor materials such as polystyrene.
It does not require particular skills in biochemistry: no need for protein or DNA purification, for instance. The spatial reactor was fabricated with low-tech protocols using plastic slides and Parafilm (SI Figure S14) available in any laboratory. For these reasons, we anticipate that DNA-based systems will be a widely used experimental model to ask fascinating questions about the emergence of spatio-temporal molecular order (49).

Conclusion

We have shown that using a relatively simple chemical system based on DNA and two enzymes it is possible to generate programmable fronts propagating with constant velocity.

Materials and methods

Reaction assembly

When needed, triton X-100 was added to this buffer to the final concentration of 10 g/L to generate micelles. The following two enzymes were added into the mix: Bst DNA polymerase large fragment (pol) (NEB) and Nt.BstNBI nickase (nick) (NEB). In experiments with the T₂ template, Nt.BstNBI was substituted by Nb.BsmI (NEB). Typical concentrations were 16 U/mL for pol and 300 U/mL for nick; however, nicking enzyme activities changed significantly from batch to batch, and their concentrations were adjusted according to independent assays. Oligonucleotide sequences are provided in the SI, section 16.

Growth kinetics experiments

Autocatalyst growth independent of spatial variables was achieved by mixing 20 µL of the above solution with 1 µL of 200 nM A, A₁, or A₂, depending on the template used (thus, A₀ ≈ 10 nM). This well-mixed solution was monitored in a CFX96 Touch Real-Time PCR Detection System (Bio-Rad) used as a thermostated fluorescence reader. Alternatively, the reaction mix was injected into a polystyrene chip and monitored under a microscope as described below. Typically, one reaction per experiment was performed in a tube without addition of an autocatalyst (A, A₁, or A₂) to monitor the onset time of the unprimed growth. Control experiments demonstrate that growth rate constants measured in tubes in the rtPCR machine or in the polystyrene channels used for front propagation were identical within experimental precision.
The former method was used for convenience (SI, Figure S15).

Front propagation experiments

The reaction chamber was a channel of approximately 1.8 cm length, 2-3 mm width and 0.25 mm height cut out from two layers of Parafilm and placed between two clear polystyrene slides manually produced from 10 cm Petri dishes. The channel was open on the side from one end, and closed from the other. A hole of 1 mm diameter was drilled in the upper slide above the second end to facilitate the channel filling by aspiration with a micropipette (SI Figure S9). Polystyrene was selected instead of glass because we noticed a strong interaction of the Nt.BstNBI nickase with glass. In contrast, for Nb.BsmI a simple assembly using glass cover slips with no drilling and a channel opened from both sides may be utilized. The Parafilm layers were placed between the slides and left on a hot plate at 50°C to glue the assembly. The reaction mix with the template but without an autocatalyst was then introduced from the side inlet. To generate the initial condition for the traveling wave, 5 µL of the initial mix were mixed with 0.5 µL of 10 µM A, A₁ or A₂, and then 1.5 µL of the resulting solution was injected from the side. Both the side inlet and the vertical holes were sealed with vacuum grease, and the reaction was then monitored with a Zeiss Axio Observer Z1 inverted microscope with a transparent heating plate (Tokai-Hit) using a 2.5x objective, an HXP 120 C (Zeiss) (experiments in Table 1) or LED (CoolLED) excitation light (all other experiments), a motorized stage with Tango controller (Marzhauser-Wetzlar), and an EM-CCD digital camera C9100 (Hamamatsu). Images were acquired automatically using µManager 1.4 (50) and treated with ImageJ (NIH). Prior to data analysis, the background and the inhomogeneous illumination were corrected by subtracting the first image and dividing by an average of images where the channel was homogeneously filled with dye.
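The two quantities extracted from these experiments, r'(0) from the well-mixed growth curves and v from the front position vs time, both reduce to simple fits. A schematic Python version on synthetic data (the fitting window and the half-height tracking by linear interpolation are assumptions; the authors' exact analysis is described in the SI):

```python
import numpy as np

def growth_rate(t, intensity, t_max):
    """r'(0) from a log-linear fit of fluorescence at early times (t <= t_max),
    as in the exponential fit of Figure 2A."""
    m = t <= t_max
    return np.polyfit(t[m], np.log(intensity[m]), 1)[0]

def front_position(x, profile, level=0.5):
    """x where the normalized profile first drops below `level`
    (front invading toward +x), with linear interpolation between grid points."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    i = int(np.argmax(p < level))
    return x[i-1] + (p[i-1] - level) * (x[i] - x[i-1]) / (p[i-1] - p[i])

# --- synthetic data mimicking the experiments ---
t = np.arange(0.0, 30.0, 0.5)                      # min
r0 = growth_rate(t, 5.0 * np.exp(0.08 * t), 26)    # recovers 0.08 min^-1

x = np.linspace(0.0, 10000.0, 2001)                # position, µm
times = np.arange(0.0, 101.0, 5.0)                 # min
pos = [front_position(x, 1.0 / (1.0 + np.exp((x - 2000.0 - 70.0 * ti) / 200.0)))
       for ti in times]
v = np.polyfit(times, pos, 1)[0]                   # recovers 70 µm min^-1
print(r0, v)
```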
Table of contents

1. Introduction
2. Model
3. Results
3.1. A front of autocatalyst propagates with uniform velocity
3.2. The velocity of the front depends on the growth rate, which can be specifically tuned
3.3. Orthogonal autocatalysts with different sequences propagate with uniform but different velocity
3.4. The diffusion coefficient of an autocatalyst can be selectively reduced with a self-assembled hydrodynamic drag
4. Discussion
5. Conclusion
6. Materials and methods
6.1. Reaction assembly
6.2. Growth kinetics experiments
6.3. Front propagation experiments
7. Acknowledgments
8. References

For r(A) = kA(1 - A/C) and D_eff(A) = D, Eq. 2 takes the form of the well-known Fisher-Kolmogorov-Petrovskii-Piskunov (Fisher-KPP) equation, where k is the replication rate constant and C the carrying capacity (36, 37).
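The minimal-velocity result for this equation, v = 2√(kD), can be recovered numerically. The sketch below uses an explicit finite-difference scheme and a step initial condition; the scheme and its parameters are illustrative choices, not the paper's:

```python
import numpy as np

# Explicit integration of dA/dt = D*d2A/dx2 + k*A*(1 - A/C).
# The front should relax to the minimal Fisher-KPP velocity v = 2*sqrt(k*D).
D, k, C = 1.0, 1.0, 1.0
dx, dt = 0.2, 0.01                     # dt < dx^2/(2D) for stability
x = np.arange(0.0, 400.0, dx)
A = np.where(x < 5.0, C, 0.0)          # step initial condition

def front_pos(A):
    """Leftmost position where A drops below half the carrying capacity."""
    return x[np.argmax(A < C / 2)]

p_mid = 0.0
for step in range(1, 15001):
    lap = (np.roll(A, 1) - 2.0 * A + np.roll(A, -1)) / dx**2
    lap[0] = lap[-1] = 0.0             # pin the boundary values
    A = A + dt * (D * lap + k * A * (1.0 - A / C))
    if step == 5000:
        p_mid = front_pos(A)

v = (front_pos(A) - p_mid) / ((15000 - 5000) * dt)
print(v)                               # close to 2*sqrt(k*D) = 2
```

The measured speed sits slightly below 2 because of the slow logarithmic relaxation of pulled fronts toward their asymptotic velocity.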
This classic equation has traveling wave solutions of the form A(x,t) = A(x - vt), where the velocity v is bounded from below by

v = 2 √(r'(0) D_eff(0)),   (4)

r' being the derivative of r, and both r'(A) and D_eff(A) are taken at the limit A = 0. In the Fisher-KPP model, r'(0) = k and D_eff(0) = D. Importantly, in the case of a) constant D, b) r(0) = 0, c) bounded growth (i.e. there exists A_max > 0 such that r(A_max) = 0), d) r(A) > 0, e) r'(0) > 0 and f) r'(A) < r'(0) on (0, A₀), v from Eq. 4 corresponds to the single stable asymptotic traveling wave solution, and depends neither on other details of the growth function r(A), nor on the shape of the initial condition (37, 38). In our experimental conditions a), c) and f) are violated: D_eff depends on A, the growth is not bounded (though it saturates at a certain rate) and it accelerates as A increases (in some region of concentrations).

Figure 2: The autocatalyst grows exponentially at short times in a well-mixed reactor and generates a front with uniform velocity in a channel reactor. A) Log-lin plot of the normalized EvaGreen fluorescence, I_n, vs time, t, in a 20 µL tube. The blue line is a biexponential fit between t = 0 min and t = 32 min. The red line is an exponential fit between t = 0 min and t = 26 min (see SI Figure S3 for details on the fitting procedure). B) Profiles of I_n along the channel length, x, starting from t = 15 min in 5 min intervals (top). The arrow shows the direction of the front propagation. The thick lines correspond to the frames shown below. Images of the fronts at 30, 70 and 110 min (middle, SI video S1). Time vs the position of the front (bottom, linear fit in red). T = 200 nM, pol = 16 U/mL, nick = 300 U/mL, 38°C in the reaction buffer with 10 g/L of triton X-100.

We obtained D_A = (16 ± 3)·10³ µm² min⁻¹ and D_T = (10.7 ± 0.7)·10³ µm² min⁻¹, respectively, in agreement with values reported in the literature (40, 41).
As a proxy for K⁻¹ we measured the dissociation constant of the hybridization of A with its complementary strand and found 3 nM at 38°C. From Eqs. 3-5 we thus calculate the values predicted by the model: D_eff^mod(0) ≈ (10.7 ± 0.7)·10³ µm² min⁻¹, v_mod = 59 ± 7 µm min⁻¹ and γ = 1.1 ± 0.2. The front velocity predicted by the simple Fisher-KPP model is just 16% below the value measured experimentally.

To check the capability of our model to provide a quantitative prediction of v with a unique value of γ, we performed growth and front propagation experiments with different concentrations of T and pol (Figure 3). Figure 3A-D reports the dependence of growth and propagation on the total template concentration, T₀. The growth of the autocatalyst is always monoexponential at short times for T₀ = 0-200 nM. This monoexponential character is maintained during the whole growth phase at 25 nM, but a second exponential time-scale appears at larger T₀. For all the values of T₀ investigated, the break in the slope in Figure 3A happens at a similar value of intensity, 600 a.u., suggesting that there is a threshold concentration of dsDNA species responsible for a change in the growth mechanism, which becomes faster. This change in the mechanism could be due to the inhibition of the polymerase by the nicking enzyme.

Figure 3: The growth rate of the autocatalyst and its propagation velocity can be tuned specifically with the template concentration and non-specifically with the polymerase concentration. A) Log-lin plots of the growth kinetics with different concentrations of the template, T₀. B) Fluorescent images of the front position at 0 min and 50 min for different T₀. For clarity, the brightness of the images with different T₀ has been normalized (SI video S2). C) First order rate constant r'(0) vs T₀; the red line is a linear fit for T₀ = 0-100 nM. D) Square of the front velocity v vs T₀; the blue line is the prediction using Eqs. 3-5 with γ = 1.3.
E) r'(0) vs normalized polymerase concentration, pol/pol₀; the red line is a linear fit. F) v² vs pol/pol₀; the blue line is the prediction using Eqs. 3-5 with γ = 1.3 (SI video S3). Experimental conditions: A-D) 38°C, pol = 16 U/mL, nick = 300 U/mL; E-F) T₀ = 200 nM, pol₀ = 16 U/mL, nick = 500 U/mL, 44°C. Error bars were estimated from the 10% experimental precision (both on r'(0) and v) measured for 4 independent experiments at T₀ = 200 nM.

The velocity of the front also depended on T₀. Fronts starting at the same position propagated farther within a given time when T₀ increased (Figure 3B). As we found r'(0) ~ T₀, Eq. 4 predicts v² ~ T₀, which was verified experimentally for T₀ = 0-200 nM. To quantitatively test the predictions of our model we substituted these data into Eq. 5 and used the independently measured values of D_i, K⁻¹ and r'(0) to calculate v_mod with Eqs. 3-4 and then obtain γ = 1.30 ± 0.16, in agreement with the value reported above. With this value of γ, the analytical equation 5 is in excellent quantitative agreement with the data (Figure 3D, blue line).

T₀ is thus a convenient experimental parameter to tune the growth rate and the propagation velocity. It has the advantage of being specific: in a complex reaction network with several autocatalysts growing on different templates, changing the template concentration of one of them will modify r'(0) and v for a single autocatalyst. However, it is also desirable to have another experimental knob to set the overall strength of growth and propagation. To this end we studied the dependence of r'(0) and v on the polymerase concentration, pol (Figure 3E-F, SI Figure S7). As a test of the robustness of the model's predictions, we performed these experiments at a different temperature and nicking enzyme concentration, 44°C and 500 U/mL, respectively.
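Both the 38°C prediction (v_mod ≈ 59 µm min⁻¹, quoted earlier) and the linear scaling of v² with pol (Figure 3F) can be reproduced with a few lines. The explicit form of D_eff(0) used below is inferred from the drag ratio expression given elsewhere in the text, not from Eq. 3 directly, so treat it as an assumption consistent with the quoted numbers:

```python
import math

def d_eff0(k_inv, d_A, d_T, T0):
    # inferred form of Eq. 3 (assumption consistent with the quoted values)
    return (k_inv * d_A + 2 * T0 * d_T) / (k_inv + 2 * T0)

def v_front(r0, d_eff, gamma=1.0):
    # Eq. 4 scaled by the phenomenological factor gamma
    return gamma * 2.0 * math.sqrt(r0 * d_eff)

# 38 °C: K^-1 = 3 nM, D_A = 16e3, D_T = 10.7e3 µm^2/min, T0 = 200 nM
d38 = d_eff0(3, 16e3, 10.7e3, 200)       # ≈ 10.7e3 µm^2/min, as quoted
v38 = v_front(0.08, d38)                 # ≈ 59 µm/min, the quoted v_mod

# 44 °C: K^-1 = 100 nM, D_A = 18e3, D_T = 11.8e3; r'(0) = 0.05*pol/pol0 min^-1
d44 = d_eff0(100, 18e3, 11.8e3, 200)
v1 = v_front(0.05 * 1.0, d44, gamma=1.30)
v2 = v_front(0.05 * 2.0, d44, gamma=1.30)
print(v2**2 / v1**2)                     # = 2: v^2 is linear in pol (Figure 3F)
```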
For relative pol concentrations ranging from 0.25- to 2-fold the measured values of r'(0) are linearly dependent on pol, with r'(0) = (0.05 min⁻¹) pol/pol₀. This indicates that in these conditions the polymerization (reaction {5} on Figure 1) is the rate-limiting step. For the same range of pol we measured front velocities between 20 and 125 µm min⁻¹. In an independent experiment we obtained D_A = (18 ± 3)·10³ µm² min⁻¹, D_T = (11.8 ± 0.8)·10³ µm² min⁻¹ and K⁻¹ = 100 nM at 44°C. In the range pol/pol₀ = 0-1, the velocities predicted by Eqs. 3-5 with γ = 1.30 are in excellent agreement with the experimental ones (Figure 3F, blue line).

Figure 4: Different autocatalysts propagate with different velocities and collide with little interaction. A and A₁ differ by 5 bases and depend on the same nicking enzyme, Nt.BstNBI, while A₂ uses another nicking enzyme, Nb.BsmI. A) Profile of the fronts generated by different autocatalysts in separate channels at t = 20 min (solid lines) and at t = 70 min (dashed). B) Time vs the position of the front for data in panel A. C) Time-lapse images (SI video S4) and D) time vs the position of the front for a front of A, propagating left to right, and a front of A₂, propagating right to left, colliding at t = 74 min and x = 5 mm. The dotted line in panel D is a guide to the eye to appreciate the slope-break after collision. The color code is conserved within the figure with A in black, A₁ in blue and A₂ in red. T = T₁ = T₂ = 200 nM, pol = 16 U/mL, nick = 300 U/mL, 38°C.

Figure 5: The velocity of a front can be reduced using a hydrodynamic drag without altering the growth rate. A) Sketch of the diffusion control strategy implemented in this work: the template, in black, is attached to a hydrodynamic drag, while the active species, in grey, reversibly hybridizes to it. B) Growth kinetics of normalized fluorescence vs shifted time.
C) Propagation of fronts generated by templates T:trit (blue curves) and T-ch:trit (red curves) in different channels, represented as still images at two given times (top, SI video S5), as fluorescence profiles along the channel at the same times (middle, solid lines t = 0 min, dashed t = 78 min), and as time vs position of the front at half height (bottom). D) Time vs front position for colliding fronts of A on T and A₂ (top) and A on T-ch and A₂ (bottom). E) Fine-tuning of the front velocity through diffusion by changing the molar fraction of T-ch compared to T and keeping the total concentration of template constant; the line is the theoretical prediction from Eqs. 3-5 and γ = 1.30. Template concentration 200 nM (except in panel D, 150 nM), pol = 16 U/mL, nick = 300 U/mL, A₀ = 10 nM (growth kinetics), 10 g/L triton X-100, 38°C.

In the PEN DNA toolbox such open reactors are not needed due to the characteristics of enzymatic reactions: for a chemical B reacting with an enzyme with Michaelis-Menten constant K_M, the rate of the reaction is constant and independent of B for B >> K_M. This happens in our case for dNTPs and polymerase, for instance: the excess of dNTPs acts as a reservoir of free energy keeping the polymerization rate constant over long periods of time (100-1000 min).

Using a relatively simple chemical system based on DNA and two enzymes it is possible to generate programmable fronts propagating with constant velocity. These fronts can be effectively described by a reaction-diffusion equation with one dependent variable, closely related to the Fisher-KPP problem. This model provides excellent quantitative predictions of the front velocity and its associated effective diffusion with a single phenomenological parameter. We have demonstrated the control of the velocity of the waves via kinetics and diffusion. The former can be tuned non-specifically, through the enzyme concentration, or specifically, through the concentration of a specific DNA template or through its sequence.
In addition, we demonstrated a method to control the diffusion coefficient of a DNA reactant by reversible attachment of a self-assembled hydrodynamic drag. Importantly, the methods to control kinetics and diffusion are orthogonal to each other, making programming rules simple. The targeted control of diffusion coupled to the simplicity of rewiring a reaction network opens new avenues for the bottom-up construction of fully reconfigurable spatio-temporal dissipative structures.

…Aldrich), 5 mg/L extremely thermostable ssDNA binding protein (ET SSB) (NEB), and 1x EvaGreen DNA binder (20x dilution of the manufacturer's stock solution) (Biotium).

Species A reversibly hybridizes with T on any of these two domains, generating species B₁, B₂ and B₁₂. B₁ may be extended by a polymerase, pol, to form species F. F carries a recognition site for a nicking enzyme, nick, such that the upper strand is cut at its midpoint, yielding species B₁₂. The net reaction is thus the autocatalysis of A.

Figure 1: Mechanism of the DNA-based autocatalyst. The 11-base long DNA strand A reversibly hybridizes with a 22-mer template, T, bearing two contiguous sites complementary of A (reactions {1-4}). Species B₁ is extended by a polymerase, pol, to form the dsDNA F {5}, which bears a recognition site for a nicking enzyme, nick, yielding B₁₂ {6}. The net reaction is the autocatalysis of A with rate r. Double and single arrows indicate reversible and irreversible reactions, respectively.

The interaction of the two fronts is thus negligible. Although we were not able to observe stable colliding fronts when two templates depending on the same nicking enzyme were used, we believe that this technical issue should be solved with a careful optimization of the experimental conditions. In any case, up to eight nicking enzymes with orthogonal recognition sites are commercially available from major manufacturers, which could significantly extend the complexity of the CRNs that can be constructed within the framework of the PEN DNA toolbox. To the best of our knowledge this is the first time that the collision of two chemically distinct fronts is observed. The modularity of the PEN DNA toolbox hence allows to simply design de novo autocatalysts that generate predictable and complex spatio-temporal patterns.

This approach applies well to single-stranded DNA species, for which a binding partner always exists as its Watson-Crick complement. The task then breaks down to reducing the diffusion of that partner. Indeed, D ~ R⁻¹, R being the hydrodynamic radius of the molecule, but R ~ M^(1/2), where M is the molecular mass, in the case of a random coil. As a result, relatively large monomolecular entities need to be involved if one wants to reduce D significantly. However, these entities need not necessarily be covalent or even stable: if A interacts dynamically with a ligand, its effective diffusion coefficient D_eff(0) will be a weighted average between the free state with high D and the bound state with low D, as illustrated in Eq. 3.
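This weighted-average picture can be made explicit. The form below is inferred from the D_eff ratio quoted in the text (an assumption, since Eq. 3 itself is not reproduced here): the weights of the free and bound states are set by K⁻¹ and 2T₀, so at T₀ = 200 nM and K⁻¹ = 3 nM the autocatalyst is mostly bound and D_eff(0) tracks the template's diffusion coefficient:

```python
def d_eff0(k_inv, d_free, d_bound, T0):
    """Weighted average of free and bound diffusion coefficients;
    weights inferred from the D_eff ratio quoted in the text (assumption)."""
    w_free = k_inv / (k_inv + 2 * T0)
    w_bound = 2 * T0 / (k_inv + 2 * T0)
    return w_free * d_free + w_bound * d_bound

# 38 °C, T0 = 200 nM, K^-1 = 3 nM (D in µm^2/min)
print(d_eff0(3, 16e3, 10.7e3, 200))   # ≈ 10.7e3, close to D_T
print(d_eff0(3, 16e3, 4.0e3, 200))    # with the triton drag, ≈ 4.1e3
```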
In any case, up to eight nicking enzymes with orthogonal recognition sites are commercially available from major manufacturers, which could signifcantly extend the complexity of the CRNs that can be constructed within the framework of the PEN DNA toolbox. To the best of our knowledge this is the frst time that the collision of two chemically distinct fronts is observed. The modularity of the PEN DNA toolbox hence allows to simply design de novo autocatalysts that generate predictable and complex spatio-temporal patterns.15/27 the bound state with low D, as illustrated in Eq. 3. This approach applies well to single stranded DNA species, for which a binding partner always exist as its Watson-Crick complementary. The task then breaks down to reducing the diffusion of that partner.Indeed, D ~ R -1 , R being the hydrodynamic radius of the molecule, but R ~ M 1/2 , where M is the molecular mass, in the case of a random coil. As a result, relatively large monomolecular entities need to be involved if one wants to reduce D signifcantly. However, these entities need not necessarily be covalent or even stable: if A interacts dynamically with a ligand, its effective diffusion coeffcient D eff (0) will be a weighted average between the free state with high D and 16/27 Table 1 : 1Estimatedhydrodynamic radius, R h , measured diffusion coeffcient, D, growth rate, r'(0), front velocity, v, and an inferred diffusion coeffcient associated to the front propagation, D eff (0). Template concentration 200 nM, pol = 16 U/mL, nick = 500 U/mL. All measurements were performed at 44°C, except for D that was done at 20°C or 38°C and recalculated to 44°C. Where applicable, values are accompanied by confdence intervals with the confdence probability of 0.95. The intervals are calculated for samples of n = 3 for r'(0) and n = 4 for v, D eff (0) was treated as a function of these variables. 
Species | R_h (nm) | D (10³ µm² min⁻¹) | r'(0) (10⁻² min⁻¹) | v (µm min⁻¹) | D_eff(0) (10³ µm² min⁻¹)
A | - | 18 ± 3³ | - | - | -
T | 1.5¹ | 11.8 ± 0.8³ | 5.6 | 73 | 24
T-5-bt | 1.5¹ | 13.6 ± 0.8⁴ | 6.6 ± 0.2 | 87 ± 10 | 29 ± 7
T-5-bt:str | 1.6² | 7.6 ± 0.5⁴ | 6.2 ± 0.1 | 66 ± 6 | 18 ± 3

Acknowledgments

Streptavidin-coated beads were a kind gift from M. Coppey (Curie, Paris).

References

1. Kondo S & Miura T (2010) Reaction-diffusion model as a framework for understanding biological pattern formation. Science 329(5999):1616-1620.
2. Turing AM (1952) The chemical basis of morphogenesis. Philos. Trans. R. Soc. London, Ser. B 237(641):37-72.
3. Gierer A & Meinhardt H (1972) A theory of biological pattern formation. Kybernetik 12:30-39.
4. Hanna A, Saul A, & Showalter K (1982) Detailed studies of propagating fronts in the iodate oxidation of arsenous acid. J. Am. Chem. Soc. 104(14):3838-3844.
5. Zaikin AN & Zhabotinsky AM (1970) Concentration wave propagation in two-dimensional liquid-phase self-oscillating system. Nature 225(5232):535-537.
6. Winfree AT (1972) Spiral waves of chemical activity. Science 175(4022):634-636.
7. Vanag VK & Epstein IR (2009) Pattern formation mechanisms in reaction-diffusion systems. Int. J. Dev. Biol. 53(5-6):673-681.
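The inferred diffusion coefficients in Table 1 are consistent with the classic Fisher-KPP relation for pulled fronts, v = 2·sqrt(r'(0)·D), so that D_eff(0) = v^2 / (4·r'(0)). A minimal sketch of this cross-check, using only the central values from Table 1 (not code from the paper), reproduces the table's D_eff(0) column of 24, 29 and 18 (in units of 10^3 µm^2/min):

```python
# Fisher-KPP pulled fronts travel at v = 2*sqrt(r'(0)*D), so the effective
# diffusion coefficient can be inferred from the measured front velocity
# and exponential growth rate as D_eff(0) = v**2 / (4*r'(0)).
def d_eff(v_um_per_min, r0_per_min):
    """Infer D_eff(0) in um^2/min from v (um/min) and r'(0) (min^-1)."""
    return v_um_per_min**2 / (4.0 * r0_per_min)

# Central values from Table 1: (r'(0) in min^-1, v in um/min).
rows = {
    "T":          (5.6e-2, 73.0),
    "T-5-bt":     (6.6e-2, 87.0),
    "T-5-bt:str": (6.2e-2, 66.0),
}
for name, (r0, v) in rows.items():
    # Printed in units of 10^3 um^2/min to match the table's D_eff(0) column.
    print(f"{name}: D_eff(0) ~ {d_eff(v, r0) / 1e3:.0f}e3 um^2/min")
```

The agreement (24, 29, 18 × 10^3 µm^2/min) indicates the quoted D_eff(0) values were obtained from v and r'(0) in exactly this way.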
title: State-Insensitive Cooling and Trapping of Single Atoms in an Optical Cavity
authors: J. McKeever, J. R. Buck, A. D. Boozer, A. Kuzmich, H.-C. Nägerl, D. M. Stamper-Kurn, H. J. Kimble
affiliation: Norman Bridge Laboratory of Physics, California Institute of Technology 12-33, Pasadena, CA 91125
abstract: Single Cesium atoms are cooled and trapped inside a small optical cavity by way of a novel far-off-resonance dipole-force trap (FORT), with observed lifetimes of 2-3 seconds. Trapped atoms are observed continuously via transmission of a strongly coupled probe beam, with individual events lasting ≃ 1 s. The loss of successive atoms from the trap N ≥ 3 → 2 → 1 → 0 is thereby monitored in real time. Trapping, cooling, and interactions with strong coupling are enabled by the FORT potential, for which the center-of-mass motion is only weakly dependent on the atom's internal state.
doi: 10.1103/physrevlett.90.133602
pdfurl: https://arxiv.org/pdf/quant-ph/0211013v2.pdf
corpusid: 15928831
arxivid: quant-ph/0211013
pdfsha: 9ac268355bc8cb199189c454d54b7b04d6e49154
State-Insensitive Cooling and Trapping of Single Atoms in an Optical Cavity
16 Feb 2003

A long-standing ambition in the field of cavity quantum electrodynamics (QED) has been to trap single atoms inside high-Q cavities in a regime of strong coupling [1].
Diverse avenues have been pursued for creating the trapping potential for atom confinement, including additional far off-resonant trapping beams [2], near-resonant light with n̄ ≃ 1 intracavity photons [3,4], and single trapped ions in high-finesse optical cavities [5,6], although strong coupling has yet to be achieved for trapped ions. A critical aspect of this research is the development of techniques for atom localization that are compatible with strong coupling, as required for quantum computation and communication [7,8,9,10,11,12]. In this Letter we present experiments to enable quantum information processing in cavity QED by (1) achieving extended trapping times for single atoms in a cavity while still maintaining strong coupling, (2) realizing a trapping potential for the center-of-mass motion that is largely independent of the internal atomic state, and (3) demonstrating a scheme that allows continuous observation of trapped atoms by way of the atom-field coupling. More specifically, we have recorded trapping times up to 3 s for single Cs atoms stored in an intracavity far-off resonance trap (FORT) [13], which represents an improvement by a factor of 10^2 beyond the first realization of trapping in cavity QED [2], and by roughly 10^4 beyond prior results for atomic trapping [3] and localization [4] with n̄ ≃ 1 photon. We have also continuously monitored trapped atoms by way of strong coupling to a probe beam, including observations of trap loss atom by atom over intervals ≃ 1 s. These measurements incorporate auxiliary cooling beams, and provide the first realization of cooling for trapped atoms strongly coupled to a cavity. Our protocols are facilitated by the choice of a "magic" wavelength for the FORT [14,15,16], for which the relevant atomic levels are shifted almost equally, thereby providing significant advantages for coherent state manipulation of the atom-cavity system.
A major obstacle to the integration of a conventional red-detuned FORT within the setting of cavity QED is that excited electronic states generally experience a positive AC-Stark shift of comparable magnitude to the negative (trapping) shift of the ground state [13]. This leads to the unfortunate consequence that the detuning and hence the effective coupling between an atomic transition and the cavity mode become strong functions of the atom's position within the trap [16]. However, due to the specific multi-level structure of Cesium, the wavelength λ F of the trapping laser can be tuned to a region where both of these problems are eliminated for the 6S 1/2 → 6P 3/2 transition, as illustrated in Fig. 1 [14,15,16,17].

[FIG. 1: AC-Stark shifts (δ 6S 1/2, δ 6P 3/2) for the (6S 1/2, 6P 3/2) levels in atomic Cs for a linearly polarized FORT. The inset shows (δ 6S 1/2, δ 6P 3/2, F′ = 4) as functions of FORT wavelength λ F. The full plot gives δ̄ 6P 3/2 versus m F′ for each of the levels 6P 3/2, F′ = 2, 3, 4, 5 for λ F = 935.6 nm. In each case, the normalization is δ̄ = δ/[δ 6S 1/2 (λ F = 935.6 nm)] [17].]

Around the "magic" wavelength λ F = 935 nm, the sum of AC-Stark shifts coming from different allowed optical transitions results in the ground 6S 1/2 and excited 6P 3/2 states both being shifted downwards by comparable amounts, δ 6S 1/2 ≃ δ 6P 3/2, albeit with small dependence on (F′, m F′) for the shifts δ 6P 3/2. The task then is to achieve state-independent trapping while still maintaining strong coupling for the 6S 1/2 → 6P 3/2 transition. Our experimental setup to achieve this end is schematically depicted in Fig. 2 [2]. Significantly, the cavity has a TEM 00 longitudinal mode located nine mode orders below the mode employed for cavity QED at 852 nm, at the wavelength λ F = 935.6 nm, allowing the implementation of a FORT with δ 6S 1/2 ≃ δ 6P 3/2.
The field to excite this cavity mode is provided by a laser at λ F, which is independently locked to the cavity. The finesse of the cavity at λ F is F ∼ 2200 [18], so that a mode-matched input power of 1.2 mW gives a peak AC-Stark shift δ 6S 1/2 /2π = −47 MHz for all states in the 6S 1/2 ground manifold, corresponding to a trap depth U 0 /k B = 2.3 mK, which was used for all experiments. Principal parameters relevant to cavity QED with the system in Fig. 2 are the Rabi frequency 2g 0 for a single quantum of excitation and the amplitude decay rates (κ, γ) due to cavity losses and atomic spontaneous emission. For our system, g 0 /2π = 24 MHz, κ/2π = 4.2 MHz, and γ/2π = 2.6 MHz, where g 0 is for the (6S 1/2, F = 4, m F = 4) → (6P 3/2, F′ = 5, m F′ = 4) transition in atomic Cs at λ 0 = 852.4 nm. Strong coupling is thereby achieved (g 0 ≫ (κ, γ)), resulting in critical photon and atom numbers n 0 ≡ γ^2/(2g 0^2) ≃ 0.006, N 0 ≡ 2κγ/g 0^2 ≃ 0.04. The small transition shifts for our FORT mean that g 0 is considerably larger than the spatially dependent shift δ 0 of the bare atomic frequency employed for cavity QED, g 0 ≫ δ 0 ≡ |δ 6P 3/2 − δ 6S 1/2|, whereas in a conventional FORT, δ 0 ∼ 2|δ 6S 1/2| ≫ g 0. In addition to the FORT field, the input to the cavity consists of probe and locking beams, all of which are directed to separate detectors at the output. The transmitted probe beam is monitored using heterodyne detection, allowing real-time detection of individual cold atoms within the cavity mode [19]. The cavity length is actively controlled using a cavity resonance at λ C = 835.8 nm, so the length is stabilized and tunable independently of all other intracavity fields [2]. The probe as well as the FORT beam are linearly polarized along a direction l̂+ orthogonal to the x-axis of the cavity [18,20].
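The quoted figures of merit follow directly from these parameters. As a quick numerical sketch (simply re-deriving the quoted numbers, not code from the paper; the 2π factors cancel in the ratios, so the rates can be used in MHz):

```python
h = 6.62607015e-34    # Planck constant, J s
kB = 1.380649e-23     # Boltzmann constant, J/K

# Quoted cavity-QED rates, each given as (rate / 2*pi) in MHz.
g0, kappa, gamma = 24.0, 4.2, 2.6

n0 = gamma**2 / (2 * g0**2)      # critical photon number, ~0.006 as quoted
N0 = 2 * kappa * gamma / g0**2   # critical atom number,  ~0.04  as quoted
print(f"n0 = {n0:.4f}, N0 = {N0:.4f}")

# Peak ground-state Stark shift delta/2pi = -47 MHz gives the trap depth
# U0 = h * 47 MHz, i.e. U0/kB ~ 2.3 mK, matching the quoted value.
U0_mK = h * 47e6 / kB * 1e3
print(f"U0/kB = {U0_mK:.2f} mK")
```

Both critical numbers being far below one is the quantitative statement of strong coupling: a single photon and a single atom each suffice to alter the system's response.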
Cold atoms are collected in a magneto-optical trap (MOT) roughly 5 mm above the cavity mirrors and then released after a stage of sub-Doppler polarization-gradient cooling [13]. Freely falling atoms arrive at the cavity mode over an interval of about 10 ms, with kinetic energy E K /k B ≃ 0.8 mK, velocity v ≃ 0.30 m/s, and transit time ∆t = 2w 0 /v ≃ 150 µs. Two additional orthogonal pairs of counter-propagating beams in a σ+ − σ− configuration illuminate the region between the cavity mirrors along directions at ±45° relative to ŷ, ẑ (the "y − z beams") and contain cooling light tuned red of F = 4 → F′ = 5 and repumping light near the F = 3 → F′ = 3 transition [21]. These beams eliminate the free-fall velocity to capture atoms in the FORT and provide for subsequent cooling of trapped atoms. We employed two distinct protocols to study the lifetime for single trapped atoms in our FORT. (1) Trapping "in the dark" with the atom illuminated only by the FORT laser at λ F and the cavity-locking laser at λ C. For this protocol, strong coupling enables real-time monitoring of single atoms within the cavity for initial triggering of cooling light and for final detection. (2) Trapping with continuous observation of single atoms with cavity probe and cooling light during the trapping interval. In this case, atoms in the cavity mode are monitored by way of the cavity probe beam, with cooling provided by the auxiliary y − z beams. (1) In our first protocol, the F = 4 → F′ = 5 transition is strongly coupled to the cavity field, with zero detuning of the cavity from the bare atomic resonance, ∆ C ≡ ω C − ω 4→5 = 0. In contrast to Ref. [2], here the FORT is ON continuously without switching, which makes a cooling mechanism necessary to load atoms into the trap. The initial detection of a single atom falling into the cavity mode is performed with the probe beam tuned to the lower sideband of the vacuum-Rabi spectrum (∆ p = ω p − ω 4→5 = −2π × 20 MHz).
The resulting increase in transmitted probe power when an atom approaches a region of optimal coupling [22,23] triggers ON a pulse of transverse cooling light from the y − z beams, detuned 41 MHz red of ω 4→5. During the subsequent trapping interval, all near-resonant fields are turned OFF (including the transverse cooling light). After a variable delay t T, the probe field is switched back ON to detect whether the atom is still trapped, now with ∆ p = 0. Data collected in this manner are shown in Fig. 3(a), which displays the conditional probability P to detect an atom given an initial single-atom triggering event versus the time delay t T. The two data sets shown in Fig. 3(a) yield comparable lifetimes, the upper acquired with mean intracavity atom number N̄ = 0.30 atoms and the lower with N̄ = 0.019 [24]. The offset in P between these two curves arises primarily from a reduction in duration δt of the cooling pulses, from 100 µs to 5 µs, which results in a reduced capture probability. Measurements with constant δt but with N̄ varied by adjusting the MOT parameters allow us to investigate the probability of trapping an atom other than the "trigger" atom and of capturing more than one atom. For example, with δt = 5 µs as in the lower set, we have varied 0.011 ≲ N̄ ≲ 0.20 with no observable change in either P T or the trap lifetime τ. Since a conservative upper bound on the relative probability of trapping a second atom is just N̄/2 (when N̄ ≪ 1), these data strongly support the conclusion that our measurements are for single trapped atoms. We routinely observe lifetimes 2 s < τ < 3 s depending upon the parameters chosen for trap loading and cooling. Fig. 3(b) explores scattering processes within the FORT that transfer population between the 6S 1/2, F = (3, 4) ground-state hyperfine levels.
For these measurements, the F = 4 level is initially depleted, and then the population in F = 4 as well as the total 3 + 4 population are monitored as functions of time t D to yield the fractional population f 4 (t D) in F = 4. The measured time τ R = (0.11 ± 0.02) s for re-equilibration of populations between F = (3, 4) agrees with a numerical simulation based upon scattering rates in our FORT, which predicts τ R = 0.10 s for atoms trapped at the peak FORT intensity in an initially unpolarized state in the F = 3 level. Turning next to the question of the mechanisms that limit our FORT lifetime, we recall that parametric heating caused by intensity fluctuations of the trapping field can be quite important [2,25]. From measurements of intensity fluctuations for our FORT around twice the relevant harmonic frequencies (ν axial = 570, ν radial = 4.8) kHz, we estimate a lower bound to the FORT lifetime of τ_p^axial > 1.6 s [26]. Since this estimate suggests that parametric heating could be a limiting factor in Fig. 3, we performed subsequent measurements in which the intensity noise was reduced below the shot-noise level of our detection system, giving a lower bound τ_p^axial > 9 s. Unfortunately, the measured FORT lifetime increased only modestly to τ = (3.1 ± 0.4) s, indicating that other mechanisms are partially responsible for the observed decay. A second suspect is a heating process described by Corwin et al. [27] associated with inelastic Raman scattering in an elliptically polarized FORT field [20]. We calculate rates Γ s for spontaneous Raman scattering in our FORT to be 2.5 to 7 s^-1 for transitions that change the hyperfine quantum number F, and between 0.8 and 2.5 s^-1 when only m F changes [28]. Based on Eq. 3 in Ref. [27] (a two-state model), we estimate an upper limit to the heating rate from this mechanism, Γ IR ≲ 0.2 Γ s, giving heating times as short as 0.7 s for the fastest calculated scattering rate.
However, we have also undertaken a full multilevel simulation of the optical pumping processes, which indicates much slower heating, Γ IR ∼ 0.02 s^-1. We are working to resolve this discrepancy. A third suspect that cannot be discounted is the presence of stray light, which we have endeavored to eliminate. For lifetimes as in Fig. 3, we require intracavity photon number n̄ ≪ 10^-5, which is not trivial to diagnose. A final concern is the background pressure in the region of the FORT. Although the chamber pressure is 3 × 10^-10 Torr (leading to τ ≃ 30 s), we have no direct measurement of the residual gas density in the narrow cylinder between the mirror substrates (diameter 1 mm and length 43 µm), except for the trap lifetime itself. (2) Toward the goals of continuous observation of single trapped atoms [3,4] and of implementing Λ-schemes in cavity QED [7,8,9,29], we next present results from our second protocol. Here, the F = 4 → F′ = 4 transition is strongly coupled to the cavity field, with ∆′ C ≡ ω C − ω 4→4 = 0. In contrast to our protocol (1), the FORT and the transverse y − z beams are left ON continuously, with the latter containing only light near the F = 3 → F′ = 3 resonance, with detuning ∆ 3. Significantly, we observe trap loading with no cooling light near the F = 4 → F′ = 5 transition. An example of the resulting probe transmission is shown in Fig. 4, which displays two separate records of the continuous observation of trapped atoms. Here, the probe detuning ∆′ p = ω p − ω 4→4 = 0 and the probe strength is given in terms of m̄ = |⟨â⟩|^2 deduced from the heterodyne current, with â as the annihilation operator for the intracavity field. We believe that the y − z repumping beams (which excite F = 3 → F′ = 3) provide cooling, since without them the atoms would "roll" in and out of the near-conservative FORT potential (indeed no trapping occurs in their absence).
In addition, this is a continuous cooling and loading scheme, so that we routinely load multiple atoms into the trap. The most striking characteristic of the data collected in this manner is that m̄ versus t always reaches its deepest level within the ≃ 10 ms window when the falling atoms arrive, subsequently increasing in a discontinuous "staircase" of steps. As indicated in Fig. 4, our interpretation is that there is a different level of m̄ associated with each value N of the number of trapped atoms (with the level decreasing for higher N), and that each step is due to the loss of an atom from the cavity mode. In addition, we observe a strong dependence both of the initial trapping probability and of the continuous observation time on the detuning of the transverse beams, with an optimal value ∆ 3 ≃ 25 MHz to the blue of the 3 → 3 transition, which strongly suggests blue Sisyphus cooling [30]. We stress that observations as in Fig. 4 are made possible by strong coupling in cavity QED, for which individual intracavity atoms cause the displayed changes in probe transmission. While m̄ in Figure 4 is only ≃ 0.01, it represents an output flux ≃ 5 × 10^5 photons per second. The probe is also critical to the cooling, although it is not clear whether this beam is acting as a simple "repumper" [30] or is functioning in a more complex fashion due to strong coupling. We have not seen such striking phenomena under similar conditions for cavity QED with the F = 4 → F′ = 5 transition. Note that our ability to monitor the atom as well as to cool its motion are enabled by the state-insensitive character of the trap, since the net transition shifts are small, (g 0, ∆ 3) ≫ δ 0. In summary, we have demonstrated a new set of ideas within the setting of cavity QED, including state-insensitive trapping suitable for strong coupling. Trapping of single atoms with g 0 ≫ (δ 0, κ, γ) has been achieved with lifetimes τ ≃ 2-3 s.
Since intrinsic heating in the FORT is quite low (∼ 11 µK/s due to photon recoil), we anticipate extensions to much longer lifetimes. Continuous observations of multiple atoms in a cavity have been reported, and involve an interplay of a strongly coupled probe field for monitoring and a set of y–z cooling beams. Our measurements represent the first demonstration of cooling for trapped atoms strongly coupled to a cavity. Beyond its critical role here, state-insensitive trapping should allow the application of diverse laser cooling schemes, leading to atomic confinement in the Lamb-Dicke regime with strong coupling, and thereby to further advances in quantum information science.

We gratefully acknowledge the contributions of K. Birnbaum, A. Boca, T. W. Lynn, S. J. van Enk, D. W. Vernooy, and J. Ye. This work was supported by the Caltech MURI Center for Quantum Networks under ARO Grant No. DAAD19-00-1-0374, by the National Science Foundation, and by the Office of Naval Research.

* Georgia Institute of Technology, Atlanta, GA 30332
† Institut für Experimentalphysik, Universität Innsbruck, Technikerstraße 25, A-6020 Innsbruck, Austria
‡ Department of Physics, University of California, Berkeley, CA 94720

FIG. 2: Schematic of experiment for trapping single atoms in an optical cavity in a regime of strong coupling. Relevant cavity parameters are length l = 43.0 µm, waist w0 = 23.9 µm, and finesse F = 4.2 × 10^5 at 852 nm. The inset illustrates transverse beams used for cooling and repumping.

FIG. 3: (a) Probability P as a function of trapping time tT. The upper data set is for mean intracavity atom number N̄ ≈ 0.30, while the lower set is for N̄ ≈ 0.019 atoms. Exponential fits (solid lines) yield lifetimes τ_upper = (2.4 ± 0.2) s and τ_lower = (2.0 ± 0.3) s. (b) The fractional population f4(tD) in F = 4 following depletion of this level at tD = 0. An exponential fit (solid line) gives τR = (0.11 ± 0.02) s.

FIG. 4: Two traces of the continuous observation of trapped atoms inside a cavity in a regime of strong coupling. After an initial sharp reduction around t = 0 as atoms are cooled into the cavity mode, the intracavity field strength m̄ increases in a discontinuous fashion as trapped atoms escape from the cavity mode one by one. RF detection bandwidth = 1 kHz, ∆′_C = 0 = ∆′_p, and ∆3/2π = 25 MHz (blue).

For a review, see contributions in the special issue of Phys. Scr. T76, 127 (1998).
J. Ye et al., Phys. Rev. Lett. 83, 4987 (1999); and D. W. Vernooy, Doctoral Thesis (California Institute of Technology, 2000).
C. J. Hood et al., Science 287, 1447 (2000); Phys. Rev. A 63, 013401 (2000).
P. W. H. Pinkse et al., Nature 404, 365 (2000).
G. R. Guthöhrlein et al., Nature 414, 49 (2001).
J. Eschner et al., Nature 413, 495 (2001).
T. Pellizari et al., Phys. Rev. Lett. 75, 3788 (1995).
J. I. Cirac et al., Phys. Rev. Lett. 78, 3221 (1997).
S. van Enk et al., Science 279, 205 (1998).
C. Cabrillo et al., Phys. Rev. A 59, 1025 (1999).
S. Bose et al., Phys. Rev. Lett. 83, 5158 (1999).
A. S. Parkins and H. J. Kimble, Journal Opt. B: Quantum Semiclass. Opt. 1, 496 (1999).
H. J. Metcalf and P. van der Straten, Laser Cooling and Trapping (Springer-Verlag, 1999).
C. J. Hood and C. Wood, as described in H. J. Kimble et al., Laser Spectroscopy XIV, eds. Rainer Blatt et al. (World Scientific, Singapore, 1999), 80.
H. Katori et al., J. Phys. Soc. Jpn. 68, 2479 (1999); T. Ido et al., Phys. Rev. A 61, 061403 (2000).
S. J. van Enk et al., Phys. Rev. A 64, 013407 (2001).
The shifts shown in Fig. 1 incorporate the following couplings, including counter-rotating terms: 6S1/2 → nP1/2,3/2 for n = 6−11; 6P3/2 → nS1/2 for n = 6−15; 6P3/2 → nD3/2,5/2 for n = 5−11. Relevant parameters are taken from C. J. Hood, Doctoral Thesis (California Institute of Technology, 2000), and from M. Fabry and J. R. Cussenot, Can. J. Phys. 54, 836 (1976).
C. J. Hood et al., Phys. Rev. A 64, 033804 (2001).
H. Mabuchi et al., Opt. Lett. 21, 1393 (1996).
Because of small stress-induced birefringence in the cavity mirrors, we align the directions of linear polarization along an axis that coincides with one of the cavity eigen-polarizations [18], denoted by l̂±. For initial polarization along l̂+, measurements of FORT [probe] polarization along l̂− for the cavity output power P give P−/P+ < 0.02 [0.002] for the FORT [probe] beam.
The (incoherent) sum of the four intensities is I4−5 ∼ 60 mW/cm² for the cooling and I3−3 ∼ 40 mW/cm² for the repumping light, with uncertainties of roughly 2×.
C. J. Hood et al., Phys. Rev. Lett. 80, 4157 (1998).
Specific examples of single-atom detection events are omitted here. For ∆p ≃ −g0, the increases in cavity transmission are quite similar to those in Refs. [3, 22], while for ∆p = 0, the decreases are similar to those in Refs. [2, 19], albeit now in the presence of the FORT.
N̄ is estimated from the mean number of atom transit events (of duration ≃ 150 µs) during the interval ≃ 10 ms from the falling MOT atoms, in the absence of trapping.
T. A. Savard et al., Phys. Rev. A 56, R1095 (1997); C. W. Gardiner et al., ibid. 61, 045801 (2000).
K. L. Corwin et al., Phys. Rev. Lett. 83, 1311 (1999).
R. A. Cline et al., Opt. Lett. 19, 207 (1994).
A. Kuhn et al., Phys. Rev. Lett. 89, 067901 (2002).
D. Boiron et al., Phys. Rev. A 53, R3734 (1996) and references therein.
[]
[ "Photogalvanic effect induced charge and spin photocurrent in group-V monolayer systems", "Photogalvanic effect induced charge and spin photocurrent in group-V monolayer systems" ]
[ "Li-Wen Zhang \nSchool of Physics and Information Engineering\nShanxi Normal University\n030031TaiyuanChina\n", "Ya-Qing Yang \nInstitute of Laser Spectroscopy\nState Key Laboratory of Quantum Optics and Quantum Optics Devices\nShanxi University\n030006TaiyuanChina\n\nCollaborative Innovation Center of Extreme Optics\nShanxi University\n030006TaiyuanChina\n", "Jun Chen \nInstitute of Theoretical Physics\nState Key Laboratory of Quantum Optics and Quantum Optics Devices\nShanxi University\n030006TaiyuanChina\n\nCollaborative Innovation Center of Extreme Optics\nShanxi University\n030006TaiyuanChina\n", "Lei Zhang \nInstitute of Laser Spectroscopy\nState Key Laboratory of Quantum Optics and Quantum Optics Devices\nShanxi University\n030006TaiyuanChina\n\nCollaborative Innovation Center of Extreme Optics\nShanxi University\n030006TaiyuanChina\n" ]
[ "School of Physics and Information Engineering\nShanxi Normal University\n030031TaiyuanChina", "Institute of Laser Spectroscopy\nState Key Laboratory of Quantum Optics and Quantum Optics Devices\nShanxi University\n030006TaiyuanChina", "Collaborative Innovation Center of Extreme Optics\nShanxi University\n030006TaiyuanChina", "Institute of Theoretical Physics\nState Key Laboratory of Quantum Optics and Quantum Optics Devices\nShanxi University\n030006TaiyuanChina", "Collaborative Innovation Center of Extreme Optics\nShanxi University\n030006TaiyuanChina", "Institute of Laser Spectroscopy\nState Key Laboratory of Quantum Optics and Quantum Optics Devices\nShanxi University\n030006TaiyuanChina", "Collaborative Innovation Center of Extreme Optics\nShanxi University\n030006TaiyuanChina" ]
[]
Photogalvanic effect (PGE) occurs in materials with non-centrosymmetric structures when irradiated by linearly or circularly polarized light. Here, using non-equilibrium Green's function combined with density functional theory (NEGF-DFT), we investigated the linear photogalvanic effect (LPGE) in monolayers of group-V elements (As, Sb, and Bi) by first-principles calculations. First, by designing a two-probe structure based on the group-V elements, we found a giant anisotropy of the photoresponse of As between the armchair and zigzag directions. Then, we analyzed the charge and spin photocurrent characteristics of Sb and Bi when considering the spin-orbit coupling (SOC) effect. It is found that when the polarization direction of linearly polarized light is parallel or perpendicular to the transport direction (θ = 0° or 90°), the spin-up and spin-down photoresponses in the armchair direction have the same magnitude and direction, leading to the generation of a net charge current. However, in the zigzag direction, the spin-up and spin-down photoresponses have the same magnitude but opposite directions, leading to the generation of a pure spin current. Furthermore, this is understood by analyzing the bulk spin photovoltaic (BSPV) coefficient from the symmetry point of view. Finally, we found that the net charge current generated in the armchair direction and the pure spin current generated in the zigzag direction can be further tuned by increasing the material's buckling height |h|. Our results highlight that these group-V monolayers are promising candidates for novel functional materials, which will open broad prospects for the realization of ultrathin ferroelectric devices in optoelectronics owing to their spontaneous polarization characteristics and high Curie temperatures.
null
[ "https://export.arxiv.org/pdf/2305.16032v1.pdf" ]
258,887,754
2305.16032
4e7cbeb2e69d1c5d4facfdc43e7f930730676136
Photogalvanic effect induced charge and spin photocurrent in group-V monolayer systems
25 May 2023

Li-Wen Zhang (School of Physics and Information Engineering, Shanxi Normal University, 030031 Taiyuan, China); Ya-Qing Yang (Institute of Laser Spectroscopy, State Key Laboratory of Quantum Optics and Quantum Optics Devices, Shanxi University, 030006 Taiyuan, China; Collaborative Innovation Center of Extreme Optics, Shanxi University, 030006 Taiyuan, China); Jun Chen (Institute of Theoretical Physics, State Key Laboratory of Quantum Optics and Quantum Optics Devices, Shanxi University, 030006 Taiyuan, China; Collaborative Innovation Center of Extreme Optics, Shanxi University, 030006 Taiyuan, China); Lei Zhang (Institute of Laser Spectroscopy, State Key Laboratory of Quantum Optics and Quantum Optics Devices, Shanxi University, 030006 Taiyuan, China; Collaborative Innovation Center of Extreme Optics, Shanxi University, 030006 Taiyuan, China)

Photogalvanic effect (PGE) occurs in materials with non-centrosymmetric structures when irradiated by linearly or circularly polarized light. Here, using non-equilibrium Green's function combined with density functional theory (NEGF-DFT), we investigated the linear photogalvanic effect (LPGE) in monolayers of group-V elements (As, Sb, and Bi) by first-principles calculations. First, by designing a two-probe structure based on the group-V elements, we found a giant anisotropy of the photoresponse of As between the armchair and zigzag directions. Then, we analyzed the charge and spin photocurrent characteristics of Sb and Bi when considering the spin-orbit coupling (SOC) effect. It is found that when the polarization direction of linearly polarized light is parallel or perpendicular to the transport direction (θ = 0° or 90°), the spin-up and spin-down photoresponses in the armchair direction have the same magnitude and direction, leading to the generation of a net charge current.
However, in the zigzag direction, the spin-up and spin-down photoresponses have the same magnitude but opposite directions, leading to the generation of a pure spin current. Furthermore, this is understood by analyzing the bulk spin photovoltaic (BSPV) coefficient from the symmetry point of view. Finally, we found that the net charge current generated in the armchair direction and the pure spin current generated in the zigzag direction can be further tuned by increasing the material's buckling height |h|. Our results highlight that these group-V monolayers are promising candidates for novel functional materials, which will open broad prospects for the realization of ultrathin ferroelectric devices in optoelectronics owing to their spontaneous polarization characteristics and high Curie temperatures.

I. INTRODUCTION

Ferroelectric materials, with their spontaneous polarization, have attracted significant attention both experimentally and theoretically in recent years due to their potential applications, including field-effect transistors, non-volatile memory devices, solar cells, and sensors 1-5. However, it is well known that the spontaneous polarization in ferroelectric materials can lead to charge accumulation on the surfaces of the materials. Three-dimensional (3D) ferroelectric materials, such as BiFeO3 and BaTiO3, tend to lose their ferroelectricity when the thickness is less than a critical value due to the effect of the unscreened depolarizing electrostatic field 6-8, and therefore cannot satisfy the technological demand of ongoing device miniaturization. Consequently, exploring two-dimensional (2D) ferroelectric materials has become essential to overcome this problem 9,10. As an element of group V, black phosphorus has attracted extensive attention in recent years due to its tunable direct band gap over a wide range (from the visible to the infrared), high carrier mobility, and other characteristics 11-20.
It has been applied in the preparation of field-effect transistors, spin valves, memories, etc. 14-20. The structure of monolayer black phosphorus preserves spatial inversion symmetry, hence forbidding ferroelectricity (FE). FE may be induced by extrinsic means, e.g., via an external electric field or by substituting different elements for P to break the centrosymmetry of the structure. Recently, Xiao et al. revealed that 2D elemental group-V (As, Sb, Bi) monolayers exhibit spontaneous polarization due to the lattice distortion with atomic-layer buckling, with quite sizable values that are comparable to or even larger than those of some 2D monolayer compounds 21,22. Their Curie temperatures can be higher than room temperature, making them promising candidates for ultrathin ferroelectric devices 21. In addition, the puckered lattice structures of group-V monolayers, such as Sb and Bi, have already been demonstrated experimentally 23-25. All of this provides a strong foundation for theoretical research on the properties of group-V monolayers. It is well known that the photogalvanic effect (PGE) occurs in materials without spatial inversion symmetry under the illumination of polarized light 26,28-35,37. Because of the lack of centrosymmetry of the group-V (As, Sb, Bi) monolayers with their puckered lattice structures, the distribution of the photo-excited electrons in the conduction bands is imbalanced, which leads to a persistent photocurrent without the need to apply an external bias voltage or a temperature gradient. All these clues motivate us to investigate the PGE in two-probe devices based on the group-V monolayers. Moreover, spin-orbit coupling (SOC) is noticeable for Sb and Bi atoms, and there has recently been much theoretical progress on the generation of pure spin currents by the PGE 36,38-40.
Thus, the following question naturally arises: can a photo-induced pure spin current be generated and further tuned? In this work, we answer this question by investigating the linear PGE (LPGE) of two-probe devices based on the group-V monolayers. Due to the lack of centrosymmetry, a finite photocurrent can be obtained without applying any external electric field to the system. By analyzing the charge and spin photocurrent characteristics of Sb and Bi with the spin-orbit coupling (SOC) effect included, we found that a net charge current and a pure spin current can be generated in the armchair and zigzag directions, respectively, when the polarization direction of the linearly polarized light is parallel or perpendicular to the transport direction (θ = 0° or 90°). This is because the system possesses the mirror symmetry M_x. In addition, by calculating the photoresponse for different buckling heights, we also found that the net charge current generated in the armchair direction and the pure spin current generated in the zigzag direction can be further tuned by increasing the material's buckling height |h|. These conclusions provide a theoretical basis for the study of group-V monolayer ferroelectric materials in optoelectronic devices.

II. COMPUTATIONAL DETAILS

To study the optoelectronic properties of the group-V materials, we first optimized the atomic structures of monolayer As, Sb, and Bi (space group Pmn2_1, point group C_2v). The structures are relaxed until the residual force on each atom is smaller than 0.01 eV Å^-1, using the Vienna ab initio simulation package (VASP) 41,42. The kinetic energy cutoff was set to 400 eV. A plane-wave basis set with projector augmented wave (PAW) 43 pseudopotentials was adopted, and the exchange-correlation potential was approximated by the local density approximation (LDA).
To avoid interactions between periodic images, a vacuum space of about 20 Å perpendicular to the plane (the y-axis direction, as shown in Fig. 1) was used. In all geometry optimizations, the van der Waals (vdW) correction (in the Grimme-D2 approach) was included. The optimized buckling heights (h) are -0.17 Å, -0.41 Å, and -0.59 Å for monolayer As, Sb, and Bi, respectively (as shown in Fig. 1(a), h = y_1 − y_2). The detailed parameters (such as the lattice constants) are summarized in Table 1 and are in good agreement with previous calculations 21,24. Since the LDA method always underestimates the band gap of semiconductors, the electronic properties were also calculated with the Heyd-Scuseria-Ernzerhof (HSE06) hybrid functional, which is constructed by mixing the PBE and Hartree-Fock (HF) functionals 44,45; the screening parameter of HSE06 is 0.2 Å^-1. The results show that the band structures calculated by the two methods differ only by a shift of the bottom of the conduction band and the top of the valence band upward or downward, and their general characteristics are unchanged. Owing to the large computational cost, we used the LDA method to analyze the optoelectronic properties of the two-probe system. Starting from the atomic structures obtained above by minimizing the total energy, we investigated the LPGE of two-probe devices based on the As, Sb, and Bi monolayers by carrying out non-equilibrium Green's function (NEGF) calculations combined with density functional theory (DFT) 46-51, as implemented in the Nanodcal transport package 46,52. In the subsequent NEGF-DFT numerical calculations, linear combinations of atomic orbitals (LCAO) basis sets and standard norm-conserving nonlocal pseudopotentials 53 were adopted; the local density approximation was applied for the exchange-correlation potential; and the wave functions and the other physical quantities were expanded with LCAO at the double-ζ polarization (DZP) level.
The k-point mesh is adopted as 13×1×13 for the self-consistent calculation of the lead region. In the photocurrent transport calculations of the two-probe device structures, a 100×1×1 k-space grid was used in the armchair direction and a 1×1×100 k-space grid in the zigzag direction. Self-consistency is deemed achieved when the total energy, Hamiltonian, and density matrix are converged to an accuracy of 1×10^-5 a.u.

In the two-probe structure (see Fig. 1), the photocurrent is generated by the linearly polarized light shining on the central scattering region, which is indicated by the box drawn with the red dashed line. In general, the direction of the current flow is from the lead to the central region. For simplicity, in the following we consider the photocurrent flowing in the left lead of the two-probe structure. First, we give the photocurrent $J^{(\mathrm{ph})}_{L,s}$, which is calculated in the first-order Born approximation with the following formula 28,50,54:

$$J^{(\mathrm{ph})}_{L,s} = \frac{ie}{h}\int \mathrm{Tr}\left\{\Gamma_L\left[G^{<}_{\mathrm{ph}} + f_L(E)\left(G^{>}_{\mathrm{ph}} - G^{<}_{\mathrm{ph}}\right)\right]\right\}_{ss}\,\mathrm{d}E, \tag{1}$$

where $L$ indicates the left lead, and $G^{r/a}_0$ are the retarded/advanced Green's functions without photons. The information about the polarization of the light is included in the self-energy and can be characterized by a complex vector $\mathbf{e}$. For linearly polarized light, $\mathbf{e} = \cos\theta\,\mathbf{e}_1 + \sin\theta\,\mathbf{e}_2$, where θ is the angle formed by the polarization direction with respect to the vector $\mathbf{e}_1$ (the z axis when the transport direction is the armchair direction; the x axis when it is the zigzag direction). In our numerical calculations, the light is incident along the −y direction (as shown in Fig. 1). For simplicity, we introduce a normalized photocurrent (photoresponse) 54,55, which can be written as

$$R_s = \frac{J^{(\mathrm{ph})}_{L,s}}{e\,I_\omega}, \tag{2}$$

where $J^{(\mathrm{ph})}_{L,s}$ is the photocurrent defined in Eq. (1) and $I_\omega$ is the photon flux, defined as the number of photons per unit time per unit area. Note that the photoresponse has the dimension of area, $a_0^2$/photon, where $a_0$ is the Bohr radius.
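Equation (2) is a plain normalization of the computed photocurrent by the photon flux; converting the result into the paper's units of a_0^2/photon only requires dividing by the squared Bohr radius. A sketch with placeholder numbers (the current and flux below are not values computed for these devices):

```python
E_CHARGE = 1.602176634e-19   # elementary charge, C
BOHR = 5.29177210903e-11     # Bohr radius a_0, m

def photoresponse(J_ph, photon_flux):
    """R_s = J_ph / (e * I_omega), Eq. (2).
    J_ph in A, photon_flux in photons / (s * m^2); returns m^2 per photon."""
    return J_ph / (E_CHARGE * photon_flux)

R = photoresponse(J_ph=1.0e-12, photon_flux=1.0e21)   # placeholder inputs
R_bohr = R / BOHR**2                                  # in a_0^2 / photon
print(R)  # R ≈ 6.2e-15 m^2 per photon for these inputs
```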
The charge photocurrent ($I_c$) and spin photocurrent ($I_s$) can be defined as

$$I_c = R_{\uparrow} + R_{\downarrow}, \qquad I_s = R_{\uparrow} - R_{\downarrow}, \tag{3}$$

where $R_{\uparrow/\downarrow}$ represents the photoresponse of the spin-up or spin-down component.

III. RESULTS AND DISCUSSION

Before analyzing the photoresponse characteristics of the two-probe device structures based on the group-V monolayers, we first discuss the influence of the spin-orbit coupling (SOC) effect through the band structure (BS) and the joint density of states (JDOS) of the bulk materials 56,57, as shown in Fig. 2. Here the JDOS for excitation with a photon of energy $\hbar\omega$ can be written as

$$J_{cv}(\hbar\omega) = \frac{2}{(2\pi)^2}\int \delta\!\left[E_c(\mathbf{k}) - E_v(\mathbf{k}) - \hbar\omega\right]\mathrm{d}\mathbf{k},$$

where $E_c(\mathbf{k})$ and $E_v(\mathbf{k})$ are the energies of the electronic states at the point $\mathbf{k}$ in the conduction and valence bands, respectively. Fig. 2(a-c) compares the band structures of As, Sb, and Bi with and without SOC. It can be seen that SOC has rather little influence on the band structure of As; in particular, the bands around the Fermi level are almost unchanged. However, for Sb and Bi it is evident that SOC significantly influences the band structure; for Bi especially, the band-splitting effect near the Fermi level is clearly distinguishable under SOC. This indicates that the impact of SOC can be ignored in calculating the photoresponse of the As monolayer, while the influence of SOC should be considered in analyzing the photoresponse characteristics of the Sb and Bi monolayers. Since the JDOS measures the number of allowed optical transitions from the valence band to the conduction band, Fig. 2(d-f) further investigates the JDOS distributions of As, Sb, and Bi with and without SOC. For As, the distribution of the JDOS with and without SOC is almost the same, indicating that the SOC effect can be ignored in the As monolayer. However, for Sb and Bi, the distributions of the JDOS with and without SOC are different.
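The JDOS above can be estimated numerically by sampling E_c(k) − E_v(k) on a uniform k-grid and replacing the delta function with a narrow bin. A toy sketch (flat two-band model over a unit-area Brillouin zone, not the actual As/Sb/Bi bands):

```python
import math

def jdos(ec, ev, omegas, width):
    """Bin-estimate of J_cv(hw) = 2/(2*pi)^2 * integral dk delta(Ec - Ev - hw),
    with the delta replaced by a box of the given width and a unit-area
    Brillouin zone assumed (so each k point carries weight 1/len)."""
    gaps = [c - v for c, v in zip(ec, ev)]
    w_k = 2.0 / (2.0 * math.pi) ** 2 / len(gaps)
    return [w_k * sum(1.0 / width for g in gaps if abs(g - hw) < width / 2)
            for hw in omegas]

# Flat bands separated by a 1.0 eV gap: all weight piles up at hw = 1.0 eV
vals = jdos(ec=[1.0] * 100, ev=[0.0] * 100, omegas=[1.0, 2.0], width=0.1)
```

With all gaps equal to 1.0 eV the estimate is 1/(2π² · width) at ℏω = 1.0 eV and zero elsewhere; on real bands one would feed in the eigenvalues from the DFT calculation.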
Therefore, we will analyze the photoresponse characteristics of the As monolayer without SOC and those of the Sb and Bi monolayers with SOC.

In the following, we present the LPGE of a two-probe device structure based on the group-V monolayer, as shown in Fig. 1. The whole system is divided into three regions, the central scattering region and the left/right leads, which are indicated by the boxes drawn with solid red lines; the leads extend to electron reservoirs at infinity. Here, we chose linearly polarized light, which propagates along the −y direction and irradiates the entire central scattering region of the device. Similar to the structure of black phosphorus, the primitive unit cell of monolayer As, Sb, and Bi consists of two atomic layers with four atoms, two in the upper sublayer and the other two in the lower sublayer (as shown in Fig. 1(a)). The difference is that the black phosphorus structure has spatial inversion symmetry, whereas the symmetry of the As, Sb, and Bi systems is broken due to the lattice distortion with out-of-plane atomic buckling, which causes electric charge accumulation at the outermost atoms, leading to a spontaneous in-plane polarization along the z axis (the blue arrow in Fig. 1(a) indicates the polarization direction of the system). Interestingly, since the group-V monolayer retains the mirror symmetry M_x, it produces the characteristic photoresponse features that we will introduce in detail later. Fig. 3(a, b) shows the photoresponse of As versus the polarization angle θ (from 0° to 180° in steps of 15°) at different photon energies in the armchair and zigzag directions. At first glance, the photoresponse in the armchair direction exhibits a cosine-curve character, while that in the zigzag direction exhibits a sine-curve character. In addition, we found that the magnitude and sign of the photoresponse change for different photon energies, as shown in Fig. 3(a, b).
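The cosine/sine character seen in Fig. 3(a, b) suggests the usual LPGE angular form R(θ) = c0 + c1 cos 2θ + c2 sin 2θ (an assumed model form, not a formula taken from the paper). On the 15° θ grid used here, the coefficients follow from a discrete Fourier projection:

```python
import math

def lpge_coeffs(thetas_deg, R):
    """Project R(theta) onto {1, cos 2θ, sin 2θ} on a uniform θ grid covering
    a full period of 2θ (e.g. 0°..165° in 15° steps)."""
    n = len(R)
    c0 = sum(R) / n
    c1 = 2.0 / n * sum(r * math.cos(2 * math.radians(t))
                       for t, r in zip(thetas_deg, R))
    c2 = 2.0 / n * sum(r * math.sin(2 * math.radians(t))
                       for t, r in zip(thetas_deg, R))
    return c0, c1, c2

# Synthetic photoresponse with known angular content (illustrative values)
thetas = list(range(0, 180, 15))
R = [3.0 + 2.0 * math.cos(2 * math.radians(t))
     + 0.5 * math.sin(2 * math.radians(t)) for t in thetas]
c0, c1, c2 = lpge_coeffs(thetas, R)
print(round(c0, 6), round(c1, 6), round(c2, 6))  # -> 3.0 2.0 0.5
```

Because the 12 sample points cover exactly one period of 2θ, the discrete projection recovers the coefficients exactly.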
The dependence on photon energy can be explained by the fact that electrons are excited from the valence bands to the conduction bands under light irradiation. If the curvatures of the conduction band are different at +k and −k, the electrons excited to the conduction bands will have an unbalanced motion, which results in the generation of a photocurrent. Depending on the photon energy, the electrons are activated to different k points in the conduction bands and obtain different band velocities. The summation over all activated electrons with their different velocity distributions determines the sign of the photocurrent, so the character of the coefficient varies with photon energy 28. Fig. 3(c) shows the photoresponse of As versus photon energy in the armchair and zigzag directions for θ = 0°. It can be seen that the photoresponse in the armchair direction is greater than that in the zigzag direction (R_ac/R_zz ∼ 10^5), which presents a giant anisotropy 27-29. This large anisotropy in the photoresponse can serve as an effective method for determining the lattice orientation of the As monolayer. For calculating the photoresponse of the two-probe device structures based on the Sb and Bi monolayers, we included the SOC effect and examined the charge and spin photocurrents along the armchair and zigzag directions, as shown in Figs. 4 and 5. We first analyzed the charge and spin photocurrents versus the light polarization angle θ for the Sb monolayer, as shown in Fig. 4(a, b). In the armchair direction, when the polarization angle θ = 0°, 90°, and 180°, no spin photocurrent is generated, whereas the corresponding charge photocurrent is nonzero. This means that only a charge current is produced in these situations; otherwise, a spin-polarized photocurrent can be produced by tuning the polarization. In contrast, in the zigzag direction, the charge photocurrent is zero when θ = 0°, 90°, and 180°.
Interestingly, a pure spin photocurrent with finite magnitude can be obtained at these polarization angles. To further demonstrate the dependence of the photocurrent on photon energy, Fig. 4(c, d) shows the charge and spin photocurrents versus photon energy for the polarization angle θ = 0°. As the photon energy increases, the spin photocurrent in the armchair direction is always zero, resulting in the generation of a net charge current. Conversely, in the zigzag direction there is no charge current, leading to the generation of a pure spin current. The same physics is found for the Bi monolayer, as shown in Fig. 5(a, b): a net charge current is generated in the armchair direction, and a pure spin current is obtained in the zigzag direction. Based on these results, let us refer back to the charge photocurrent of the As monolayer in Fig. 3(c). Since the calculation for the As monolayer did not consider the SOC effect, only the charge photocurrent was studied. From Fig. 3(c), we found that the charge current is always zero in the zigzag direction, while it can be finite in the armchair direction. Note that these results are consistent with those for the Sb and Bi monolayers simulated with SOC.

To further understand the physics behind the generation of the charge and spin photocurrents, we start from the material's symmetry. It is known that ferroelectrics intrinsically exhibit the bulk photovoltaic (BPV) effect or the bulk spin photovoltaic (BSPV) effect, owing to the fundamental requirement of spontaneous inversion-symmetry breaking; they can generate a charge current or a spin current under light illumination. The nonlinear optical (NLO) charge or spin current under light with frequency ω can be expressed as 58

$$J^{a,s_i} = \sum_{\Omega=\pm\omega} \sigma^{a,s_i}_{bc}(0;\Omega,-\Omega)\, E_b(\Omega)\, E_c(-\Omega), \tag{4}$$

where E(ω) is the Fourier component of the electric field at angular frequency ω.
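Once the conductivity tensor is known, Eq. (4) is a quadratic form in the field amplitudes; for a real (linearly polarized) field the Ω = +ω and Ω = −ω terms coincide, giving a factor of two. A toy evaluation with a placeholder tensor; the nonzero entries are invented for illustration, loosely mimicking the charge-channel pattern of the symmetry analysis that follows:

```python
import math

def nlo_current(sigma, E):
    """DC current J^a = sum over Omega = ±omega of sigma^a_{bc} E_b E_c*,
    Eq. (4); for a real linear polarization both Omega terms are equal."""
    return {a: 2.0 * sum(sigma[a][b][c] * (E[b] * E[c].conjugate()).real
                         for b in E for c in E)
            for a in sigma}

# Placeholder charge-channel conductivities (arbitrary units, made up)
sigma = {"x": {"x": {"x": 0.0, "z": 0.0}, "z": {"x": 0.0, "z": 0.0}},
         "z": {"x": {"x": 1.0, "z": 0.0}, "z": {"x": 0.0, "z": 0.5}}}
theta = math.radians(0.0)  # polarization along the transport (z) axis
E = {"x": complex(math.sin(theta)), "z": complex(math.cos(theta))}
J = nlo_current(sigma, E)
print(J["z"], J["x"])  # -> 1.0 0.0
```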
Here $\sigma^{a,s_i}_{bc}$ is the NLO conductivity, expressed as

$$\sigma^{a,s_i}_{bc}(0;\omega,-\omega) = -\frac{e^2}{\hbar^2\omega^2}\int \frac{\mathrm{d}\mathbf{k}}{(2\pi)^3}\sum_{mnl}\frac{f_{lm}\,v^b_{lm}}{\omega_{ml}-\omega+i/\tau}\left(\frac{j^{a,s_i}_{mn}\,v^c_{nl}}{\omega_{mn}+i/\tau}-\frac{v^c_{mn}\,j^{a,s_i}_{nl}}{\omega_{nl}+i/\tau}\right), \tag{5}$$

where $a$ indicates the direction of the current, while $b$ and $c$ are the polarization directions of the polarized light; $s_i$ with $i = x, y, z$ is the spin polarization, while $s_0$ represents the charge current. $j^{a,s_i} = \frac{1}{2}(v^a s_i + s_i v^a)$ denotes the spin current ($i \neq 0$), and the charge current corresponds to $s_0 = e$. The numerators in Eq. (5) are composed of terms of the form $N^{iabc}_{mnl} = j^{a,s_i}_{mn} v^b_{nl} v^c_{lm}$ ($i \neq 0$) for the spin current and $N^{0abc}_{mnl} = v^a_{mn} v^b_{nl} v^c_{lm}$ for the charge current. For the group-V monolayers we studied, since they possess the mirror symmetry $M_x$, the following relationships hold:

$$M_x\, v^a_{mn}(\mathbf{k}) = (-1)^{\delta_{xa}}\, v^a_{mn}(\mathbf{k}'), \tag{6}$$

$$M_x\, s^i_{mn}(\mathbf{k}) = -(-1)^{\delta_{xi}}\, s^i_{mn}(\mathbf{k}'). \tag{7}$$

When the mirror symmetry $M_x$ is applied to the conductivity $\sigma^{a,s_i}_{bc}$, one finds that when the polarization direction of the polarized light is along the x axis, $\sigma^{x,s_0}_{x,x} = 0$, $\sigma^{z,s_0}_{x,x} \neq 0$, $\sigma^{x,s_i}_{x,x} \neq 0$, $\sigma^{z,s_i}_{x,x} = 0$; that is, a non-zero charge current and a non-zero spin current are generated in the z direction and the x direction, respectively. Similarly, when the polarization direction of the polarized light is along the z axis, $\sigma^{x,s_0}_{z,z} = 0$, $\sigma^{z,s_0}_{z,z} \neq 0$, $\sigma^{x,s_i}_{z,z} \neq 0$, $\sigma^{z,s_i}_{z,z} = 0$; i.e., the x direction produces a non-zero spin current, while the z direction produces a non-zero charge current. For the detailed calculation process, please refer to Ref. 58. Therefore, in the group-V monolayers we studied, when the polarization direction of the linearly polarized light is parallel or perpendicular to the transport direction of the system (θ = 0° or 90°), a net charge current can be generated in the armchair direction.
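The vanishing pattern quoted above can be reproduced by simple parity bookkeeping: under M_x, v^a acquires (−1)^{δ_xa} (Eq. (6)) and s_i acquires −(−1)^{δ_xi} (Eq. (7)), and a component σ^{a,s}_{bb} survives only if the product of parities is +1. A sketch; the choice of s_y as the representative spin channel is our assumption (for s_x the pattern inverts):

```python
def parity_v(a):
    # Eq. (6): a velocity component flips sign under M_x iff a == "x"
    return -1 if a == "x" else +1

def parity_s(i):
    # Eq. (7): s_i -> -(-1)^{delta_xi} s_i under M_x
    return +1 if i == "x" else -1

def survives(a, spin, b, c):
    """True iff sigma^{a,s}_{bc} is allowed by M_x (total parity +1).
    spin = "charge" for s_0, or "x"/"y"/"z" for a spin channel."""
    ps = +1 if spin == "charge" else parity_s(spin)
    return parity_v(a) * ps * parity_v(b) * parity_v(c) == +1

# Light polarized along x (b = c = "x"):
print(survives("z", "charge", "x", "x"))  # True: charge current along z
print(survives("x", "charge", "x", "x"))  # False
print(survives("x", "y", "x", "x"))       # True: s_y spin current along x
print(survives("z", "y", "x", "x"))       # False
```

Since parity_v(b) * parity_v(c) = +1 whenever b = c, the same pattern follows for polarization along z, matching the σ_{z,z} statements in the text.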
In addition, a pure spin current can be obtained in the zigzag direction, consistent with our numerical results above. Last but not least, since the degree of buckling h can be effectively tuned by charge doping 24, it is essential to know whether the photocurrent generated by the PGE can be further adjusted. We therefore calculated the charge photocurrent in the armchair direction and the spin photocurrent in the zigzag direction of the Bi monolayer versus the magnitude of the buckling height, as shown in Fig. 5(c). Evidently, the charge current in the armchair direction and the spin current in the zigzag direction oscillate as the buckling height |h| increases. This is because the shift current is the main contribution to the photocurrent generated by the linear photogalvanic effect (LPGE), and the shift current depends not only on the buckling height but also on the density of states, the velocity matrix elements, and the shift-vector matrix elements 59. From response theory, it is found that the short-circuit current on a device is proportional to the sum of polarization differences 59. This indicates that the larger the polarization of the ferroelectric material, the larger the photocurrent that can be obtained, which suggests that 2D ferroelectrics are potential candidates for photovoltaic materials.

IV. CONCLUSION

To summarize, we investigated the linear photogalvanic effect of two-probe devices based on group-V (As, Sb, and Bi) monolayers from first-principles calculations. By calculating the photoresponse of As in the armchair and zigzag directions, we found an apparent anisotropy. More interestingly, in the two-probe devices based on the Sb and Bi monolayers, when the polarization direction of the linearly polarized light is parallel or perpendicular to the transport direction, a net charge current and a pure spin current can be generated in the armchair and zigzag directions, respectively. This is also understood by analyzing the nonlinear optical coefficient.
More importantly, the net charge current generated in the armchair direction and the pure spin current generated in the zigzag direction can be further tuned by increasing the material's buckling height |h|, which paves the way for novel applications of group-V monolayers in optoelectronics and opto-spintronics.

[Recovered methods fragment] L labels the left lead and s the spin component (s = ↑, ↓); e and h are the electron charge and Planck's constant; Γ_L = i(Σ^r_L − Σ^a_L) is the linewidth function describing the coupling between the central scattering region and the left lead, with Σ^r_L = [Σ^a_L]† the retarded self-energy due to the presence of the left lead; f_L(E) is the Fermi-Dirac distribution function of the left lead; G^{</>}_{ph} = G^r_0 Σ^{</>}_{ph} G^a_0 is the lesser/greater Green's function including the electron-photon interaction [55], where Σ^{</>}_{ph} is the self-energy due to the presence of the electron-photon interaction.

FIG. 1: Schematic plot of the two-probe device structure based on the group-V monolayer. (a) Side view of the relaxed configuration along the armchair direction. (b) Top view of the relaxed configuration along the armchair direction; the length of the central scattering region is 5b. (c) Top view of the relaxed configuration along the zigzag direction; the length of the central scattering region is 4.5a. The device is divided into the left lead, the right lead, and the central scattering region where the light impinges with energy ω. Here h = y1 − y2, and the blue arrow represents the system's polarization direction. A is the electromagnetic vector potential in the x−z plane; e1 and e2 are two unit vectors characterizing the polarization of the light; θ denotes the polarization angle of the linearly polarized light.

FIG. 2: (a-c) Band structures (BS) of As, Sb, and Bi with and without SOC. (d-f) Joint density of states (JDOS) of As, Sb, and Bi with and without SOC.

FIG. 3: The calculated photoresponse of As without SOC. (a) The photoresponse versus polarization angle θ at different photon energies in the armchair direction. (b) The photoresponse versus polarization angle θ at different photon energies in the zigzag direction. (c) The photoresponse versus photon energy in the armchair (red circles, solid line) and zigzag (blue circles, solid line) directions at θ = 0°. The buckling height is -0.17 Å.

FIG. 4: The calculated photoresponse of Sb with SOC. (a, b) Charge photocurrent and spin photocurrent versus polarization angle θ in the armchair and zigzag directions, respectively, at a fixed photon energy of 0.4 eV. (c, d) Charge photocurrent and spin photocurrent versus photon energy in the armchair and zigzag directions, respectively, at θ = 0°.

FIG. 5: The calculated photoresponse of Bi with SOC. (a, b) Charge and spin photocurrent versus photon energy in the armchair and zigzag directions, respectively, at θ = 0°. (c) Charge current in the armchair direction and spin current in the zigzag direction versus the magnitude of the buckling height |h|, at θ = 0° and E_ph = 0.3 eV.

Table 1: Lattice constants (a, b) and buckling height (h) of monolayer As, Sb, and Bi.

Elements | a (Å) | b (Å) | h (Å)
As       | 3.72  | 4.23  | -0.17
Sb       | 4.25  | 4.43  | -0.41
Bi       | 4.39  | 4.57  | -0.59

Acknowledgements. We gratefully acknowledge the support from the National Key

References

[1] J. F. Scott, Applications of Modern Ferroelectrics, Science 315, 5814 (2007).
[2] M. Dawber, K. M. Rabe, and J. F. Scott, Physics of thin-film ferroelectric oxides, Rev. Mod. Phys. 77, 1083 (2005).
[3] L. W. Martin and A. M. Rappe, Thin-film ferroelectric materials and their applications, Nat. Rev. Mater. 2, 16087 (2017).
[4] L. Jiang, L. L. Tao, B. S. Yang, J. Wang, and X. F. Han, Enhanced tunneling electroresistance in multiferroic tunnel junctions due to the reversible modulation of orbitals overlap, Appl. Phys. Lett. 109, 192902 (2016).
[5] L. L. Tao and J. Wang, Ferroelectricity and tunneling electroresistance effect driven by asymmetric polar interfaces in all-oxide ferroelectric tunnel junctions, Appl. Phys. Lett. 108, 062903 (2016).
[6] C. H. Ahn, K. M. Rabe, and J. M. Triscone, Ferroelectricity at the Nanoscale: Local Polarization in Oxide Thin Films and Heterostructures, Science 303, 5657 (2004).
[7] D. D. Fong, G. B. Stephenson, S. K. Streiffer, J. A. Eastman, O. Auciello, P. H. Fuoss, and C. Thompson, Ferroelectricity in Ultrathin Perovskite Films, Science 304, 5677 (2004).
[8] J. Junquera and P. Ghosez, Critical thickness for ferroelectricity in perovskite ultrathin films, Nature 422, 506 (2003).
[9] L. L. Kang, P. Jiang, N. Cao, H. Hao, X. H. Zheng, L. Zhang, and Z. Zeng, Realizing giant tunneling electroresistance in two-dimensional graphene/BiP ferroelectric tunnel junction, Nanoscale 11, 16837 (2019).
[10] C. Liu, W. H. Wan, J. Ma, W. Guo, and Y. Yao, Robust ferroelectricity in two-dimensional SbN and BiP, Nanoscale 10, 7984 (2018).
[11] J. Qiao, X. Kong, Z. X. Hu, F. Yang, and W. Ji, High-mobility transport anisotropy and linear dichroism in few-layer black phosphorus, Nat. Commun. 5, 4475 (2014).
[12] K. Gong, L. Zhang, W. Ji, and H. Guo, Electrical contacts to monolayer black phosphorus: A first-principles investigation, Phys. Rev. B 90, 125441 (2014).
[13] Y. Y. Li, B. Gao, Y. Han, B. K. Chen, and J. Y. Huo, Optoelectronic characteristics and application of black phosphorus and its analogs, Front. Phys. 16, 43301 (2021).
[14] L. W. Zhang, Z. Z. Yu, L. Zhang, X. H. Zheng, L. T. Xiao, S. T. Jia, and J. Wang, A novel electrically controllable volatile memory device based on few-layer black phosphorus, J. Mater. Chem. C 6, 2460 (2018).
[15] H. Liu, A. T. Neal, Z. Zhu, Z. Luo, X. Xu, D. Tomanek, and P. D. Ye, Phosphorene: An Unexplored 2D Semiconductor with a High Hole Mobility, ACS Nano 8, 4033 (2014).
[16] L. W. Zhang, J. Chen, X. H. Zheng, B. Wang, L. Zhang, L. T. Xiao, and S. T. Jia, Gate-tunable large spin polarization in a few-layer black phosphorus-based spintronic device, Nanoscale 11, 11872 (2019).
[17] T. Hong, B. Chamlagain, W. Lin, H. J. Chuang, M. Pan, Z. Zhou, and Y. Q. Xu, Polarized photocurrent response in black phosphorus field-effect transistors, Nanoscale 6, 8978 (2014).
[18] L. Li, Y. Yu, G. J. Ye, Q. Ge, X. Ou, H. Wu, D. Feng, X. H. Chen, and Y. Zhang, Black phosphorus field-effect transistors, Nature Nanotech. 9, 372 (2014).
[19] M. Buscema, D. J. Groenendijk, S. I. Blanter, G. A. Steele, H. S. J. van der Zant, and A. Castellanos-Gomez, Fast and Broadband Photoresponse of Few-Layer Black Phosphorus Field-Effect Transistors, Nano Lett. 14, 3347 (2014).
[20] A. Avsar, J. Y. Tan, M. Kurpas, M. Gmitra, K. Watanabe, T. Taniguchi, J. Fabian, and B. Özyilmaz, Gate-tunable black phosphorus spin valve with nanosecond spin lifetimes, Nature Phys. 13, 888 (2017).
[21] C. C. Xiao, F. Wang, S. Y. A. Yang, Y. H. Lu, Y. P. Feng, and S. B. Zhang, Elemental Ferroelectricity and Antiferroelectricity in Group-V Monolayer, Adv. Funct. Mater. 28, 1707383 (2018).
[22] W. H. Wan, C. Liu, W. Xiao, and Y. G. Yao, Promising ferroelectricity in 2D group IV tellurides: a first-principles study, Appl. Phys. Lett. 111, 132904 (2017).
[23] M. Bianchi, D. Guan, A. Strozecka, C. H. Voetmann, S. Bao, J. I. Pascual, A. Eiguren, and P. Hofmann, Surface states on a topologically nontrivial semimetal: The case of Sb(110), Phys. Rev. B 85, 155431 (2012).
[24] Y. H. Lu, W. T. Xu, M. Zeng, G. Yao, L. Shen, M. Yang, Z. Luo, F. Pan, K. Wu, T. Das, P. He, J. Jiang, J. Martin, Y. P. Feng, H. Lin, and X. Wang, Topological Properties Determined by Atomic Buckling in Self-Assembled Ultrathin Bi(110), Nano Lett. 15, 80 (2015).
[25] I. Kokubo, Y. Yoshiike, K. Nakatsuji, and H. Hirayama, Ultrathin Bi(110) films on Si(111) √3×√3-B substrates, Phys. Rev. B 91, 075429 (2015).
[26] J. Chen, L. W. Zhang, L. Zhang, X. H. Zheng, L. T. Xiao, S. T. Jia, and J. Wang, Photogalvanic effect induced fully spin polarized current and pure spin current in zigzag SiC nanoribbons, Phys. Chem. Chem. Phys. 20, 26744 (2018).
[27] F. Chu, M. Chen, Y. Wang, Y. Xie, B. Liu, Y. Yang, X. An, and Y. Zhang, A highly polarization sensitive antimonene photodetector with a broadband photoresponse and strong anisotropy, J. Mater. Chem. C 6, 2509 (2018).
[28] Y. Xie, L. Zhang, Y. Zhu, L. Liu, and H. Guo, Photogalvanic effect in monolayer black phosphorus, Nanotechnology 26, 455202 (2015).
[29] S. D. Ganichev, U. Rössler, W. Prettl, E. L. Ivchenko, V. V. Belkov, R. Neumann, K. Brunner, and G. Abstreiter, Removal of spin degeneracy in p-SiGe quantum wells demonstrated by spin photocurrents, Phys. Rev. B 66, 075328 (2002).
[30] J. Li, W. Yang, J. Liu, W. Huang, C. Li, and S. Y. Chen, Enhanced circular photogalvanic effect in HgTe quantum wells in the heavily inverted regime, Phys. Rev. B 95, 035308 (2017).
[31] P. Zhao, J. Li, W. Wei, Q. Sun, H. Jin, B. Huang, and Y. Dai, Giant anisotropic photogalvanic effect in a flexible AsSb monolayer with ultrahigh carrier mobility, Phys. Chem. Chem. Phys. 19, 27233 (2017).
[32] C. Jiang, V. A. Shalygin, V. Y. Panevin, S. N. Danilov, M. M. Glazov, R. Yakimova, S. Lara-Avila, S. Kubatkin, and S. D. Ganichev, Helicity-dependent photocurrents in graphene layers excited by midinfrared radiation of a CO2 laser, Phys. Rev. B 84, 125429 (2011).
[33] H. Guan, N. Tang, X. Xu, L. L. Shang, W. Huang, L. Fu, X. Fang, J. Yu, C. Zhang, X. Zhang, L. Dai, Y. Chen, W. Ge, and B. Shen, Photon wavelength dependent valley photocurrent in multilayer MoS2, Phys. Rev. B 96, 241304 (2017).
[34] V. I. Belinicher and B. I. Sturman, The photogalvanic effect in media lacking a center of symmetry, Sov. Phys. Usp. 23, 199 (1980).
[35] L. Qian, J. Zhao, and Y. Xie, Enhanced photogalvanic effect in the two-dimensional MgCl2/ZnBr2 vertical heterojunction by inhomogenous tensile stress, Front. Phys. 17, 13502 (2022).
[36] X. X. Tao, P. Jiang, H. Hao, X. H. Zheng, L. Zhang, and Z. Zeng, Pure spin current generation via photogalvanic effect with spatial inversion symmetry, Phys. Rev. B 102, 081402 (2020).
[37] B. Liu, L. Qian, Y. Zhao, Y. Zhang, F. Liu, Y. Zhang, Y. Xie, and W. Shi, A polarization-sensitive, self-powered, broadband and fast Ti3C2Tx MXene photodetector from visible to near-infrared driven by photogalvanic effects, Front. Phys. 17, 53501 (2022).
[38] R. Fei, W. Song, L. Pusey-Nazzaro, and L. Yang, PT-Symmetry-Enabled Spin Circular Photogalvanic Effect in Antiferromagnetic Insulators, Phys. Rev. Lett. 127, 207402 (2021).
[39] L. Shu, L. Qian, X. Ye, and Y. Xie, Multifunctional Two-Dimensional VSi2N4/WSi2N4/VSi2N4 Photodetector Driven by the Photogalvanic Effect, Phys. Rev. Applied 17, 054010 (2022).
[40] H. Ishizuka and M. Sato, Large Photogalvanic Spin Current by Magnetic Resonance in Bilayer Cr Trihalides, Phys. Rev. Lett. 129, 107201 (2022).
[41] G. Kresse and J. Hafner, Ab initio molecular dynamics for liquid metals, Phys. Rev. B 47, 558 (1993).
[42] G. Kresse and J. Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B 54, 11169 (1996).
[43] P. E. Blöchl, Projector augmented-wave method, Phys. Rev. B 50, 17953 (1994).
[44] J. Heyd and G. E. Scuseria, Efficient hybrid density functional calculations in solids: Assessment of the Heyd-Scuseria-Ernzerhof screened Coulomb hybrid functional, J. Chem. Phys. 121, 1187 (2004).
[45] A. V. Krukau, O. A. Vydrov, A. F. Izmaylov, and G. E. Scuseria, Influence of the exchange screening parameter on the performance of screened hybrid functionals, J. Chem. Phys. 125, 224106 (2006).
[46] J. Taylor, H. Guo, and J. Wang, Ab initio modeling of quantum transport properties of molecular electronic devices, Phys. Rev. B 63, 245407 (2001).
[47] J. Taylor, H. Guo, and J. Wang, Ab initio modeling of open systems: Charge transfer, electron conduction, and molecular switching of a C60 device, Phys. Rev. B 63, 121104 (2001).
[48] D. Waldron, P. Haney, B. Larade, A. MacDonald, and H. Guo, Nonlinear Spin Current and Magnetoresistance of Molecular Tunnel Junctions, Phys. Rev. Lett. 96, 166804 (2006).
[49] M. X. Zhai, X. F. Wang, P. Vasilopoulos, Y. Liu, Y. Dong, L. Zhou, Y. Jiang, and W. You, Giant magnetoresistance and spin Seebeck coefficient in zigzag α-graphyne nanoribbons, Nanoscale 6, 11121 (2014).
[50] L. Zhang, K. Gong, J. Chen, L. Liu, Y. Zhu, D. Xiao, and H. Guo, Generation and transport of valley-polarized current in transition-metal dichalcogenides, Phys. Rev. B 90, 195428 (2014).
[51] A. Saraiva-Souza, M. Smeu, L. Zhang, M. A. Ratner, and H. Guo, Two-Dimensional α-Graphyne Suspended on Si(111): A Hybrid Device, J. Phys. Chem. C 120, 4605 (2016).
[52] For details of the NanoDcal quantum transport package, see http://hzwtech.com/
[53] L. Kleinman and D. M. Bylander, Efficacious Form for Model Pseudopotentials, Phys. Rev. Lett. 48, 1425 (1982).
[54] J. Chen, Y. Hu, and H. Guo, First-principles analysis of photocurrent in graphene PN junctions, Phys. Rev. B 85, 155441 (2012).
[55] L. E. Henrickson, Nonequilibrium photocurrent modeling in resonant tunneling photodetectors, J. Appl. Phys. 91, 6273 (2002).
[56] W. M. Luo, Z. G. Shao, X. F. Qin, and M. Yang, Photogalvanic effect in monolayer WSe2-MoS2 lateral heterojunction from first principles, Physica E Low Dimens. Syst. Nanostruct. 115, 113714 (2020).
[57] M. S. Dresselhaus, Solid State Physics Part II: Optical Properties of Solids, Lecture Notes (Massachusetts Institute of Technology, Cambridge, MA, 2001).
[58] H. W. Xu, H. Wang, J. Zhou, and J. Li, Pure spin photocurrent in non-centrosymmetric crystals: bulk spin photovoltaic effect, Nat. Commun. 12, 4330 (2021).
[59] B. M. Fregoso, T. Morimoto, and J. E. Moore, Quantitative relationship between polarization differences and the zone-averaged shift photocurrent, Phys. Rev. B 96, 075421 (2017).
Verifying Stochastic Behaviors of Decentralized Self-Adaptive Systems: A Formal Modeling and Simulation Based Approach

Nianyu Li, Di Bai, Wenpin Jiao, and Zhuoqun Yang ([email protected])
Software Engineering Institute, School of Electronics Engineering and Computer Science, Peking University, Beijing, China; Institute of Mathematics, AMSS, Chinese Academy of Sciences, Beijing, China

Abstract: Self-adaptive software is considered the most advanced approach and its development attracts a lot of attention. Decentralization is an effective way to design and manage the complexity of modern self-adaptive software systems. However, tremendous challenges remain. One major challenge is to unify decentrality with the traditional self-adaptive implementation framework during design and implementation. A second is to guarantee the required global goals and performance of decentralized self-adaptive systems operating in highly dynamic and uncertain environments. A third is to predict the influence of a system's internal changes on its self-adaptability to the environment. To address these problems, we combine the mechanism of separation of concerns with a modeling method based on timed automata, which allows the system to be analyzed and verified. Timed computation tree logic is used to specify system goals, and stochastic simulations in dynamic environments are run to verify the decentralized self-adaptive system's adaptation properties. In this paper, we extract a motivation example from practical applications in UAV emergency mission scenarios. The whole approach is evaluated and illustrated with this motivation example, and the statistical results can be used as a reference for the arrangement planning of UAVs in cyber-physical spaces.

DOI: 10.1109/qrs.2018.00020; arXiv: 1706.08271
Index Terms: self-adaptive software; modeling; verification; timed automaton; temporal logic

I. INTRODUCTION

Current society extensively relies on software systems to achieve specific goals. However, ensuring the required goals of software systems is a tremendous challenge, since there are many uncertainties that developers cannot fully understand or anticipate at design time, and changing environments and system goals lead to costly reconfiguration and time-consuming maintenance tasks [1]. There is therefore a high demand for managing complexity and achieving desired goals at reasonable cost and in a timely manner. Self-adaptive software is generally considered one of the most promising approaches to manage the complexity and uncertainties of modern software systems, since it enables a system to adapt itself autonomously to internal and environmental dynamics in order to achieve particular goals, including performance, security, fault management, etc. [2]. Self-adaptation means that a system should be self-managing, self-governing, self-maintaining, and self-controlling; it rests on the more primitive capabilities of self-awareness (i.e., the system is aware of its own states and behaviors) and context-awareness (i.e., the system is aware of its environmental context) [3]. Self-adaptive systems can be either centralized or decentralized.
This paper focuses on decentralized self-adaptive systems, whose behaviors and objectives have to be synthesized from the interactions of autonomous constituent subsystems [12]. There are two main characteristics of decentralized systems. On one hand, constituent subsystems are autonomous, which implies that their behaviors and interactions are not coordinated by any centralized facility. On the other hand, autonomous subsystems should exhibit coherent behaviors to achieve the global goals of the system while eliminating conflicts by interacting with one another. Decentralized self-adaptive systems are an important branch of self-adaptation and need to be studied in order to understand the most effective way to design and manage complex systems [4]. One motivation is that complex systems are usually composed of varying numbers of subsystems, distributed across several places and connected by the Internet or the cloud; their uncontrolled distribution easily leads to the absence of global knowledge and difficulty in sharing the global status of the system. Another is to ensure that systems are robust against failures, especially single-node failures. Three fundamental challenges related to decentralized self-adaptive systems [5] have not been thoroughly resolved so far. First, how can the features of decentralized systems be unified with the commonly used self-adaptive implementation framework (i.e., the MAPE framework)? Second, how can we ensure that decentralized self-adaptive systems satisfy their global goals and maintain satisfactory performance under changing environments? Third, how can we predict the adaptability of decentralized self-adaptive systems when unexpected or uncertain changes (even irreparable damage) take place in the systems themselves? These problems arise in many practical applications.
For example, UAVs (commonly known as drones without human pilots aboard, with a wide range of applications in fields such as environmental hazard monitoring, traffic management, and photogrammetry) are usually deployed in cyber-physical spaces and organized as a decentralized self-adaptive system to carry out search-and-rescue tasks that are too dirty, dangerous, or impossible for humans. People may then have expectations (or goals) for the UAV-involved system: whether the system can explore the entire space and find all targets in a given time under a changing environment, and whether the system remains adaptive when some UAVs crash or run out of battery. Analogous problems can be found in many different decentralized self-adaptive systems.

This paper proposes a novel approach to these challenges. First, we introduce a method for modeling a decentralized self-adaptive system and its environment separately. In the method, separation of concerns is applied to decompose and model each decentralized self-adaptive subsystem and its environment as several low-coupling components, since uncertain changes constantly take place in the environment and it is unrealistic to maintain a complete environment model in the system in advance. Meanwhile, timed automata are adopted to model the components of the decentralized system and different aspects of the environment. A timed automaton is a finite automaton extended with a finite set of resettable real-valued clocks, and it can be enriched with stochastic and non-linear dynamical features. Thus, the system and its adaptation behaviors can be analyzed and verified as a whole. Then, we describe a method for specifying and verifying the required adaptation properties of decentralized self-adaptive systems.
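To make the timed-automaton ingredients concrete (locations, a resettable clock, and clock-guarded transitions), the following is an illustrative Python sketch rather than one of the paper's actual models; the location names ("Idle", "Search", "Return") and time bounds are made up for the example:

```python
class TimedAutomaton:
    """Minimal timed automaton: a set of locations, one resettable clock,
    and transitions enabled by guards over the clock value."""

    def __init__(self):
        self.location = "Idle"
        self.clock = 0.0
        # Each transition: (source location, guard on clock, target, reset?)
        self.transitions = [
            ("Idle",   lambda c: c >= 2.0, "Search", True),  # take off after 2 time units
            ("Search", lambda c: c >= 5.0, "Return", True),  # search for 5 time units
            ("Return", lambda c: c >= 3.0, "Idle",   True),  # fly back in 3 time units
        ]

    def advance(self, dt):
        """Let dt time units elapse, then take the first enabled transition."""
        self.clock += dt
        for src, guard, dst, reset in self.transitions:
            if self.location == src and guard(self.clock):
                self.location = dst
                if reset:
                    self.clock = 0.0
                break

ta = TimedAutomaton()
trace = []
for _ in range(12):          # simulate 12 unit-length time steps
    ta.advance(1.0)
    trace.append(ta.location)
print(trace)
```

Tools such as UPPAAL operate on the same ingredients symbolically; the sketch only mimics the operational semantics for a single run.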
In this method, the primary global goals of a decentralized self-adaptive system are specified using TCTL (timed computation tree logic), which extends computation tree logic with clock variables and time constraints. The adaptation properties of a decentralized self-adaptive system in a dynamic environment, mainly concerning how well the global goals are achieved, are then verified and validated by simulation. In this work, we adopt a statistical model checking tool to carry out the simulation: the formal models specified as timed automata are executed, and the adaptation properties are verified over the resulting runs. To illustrate the whole approach, in particular how the components and behaviors of a decentralized self-adaptive system situated in an uncertain context are modeled formally and how the adaptation properties of the system are specified and verified, we describe and implement a motivating example extracted from a practical scenario of fully autonomous, decentralized UAVs responding to an emergency. The statistical results obtained for this scenario can serve as a reference for planning UAV deployments in a real smart city. The rest of the paper is structured as follows. Section 2 describes the motivating example extracted from UAV emergency scenarios. Section 3 gives an overview of our approach. Section 4 presents the decentralized self-adaptive architecture. Section 5 describes the behaviors modeled with timed automata. Section 6 covers property specification using TCTL, and Section 7 illustrates adaptation evaluation through simulation runs. Section 8 discusses related work, and the final section concludes the paper and points out future work. II. MOTIVATION EXAMPLE To motivate our work on decentralized self-adaptive systems, we introduce an example used throughout the paper. 
The example describes a setting consisting of a decentralized system and its environment, illustrates the two main characteristics of such a system, and highlights the challenges of self-adaptive behavior in a dynamic environment. The motivating example can be instantiated as a variety of search-and-rescue or surveillance scenarios in which fully autonomous UAVs operate in a space-dependent environment and global properties of the system need to be formally verified. A. Scenario Description In the scenario, the communication infrastructure of a city is disabled due to a disaster; parts of the city may be unsafe, and victims may be stranded in various locations with no idea of where the rescue center is. Autonomous UAVs are therefore dispatched to locate victims and lead them to safe areas. A UAV can move through the city environment in specific ways by exploiting global knowledge of the city map and local knowledge (limited, e.g., by line of sight) of the positions of neighboring UAVs and victims. If a UAV is close to a victim, it can lead that victim to a safe zone. A safe zone is a location with a hospital or an ambulance, where medical assistance can be provided to victims. The disaster city is divided into several districts, and each district has one rescue center. Fig. 1 visualizes a possible configuration of a district. The rescue center is the safe zone mentioned above and is in charge of the whole district's safety. The district is further divided into several blocks, and each block contains at most one building. The victims are spread over different blocks and do not know where the rescue center is, since public communication is disabled. Several drones are assigned to each district to guide victims to the rescue center of that district. 
The UAVs start from the block where the rescue center is located, search for victims using local knowledge, and lead them to the safe zone using global knowledge of the city map. B. Scenario Characteristics The drones in a single district can be seen as the subsystems of a decentralized self-adaptive system with the following characteristics. First, because public communication is disabled in disaster scenarios, a centralized control center may cause unacceptable overhead and become a bottleneck; thus every drone carries its own communication infrastructure and is fully autonomous and decentralized, with no ground control system instructing it what to do. Second, even though drones search for and rescue victims independently, they should cooperatively achieve the global goal of saving all the victims in their district and coordinate with each other when two drones are near and might collide. The cyber-physical spaces in which decentralized UAVs operate are highly dynamic and uncertain. The main environmental uncertainty we consider is the movement of the victims. In a disaster scenario, since victims have no idea of where the rescue centers are, their behavior cannot be predicted easily. According to a report by psychologists from CRHNet [7], immediately following the impact of a disaster, nearly 4/5 of victims are in a state of shock and unable to cope with the situation by themselves. We therefore assume that a victim tends to stay where he is with 80 percent probability, which leaves a 20 percent chance that he chooses to move. The direction of a victim's movement is also uncertain: each of the four directions, north, south, east, and west, is equally likely, with 25 percent probability each. Finally, if the victim reaches a building, which he regards as a relatively unthreatening place, it is very probable that he will simply wait there for rescue. 
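The victim behavior just described (stay with probability 0.8, otherwise move one block in a uniformly chosen direction, and wait once a building is reached) can be sketched as a short simulation. The grid size, the starting position, and the building location below are illustrative assumptions, not taken from the paper's concrete district map.

```python
import random

# A minimal sketch of the stochastic victim movement described above.
GRID = 5                                          # district of GRID x GRID blocks
BUILDINGS = {(1, 3)}                              # hypothetical building block
DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # north, south, east, west

def victim_step(pos, rng):
    """Advance a victim by one step; returns the (possibly unchanged) position."""
    if pos in BUILDINGS:
        return pos                   # wait at the building for rescue
    if rng.random() < 0.8:
        return pos                   # 80%: stand still
    dx, dy = rng.choice(DIRECTIONS)  # 20%: move, direction chosen uniformly
    x, y = pos[0] + dx, pos[1] + dy
    return (x, y) if 0 <= x < GRID and 0 <= y < GRID else pos

rng = random.Random(1)
stayed = sum(victim_step((2, 2), rng) == (2, 2) for _ in range(10_000))
print(stayed / 10_000)   # empirically close to 0.8
```

Running many independent one-step trials from a central block recovers the 80 percent stay probability; a full district model would simulate many victims over many steps.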
Although victims may have some knowledge of the district, this information is not used here. C. Scenario Challenges Since this category of system is related to human safety, where even slight mistakes cannot be tolerated, it must be analyzed and verified in advance, and the analysis results can then serve as a reference for actual deployment and dynamic adjustment. Developing the self-adaptive software deployed on each drone is relatively straightforward. Ensuring the global goals, however, e.g., that all victims in the district can be found and rescued within an acceptable period, is challenging when the drones operate in a dynamic, uncertain environment and the victims' movements are unforeseen. Moreover, when internal changes occur in the system, for example when the number of drones in a district varies due to budget constraints or drones crashing in accidents, predicting the self-adaptability of the decentralized self-adaptive system is another challenge worth studying. In this system, adaptability can be measured by how fast and how well the global goals are achieved. Therefore, in the case study in this paper, we measure the performance and efficiency of the system by simulation. III. APPROACH OVERVIEW Fig. 2 provides an overview of our approach, which is based on modeling and simulation and is divided into four phases. Model checking provides an effective and rigorous method for verifying self-adaptive behaviors, while simulation offers a pragmatic and intuitive method for predicting and validating the adaptability of self-adaptive software with less memory and time consumption. Fig. 2. Approach overview Phase one: analyze and design the components (or subsystems) of a decentralized self-adaptive system and the interactions among them. 
When implementing a decentralized self-adaptive system, besides analyzing the application-specific components of the system, we must also take into account the decentralization of the subsystems and the commonly used implementation framework for adaptive systems (e.g., the MAPE framework). This paper applies the separation-of-concerns principle to architect a decentralized self-adaptive system in cyber-physical spaces, integrating the application, decentralization, and adaptation features into a uniform implementation framework. Separation of concerns has become an important principle in software engineering because it simplifies development and maintenance while promoting software reusability, which is especially valuable for decentralized self-adaptive systems of high complexity. Phase two: formally specify the behaviors and interactions (possibly stochastic) taking place in cyber-physical spaces. Following separation of concerns, both the components of the decentralized self-adaptive system and the different aspects of the environment have their own behaviors, and components communicate with other components as well as with the environment. Using priced timed automata with stochastic transitions, the behaviors and interactions occurring in cyber-physical spaces can be formally represented, and the timing effects arising from the execution of the decentralized self-adaptive system can be reasoned about. Phase three: formally define the goals of the decentralized self-adaptive system. This paper adopts a subset of Timed Computation Tree Logic (TCTL), with clock variables and time constraints, as the query language to specify the global goals of the system. The system goals are expressed as TCTL properties, such as safety, reachability, and liveness properties, and can be verified automatically. Phase four: simulate and evaluate the adaptability of the decentralized self-adaptive system. As mentioned before, the execution performance observed in simulation reflects the adaptability of the system. 
To evaluate this performance, the paper adopts statistical model checking (SMC), an analysis technique for studying the performance of a system in a given stochastic environment, to perform stochastic simulation runs over the timed automata designed in phase two. If the analysis results are unsatisfactory, or some necessary global properties cannot be satisfied, one can trace back to the initial design of the whole system and adjust or modify the partition of responsibilities. More details are given, together with the motivating example, in the following four sections. IV. DECENTRALIZED SELF-ADAPTIVE SYSTEM ARCHITECTURE Following the principle of separation of concerns [8], our approach separates the environment from the self-adaptive system, and separates the modules of the system according to their specific concerns. A. Separating environment and software concerns In much of the research on adaptive software, the environment is modeled inside the system. However, it is impractical to model everything about the environment in advance and to maintain an environment model within a decentralized system, and doing so causes problems. For instance, when parametric changes in the environment become known, or are adjusted as experience is gained, the whole decentralized system has to be modified. If, instead, the environment and the decentralized system are modeled separately, the required adjustment is likely to concern only the environment, while the decentralized system remains unchanged. In the UAV example, the objects in the environment that we need to focus on are the victims and the buildings with which the decentralized UAV system interacts; all irrelevant things in the cyber-physical space are categorized as Others, as shown in Fig. 4. B. Separating functional-behavior concerns in the system Crosscutting concerns require modifications in more than a single program location when the system's requirements change or new functions are added to the system. 
Therefore, separating crosscutting concerns from functional behaviors reduces the cost of modification. In a self-adaptive system, adaptive behaviors are achieved by implementing the activities of the MAPE loop (Monitoring, Analysis, Planning, Execution). The Monitoring part collects, aggregates, and filters information from the managed environment. The Analysis part analyzes this information and identifies configurations that can achieve the system's goals. The Planning part encapsulates the strategy that constructs the actions needed to better achieve the goals. During Execution, the adaptation strategy is enacted on the system. To achieve the global goals, however, the autonomous subsystems in a decentralized system take actions independently, and they must interact with each other to exhibit coherent behaviors. Therefore, in a decentralized self-adaptive system, every subsystem must combine the MAPE loop for adaptation with a communication mechanism for coordination among the decentralized subsystems. As shown in Fig. 3, the Monitoring part is responsible for interacting with the whole context, which means not only the changing environment but also the other subsystems of the decentralized system. The Analysis part additionally analyzes, through communication and coordination, whether subsystems have potential conflicts in achieving the global goals. The Planning and Execution parts are the same as in the standard MAPE loop. Each module can be subdivided into different components according to more refined functional behaviors specific to the application. In the instance described previously, the decentralized self-adaptive system is composed of drones (i.e., the subsystems of the decentralized self-adaptive system), and each drone is further divided into three modules, as shown in Fig. 4. The Monitoring module is split into two aspects, reflecting the different concerns of the environment and of the other drones in the system. 
The environment-related aspect contains two components, Victim Detector and Building Detector, which detect victims and buildings in a block within the limited sight of the camera. Within a drone's subsystem, the Drone Detector is responsible for detecting whether other drones are in the same block. The Analysis module, which processes the information collected by the Monitoring subsystem, consists of the Victim Organizer and the Drone Communicator, which analyze the victim information and the neighboring drones in order to avoid potential collisions. The Routine Generator is the kernel of the local drone system, responsible for planning the movement strategy. Since the execution part only enacts the strategies made by the planning part, and the only action specified by the strategies in the motivating example is movement, planning and execution are integrated into one module. V. BEHAVIORS MODELED USING TIMED AUTOMATA A modeling formalism for a decentralized self-adaptive system should allow the representation of the uncertain behaviors of the system and of the communications among its subsystems. It should also support reasoning about the timing effects arising from the concurrent execution of the subsystems of the decentralized system. Formalisms such as process calculi and Markov decision processes, however, do not provide sufficient mechanisms for reasoning about both stochastic behaviors and real-time effects. Timed automata are one of the prominent classical formalisms for describing the behavior of real-time systems. A timed automaton (TA) with inputs and outputs is defined as the seven-tuple shown below. Q is a finite set of locations (or states, as in a finite state automaton); q0 ∈ Q is the initial location; X is the finite set of clocks; I and O represent input and output events, respectively; and Inv is a function assigning an invariant to each location. 
T is the set of transitions, with T ⊆ Q × IO × B(X) × 2^X × Q, where B(X) is the set of Boolean constraints over clocks of the form x # A (x ∈ X, # ∈ {<, ≤, =, ≥, >}, and A an integer constant): TA = (Q, q0, X, I, O, T, Inv). A transition can likewise be written as a tuple, e.g., t = (q, q', a, g, r), which specifies a transition from location q to q' with event a (either an input, an output, or the internal event τ), guard g, and reset set r, where q, q' ∈ Q, a ∈ IO ∪ {τ}, g ∈ B(X), and r ⊆ X is the set of clocks to be reset. However, given the complexity of cyber-physical spaces, plain timed automata are not expressive enough, since model checking requires decidable problems while realistic problems are not. A solution is to introduce both stochastic and non-linear dynamical features into timed automata. Concretely, a location may have multiple target locations under the same guard and event, each with a certain probability weight. Moreover, the clock variables of an automaton may evolve at various rates, and these rates can be specified. Fig. 5 shows an example of a timed automaton with non-linear dynamic features; it specifies a timer for a decentralized self-adaptive drone system, which allows the drones to detect the environment, coordinate with other drones, plan movement strategies, and take movement actions within a constrained time. This automaton has seven locations, Initial, T1, T2, ..., T6, where Initial is the initial location, and a single clock, c. In this automaton, input events (whose identifiers are followed by ?) receive signals over channels, while output events (whose identifiers are followed by !) emit signals over channels. This automaton has only output events, namely env_detect, drone_communicate, and generate_routine. 
Each location is associated with an invariant. For instance, the invariant "c'==0" means that the rate of change of clock c is 0 in T1, i.e., clock c does not advance in this location (the rate of a clock can be set to different values as a non-linear dynamic feature). The invariant "c<=10*roundT+4" in location T2, where roundT is a local variable holding the number of blocks the drone has already searched, requires the clock to remain below the value on the right-hand side. A guard is a conjunction of constraints; for instance, the guard "all_victim_safe()==false" controls the transition from T1 to T2, and the transition can fire only when the guard holds. On the transition from Initial to T1, the assignments "c=0" and "roundT=0" are internal actions. The interested reader is referred to [9] for a complete description of timed automata. As an example, consider a victim in the environment; the corresponding automaton is shown in Fig. 6. At the initial position, the victim judges, with a certain probability, whether he is near the rescue center (in a disaster situation, this probability may be very low). If he judges that he is not near the center, there is a 20 percent chance that he moves (the state transits from Judging to Move with probability 20%) and an 80 percent chance that he stands still (the state transits from Judging to Standstill with probability 80%). In the automaton, these two transitions are weighted 1 and 4, respectively, with two different target locations but the same guard. In location Move, the victim chooses a direction toward a contiguous block uniformly at random; in other words, each direction has the same probability weight 1. This is one way of introducing probabilistic transitions to model stochastic behaviors; another is to use the random() function and bound the returned double value to generate different probability proportions. 
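The weighted probabilistic branching described above, e.g., the weights 1 and 4 on the two Judging transitions, can be sketched as follows. The encoding and the guard on the third transition are our own illustration, not the modeling tool's syntax.

```python
import random

# A minimal sketch of one timed-automaton location with weighted stochastic
# branching. A transition is (target, weight, guard); guards are predicates
# over a state dictionary. Location names follow the victim automaton above;
# the near_center guard is a hypothetical simplification.
TRANSITIONS = {
    "Judging": [
        ("Move",       1, lambda s: not s["near_center"]),  # weight 1 -> 20%
        ("Standstill", 4, lambda s: not s["near_center"]),  # weight 4 -> 80%
        ("Safety",     1, lambda s: s["near_center"]),
    ],
}

def step(location, state, rng):
    """Fire one transition, chosen among the enabled ones by weight."""
    enabled = [(tgt, w) for tgt, w, guard in TRANSITIONS[location] if guard(state)]
    targets, weights = zip(*enabled)
    return rng.choices(targets, weights=weights, k=1)[0]

rng = random.Random(7)
state = {"near_center": False}
moves = sum(step("Judging", state, rng) == "Move" for _ in range(10_000))
print(moves / 10_000)   # empirically close to 1 / (1 + 4) = 0.2
```

Transitions sharing a guard compete by weight, exactly as the 1-to-4 split between Move and Standstill; a transition whose guard is false is simply not enabled.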
After updating his position information, the victim emits the signal vic_bd_chan if a building is nearby. If the victim is in location StayOutside and learns, through the broadcast channel drone_vic_chan, that he has been detected by a drone, he follows the drone closely until he receives the signal victim_safe indicating arrival at the rescue center, at which point his behavior ends (the automaton reaches the terminal location Safety). To analyze and verify the decentralized self-adaptive drone system in the motivating UAV emergency response scenario, each component of the system is modeled as an automaton with stochastic transitions. Given space limitations, these components are introduced only briefly, without the concrete models. Monitoring subsystem: the Monitoring subsystem detects the environment and the other drones in the system from the perspective of a drone, and is modeled as three extended automata, Building Detector, Victim Detector, and Drone Detector, whose concerns are detecting buildings, detecting victim information, and checking whether multiple drones are in the same block. This subsystem can be modeled as more automata with finer granularity, detecting different pieces of environmental information separately, when the situation in a concrete scenario is more complicated. Analysis subsystem: the Analysis subsystem is modeled as two automata, the Victim Organizer, which maintains a rescued queue of the victim information the drone has collected from the environment, and the Drone Communicator, which analyzes the saved-victim information of two neighboring drones. This subsystem provides the basis for the planning phase. Planning & Execution subsystem: the Routine Generator is responsible for generating a rational position for the next step according to the saved-victim information in the rescued queue provided by the Victim Organizer, and for taking the corresponding actions. 
Fig. 7 shows the interactions between all the components of a drone system, modeled as the timed automata with stochastic behaviors and non-linear dynamical features described above, and the disaster environment. In the figure, the signal messages sent and received between components are labeled on the arrowed lines. All components can be synchronized through binary channels between two automata or through broadcast channels among the corresponding automata. VI. PROPERTY SPECIFICATIONS BY USING TCTL To ensure that a decentralized self-adaptive system satisfies its primary global goals under a changing environment, we use a subset of Timed Computation Tree Logic (TCTL) to specify the adaptation goals formally, so that the goals can be verified against the timed automata. TCTL is an extension of Computation Tree Logic (CTL) [10], a branching-time logic with a tree-like structure in which the future is nondeterministic and any branch might be the actual path that is realized. Compared to CTL, TCTL adds clock variables and clock constraints. TCTL consists of state formulae and path formulae. A state formula φ concerns a single state and tests whether φ holds in that state, while a path formula φ asserts whether φ holds over a path. Clock variables and clock constraints can appear as atomic TCTL state formulae to reason about clock values. TCTL path formulae can be used to specify particular classes of system properties, such as safety, reachability, and liveness. A safety property verifies that "something bad will never happen". A reachability property checks whether a given state formula is satisfied by some reachable state. A liveness property verifies that "something good will eventually happen". Table I illustrates the different types of properties. Safety property: this type of property checks that something violating the goals never happens, or that the system maintains some goal-related formula in every state. 
There are two forms of safety property, A□φ and E□φ, where □ means that every state along a path satisfies the state formula φ. The property A□φ (invariantly) evaluates to true if and only if all reachable states along all paths satisfy φ, while E□φ (potentially always) means that there exists a path, either infinite or ending in a self-absorbed maximal state, that always satisfies φ. Reachability property: this type of property checks whether the system can reach some state satisfying the goals. A◇φ and E◇φ are the two forms of reachability property, where ◇ means that some future state satisfies the state formula φ from the current state. The property A◇φ (eventually) evaluates to true if and only if every possible transition sequence eventually reaches a state satisfying φ, while E◇φ (possibly) only requires one transition sequence to reach such a state. Liveness property: this type of property checks that something satisfying the goals will eventually happen as long as the self-adaptive behaviors execute. This property (leads to), written "φ implies ψ", means that whenever φ holds, ψ will eventually hold as well. For a decentralized self-adaptive system, the adaptation goals can be specified and checked as combinations of safety, reachability, and liveness properties. The decentralized self-adaptive system in the motivating example has the following list of adaptation goals. First, all drones should function properly. Concretely: 1) A drone and its components should keep functioning; with respect to the formal model of a drone, no timed automaton should be trapped in a deadlock. This can be specified as a TCTL safety property: A□ not deadlock. 2) A drone should generate routines to lead victims to the rescue center after it detects victims. This can be specified as a liveness property: A◇ VictimOrganizer0.VictimReceiving implies RoutineGenerator0.NaviMode. 3) Conversely, drones should not malfunction: a drone cannot report having searched and rescued more victims than actually appear in the district, which is again a safety property: A□ not VictimOrganizer0.len>VictimNum. 4) A drone cannot miscount the number of victims it has searched and rescued; specifically, if it has not yet found any victims, the length of its maintained victim queue should always be zero. This too can be defined as a safety property: E□ VictimOrganizer0.len==0. Second, the system should find all victims and lead them to the rescue centers, which is the primary adaptation goal; this can be specified as a reachability property. According to the timed automata, all victims should reach the location "Safety" along all possible transition sequences: A◇ forall (i: id_t) victim(i).Safety. To reach the "Safety" location, a victim may luckily reach the rescue center by himself before being detected by a drone, since he wanders randomly through his district; in other words, a victim has a chance of saving himself without the drones' search and rescue. This can also be specified as a reachability property: E◇ not victim(0).WaitingForHelp. A victim can, of course, also be saved after being detected by a drone. In terms of the Victim automaton, the victim should eventually be able to reach location WaitingForHelp and then definitely be guided to the rescue center (i.e., reach location Safety). This can be specified as a liveness property: A◇ victim(0).WaitingForHelp implies victim(0).Safety. VII. ADAPTATION EVALUATION BY USING SIMULATIONS Statistical model checking (SMC) [11] is a recent analysis technique for studying the performance of a system in a given stochastic environment. Because of their large memory requirements, many realistic models are intractable for exhaustive model checking. 
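For a tiny explicit-state model, the exhaustive style of check that SMC is designed to avoid can be sketched as a breadth-first exploration of the reachable states, evaluating E◇φ (possibly) and A□φ (invariantly). The transition system below is a hypothetical, untimed victim life-cycle of our own, not the paper's model.

```python
from collections import deque

# A toy exhaustive check of the two basic queries used above:
#   E<> phi : some reachable state satisfies phi
#   A[] phi : every reachable state satisfies phi
TRANSITIONS = {                      # hypothetical victim life-cycle
    "Initial": ["Judging"],
    "Judging": ["Move", "Standstill"],
    "Move": ["Judging", "WaitingForHelp"],
    "Standstill": ["Judging"],
    "WaitingForHelp": ["Safety"],
    "Safety": [],
}

def reachable(init):
    """Breadth-first exploration of all states reachable from init."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        for t in TRANSITIONS[s]:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

def E_eventually(init, phi):         # E<> phi
    return any(phi(s) for s in reachable(init))

def A_invariantly(init, phi):        # A[] phi
    return all(phi(s) for s in reachable(init))

print(E_eventually("Initial", lambda s: s == "Safety"))    # True
print(A_invariantly("Initial", lambda s: s != "Crashed"))  # True
```

This works only because the explicit state space is tiny; for realistic timed, stochastic models the state space explodes, which is exactly what motivates the simulation-based approach described next.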
SMC, which bypasses decidability issues and does not need to store the state space, is a useful complement to model checking and can handle large decentralized systems that cannot be expressed or checked with classical model checking. SMC addresses two main problems. One is the threshold problem: does the probability measure of a query meet a given bound? The other is the estimation problem: what is the probability measure of a given query? SMC generates finite trajectories of the given model through discrete-event stochastic simulation, within a bound chosen for the desired level of approximation. From these simulation runs, the threshold problem receives an approximate answer, with a bounded error probability relative to the threshold, as to whether the property holds on the given model, and the estimation problem receives an approximate estimate of the probability. UPPAAL-SMC, a stochastic and statistical model checking extension of UPPAAL, relies on a series of extensions of the SMC approach to handle real-time systems and nondeterministic estimation problems. UPPAAL-SMC can specify complex problems efficiently and returns feedback in the form of probability distributions, which can be used to analyze performance aspects of systems [12]. In UPPAAL-SMC, an approximate answer to the threshold problem is obtained with a query of the form Pr[bound](φ)>=p, and a probability confidence interval for the estimation problem with Pr[bound](φ), where bound is a constraint expression, φ is a state formula, and p is a positive floating-point number not exceeding one. There are three ways to bound the runs: implicitly by time, by specifying <=A, which constrains the implicit global time to a positive integer A; explicitly by cost, with c<=A, where c is a specific clock bounded by A; or by the number of discrete steps, with #<=A, where # is the total number of transitions between states. 
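The estimation problem can be sketched as plain Monte Carlo: run the stochastic model within the bound many times, count how often φ is satisfied, and report the frequency with a confidence interval. The toy model below (a victim found by a drone in each step with probability 0.1) and the normal-approximation interval are illustrative assumptions; UPPAAL-SMC uses more refined sequential techniques.

```python
import math
import random

BOUND = 20  # run bound in discrete steps, analogous to "#<=A" above

def run_satisfies_phi(rng):
    """One simulation run: phi = 'rescued within BOUND steps'."""
    for _ in range(BOUND):
        if rng.random() < 0.1:   # found by a drone in this step
            return True
    return False

def estimate(n_runs, seed=0, z=1.96):
    """Monte Carlo estimate of Pr[<=BOUND](phi) with a 95% normal-approx CI."""
    rng = random.Random(seed)
    hits = sum(run_satisfies_phi(rng) for _ in range(n_runs))
    p_hat = hits / n_runs
    half = z * math.sqrt(p_hat * (1 - p_hat) / n_runs)
    return p_hat, (p_hat - half, p_hat + half)

p_hat, ci = estimate(10_000)
print(round(p_hat, 3), ci)   # close to the true value 1 - 0.9**20, about 0.878
# Threshold problem Pr[<=BOUND](phi) >= 0.8: compare the interval with 0.8.
print(ci[0] >= 0.8)
```

The threshold query is answered by checking where the bound p falls relative to the interval; more runs shrink the interval at rate 1/sqrt(n).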
In the motivating example, the global time and the number of discrete steps are uncertain because of the stochastic transitions. What can be estimated, however, is the constraint timer of the drones introduced in the modeling. Under the assumption that all drones assigned to the district are of the same type, the time a drone spends per block is approximately constant, so the value of the timer's clock variable can be mapped to real time. The probability query is Pr[c<=10000](◇ forall (i: id_t) victim(i).Safety). The threshold is set to 10000, far more than needed, in order to sweep up all possible values of the clock c. The probability distribution and cumulative probability diagrams are shown in Fig. 8 for different numbers of drones, from 2 to 6, assigned to the district. The positions of all the relevant objects in the environment, including the rescue center, the buildings, and the victims, are initialized as shown in Fig. 1. The first scenario challenge, guaranteeing the primary global goals under a changing environment, is settled by verifying reachability properties (e.g., all victims should reach the location "Safety") with the verification tools. The other challenge, predicting the self-adaptability under changing internal structure of the system itself (in the motivating example, a varying number of drones assigned to the district), is addressed by comparing and analyzing the statistical results produced by SMC. To measure performance and efficiency, we rephrase this challenge as three concrete questions; the data in Table II, integrated from the probability distribution diagrams, address the first of them. Q2: Given the time constraints and a probability threshold, how many drones should be assigned to the district so that the probability exceeds this threshold? 
The data in Table III, integrated from the cumulative-probability confidence-interval diagrams, are the average clock values needed to make the probability exceed a given threshold for a given number of drones. The answer to Q2 is then the minimum number of drones meeting the condition. Q3: Given the number of drones and the time constraints, what is the probability that all victims in a district can be saved? The answer can be read directly from the probability confidence-interval diagrams. The decentralized self-adaptive drone system achieves the global goals to different degrees depending on the number of subsystems. The experimental answers to these detailed questions, obtained with SMC, can be used as a reference for planning the assignment of UAVs before actual deployment, and for dynamic adjustment at run time, reducing expenditure while maintaining high performance and efficiency. VIII. RELATED WORK Our work makes a novel contribution to the general process of verifying the stochastic behaviors of decentralized self-adaptive systems, both qualitatively and quantitatively, and touches a number of related areas. Self-adaptation is a topic of growing interest given the increasing complexity of current systems, and the MAPE loop is the primary framework for achieving self-adaptive behaviors. Paolo et al. exploit the concept of multi-agent Abstract State Machines to specify distributed and decentralized adaptation control in terms of MAPE-K control loops [13]. Danny et al. introduce multiple MAPE loops to handle heterogeneous systems and decide how to decentralize each of the MAPE functions [14]. Luciano et al. outline an architecture for the design of component-based distributed self-adaptive systems and reason about properties through data collection, correlation, aggregation, and analysis [15]. However, these works either do not take the interactions between the elements of a decentralized system into account or embed them only implicitly into the MAPE loops. 
In our design framework, we explicitly consider the features of the self-adaptation control loop and the decentralization of the system together, and combine them into a new MAPE loop. Considerable effort has also been devoted to early-stage system analysis and modeling with timed automata. Kahina et al. formalize home-care plans as timed automata generated from user-oriented abstractions [16]. Guillermo et al. discuss how to model the different types of relationships among the computer clocks of a distributed system, namely ideal clocks, drifting clocks, and synchronized clocks, with timed automata [17]. Nicolas et al. develop an approach towards modeling socio-technical systems in general, and socio-technical attacks in particular, using timed automata, and illustrate its application with a complex case study [18]. However, the model-checking problems in those works are deterministic, whereas the realistic problems are not. We introduce both stochastic and non-linear dynamical features into timed automata to model the uncertainties in both the environment and the system behavior. In terms of evaluation, SMC is a powerful and flexible approach for the formal verification of computational models. David et al. use SMC to estimate the probability with which a property holds on a system, and develop a distributed SMC tool [19]. Zohra et al. model the MAC-level protocol in WSNs and use SMC for qualitative and quantitative verification of this protocol [20]. Dehui et al. propose a quantitative analysis approach for project schedules based on statistical model checking [21]. In our work we likewise use SMC, as a practical way to solve the estimation problems relating to the performance of decentralized self-adaptive systems. In our motivating example, we introduce UAV emergency mission scenarios, which are common in practice. UAV-related research has in fact been considered extensively in the literature. For example, Farhan et al.
discuss the applications of UAVs in the city of Dubai, their opportunities, and their challenges [22]. Hamid et al. describe the possible Intelligent Transportation Systems applications that can use UAVs, and highlight their potential and challenges [23]. Mario et al. propose an implementation of an emergency-management service based on cloud robotics in a smart-city scenario, with the goal of providing aerial support to citizens; their UAV system, however, is centralized [24]. As far as we are aware, existing research has not addressed decentralized UAV emergency missions.

IX. CONCLUSION AND FUTURE WORK

Self-adaptation has been growing in importance recently. Although numerous excellent research efforts have been put into this area, the self-adaptation field is still in its early stages, and existing knowledge and approaches are not adequate to address today's dynamic and ever-changing environments. Self-adaptation therefore poses not only opportunities but also many challenges, such as guaranteeing the required global goals and performance of systems operating in highly dynamic and uncertain environments. In this paper, we focus on the decentralized aspects of self-adaptive systems and provide a whole process to verify and evaluate the adaptability of decentralized self-adaptive systems with stochastic behaviors. First, we introduce a method for modeling a decentralized self-adaptive system and its environment separately, adopting timed automata to model the different components of the system and the different aspects of the environment. Second, we describe a method for specifying and verifying the required adaptation properties of decentralized self-adaptive systems, adopting a statistical model checking tool to verify and validate the properties by simulation runs. We also contribute a novel example, extracted from practical UAV usage scenarios, to illustrate the feasibility of the whole approach.
In our future research, we plan to elaborate further on the work presented in this paper by applying the method to practical scenarios. The motivating example is a simplification of the real world. We have considered the uncertainties of victim behavior, but many issues remain: buildings could collapse, drones might be crashed by falling objects, and victims may be injured and unable to move. All of these situations have a high probability of occurring and complicate the modeling. A first idea to deal with this is to add more features and insert more specific components into the system design, which is rough but realizable in the short term. Another idea is to find a method for filling the gap between the modeling of systems in cyber-physical spaces and the use case, and to synchronize the modifications; this would make the approach applicable to a wider set of physical environments. We are also considering different formal modeling methods, to see whether they can better describe adaptive behaviors during plan making, in one step or over a look-ahead horizon.

Fig. 1. One configuration of a district.
Fig. 3. Interactions between modules in the decentralized system and the environment.
Fig. 4. Architecture of the decentralized self-adaptive drone system and environment.
Fig. 5. Timed automaton for the drone timer.

It should be explained that the timer automaton does not appear in Fig. 4 because its function has nothing to do with the goal of saving victims; it only records the time needed, which is used for the statistical analysis in a later section. To describe stochastic behaviors in timed automata, a probability transition function p : IO × T → [0, 1] is introduced as an extension. Suppose Tq is the non-empty set of transitions starting from q; then for all q ∈ Q, the probabilities of the transitions in Tq sum to 1. Given the state q and event a, the probabilities of the different transitions can be expressed by probability weights, in proportion to which a transition is chosen.

Fig. 6.
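The probability-weight rule described above can be sketched as a tiny simulation step. This sketch assumes only what the text states (each outgoing transition of a state carries a positive weight, and is taken with probability weight / sum of weights); the names are illustrative, not UPPAAL syntax.

```python
import random

def choose_transition(transitions, rng=random):
    """Draw one outgoing transition in proportion to its weight.

    transitions: list of (target_state, weight) pairs, weights > 0,
    so the induced probabilities automatically sum to 1.
    """
    total = sum(w for _, w in transitions)
    x = rng.uniform(0.0, total)
    acc = 0.0
    for target, w in transitions:
        acc += w
        if x <= acc:
            return target
    return transitions[-1][0]  # guard against floating-point rounding

# e.g. a drone in state "Searching" moves on with weight 3, loiters with 1,
# so "Moving" is chosen with probability 3/4:
# choose_transition([("Moving", 3.0), ("Searching", 1.0)])
```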
Victim model.
Fig. 7. Interactions between components.

VI. ADAPTATION GOALS SPECIFIED BY TCTL

Fig. 8. Experiment with different numbers of drones.

Q1: Given the number of drones, what is the average time the system needs to save all the victims in a district? The average values of c (i.e., the clock variable in the timer, referring to Fig. 5) are shown in Table II.

TABLE I. DIFFERENT TYPES OF PROPERTIES

Property     | Name               | Expression
Safety       | Invariantly        | A□φ
Safety       | Potentially always | E□φ
Reachability | Eventually         | A◇φ
Reachability | Possibly           | E◇φ
Liveness     | Leads to           | φ → ψ

TABLE II. AVERAGE CLOCK VALUE

# drones    2      3      4      5      6
Average c   384.3  288.2  221.2  182.9  159.2

TABLE III. AVERAGE CLOCK VALUE BASED ON PROBABILITY THRESHOLD

# drones    2      3      4      5      6
Pr>0.70     485.5  361.2  284.5  253.2  187.2
Pr>0.75     542.1  395.6  310.0  277.2  214.8
Pr>0.80     598.6  448.3  362.1  323.1  242.4
Pr>0.85     711.6  522.7  414.3  349.0  297.6

REFERENCES

[1] B. Cheng et al., "Software engineering for self-adaptive systems: A research roadmap," in Software Engineering for Self-Adaptive Systems, Springer, 2009.
[2] P. Sawyer et al., "Requirements-Aware Systems: A Research Agenda for RE for Self-adaptive Systems," RE 2010: 95-103.
[3] J. O. Kephart and D. M. Chess, "The Vision of Autonomic Computing," IEEE Computer 36(1): 41-50, 2003.
[4] R. Kota et al., "Decentralized approaches for self-adaptation in agent organizations," TAAS 7(1): 1:1-1:28, 2012.
[5] D. Weyns et al., "Endogenous versus exogenous self-management," in Software Engineering for Adaptive and Self-managing Systems, ACM, 2008.
[6] J. Xie et al., "UAV-carried long distance wi-fi communication infrastructure," in AIAA Infotech@Aerospace, p. 0747, 2016.
[7] D. McEntire, "Anticipating Human Behavior in Disaster: Myths, Exaggerations, and Realities," 2006.
[8] M. Young et al., "Composing Adaptive Software," IEEE Computer 37(7): 56-64, 2004.
[9] R. Alur et al., "The Theory of Timed Automata," REX Workshop 1991: 45-73.
[10] J. K. Deka, "An Efficiently Checkable Subset of TCTL for Formal Verification of Transition Systems with Delays," VLSI Design 1999: 294-299.
[11] K. G. Larsen et al., "Statistical Model Checking: Past, Present, and Future," ISoLA (1) 2016: 3-15.
[12] A. David et al., "Uppaal SMC tutorial," STTT 17(4): 397-415, 2015.
[13] P. Arcaini et al., "Formal Design and Verification of Self-Adaptive Systems with Decentralized Control," ACM Transactions on Autonomous and Adaptive Systems (TAAS) 11(4): 25:1-25:35, 2017.
[14] D. Weyns et al., "On Patterns for Decentralized Control in Self-Adaptive Systems," Software Engineering for Self-Adaptive Systems 2010: 76-107.
[15] L. Baresi et al., "Towards decentralized self-adaptive component-based systems," SEAMS 2008: 57-64.
[16] K. Gani, M. Bouet, M. Schneider, and F. Toumani, "Using Timed Automata Framework for Modeling Home Care Plans," ICSS 2015: 1-8.
[17] G. Rodríguez-Navas et al., "Using Timed Automata for Modeling Distributed Systems with Clocks: Challenges and Solutions," IEEE Trans. Software Eng. 39(6): 857-868, 2013.
[18] N. David et al., "Modelling Social-Technical Attacks with Timed Automata," MIST@CCS 2015: 21-28.
[19] D. Kyle et al., "Statistical Model Checking of Distributed Adaptive Real-Time Software," RV 2015: 269-274.
[20] Z. Hmidi et al., "Statistical Model Checking of CSMA/CA in WSNs," VECoS 2016: 27-42.
[21] D. Du et al., "A novel quantitative evaluation approach for software project schedules using statistical model checking," ICSE Companion 2014: 476-479.
[22] F. Mohammed et al., "Opportunities and Challenges of Using UAVs for Dubai Smart City," NTMS 2014: 1-4.
[23] H. Menouar et al., "UAV-Enabled Intelligent Transportation Systems for the Smart City: Applications and Challenges," IEEE Communications Magazine 55(3): 22-28, 2017.
[24] G. Ermacora et al., "A Cloud Based Service for Management and Planning of Autonomous UAV Missions in Smart City Scenarios," MESAS 2014: 20-26.
The Evolution of Galaxies in and around Clusters at High-Redshift

Yutaka Fujita (Department of Astronomical Science, The Graduate University for Advanced Studies, National Astronomical Observatory, 2-21-1 Osawa, Mitaka, Tokyo 181-8588; [email protected]) and Tomotsugu Goto (Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218-2686, USA)

Abstract. In this paper, we focus on ram-pressure stripping and evaporation of disk galaxies in and around a cluster. We show that the evolution of the disk surface density affects the efficiency of ram-pressure stripping of galaxies at z ≳ 1. We also consider the saturation of thermal conduction in detail, and show that it cannot be ignored at larger radii of a cluster, which makes the time-scale of the evaporation larger. Both ram-pressure stripping and evaporation could affect the evolution of galaxies even around a cluster. In particular, the observed gradual decline of the star-formation rates of galaxies in and around clusters could be explained by evaporation, without resorting to speculative strangulation (stripping of warm gas in galactic halos).

DOI: 10.1093/pasj/56.4.621; arXiv: astro-ph/0406637
(Received 2004 April 12; accepted 2004 June 25)

Key words: galaxies: clusters: general — galaxies: evolution — galaxies: high-redshift — galaxies: interactions

1. Introduction

Clusters of galaxies in the redshift range of 0.2-0.5 often exhibit an overabundance, relative to present-day clusters, of blue galaxies (Butcher, Oemler 1978). This star-formation activity is often called the Butcher-Oemler effect. Subsequent studies have confirmed this trend (Couch, Sharples 1987; Rakos, Schombert 1995; Lubin 1996; Margoniner, de Carvalho 2000; Ellingson et al. 2001; Goto et al. 2003a).

On the other hand, a correlation has been known between galaxy morphology and the local environment. Dressler (1980) studied 55 nearby galaxy clusters and found that the fractions of early-type galaxies increase, and those of late-type galaxies decrease, with increasing local galaxy density in the clusters. Subsequent studies have confirmed the morphology-density relation, that is, early-type galaxies dominate the inner regions of clusters, where the density is high, and their fraction decreases toward the outside of the clusters (Whitmore, Gilmore 1991; Whitmore et al. 1993).

Recently, we have gradually understood that the above phenomena are related to each other. Dressler et al. (1994) and Couch et al. (1998) found that most of the blue galaxies observed as the Butcher-Oemler effect are normal spirals with active star formation. Dressler et al. (1997) studied 10 clusters at z ∼ 0.5, and found the morphology-density relation at these redshifts. However, they also found that the S0 fractions are much smaller than those in nearby clusters. The low fractions of S0 galaxies have also been observed by others (Fasano et al. 2000). In many clusters, the galaxy population gradually changes from a red, evolved, early-type population in the inner part of the clusters to a progressively bluer, later-type population in the extensive outer envelope of the clusters (Abraham et al.). These observations suggest that the blue, normal spirals observed in high-redshift clusters were originally field galaxies; they fell into clusters and evolved into the non-blue S0 galaxies observed in nearby clusters.

Several mechanisms have been proposed that can lead to color and morphological transformations between galaxy classes in clusters, such as galaxy mergers (Toomre), and a gradual decline in the star-formation rate of a galaxy owing to the stripping of halo gas (strangulation or suffocation; Larson et al. 1980; Bekki et al. 2002).

In this paper, we focus on ram-pressure stripping and evaporation following Fujita (2004, hereafter Paper I). We mostly consider the star-formation history of galaxies, and do not treat the morphological transition in detail. We consider the redshift evolution of the disk size and surface density, which were not considered in Paper I. Moreover, we consider the evaporation in detail, while paying attention to the saturation of thermal conduction. We mainly treat the environmental effects on galaxies during their first infall into a cluster; we do not consider their long-term evolution. Thus, the high-redshift galaxies we investigate may not be direct progenitors of galaxies at z ∼ 0. Although we often use the word 'evolution' from now on, it does not mean the evolution of a particular galaxy; instead, we discuss the differences of the average properties of galaxies at low and high redshifts.

Although the models presented in this paper could be included in complicated semi-analytic models of galaxy formation, it would be instructive to study the characteristics of the environmental effects using simple models before we consider such a semi-analytic approach. This paper is organized as follows. In section 2, we summarize our models. In section 3 we give the results of our calculations, and compare them with observations in section 4. Conclusions are given in section 5. As a cosmological model, we consider a cold dark-matter model with a non-zero cosmological constant (ΛCDM model). The cosmological parameters are h = 0.7, where the Hubble constant is given by H_0 = 100 h km s^(−1) Mpc^(−1), Ω_0 = 0.25, λ_0 = 0.75, and σ_8 = 0.8.

2. Models

2.1. The Growth of Clusters

The typical mass of progenitors of a cluster can be derived from the extended Press-Schechter model (EPS) (Bower 1991; Bond et al. 1991; Lacey, Cole 1993) and its further extension (Fujita et al. 2002). The latter is a Press-Schechter model including the effect of spatial correlations among initial density fluctuations (SPS). The models are summarized in Paper I. Cluster progenitors can be classified into two categories: one is the main cluster and the others are subclusters.
The main cluster is the progenitor that was located near the center of the current cluster, and had a much larger mass than the other progenitors. Subclusters are …, where P_SPS(M, z | M_0, z_0) is the conditional probability based on the SPS model. The definition of P_SPS is shown in equation (7) in Paper I. We define the radius of the region that later becomes a cluster of mass M_0 at z = z_0 as R_0. Following Paper I, we consider the subclusters that were initially located at 0.7 R_0 < r < R_0 in the precluster region. We refer to the inner radius as R_in.

2.2. Ram-Pressure Stripping

We adopt the ram-pressure stripping model of Paper I. In the following sections, we often refer to a relatively large dark halo containing galaxies and gas as a 'cluster'. This 'cluster' includes the main cluster, the subclusters, and so on. We assume that a cluster is spherically symmetric and that the density distribution of the dark matter is

  ρ_m(r) = ρ_mv (r / r_vir)^(−a),   (3)

where ρ_mv and a are constants, r_vir is the virial radius of the cluster, and r is the distance from the cluster center. We choose a = 2.4 and determine ρ_mv and r_vir by a spherical collapse model following Paper I. We note that the average mass density of a cluster increases toward high redshift; for example, it is proportional to (1 + z)^3 for the Einstein-de Sitter universe. We ignore the self-gravity of the ICM, and consider two ICM mass distributions. When the ICM is not heated by anything other than the gravity of the cluster, the distribution is written as

  ρ_ICM(r) = ρ_ICM,vir [1 + (r/r_c)^2]^(−a/2) / [1 + (r_vir/r_c)^2]^(−a/2),   (4)

where r_c/r_vir = 0.1. We call this model 'the non-heated ICM model'. In this model, we assume that the ICM temperature equals the virial temperature (T_ICM = T_vir). However, at least for nearby clusters and groups, an entropy excess of the ICM (an entropy floor) has been observed in X-rays in the central regions.
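As a numerical sketch of equations (3)-(5), the two profiles can be coded directly; the normalizations ρ_mv and ρ_ICM,vir and the radii are free parameters here, and the choice of example values is ours.

```python
def rho_dark(r, r_vir, rho_mv, a=2.4):
    """Dark-matter profile, eq. (3): rho_mv * (r / r_vir)^(-a)."""
    return rho_mv * (r / r_vir) ** (-a)

def rho_icm(r, r_vir, rho_icm_vir, slope, rc_frac=0.1):
    """Beta-model-like ICM profile of eqs. (4)-(5), normalized so that
    rho(r_vir) = rho_icm_vir; slope = a/2 (non-heated) or 3b/2 (heated),
    with core radius r_c = rc_frac * r_vir."""
    rc = rc_frac * r_vir
    shape = lambda x: (1.0 + (x / rc) ** 2) ** (-slope)
    return rho_icm_vir * shape(r) / shape(r_vir)

# Sanity check: the ICM profile reproduces its normalization at r = r_vir
# (slope a/2 = 1.2 for the non-heated model with a = 2.4).
print(rho_icm(1.0, 1.0, 1.0, slope=1.2))  # -> 1.0
```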
This indicates that the ICM has been heated by some sources, such as AGNs or supernovae, in addition to the gravity of the clusters and groups (Ponman et al. 1999; Lloyd-Davies et al. 2000; Mulchaey 2000; Mulchaey et al. 2003). Thus, we construct 'the heated ICM model'. We assume that the ICM is heated before cluster formation. Although there is a debate about whether the heating took place before or after cluster formation (Fujita 2001b; Yamada, Fujita 2001; Babul et al. 2002; Voit et al. 2002), the following results would not be much different even if the ICM is heated after cluster formation (Loewenstein 2000). If the ICM is heated non-gravitationally before cluster formation, the final distribution depends on the virial temperature of the cluster, T_vir. If T_vir ≥ T_0, we assume that a shock forms near the virial radius of the cluster. In this study, we assume T_0 = 0.8 keV from X-ray observations (Fujita, Takahara 2000; Paper I). The ICM distribution is given by

  ρ_ICM(r) = ρ_ICM,vir [1 + (r/r_c)^2]^(−3b/2) / [1 + (r_vir/r_c)^2]^(−3b/2),   (5)

where b = (2.4/3) T_vir / T_ICM and T_ICM is the temperature of the ICM, which is given by T_ICM = T_vir + T_0. If T_vir < T_0, a shock does not form, but the gas accreted by a cluster falls adiabatically into the cluster. The ICM density and temperature profiles are approximately given by

  ρ_ICM(r) = ρ_ICM,vir [1 + (3/A) ln(r_vir/r)]^(3/2),   (6)

  T_ICM(r) = (4/15) [1 + (3/A) ln(r_vir/r)],   (7)

where A is the constant determined by the adiabat of the ICM (Balogh et al. 1999a; Paper I). Since equations (6) and (7) diverge at r = 0, we take their values at r = 0.1 r_vir as the central values. In equations (4), (5), and (6), the normalizations of the ICM profile are given by the observed ICM fraction of clusters or by the rate of gas accretion onto clusters (Paper I).
If the non-gravitational heating makes the accretion time larger than the lifetime of a cluster, the cluster cannot accrete much gas, and the gas fraction is smaller than the average in the universe. This happens at high redshifts (z ≳ 1-2) in our models. We consider a disk galaxy falling radially from the turnaround radius of a cluster (2 r_vir). As the velocity of the galaxy increases, the ram-pressure from the ICM also increases. The condition for ram-pressure stripping is

  ρ_ICM v_rel^2 > 2πG Σ_⋆ Σ_HI = v_rot^2 r_gal^(−1) Σ_HI
               = 2.1 × 10^(−11) dyn cm^(−2) (v_rot / 220 km s^(−1))^2 (r_gal / 10 kpc)^(−1) (Σ_HI / 8 × 10^20 m_H cm^(−2)),   (8)

where v_rel is the relative velocity between the galaxy and the ICM, Σ_⋆ is the gravitational surface mass density, Σ_HI is the surface density of the H I gas, v_rot is the rotation velocity, and r_gal is the characteristic radius of the galaxy (Gunn, Gott 1972). We define the cluster radius at which condition (8) is satisfied for the first time as the stripping radius, r_st. Since we assume that the ICM is nearly in pressure equilibrium for r < r_vir, the relative velocity v_rel is equivalent to the velocity of the galaxy relative to the cluster, v, for r < r_vir.

2.3. Evaporation

The time-scale of the evaporation of cold gas in a galaxy is written as

  t_cond ≈ (3/2) [k_B T_ICM / (μ m_H)] M_cold / |L|,   (9)

where k_B is the Boltzmann constant, T_ICM is the ICM temperature, μ (= 0.6) is the mean molecular weight, m_H is the hydrogen mass, M_cold is the mass of cold gas in the galaxy, and L is the energy flux from the hot ICM surrounding the galaxy via thermal conduction (Paper I). We define the neutral and molecular gas confined in a galactic disk as cold gas; we do not consider the cold gas in a galactic bulge.
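The numerical coefficient quoted in equation (8) can be checked with a few lines of cgs arithmetic; this sketch assumes only the scalings in the equation (the restoring force per unit area is v_rot^2 Σ_HI / r_gal), and the unit constants below are standard cgs values.

```python
# cgs conversion factors: cm per km, cm per kpc, hydrogen mass in g.
KM, KPC, M_H = 1.0e5, 3.086e21, 1.6726e-24

def restoring_pressure(v_rot_kms=220.0, r_gal_kpc=10.0, n_HI=8.0e20):
    """2*pi*G*Sigma_star*Sigma_HI = v_rot^2 * Sigma_HI / r_gal,
    in dyn cm^-2, for the fiducial galaxy of eq. (8)."""
    v = v_rot_kms * KM            # rotation velocity, cm s^-1
    r = r_gal_kpc * KPC           # galaxy radius, cm
    sigma_hi = n_HI * M_H         # H I surface density, g cm^-2
    return v ** 2 * sigma_hi / r

print(restoring_pressure())  # ~2.1e-11 dyn cm^-2, as quoted in eq. (8)
```

The same call with the smaller model galaxy's parameters (v_rot = 105 km s^-1, r_gal = 5 kpc, Σ_HI = 14 × 10^20 m_H cm^-2) gives a comparable threshold, of order 1.7 × 10^-11 dyn cm^-2.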
If the electron mean free path is smaller than the spatial scale of the temperature gradient around the galaxy, the thermal conduction is not saturated and the energy flux is given by

  |L_nsat| = 4π r_gal^2 κ_0 T_ICM^(7/2),   (10)

where r_gal is the galaxy radius and κ_0 = 5 × 10^(−7) erg cm^(−1) s^(−1) K^(−3.5) (Paper I). If the electron mean free path is larger than the spatial scale of the temperature gradient, the thermal conduction is saturated and the energy flux is given by

  |L_sat| = 4π r_gal^2 × 0.4 n_e k_B T_ICM [2 k_B T_ICM / (π m_e)]^(1/2),   (11)

where n_e is the electron number density and m_e is the electron mass (Cowie, McKee 1977). For convenience, we often refer to the t_cond given by L = L_nsat as t_cond,nsat, and to that given by L = L_sat as t_cond,sat. The actual energy flux is given by |L| = min(|L_nsat|, |L_sat|).

2.4. Evolution of Disk Properties

The most important difference between this study and Paper I is that here we consider the redshift evolution of the disk properties. We adopt the simple model, discussed in Mo et al. (1998), for the galactic disk. For a given rotation velocity, the galaxy radius at redshift z is given by

  r_gal(z) = r_gal,0 (H[z]/H_0)^(−1),   (12)

where r_gal,0 is the galaxy radius at z = 0 and

  H(z) = H_0 [λ_0 + (1 − λ_0 − Ω_0)(1 + z)^2 + Ω_0 (1 + z)^3]^(1/2)   (13)

is the Hubble constant at redshift z. The surface density and mass of the disk at redshift z are given by

  Σ_⋆(z) = Σ_⋆,0 H(z)/H_0,   (14)

  M_disk(z) = M_disk,0 (H[z]/H_0)^(−1),   (15)

where Σ_⋆,0 and M_disk,0 are the surface density and mass at z = 0, respectively. As shown in figure 1 of Mo et al. (1998), H(z = 1) ∼ 2 H_0. Thus, the disk radius (surface density) at z ∼ 1 is a factor of two smaller (larger) than that at z = 0. We give the column density of the H I gas in the disk and the mass of the cold gas, before the galaxy is affected by environmental effects, as follows.
We assume that Σ_HI ∝ Σ_⋆ for simplicity; in a future study we will relax this assumption by using a semi-analytic model of galaxy formation. Thus, the column density of the H I gas at redshift z is given by

  Σ_HI(z) = Σ_HI,0 H(z)/H_0,   (16)

where Σ_HI,0 is the column density at z = 0. We also assume that M_cold ∝ M_disk and
This means that at z ∼ 0.4 (1) the subclusters of the LCL (HCL) are to be observed just outside the main cluster of the LCL (HCL). Since we are interested in galaxies at high redshift, we investigated relatively large galaxies that can be observed in detail. We consider two model galaxies following Paper I. While we fixed the rotation velocities of the galaxies v rot , we changed the radius, surface density, and H I column density according to equations (12), (14), and (16). The parameters for the bigger model galaxy, which is similar to the Milky Way, are v rot = 220 km s −1 , r gal,0 = 10 kpc, and Σ HI,0 = 8 × 10 20 m H cm −2 . The parameters for the smaller model galaxy, which is similar to M33, are v rot = 105 km s −1 , r gal,0 = 5 kpc, and Σ HI,0 = 14 × 10 20 m H cm −2 . These are the values at z = 0 even for the galaxies in the HCL. From now on, this disk evolution is considered unless otherwise mentioned. Ram-Pressure Stripping First, we consider the time-scale of ram-pressure stripping. Although equation (8) is the condition at the representative radius of a galaxy, numerical simulations performed by Abadi, Moore, and Bower (1999) showed that it can also be applied at 'each radius' of the galaxy. Assuming that v rot is the constant and Σ HI ∝ Σ ⋆ , the H I column density has the relation Σ HI ∝ Σ ⋆ ∝ v 2 rot /r ∝r −1 , wherer is the distance from the galaxy center. Thus, equation (8) shows that the ram-pressure required for stripping atr = 5 kpc is 4-times larger than that at r = 10 kpc. Figure 2 shows the time elapsed since the cold gas atr = 10 kpc is stripped until that atr = 5 kpc is stripped, ∆t 10−5 , for the radially infalling bigger galaxy (v rot = 220 km s −1 ). Forr < ∼ 5 kpc, the effect of the galactic bulge would be important; since we are interested in the star-formation activities in galactic disks, we ignore the inner regions. For a comparison, the rotation time of the galaxy atr = 10 kpc is shown in figure 2. 
The minimum time-scale of the evolution of a galaxy is expected to be the rotation time (t_rot). For example, the passage of a galactic arm through a molecular cloud could stimulate star formation. Thus, if Δt_10−5 ≲ t_rot, the ram-pressure stripping can be regarded as an instantaneous phenomenon for the galaxy, which is the case for our model galaxy (figure 2). Thus, the long-term evolution of the galaxy does not need to be considered. Moreover, Δt_10−5 is much smaller than the crossing time of the galaxy in the cluster. If turbulent and viscous stripping is considered, the stripping could be even faster (Quilis et al. 2000). Since ρ_ICM and v_rel rapidly decrease as the distance from the cluster center increases, the ram-pressure becomes less effective in the outer region of the cluster. Thus, at cluster radii r > r_st, where r_st is defined by the pressure balance at the galactic radius r̃ = r_gal [equation (8)], it is unlikely that the ram-pressure directly affects the evolution of the galactic disk. Although galactic gas at r̃ > r_gal might be affected by ram-pressure at r > r_st, the effect should be included in 'strangulation', which is considered in Paper I. Since cold gas in a galactic disk is almost instantaneously stripped by ram-pressure, r_st/r_vir should be related to the fraction of galaxies affected by ram-pressure stripping in a cluster. The evolution of r_st/r_vir is shown in figures 3 (the bigger galaxy) and 4 (the smaller galaxy). Compared to the non-heated ICM model, r_st/r_vir decreases faster toward higher redshift in the heated ICM model. This is because the non-gravitational heating reduces the ICM density and the ram-pressure on galaxies in the inner part of a cluster. In the heated ICM models, the changes of the slopes correspond to the transformations of the assumed ICM distributions (see Paper I).
The differences between figures 3 and 4 are small, which shows that the differences in the galaxy properties do not strongly affect the ram-pressure stripping. Of course, for much smaller galaxies, r_st/r_vir would differ considerably from the values in figures 3 and 4; however, such small galaxies are difficult to observe in detail at high redshifts. In figures 3 and 4, we also present the evolution when the disk properties do not evolve, which was shown in figures 2 and 3 of Paper I. For z ≲ 0.5, r_st/r_vir when the disk properties evolve is not much different from that when they do not. For z ≳ 1, however, the former is much smaller than the latter. This is because, in the former case, the surface density of the disk is larger at higher redshift, and the disk is more robust against ram-pressure stripping [equation (14)]. As discussed in Fujita (2001a), the ram-pressure from the ICM is larger at higher redshifts, because the average mass density of clusters at higher redshifts is larger than that of clusters at lower redshifts. Thus, the results show that the effect of the larger disk surface density dominates over that of the larger ram-pressure at high redshifts. For the HCL, r_st/r_vir is larger than that for the LCL at a given redshift, because the mass of the HCL is larger than that of the LCL, and the typical galaxy velocity in the HCL is larger than that in the LCL (Paper I). On the other hand, for a given mass, ram-pressure stripping is more effective in higher redshift clusters. The mass of the main cluster of the HCL at z ∼ 1 is almost the same as that of the LCL at z ∼ 0.5 (∼ 3 × 10^14 M_⊙; figure 1). However, r_st/r_vir is larger and ram-pressure stripping is more effective in the former. Tormen, Moscardini, and Yoshida (2004) studied the orbital properties of galaxies in massive clusters at z < 0.8 by numerical simulations.
Since they showed that the typical pericentric radius of galaxies on their first passage is about 0.3 r_vir, ram-pressure stripping is effective if r_st/r_vir ≳ 0.3. Here, we assume that the results of Tormen, Moscardini, and Yoshida (2004) can be applied to our model clusters and their progenitors. If the disk evolution is considered and non-gravitational heating is not, ram-pressure stripping is effective at z ≲ 1 for the main cluster of the LCL, z ≲ 2.5 for the main cluster of the HCL, and z ≲ 1 for the subclusters of the HCL, while it is not effective for the subclusters of the LCL and the group regardless of z (figures 3 and 4). Figures 3 and 4 also show that if both the disk evolution and non-gravitational heating are considered, ram-pressure stripping is effective at z ≲ 0.5 for the main cluster of the LCL, and z ≲ 1.5 for the main cluster of the HCL, while it is not effective for the subclusters of the LCL and HCL, and the group regardless of z. We note that in Paper I the tidal force from the main cluster and the resultant shift of the orbit of a galaxy in the subcluster are estimated; the estimated pericentric radius of the galaxy is ∼ 0.2 r_vir. Thus, the assumed threshold of ram-pressure stripping (r_st/r_vir ≲ 0.3) seems reasonable even for the subclusters. For the subclusters of the LCL, for example, since 0.2 ≲ r_st/r_vir ≲ 0.3 at z ≲ 1 in the non-heated ICM model, we should rather say that ram-pressure stripping is marginally effective at z ≲ 1 in that model.

Evaporation

In Paper I, we mainly investigated the evaporation effect on galaxies in clusters (or their progenitors) that have not been heated non-gravitationally. In this study, we also focus on the evaporation in clusters that have been heated non-gravitationally. Moreover, we study the saturation effect of thermal conduction on the evaporation time-scale in detail.
Following Paper I, we assume that M_cold,0 = 5 × 10^9 M_⊙ for the bigger galaxy and M_cold,0 = 4 × 10^9 M_⊙ for the smaller galaxy. In figures 5 and 6, we present t_cond,nsat and t_cond,sat at r = 0 and r_vir for the bigger galaxy; figure 5 is for the non-heated ICM model, and figure 6 is for the heated ICM model. Note that since t_cond,nsat ∝ M_cold/r_gal for the unsaturated case and t_cond,sat ∝ M_cold/r_gal^2 for the saturated case [equations (9), (10), and (11)], t_cond,nsat (t_cond,sat) for the smaller galaxy is 1.6 (3.2) times larger at a given redshift for our parameters. Thus, the differences between the bigger galaxy and the smaller galaxy do not strongly affect the following results. The actual evaporation time-scale of a galaxy at a radius r is given by t_cond(r) ≡ max[t_cond,nsat(r), t_cond,sat(r)]. For example, at lower redshifts for the main cluster of the LCL, the saturation cannot be ignored at r = r_vir and t_cond = t_cond,sat. In the non-heated ICM model, t_cond,nsat increases as z increases because t_cond,nsat ∝ T_ICM^−5/2 and T_ICM decreases rapidly with the mass of the cluster progenitors [equations (9) and (10)]. In the non-heated ICM model, t_cond,sat(r = 0) and t_cond,sat(r = r_vir) decrease slowly as z increases (figure 5). This is because their dependence on T_ICM is weak and the increase of n_e at high redshift dominates [equations (9) and (11)]. In the heated ICM model, the evolution of the conduction time-scales is more complicated (figure 6). In general, the conduction time-scales in the heated ICM model are smaller than those in the non-heated ICM model because of the larger T_ICM owing to non-gravitational heating. The detailed behavior of the conduction time-scale when thermal conduction is not saturated, t_cond,nsat, can be explained as follows. For the main clusters of the LCL and the HCL and the subcluster of the HCL, t_cond,nsat increases as z increases. This is because T_ICM decreases, as in the non-heated ICM model.
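The factors of 1.6 and 3.2 quoted above follow from the scalings t_cond,nsat ∝ M_cold/r_gal and t_cond,sat ∝ M_cold/r_gal^2, using the galaxy parameters given in the text. A quick check of the arithmetic (our sketch, not the paper's code):

```python
# Unsaturated: t ∝ M_cold / r_gal     [eqs (9), (10)];
# saturated:   t ∝ M_cold / r_gal**2  [eqs (9), (11)], at fixed ICM conditions.
def timescale_ratio(m_a, r_a, m_b, r_b, saturated=False):
    """Ratio t_a / t_b of the conduction time-scales of two galaxies."""
    p = 2 if saturated else 1
    return (m_a / m_b) * (r_b / r_a) ** p

# Smaller (M33-like, 4e9 M_sun, 5 kpc) vs bigger (Milky-Way-like) galaxy.
ratio_nsat = timescale_ratio(4e9, 5.0, 5e9, 10.0, saturated=False)
ratio_sat = timescale_ratio(4e9, 5.0, 5e9, 10.0, saturated=True)
```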
However, the rate of increase is smaller because T_ICM decreases more slowly owing to non-gravitational heating. At higher redshifts, the virial temperature becomes smaller and satisfies the relation T_vir < T_0, where T_0 (= 0.8 keV) is the critical temperature below which the ICM distribution follows an adiabatic accretion model (subsection 2.2). For example, T_vir equals T_0 = 0.8 keV at z = 1.2 for the main cluster of the LCL. Since the cluster is not isothermal in the adiabatic accretion model [equation (7)], t_cond,nsat bifurcates there. For the subcluster of the LCL and the group, the density and temperature profiles follow those predicted by the adiabatic accretion model for z ≥ 0. In these redshift ranges, t_cond,nsat(r = r_vir) > t_cond,nsat(r = 0) because T_ICM(r = r_vir) < T_ICM(r = 0). In the middle-redshift range (1.2 < z < 1.7 for the main cluster of the LCL, for example), the ICM fraction of a cluster is the same as that at lower redshift, although the cluster is more compact at the higher redshifts. Therefore, T_ICM increases via adiabatic compression and t_cond,nsat(r = r_vir) decreases as z increases. At higher redshifts (z > 1.7 for the main cluster of the LCL, for example), t_cond,nsat increases as z increases, because the ICM fraction and T_ICM decrease. The behavior of the saturated conduction time-scale, t_cond,sat, at lower redshifts (z ≲ 1.2 for the main cluster of the LCL, for example) can be explained as follows. Since the average dark matter and ICM densities increase toward higher redshifts, the increase of t_cond,sat through the decrease of T_ICM is dominated by the decrease of t_cond,sat through the increase of n_e [equations (9) and (11)] at r = r_vir. However, in our ICM model, for a given gas mass fraction, the non-gravitational heating decreases the ICM density in the central region of a cluster. Thus, the decrease of t_cond,sat is not significant at r = 0.
At higher redshifts, once the density and temperature profiles follow those predicted by the adiabatic accretion model, the behavior of t_cond,sat can be attributed to the same mechanisms as for t_cond,nsat. In figure 7, we show the evolution of t_cond = max(t_cond,nsat, t_cond,sat) at r = 0 for the bigger galaxy in the non-heated ICM model. This figure corresponds to figure 5 in Paper I, in which the evolution of the disk is not taken into account. As shown in figure 5, t_cond,nsat ≫ t_cond,sat(r = 0) and therefore t_cond(r = 0) = t_cond,nsat. The conduction time-scale can be regarded as representative of the cluster or its progenitors. We also show t′_cond(r = 0) = t_cond(r = 0) + t_form, where t_form is the Hubble time when the galaxy forms at z = z_form. If t′_cond < t_H at a given redshift, the cold gas in the galaxy has been evaporated by that redshift. Note that t_cond corresponds to t′_cond when z_form = ∞. Figure 7 shows that in the main cluster of the LCL (HCL) observed at z ∼ 0.5 (1), cold gas has been evaporated from the galaxies near the cluster center. On the other hand, in the subclusters of the LCL (HCL) observed at z ∼ 0.5 (1), cold gas is evaporating if the galaxies formed at z ∼ 1–2. The time-scale of the evaporation, t_cond, is relatively long (≳ 2 Gyr). In the group, cold gas is evaporating at z ∼ 0. In figure 8, we present the evolutions of t_cond(r = 0) and t′_cond(r = 0) for the bigger galaxy in the heated ICM model. Since t_cond in the heated ICM model is generally smaller than that in the non-heated ICM model owing to the larger T_ICM, t′_cond becomes smaller than t_H soon after galaxy formation. A galaxy that first enters a cluster from the outside does not stay at r = r_vir, and its orbit is not exactly radial. If we assume that the typical pericentric radius of galaxies is about 0.3 r_vir, ram-pressure stripping is effective for most galaxies if r_st/r_vir ≳ 0.3 (see subsection 3.1).
On the other hand, if r_st/r_vir ≲ 0.3, the galaxies would be affected only by evaporation while they are orbiting. In this case, the actual time-scale of evaporation, t_evap, is given by t_cond(r = 0) < t_evap < t_cond(r = r_vir). Therefore, in the subclusters of the LCL observed at z ∼ 0.5, the bigger galaxy infalling from the outside loses its cold gas on a time-scale of 0.5 < t_evap < 10 Gyr in the heated ICM model (figure 6). In the subclusters of the HCL observed at z ∼ 1, a galaxy loses its cold gas on a time-scale of t_evap ∼ 0.5 Gyr in the heated ICM model (figure 6).

Comparison between Ram-Pressure Stripping and Evaporation

Although both ram-pressure stripping and evaporation suppress star-formation activities in galaxies, they affect the star-formation activities differently. In terms of position, while ram-pressure stripping is effective only in the central regions of clusters (r ≲ 0.5 r_vir; figures 3 and 4), evaporation is often effective even at r ∼ r_vir. In terms of time-scale, while ram-pressure stripping suppresses star-formation activities in a very short time (∼ 10^8 yr; figure 2), evaporation generally affects the star-formation activities more slowly (∼ 10^9 yr; figures 5 and 6). The difference in the decline rate of the star-formation activities could be discriminated through the spectra of the galaxies. These facts would be useful for determining observationally whether ram-pressure stripping or evaporation dominates in clusters. In the next section, we investigate several specific cases.

Discussion

In Paper I, the star-formation activities of galaxies in main clusters and in the subclusters in the vicinity of the main clusters observed at z ∼ 0.5 are discussed. Since the consideration of the disk evolution does not much change r_st/r_vir for z ≲ 0.5 for the LCL (figures 3 and 4), the conclusions about the effects of ram-pressure stripping on the galaxies do not change either.
In Paper I, we discussed that the rapid decline of the star-formation rates of galaxies in the main cluster of the LCL is not consistent with observations of the CNOC sample of very luminous X-ray clusters (Balogh et al. 1999b), and that this suggests that the star-formation rates had decreased before the galaxies entered the main clusters (Goto et al. 2003b; Goto et al. 2003c). In Paper I, we argued that the 'pre-processing' occurred in the subclusters (Zabludoff & Mulchaey 1998; Hashimoto & Oemler 2000; Balogh et al. 2000), which is consistent with the observations showing that red galaxies have a clumpy distribution around a main cluster. In the subclusters of the LCL at z ∼ 0.5, ram-pressure stripping is marginally effective (0.2 ≲ r_st/r_vir ≲ 0.3) in the non-heated ICM model, and it is ineffective in the heated ICM model (figures 3 and 4). If ram-pressure stripping is the mechanism of the pre-processing, most of the cold gas in galaxies may be removed and the star-formation activity of the galaxies may completely die out before the galaxies enter the main cluster. This may be inconsistent with the existence of many blue galaxies in main clusters at z ≲ 0.5; the observations suggest a slower decline of the star-formation rates. Thus, the heated ICM model is preferable, because ram-pressure stripping can then be ignored. In the heated ICM model, evaporation is a candidate mechanism for the pre-processing. The time-scale of the evaporation is 0.5 ≲ t_evap ≲ 10 Gyr. Since the time-scale is relatively large, the evaporation can be an alternative to strangulation, which is expected to gradually suppress the star-formation activities of galaxies (Larson et al. 1980; Bekki et al. 2002), but is highly speculative (Benson et al. 2000). For the HCL observed at z ∼ 1, ram-pressure stripping is effective in the main cluster regardless of non-gravitational heating.
As in the case of the LCL at z ∼ 0.5, the ICM of the subclusters must have been heated to avoid ram-pressure stripping in the subclusters. At higher redshifts (z ≳ 1), the effect of the disk evolution is more significant (figures 3 and 4), although it is not well established that disk galaxies with the rotation velocities we studied exist at these redshifts. At these redshifts, most of the subclusters have not been absorbed into the main cluster. As long as the ICM has not been heated non-gravitationally, the efficiency of ram-pressure stripping decreases only slowly as redshift increases. Thus, the products of ram-pressure stripping, such as galaxies with spectra reflecting rapidly declined star-formation rates, may be observed at these redshifts, although their number fraction is smaller than that at lower redshifts. In the non-heated ICM model, figure 7 shows that the evaporation is ineffective (t_cond ≳ t_H) for less massive systems such as the subclusters of the LCL (at z > 0.7) and the HCL (at z > 1.5). Thus, if the evaporation is the mechanism of the pre-processing, the star-formation rates of galaxies hardly decline in the vicinity of the clusters at z ≳ 1–2, except for galaxies whose star-formation activities are rapidly quenched by ram-pressure stripping. On the other hand, if the ICM has been heated, ram-pressure stripping does not occur in this high-redshift range (figures 3 and 4). The time-scale of the evaporation is given by t_cond(r = 0) ≲ t_evap ≲ t_cond(r_vir) and is relatively large; for example, 1 ≲ t_evap ≲ 4 Gyr at z ∼ 2 for the subcluster of the LCL (figure 6). The star-formation rates of galaxies should then decrease slowly on this time-scale. Thus, the heated and non-heated ICM models may be discriminated by future observations of the star-formation history of galaxies in the regions surrounding main clusters.
However, quantitative predictions based on semi-analytic models or numerical simulations would be required for actual discrimination. We leave this for future study.

Conclusions

We have investigated ram-pressure stripping and evaporation of disk galaxies in and around a cluster using analytical models based on a hierarchical clustering scenario. We considered the redshift evolution of the size and surface density of the disk. We showed that this evolution does not strongly affect the efficiency of ram-pressure stripping of galaxies for z ≲ 0.5, but does for z ≳ 1. We also considered the saturation of thermal conduction in detail, and found that it cannot be ignored at larger radii of a cluster, which makes the time-scale of the evaporation larger. Thus, the evaporation has the same effect as strangulation (stripping of warm gas in galactic halos) in suppressing the star-formation activities of galaxies gradually. Observations of galaxies in the vicinity of clusters at z ∼ 1 are useful to investigate whether non-gravitational heating has occurred by z ∼ 1.

Fig. 1. (a) Mass evolution of the main cluster (dotted line), the typical subcluster (solid line) of the LCL, and the group (dashed line). (b) Mass evolution of the main cluster (dotted line) and the typical subcluster (solid line) of the HCL.

Fig. 2. Time-scale of ram-pressure stripping, Δt_10−5, for the bigger galaxy in the (a) main cluster of the LCL, (b) subcluster of the LCL, (c) group, (d) main cluster of the HCL, and (e) subcluster of the HCL. The solid lines are the results of the non-heated ICM model and the dotted lines are those of the heated ICM model. The dashed line shows the rotation time of the galaxy (t_rot).

Fig. 3. Evolution of the stripping radii, r_st, normalized by the virial radii, r_vir, for the bigger galaxy in the (a) main cluster of the LCL, (b) subcluster of the LCL, (c) group, (d) main cluster of the HCL, and (e) subcluster of the HCL. Solid lines are the results of the non-heated ICM model and dotted lines are those of the heated ICM model. Bold and thin lines are those when the evolution of a galactic disk is considered and when it is not considered, respectively. For r < r_st, ram-pressure stripping is effective.

Fig. 4. Same as figure 3, but for the smaller galaxy.

Fig. 5. Evolution of the conduction times for the non-heated ICM model for the bigger galaxy in the (a) main cluster of the LCL, (b) subcluster of the LCL, (c) group, (d) main cluster of the HCL, and (e) subcluster of the HCL. The solid lines are t_cond,nsat at r = 0 (thin lines) and r = r_vir (thick lines). The dotted lines are t_cond,sat at r = 0 (thin lines) and r = r_vir (thick lines). Note that t_cond,nsat(r = 0) = t_cond,nsat(r_vir) in this figure.

Fig. 6. Same as figure 5, but for the heated ICM model.

Fig. 7. Time-scales of thermal conduction, t_cond(r = 0), for the non-heated ICM model for the bigger galaxy in the (a) main cluster of the LCL, (b) subcluster of the LCL, (c) group, (d) main cluster of the HCL, and (e) subcluster of the HCL. The solid and dashed lines show t′_cond(r = 0) = t_cond(r = 0) + t_form when z_form = 1 and 2, respectively. The dotted and dot-dashed lines show t_cond(r = 0) and the Hubble time t_H, respectively. For t′_cond ≲ t_H, thermal conduction is effective. Note that t_cond ∝ M_cold/r_gal [equations (9) and (10)]. Since both M_cold and r_gal are proportional to H(z)^−1 [equations (12) and (17)], t_cond is not influenced by the effect of the disk evolution. Thus, the upper figures are the same as figures 5a–c in Paper I. In figure 7, we also present the Hubble time, t_H. If t_cond ≪ t_H, the cold gas in a galaxy is evaporated soon after the galaxy forms.

Fig. 8. Same as figure 7, but for the heated ICM model.

We are grateful to M. Nagashima and M. Enoki for useful comments. Y. F.
was supported in part by a Grant-in-Aid from the Ministry of Education, Culture, Sports, Science and Technology (14740175).

References

Abadi, M. G., Moore, B., & Bower, R. G. 1999, MNRAS, 308, 947
Abraham, R. G., et al. 1996, ApJ, 471, 694
Babul, A., Balogh, M. L., Lewis, G. F., & Poole, G. B. 2002, MNRAS, 330, 329
Balogh, M. L., Babul, A., & Patton, D. R. 1999a, MNRAS, 307, 463
Balogh, M. L., Morris, S. L., Yee, H. K. C., Carlberg, R. G., & Ellingson, E. 1997, ApJ, 488, L75
Balogh, M. L., Morris, S. L., Yee, H. K. C., Carlberg, R. G., & Ellingson, E. 1999b, ApJ, 527, 54
Balogh, M. L., Navarro, J. F., & Morris, S. L. 2000, ApJ, 540, 113
Balsara, D., Livio, M., & O'Dea, C. P. 1994, ApJ, 437, 83
Bekki, K., Couch, W. J., & Shioya, Y. 2002, ApJ, 577, 651
Benson, A. J., Bower, R. G., Frenk, C. S., & White, S. D. M. 2000, MNRAS, 314, 557
Bond, J. R., Cole, S., Efstathiou, G., & Kaiser, N. 1991, ApJ, 379, 440
Bower, R. G. 1991, MNRAS, 248, 332
Butcher, H., & Oemler, A., Jr. 1978, ApJ, 219, 18
Butcher, H., & Oemler, A., Jr. 1984, ApJ, 285, 426
Byrd, G., & Valtonen, M. 1990, ApJ, 350, 89
Couch, W. J., Barger, A. J., Smail, I., Ellis, R. S., & Sharples, R. M. 1998, ApJ, 497, 188
Couch, W. J., & Sharples, R. M. 1987, MNRAS, 229, 423
Cowie, L. L., & McKee, C. F. 1977, ApJ, 211, 135
Cowie, L. L., & Songaila, A. 1977, Nature, 266, 501
Dressler, A. 1980, ApJ, 236, 351
Dressler, A., et al. 1997, ApJ, 490, 577
Dressler, A., Oemler, A., Jr., Butcher, H. R., & Gunn, J. E. 1994, ApJ, 430, 107
Ellingson, E., Lin, H., Yee, H. K. C., & Carlberg, R. G. 2001, ApJ, 547, 609
Fasano, G., Poggianti, B. M., Couch, W. J., Bettoni, D., Kjaergaard, P., & Moles, M. 2000, ApJ, 542, 673
Fujita, Y. 1998, ApJ, 509, 587
Fujita, Y. 2001a, ApJ, 550, 612
Fujita, Y. 2001b, ApJ, 550, L7
Fujita, Y. 2004, PASJ, 56, 29 (Paper I)
Fujita, Y., & Nagashima, M. 1999, ApJ, 516, 619
Fujita, Y., Sarazin, C. L., Nagashima, M., & Yano, T. 2002, ApJ, 577, 11
Fujita, Y., & Takahara, F. 2000, ApJ, 536, 523
Fujita, Y., Takizawa, M., Nagashima, M., & Enoki, M. 1999, PASJ, 51, L1
Gaetz, T. J., Salpeter, E. E., & Shaviv, G. 1987, ApJ, 316, 530
Goto, T., et al. 2003a, PASJ, 55, 739
Goto, T., et al. 2003b, PASJ, 55, 757
Goto, T., Yagi, M., Tanaka, M., & Okamura, S. 2004, MNRAS, 348, 515
Goto, T., Yamauchi, C., Fujita, Y., Okamura, S., Sekiguchi, M., Smail, I., Bernardi, M., & Gomez, P. L. 2003c, MNRAS, 346, 601
Gunn, J. E., & Gott, J. R., III 1972, ApJ, 176, 1
Hashimoto, Y., & Oemler, A., Jr. 2000, ApJ, 530, 652
Kodama, T., & Bower, R. G. 2001, MNRAS, 321, 18
Kodama, T., Smail, I., Nakata, F., Okamura, S., & Bower, R. G. 2001, ApJ, 562, L9
Lacey, C., & Cole, S. 1993, MNRAS, 262, 627
Larson, R. B., Tinsley, B. M., & Caldwell, C. N. 1980, ApJ, 237, 692
Lloyd-Davies, E. J., Ponman, T. J., & Cannon, D. B. 2000, MNRAS, 315, 689
Loewenstein, M. 2000, ApJ, 532, 17
Lubin, L. M. 1996, AJ, 112, 23
Margoniner, V. E., & de Carvalho, R. R. 2000, AJ, 119, 1562
Mo, H. J., Mao, S., & White, S. D. M. 1998, MNRAS, 295, 319
Moore, B., Katz, N., Lake, G., Dressler, A., & Oemler, A., Jr. 1996, Nature, 379, 613
Mori, M., & Burkert, A. 2000, ApJ, 538, 559
Mulchaey, J. S. 2000, ARA&A, 38, 289
Mulchaey, J. S., Davis, D. S., Mushotzky, R. F., & Burstein, D. 2003, ApJS, 145, 39
Oemler, A., Jr., Dressler, A., & Butcher, H. R. 1997, ApJ, 474, 561
Ponman, T. J., Cannon, D. B., & Navarro, J. F. 1999, Nature, 397, 135
Portnoy, D., Pistinner, S., & Shaviv, G. 1993, ApJS, 86, 95
Quilis, V., Moore, B., & Bower, R. 2000, Science, 288, 1617
Rakos, K. D., Odell, A. P., & Schombert, J. M. 1997, ApJ, 490, 194
Rakos, K. D., & Schombert, J. M. 1995, ApJ, 439, 47
Schindler, S. 1999, A&A, 349, 435
Smail, I., Edge, A. C., Ellis, R. S., & Blandford, R. D. 1998, MNRAS, 293, 124
Takeda, H., Nulsen, P. E. J., & Fabian, A. C. 1984, MNRAS, 208, 261
Toomre, A., & Toomre, J. 1972, ApJ, 178, 623
Tormen, G., Moscardini, L., & Yoshida, N. 2004, MNRAS, 350, 1397
van Dokkum, P. G., Franx, M., Kelson, D. D., Illingworth, G. D., Fisher, D., & Fabricant, D. 1998, ApJ, 500, 714
Voit, G. M., Bryan, G. L., Balogh, M. L., & Bower, R. G. 2002, ApJ, 576, 601
Whitmore, B. C., & Gilmore, D. M. 1991, ApJ, 367, 64
Whitmore, B. C., Gilmore, D. M., & Jones, C. 1993, ApJ, 407, 489
Yamada, M., & Fujita, Y. 2001, ApJ, 553, L145
Zabludoff, A. I., & Mulchaey, J. S. 1998, ApJ, 496, 39
[ "Spectral proper orthogonal decomposition of harmonically forced turbulent flows", "Spectral proper orthogonal decomposition of harmonically forced turbulent flows" ]
[ "Liam F Heidt \nGraduate Aerospace Laboratories\nCalifornia Institute of Technology\nCalifornia Institute of Technology\n91101CaliforniaUSA\n", "† ", "Tim Colonius \nDepartment of Mechanical and Civil Engineering\nCalifornia Institute of Technology\n91101CaliforniaUSA\n" ]
[ "Graduate Aerospace Laboratories\nCalifornia Institute of Technology\nCalifornia Institute of Technology\n91101CaliforniaUSA", "Department of Mechanical and Civil Engineering\nCalifornia Institute of Technology\n91101CaliforniaUSA" ]
[]
Many turbulent flows exhibit time-periodic statistics. These include turbomachinery flows, flows with external harmonic forcing, and the wakes of bluff bodies. Many existing techniques for identifying turbulent coherent structures, however, assume the statistics are statistically stationary. In this paper, we leverage cyclostationary analysis, an extension of the statistically stationary framework to processes with periodically varying statistics, to generalize the spectral proper orthogonal decomposition (SPOD) to the cyclostationary case. The resulting properties of the cyclostationary SPOD (CS-SPOD for short) are explored, a theoretical connection between CS-SPOD and the harmonic resolvent analysis is provided, simplifications for the low and high forcing frequency limits are discussed, and an efficient algorithm to compute CS-SPOD with SPOD-like cost is presented. We illustrate the utility of CS-SPOD using two example problems: a modified complex linearized Ginzburg-Landau model and a high-Reynolds-number turbulent jet.
arXiv:2305.05628
Introduction

Periodic and quasi-periodic forced turbulent flows are ubiquitous in engineering and nature. Such flows include those in turbomachinery, weather and climate, and flow control with harmonic actuation. In cases where the forcing is slow compared to the turbulence time scales, the statistics may be modeled as quasi-stationary (comprising a series of stationary states).
However, in many cases, the forcing is at frequencies commensurate with the turbulence, and the turbulence structure is not only modulated by, but also altered by, the forcing. In such flows, a key goal is to identify coherent structures that can be compared and contrasted with their occurrence in similar but unforced flows, but that are otherwise mutually uncorrelated. The most commonly used technique to identify coherent structures in turbulence is proper orthogonal decomposition (Lumley 1967, 1970; Aubry et al. 1988; Sirovich 1989; Aubry 1991), which represents flow data as mutually orthogonal modes whose amplitudes optimally reconstruct the correlation tensor. When applied in its typical space-only form, the modes are not coherent in time, leading many researchers to apply DMD and its variants (Rowley et al. 2009; Schmid 2010; Schmid et al. 2011). However, for statistically stationary flows, spectral POD (SPOD) (Lumley 1967, 1970; Citriniti & George 2000; Picard & Delville 2000; Towne et al. 2018) leads to an optimal reconstruction of the space-time statistics and results in modes that oscillate at a single frequency. A fundamental assumption required in both space-only POD and SPOD is statistical stationarity, meaning that the statistics are time-invariant. This assumption is appropriate for many unforced flows. However, when the flow is forced, this fundamental assumption is no longer valid, as the flow, and its statistics, are correlated with the forcing. Several works have developed extensions to SPOD to study forced turbulent flows. Franceschini et al. (2022) studied flows where a high-frequency turbulent component develops on a low-frequency periodic motion. Subsequently, a quasi-steady assumption is made, and conditionally fixed coherent structures at each phase are determined. Glezer et al. (1989) developed an extended POD method for flows with periodic statistics by summing an ensemble of time series.
However, since this method is based on POD, it retains the shortcomings present in POD. Heidt et al. (2021) applied SPOD to the residual component of the triply decomposed fields (Hussain & Reynolds 1970, 1972) to isolate the impact of the forcing on the turbulence, but still required a stationarity assumption. Clearly, SPOD and the aforementioned extensions are not sufficient to study forced turbulent flows. This motivates an extension of SPOD to these flows, which is the primary focus of this paper and which we achieve by leveraging cyclostationary analysis. Cyclostationary analysis is an extension of statistically stationary analysis to processes with periodic statistics that has been applied in a range of fields (Gardner 2018), from economics to physics and mechanics. Initially developed by Gudzenko (1959), Lebedev (1959), and Gladyshev (1963), it was then extensively studied and popularized in Hurd (1969) and Gardner (1972). The theory of second-order cyclostationary processes was further developed by Boyles & Gardner (1983) and Gardner (1986b), while Brown III (1987) and Gardner (1986c) furthered the theory of complex-valued processes. Cyclostationary analysis provides a robust statistical theory to study these processes, and tools analogous to those used to study stationary processes (e.g. the mean, cross-correlation, cross-spectral density, etc.) have been developed, which naturally collapse back to their stationary counterparts when analyzing a stationary process. Kim et al. (1996) developed cyclostationary empirical orthogonal functions (CSEOFs), which essentially extend SPOD to cyclostationary processes for one-dimensional data. Kim & North (1997) modified this technique to include multi-dimensional data by reducing the computational cost through several approximations. However, due to a lack of clarity in the literature regarding the derivation, properties, interpretation, and computation of these techniques, their use has been limited.
Furthermore, despite the aforementioned approximations, both formulations are computationally intractable for high-dimensional data. In this paper, we extend SPOD to flows with time-periodic statistics through an extension of the exact form of CSEOFs (Kim et al. 1996) to large multi-dimensional data. We hereafter refer to this method as cyclostationary SPOD (CS-SPOD for short). Methods used to model coherent structures are also considered. Specifically, we consider resolvent analysis (also known as input/output analysis), where one seeks forcing modes that give rise to the most amplified response modes with respect to their energetic gain. When applied to turbulent fluid flows, the nonlinear modal interactions are regarded as forcing terms to the equations linearized about the time-averaged turbulent mean (McKeon & Sharma 2010). Resolvent analysis has been used to study a wide range of transitional and turbulent flows (Cossu et al. 2009; McKeon & Sharma 2010; Meliga et al. 2012; Oberleithner et al. 2014; Jeun et al. 2016; Schmidt et al. 2018), among others. Towne et al. (2018) provided a theoretical connection between SPOD and resolvent analysis, showing that resolvent output modes equal SPOD modes when the resolvent forcing modes are mutually uncorrelated. This provides a theoretical basis for using resolvent analysis to develop models of the space-time statistics of a turbulent flow (Moarref et al. 2013; Towne et al. 2020; Amaral et al. 2021) and has motivated various methods (Morra et al. 2019; Pickering et al. 2021) to help whiten the forcing coefficients, thereby improving these models. Resolvent analysis was extended to flows with a time-periodic mean flow by Padovan et al. (2020) and Padovan & Rowley (2022) and is termed harmonic resolvent analysis. This leads to a system of frequency-coupled equations that provide the ability to study the first-order triadic interactions present in these time-periodic flows.
Analogous to the relationship between SPOD and resolvent analysis, in the present paper, we establish a theoretical connection between CS-SPOD and harmonic resolvent analysis. The remainder of the paper is organized as follows. Section 2 introduces and outlines the theory of cyclostationary processes and reviews an algorithm to compute their statistics. In §3, CS-SPOD is derived, its properties are explored, and an efficient computational algorithm is proposed. After validating the method in §4, we demonstrate the utility of CS-SPOD in §5. Finally, in §6, we explore the relationship between CS-SPOD and the harmonic resolvent analysis. Section 8 concludes the manuscript and summarizes the main points.

Cyclostationary theory

This section provides an overview of the theory of cyclostationary analysis and the tools used to study such processes, with a focus on fluid dynamics. Comprehensive reviews can be found in Gardner et al. (2006), Antoni (2009), and Napolitano (2019). A complex-valued scalar process $q(t)$ at time $t$ is cyclostationary in the wide sense if its mean and autocorrelation function are periodic with period $T_0$ (Gardner 1986b), giving

$E\{q(t)\} = E\{q(t + T_0)\}$, (2.1a)
$R(t, \tau) = R(t + T_0, \tau)$, (2.1b)

where $E\{\cdot\}$ is the expectation operator, $R$ is the autocorrelation function, and $\tau$ is a time delay. Since the mean and autocorrelation are time-periodic, they can be expressed as Fourier series

$E\{q(t)\} = \sum_{k_\alpha=-\infty}^{\infty} \hat{q}_{k_\alpha\alpha_0}\, e^{i2\pi(k_\alpha\alpha_0)t}$, (2.2a)
$R(t, \tau) \equiv E\{q(t + \tau/2)\, q^*(t - \tau/2)\} = \sum_{k_\alpha=-\infty}^{\infty} \hat{R}_{k_\alpha\alpha_0}(\tau)\, e^{i2\pi(k_\alpha\alpha_0)t}$, (2.2b)

where $k_\alpha \in \mathbb{Z}$ and the Fourier series coefficients are given by

$\hat{q}_{k_\alpha\alpha_0} \equiv \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} E\{q(t)\}\, e^{-i2\pi(k_\alpha\alpha_0)t}\, dt$, (2.3a)
$\hat{R}_{k_\alpha\alpha_0}(\tau) \equiv \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} R(t, \tau)\, e^{-i2\pi(k_\alpha\alpha_0)t}\, dt$, (2.3b)

where $\alpha_0 = 1/T_0$ is the fundamental cycle frequency. The Fourier coefficients $\hat{R}_{k_\alpha\alpha_0}(\tau)$ are known as the cyclic autocorrelation functions of $q(t)$ at cycle frequency $k_\alpha\alpha_0$.
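As a concrete numerical illustration of (2.3b) (our own synthetic example, not taken from the paper), the sketch below estimates the cyclic autocorrelation at zero lag for amplitude-modulated white noise $q(t) = (1 + \cos(2\pi\alpha_0 t))\,n(t)$, replacing the ensemble average by a long-time average. For unit-variance noise, the analytic values are $\hat{R}_0(0) = 1.5$, $\hat{R}_{\pm\alpha_0}(0) = 1$, and $\hat{R}_{\pm 2\alpha_0}(0) = 0.25$, while other cycle frequencies give zero.

```python
import numpy as np

rng = np.random.default_rng(0)
M, dt = 1 << 18, 1.0
alpha0 = 1.0 / 16.0                      # fundamental cycle frequency (period T0 = 16 dt)
t = np.arange(M) * dt
# amplitude-modulated white noise: cyclostationary at cycle frequencies 0, +/-alpha0, +/-2 alpha0
q = (1.0 + np.cos(2 * np.pi * alpha0 * t)) * rng.standard_normal(M)

def cyclic_autocorr0(q, t, alpha):
    """Estimate the cyclic autocorrelation at zero lag, R_alpha(0),
    as the time average of |q(t)|^2 e^{-i 2 pi alpha t} (cf. (2.3b))."""
    return np.mean(np.abs(q) ** 2 * np.exp(-2j * np.pi * alpha * t))

print(cyclic_autocorr0(q, t, 0.0))           # close to 1.5
print(cyclic_autocorr0(q, t, alpha0))        # close to 1.0
print(cyclic_autocorr0(q, t, 2 * alpha0))    # close to 0.25
print(cyclic_autocorr0(q, t, 3 * alpha0))    # close to 0: not a cycle frequency
```

A stationary process would give a non-negligible result only at $\alpha = 0$.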
If a process has non-zero $\hat{q}_{k_\alpha\alpha_0}$ and/or $\hat{R}_{k_\alpha\alpha_0}(\tau)$, it is said to exhibit first- and second-order cyclostationarity at cycle frequency $k_\alpha\alpha_0$, respectively. Wide-sense stationary processes are the special case for which $\hat{R}_{k_\alpha\alpha_0}(\tau) \neq 0$ only for $k_\alpha = 0$. If the process $q(t)$ contains a deterministic periodic component at cycle frequency $k_\alpha\alpha_0$, it exhibits both first-order and second-order (and any higher-order) cyclostationarity at cycle frequency $k_\alpha\alpha_0$. Thus, a deterministic component results in a pure first-order component and an impure (i.e. made up from components of a lower order) second-order (or higher) component (Antoni et al. 2004). Antoni et al. (2004) and Antoni (2009) showed that in physical systems it is crucial to analyze the first- and second-order components separately, where the second-order component $q'(t)$ is defined as

$q'(t) \equiv q(t) - E\{q(t)\}$, (2.4)

such that $q(t) = E\{q(t)\} + q'(t)$ and the mean $E\{q(t)\} = E\{q(t + T_0)\}$ is $T_0$-periodic. This approach makes physical sense considering that the first-order component is the deterministic tonal component that originates from the forcing, while the second-order component is a stochastic component that represents the underlying turbulence that is modified by the forcing. The sequential approach is analogous to the triple decomposition (Hussain & Reynolds 1970, 1972), where the underlying flow is separated into the first-order (phase-averaged) and second-order (turbulent/residual) components. In this manuscript, we assume that all processes analyzed using second-order analysis tools are zero-mean processes (or have had their first-order component removed). Thus, by stating that a process exhibits second-order cyclostationarity at cycle frequency $k_\alpha\alpha_0$, we mean that the process exhibits pure second-order cyclostationarity at $k_\alpha\alpha_0$.

Second-order cyclostationary analysis tools

In fluid dynamics, we are frequently interested in the correlation between two quantities.
Thus, we now consider the complex-valued process $q(\mathbf{x}, t)$ at time $t$ and independent variables (or spatial locations) $\mathbf{x}$ instead of the scalar process $q(t)$. Two processes are jointly cyclostationary if their cross-correlation function can be expressed as a Fourier series, such that

$R(\mathbf{x}, \mathbf{x}', t, \tau) \equiv E\{q(\mathbf{x}, t + \tau/2)\, q^*(\mathbf{x}', t - \tau/2)\} = \sum_{k_\alpha=-\infty}^{\infty} \hat{R}_{k_\alpha\alpha_0}(\mathbf{x}, \mathbf{x}', \tau)\, e^{i2\pi(k_\alpha\alpha_0)t}$, (2.5)

where the Fourier series coefficients are given by

$\hat{R}_{k_\alpha\alpha_0}(\mathbf{x}, \mathbf{x}', \tau) \equiv \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} R(\mathbf{x}, \mathbf{x}', t, \tau)\, e^{-i2\pi(k_\alpha\alpha_0)t}\, dt$ (2.6)

and are known as the cyclic cross-correlation functions between $q(\mathbf{x})$ and $q(\mathbf{x}')$ at cycle frequency $k_\alpha\alpha_0$, with $(\cdot)^*$ the complex conjugate of $(\cdot)$. If the only cycle frequency with a non-zero coefficient is $k_\alpha\alpha_0 = 0$, then $q(\mathbf{x})$ and $q(\mathbf{x}')$ are jointly wide-sense stationary. Similar to the common assumption in stationary analysis, we assume that all processes are separately and jointly cyclostationary. A cyclostationary process can be analyzed in the dual-frequency domain via the cyclic cross-spectral density (CCSD). The CCSD is the generalization of the cross-spectral density (CSD) to cyclostationary processes and is related to the cyclic cross-correlation function via the cyclic Wiener-Khinchin relation (Gardner & Robinson 1989)

$S_{k_\alpha\alpha_0}(\mathbf{x}, \mathbf{x}', f) = \int_{-\infty}^{\infty} \hat{R}_{k_\alpha\alpha_0}(\mathbf{x}, \mathbf{x}', \tau)\, e^{-i2\pi f\tau}\, d\tau$. (2.7)

The CCSD can also be written as

$S_{k_\alpha\alpha_0}(\mathbf{x}, \mathbf{x}', f) \equiv \lim_{\Delta f \to 0} \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} \Delta f\, E\left\{\hat{q}_{1/\Delta f}\!\left(\mathbf{x}, t, f + \tfrac{1}{2}k_\alpha\alpha_0\right) \hat{q}^*_{1/\Delta f}\!\left(\mathbf{x}', t, f - \tfrac{1}{2}k_\alpha\alpha_0\right)\right\} dt$, (2.8)

where $\hat{q}_W(\mathbf{x}, t, f) \equiv \int_{t-W/2}^{t+W/2} q(\mathbf{x}, t')\, e^{-i2\pi f t'}\, dt'$ is the short-time Fourier transform of $q(\mathbf{x}, t)$, $f$ is the spectral frequency, and $k_\alpha\alpha_0$ is the cycle frequency. This shows that the CCSD represents the time-averaged statistical correlation (with zero lag) of two spectral components at frequencies $f + \tfrac{1}{2}k_\alpha\alpha_0$ and $f - \tfrac{1}{2}k_\alpha\alpha_0$ as the bandwidth approaches zero (Napolitano 2019). For $k_\alpha = 0$, the CCSD naturally reduces to the CSD, i.e. $S_0(\mathbf{x}, \mathbf{x}', f)$.
Correlation between spectral components in cyclostationary processes is critical in the derivation of CS-SPOD; for stationary processes, the lack of correlation between spectral components is why SPOD can analyze each frequency independently. The Wigner-Ville (WV) spectrum (Martin 1982; Martin & Flandrin 1985; Antoni 2007) shows the spectral information of the process as a function of time (or phase) and, for a cyclostationary process, is given by

$WV(\mathbf{x}, t, f) = \sum_{k_\alpha=-\infty}^{\infty} S_{k_\alpha\alpha_0}(\mathbf{x}, f)\, e^{i2\pi(k_\alpha\alpha_0)t}$, (2.9)

where $S_{k_\alpha\alpha_0}(\mathbf{x}, f)$ is the cyclic power-spectral density (i.e. $S_{k_\alpha\alpha_0}(\mathbf{x}, \mathbf{x}, f)$). The WV spectrum may contain (nonphysical) negative energy densities due to its negative interaction terms (Antoni 2007; Flandrin 1998); however, Antoni (2007) showed these can be made arbitrarily small with increasing sampling time. The CCSD and WV spectrum can be integrated with respect to frequency (Gardner 1994; Randall et al. 2001), which results in the instantaneous variance and the cyclic distribution of the instantaneous variance, respectively

$m(\mathbf{x}, t) = E\{q(\mathbf{x}, t)\, q^*(\mathbf{x}, t)\} = \int_{-\infty}^{\infty} WV(\mathbf{x}, t, f)\, df$, (2.10a)
$\hat{m}_{k_\alpha\alpha_0}(\mathbf{x}) = \int_{-\infty}^{\infty} S_{k_\alpha\alpha_0}(\mathbf{x}, f)\, df$, (2.10b)

where $m(\mathbf{x}, t)$ is the mean variance of the process and $\hat{m}_{k_\alpha\alpha_0}(\mathbf{x})$ quantifies the mean-variance contribution from each cycle frequency $k_\alpha\alpha_0$. So far, we have assumed that the cycle frequencies are known, but this may not always be the case. To determine the cycle frequencies present in the system, all possible cycle frequencies $\alpha$ are explored by rewriting the CCSD as

$S(\mathbf{x}, \mathbf{x}', \alpha, f) \equiv \lim_{\Delta f \to 0} \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} \Delta f\, E\left\{\hat{q}_{1/\Delta f}\!\left(\mathbf{x}, t, f + \tfrac{\alpha}{2}\right) \hat{q}^*_{1/\Delta f}\!\left(\mathbf{x}', t, f - \tfrac{\alpha}{2}\right)\right\} dt$. (2.11)

A process exhibits cyclostationarity at cycle frequency $\alpha$ when $S(\mathbf{x}, \mathbf{x}', \alpha, f) \neq 0$. The range of possible cycle frequencies is $\alpha \in [-0.5/\Delta t,\, 0.5/\Delta t]$, which must be searched with a resolution $\Delta\alpha = 1/T$ (Gardner 1986a) to ensure all cycle frequencies present are captured.
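To make relations (2.9)-(2.10) concrete, the toy sketch below (an illustrative construction of our own, not a computation from the paper) builds a WV spectrum from prescribed cyclic PSDs $S_0(f) = e^{-f^2}$ and $S_{\pm\alpha_0}(f) = 0.4\,e^{-f^2}$, then checks that the instantaneous variance $m(t) = \int WV(t,f)\,df$ is $T_0$-periodic and equals the Fourier synthesis of the cyclic variances $\hat{m}_{k_\alpha\alpha_0}$.

```python
import numpy as np

alpha0 = 0.1                              # fundamental cycle frequency, T0 = 10
f = np.linspace(-5.0, 5.0, 2001)
df = f[1] - f[0]

# prescribed toy cyclic PSDs S_{k alpha0}(f) for k in {-1, 0, 1}
S = {0: np.exp(-f ** 2), 1: 0.4 * np.exp(-f ** 2), -1: 0.4 * np.exp(-f ** 2)}

n_per = 200
t = np.arange(2 * n_per) * (1.0 / alpha0 / n_per)   # two periods, phase-locked sampling

# WV spectrum (2.9): cyclic PSDs modulated at their cycle frequencies
WV = sum(np.real(Sk[None, :] * np.exp(2j * np.pi * k * alpha0 * t)[:, None])
         for k, Sk in S.items())

# instantaneous variance (2.10a) and its cyclic distribution (2.10b)
m_t = WV.sum(axis=1) * df
m_k = {k: Sk.sum() * df for k, Sk in S.items()}
m_synth = sum(np.real(mk * np.exp(2j * np.pi * k * alpha0 * t)) for k, mk in m_k.items())
```

For this choice of cyclic PSDs, $WV(t,f) = (1 + 0.8\cos 2\pi\alpha_0 t)\,e^{-f^2} \ge 0$, so no negative interaction terms appear; a general cyclostationary process need not be this benign.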
For cyclostationary processes, because the cross-correlation function is periodic, the spectral correlation becomes discrete in $\alpha$, such that

$S(\mathbf{x}, \mathbf{x}', \alpha, f) = \sum_{k_\alpha=-\infty}^{\infty} S_{k_\alpha\alpha_0}(\mathbf{x}, \mathbf{x}', f)\, \delta(\alpha - k_\alpha\alpha_0)$. (2.12)

The cyclic distribution of the instantaneous variance is rewritten as

$\hat{m}(\mathbf{x}, \alpha) = \int_{-\infty}^{\infty} S(\mathbf{x}, \alpha, f)\, df$, (2.13)

which similarly becomes discrete for a cyclostationary process. We must clarify one point of terminology. Since stationary processes are a subset of cyclostationary processes, all stationary processes are also cyclostationary. We use the most restrictive description, i.e. stationary processes are referred to as stationary and not cyclostationary. By stating that a process exhibits cyclostationarity, we imply that at least one cycle frequency $k_\alpha\alpha_0$ with $k_\alpha \neq 0$ exists.

Cycloergodicity

In fluid dynamics, it is laborious to require multiple realizations of a single process, and we often invoke ergodicity in stationary processes to equate the ensemble average with a long-time average of a single realization. We can similarly leverage the concept of cycloergodicity, as described in Boyles & Gardner (1983), allowing us to replace the expectation operator with a suitable time average, specifically the cycle-averaging operator (Braun 1975)

$E\{q(\mathbf{x}, t)\} = \lim_{P \to \infty} \frac{1}{P} \sum_{p=0}^{P-1} q(\mathbf{x}, t + pT_0)$, (2.14)

which yields the (periodic) mean. The cycle-averaging operator is used when the data are phase-locked to the forcing (i.e. sampled at an integer number of samples per cycle) and is identical to the phase average used in the triple decomposition (Hussain & Reynolds 1970). As the cycle-average operator is periodic, it can be expressed as a Poisson sum

$E\{q(\mathbf{x}, t)\} = \sum_{k_\alpha=-\infty}^{\infty} e^{i2\pi(k_\alpha\alpha_0)t} \lim_{s \to \infty} \frac{1}{s} \int_{-s/2}^{s/2} q(\mathbf{x}, t)\, e^{-i2\pi(k_\alpha\alpha_0)t}\, dt$. (2.15)

This definition is employed for non-phase-locked data or to filter out first-order components which are assumed to be statistical noise (Franceschini et al. 2022; Sonnenberger et al.
2000) and is identical to the harmonic-averaging procedure used by Mezić (2013) and Arbabi & Mezić (2017) when restricted to a temporally periodic average.

Computing the CCSD

There are practical considerations and nuances to computing the CCSD from discrete data that we discuss in this section. Let the vector $\mathbf{q}_k \in \mathbb{R}^N$ represent a flow snapshot, i.e. the instantaneous state of the process $q(\mathbf{x}, t)$ at time $t_k$ on a set of points in a spatial domain $\Omega$. The length of the vector, $N$, is equal to the number of spatial points multiplied by the number of state variables. We assume that this data is available for $M$ equispaced snapshots, with $t_{k+1} = t_k + \Delta t$. In addition, we assume that the data is phase-locked, meaning that there is an integer number of time steps in the fundamental period $T_0$, and define $N_\theta = T_0/\Delta t$.† Adopting notation similar to Towne et al. (2018), we estimate the CCSD tensor $S(\mathbf{x}, \mathbf{x}', \alpha, f)$, which represents the spectral correlation between $q(\mathbf{x}, t)$ and $q(\mathbf{x}', t)$ at cycle frequency $\alpha$ and spectral frequency $f$. For a cyclostationary process, $S(\mathbf{x}, \mathbf{x}', \alpha, f)$ is non-zero for $\alpha = k_\alpha\alpha_0$ only, and is therefore written as $S_{k_\alpha\alpha_0}(\mathbf{x}, \mathbf{x}', f)$ or, equivalently, $S_{k_\alpha/T_0}(\mathbf{x}, \mathbf{x}', f)$. The space-time data can now be represented as the data matrix $\mathbf{Q}$ and time vector $\mathbf{T}$,

$\mathbf{Q} = [\mathbf{q}_1, \mathbf{q}_2, \cdots, \mathbf{q}_M] \in \mathbb{R}^{N \times M}$, $\quad \mathbf{T} = [t_1, t_2, \cdots, t_M] \in \mathbb{R}^M$. (2.16, 2.17)

† This restriction simplifies and reduces the computational expense of the calculations but can in principle be relaxed by using the Poisson-sum time average as in (2.15) and the non-computationally-efficient form of CS-SPOD shown in algorithm 2. Alternatively, non-phase-locked data can be temporally interpolated to be phase-locked.
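For phase-locked data, the cycle-averaging operator (2.14) reduces to reshaping by period and averaging over cycles. The minimal sketch below (synthetic data and variable names of our own choosing) extracts the periodic first-order mean and the zero-mean second-order residual of (2.4):

```python
import numpy as np

rng = np.random.default_rng(2)
n_theta, n_cycles = 32, 4000            # phase-locked: N_theta = 32 samples per period
mean_true = 2.0 * np.sin(2 * np.pi * np.arange(n_theta) / n_theta)  # periodic mean
q = np.tile(mean_true, n_cycles) + rng.standard_normal(n_theta * n_cycles)

# cycle-averaging operator (2.14): average all samples sharing the same phase
mean_hat = q.reshape(n_cycles, n_theta).mean(axis=0)

# second-order (residual) component (2.4): q' = q - E{q}
q2 = q - np.tile(mean_hat, n_cycles)
```

By construction, the residual has exactly zero phase average, mirroring the separation into deterministic tonal and stochastic turbulent components described above.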
Although we have formulas for the CCSD, as seen in (2.8) and (2.11), they do not yield a consistent estimator of the CCSD: the variance of the estimate does not tend to zero as the amount of available data becomes large (Jenkins 1968; Antoni 2007; Napolitano 2019). Instead, the variance of the estimate is equal to the squared value of the estimate itself. A consistent estimate of the CCSD can be obtained by employing an appropriate averaging technique. The most common technique is the time-averaged Welch method (Welch 1967), owing to its high computational efficiency. The Welch method averages a number of CCSDs to obtain a consistent estimate of the CCSD. From (2.11), we see that to compute the CCSD, the Welch procedure is performed on two frequency-shifted versions of the data, given by

$\mathbf{Q}_{\pm\alpha/2} = \mathbf{Q}\, e^{-i2\pi(\pm\alpha/2)\mathbf{T}} = [\mathbf{q}_{1,\pm\alpha/2}, \mathbf{q}_{2,\pm\alpha/2}, \cdots, \mathbf{q}_{M,\pm\alpha/2}]$, (2.18)

where $\mathbf{q}_{k,\pm\alpha/2} = \mathbf{q}_k\, e^{-i2\pi(\pm\alpha/2)t_k}$ are the $\pm\tfrac{1}{2}\alpha$ frequency-shifted snapshots. Next, we split the two frequency-shifted data matrices into a number of, possibly overlapping, blocks. Each block is written as

$\mathbf{Q}^{(n)}_{\pm\alpha/2} = [\mathbf{q}^{(n)}_{1,\pm\alpha/2}, \mathbf{q}^{(n)}_{2,\pm\alpha/2}, \cdots, \mathbf{q}^{(n)}_{N_f,\pm\alpha/2}] \in \mathbb{C}^{N \times N_f}$, (2.19)

where $N_f$ is the number of snapshots in each block and the $k$th entry of the $n$th block is $\mathbf{q}^{(n)}_{k,\pm\alpha/2} = \mathbf{q}_{k+(n-1)(N_f-N_0),\pm\alpha/2}$. The total number of blocks is $N_b = \lfloor (M - N_0)/(N_f - N_0) \rfloor$, where $\lfloor\cdot\rfloor$ is the floor operator and $N_0$ is the number of snapshots by which consecutive blocks overlap. Under the cycloergodicity hypothesis, each of these blocks is considered to be a single realization in an ensemble of realizations of the cyclostationary flow.
Subsequently, the DFT of each block of both frequency-shifted matrices is computed using a window $w$, giving

$\hat{\mathbf{Q}}^{(n)}_{\pm\alpha/2} = [\hat{\mathbf{q}}^{(n)}_{1,\pm\alpha/2}, \hat{\mathbf{q}}^{(n)}_{2,\pm\alpha/2}, \cdots, \hat{\mathbf{q}}^{(n)}_{N_f,\pm\alpha/2}]$, (2.20)

where

$\hat{\mathbf{q}}^{(n)}_{k,\pm\alpha/2} = \frac{1}{N_f} \sum_{j=1}^{N_f} w_j\, \mathbf{q}^{(n)}_{j,\pm\alpha/2}\, e^{-i2\pi(k-1)[(j-1)/N_f]}$, (2.21)

for $k = 1, \cdots, N_f$ and $n = 1, \cdots, N_b$, and $\hat{\mathbf{q}}^{(n)}_{k,\pm\alpha/2}$ is the $k$th Fourier component of the $n$th block of the $\pm\alpha/2$ frequency-shifted data matrix, i.e. at frequency $f_{k,\pm\alpha/2}$. The nodal values $w_j$ of a window function are utilized to mitigate spectral and cyclic leakage arising from the non-periodicity of the data within each block. Due to the $\pm\alpha/2$ frequency shift applied, the $k$th discrete frequencies of the $\pm\alpha/2$ frequency-shifted data matrices represent a frequency of

$f_{k,\pm\alpha/2} = f_k \pm \frac{\alpha}{2}$, with $f_k = \begin{cases} \dfrac{k-1}{N_f \Delta t} & \text{for } k \le N_f/2, \\[4pt] \dfrac{k-1-N_f}{N_f \Delta t} & \text{for } k > N_f/2. \end{cases}$ (2.22)

This shows that the frequency components $f_k + \alpha/2$ and $f_k - \alpha/2$, as required by (2.11), have the same index $k$ in the shifted frequency vectors $f_{k,\pm\alpha/2}$. The CCSD tensor $S(\mathbf{x}, \mathbf{x}', \alpha, f)$ is then estimated at cycle frequency $\alpha$ and spectral frequency $f_k$ by

$\mathbf{S}_{f_k,\alpha} = \frac{\Delta t}{s N_b} \sum_{n=1}^{N_b} \hat{\mathbf{q}}^{(n)}_{k,\alpha/2}\, (\hat{\mathbf{q}}^{(n)}_{k,-\alpha/2})^*$, (2.23)

where $s = \sum_{j=1}^{N_f} w_j^2$ is the normalization constant that accounts for the difference in power between the windowed and non-windowed signal. This is written compactly by arranging the Fourier coefficients at the same index $k$ into new frequency-data matrices

$\hat{\mathbf{Q}}_{f_k,\pm\alpha/2} = \sqrt{\kappa}\, [\hat{\mathbf{q}}^{(1)}_{k,\pm\alpha/2}, \hat{\mathbf{q}}^{(2)}_{k,\pm\alpha/2}, \cdots, \hat{\mathbf{q}}^{(N_b-1)}_{k,\pm\alpha/2}, \hat{\mathbf{q}}^{(N_b)}_{k,\pm\alpha/2}] \in \mathbb{C}^{N \times N_b}$, (2.24)

where $\kappa = \frac{\Delta t}{s N_b}$. $\mathbf{S}_{f_k,\alpha}$ is then estimated by

$\mathbf{S}_{f_k,\alpha} = \hat{\mathbf{Q}}_{f_k,\alpha/2}\, (\hat{\mathbf{Q}}_{f_k,-\alpha/2})^*$. (2.25)

This estimate converges, i.e. the bias and variance become zero, as $N_b$ and $N_f$ are increased together (Welch 1967; Bendat & Piersol 2011; Antoni 2007). The algorithm to compute the CCSD from data snapshots is outlined in algorithm 1, from which all other second-order cyclostationary analysis tools can be computed.
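The estimator (2.18)-(2.25) can be sketched for a scalar series as follows. This is a minimal illustration under our own assumptions (function name, defaults, and the synthetic test signal are ours, not the paper's); for amplitude-modulated white noise with unit-variance carrier and $\Delta t = 1$, the analytic targets are $S_0(f) \approx 1.5$ and $S_{\alpha_0}(f) \approx 1$, flat in $f$:

```python
import numpy as np

def ccsd_welch(q, dt, alpha, nfft=256, novl=None, window=None):
    """Welch-type estimate of the cyclic PSD S_alpha(f) of a scalar series q,
    formed from the two +/- alpha/2 frequency-shifted series (cf. (2.18)-(2.25)).
    Returns (f, S_alpha) on the two-sided DFT frequency grid."""
    if novl is None:
        novl = int(round(0.67 * nfft))               # 67% overlap (Antoni 2007)
    if window is None:
        window = np.hanning(nfft)
    t = np.arange(len(q)) * dt
    qp = q * np.exp(-2j * np.pi * (+alpha / 2) * t)  # +alpha/2 shifted series
    qm = q * np.exp(-2j * np.pi * (-alpha / 2) * t)  # -alpha/2 shifted series
    hop = nfft - novl
    starts = np.arange(0, len(q) - nfft + 1, hop)
    s = np.sum(window ** 2)                          # window power normalization
    acc = np.zeros(nfft, dtype=complex)
    for j in starts:                                 # average over block realizations
        Xp = np.fft.fft(window * qp[j:j + nfft])
        Xm = np.fft.fft(window * qm[j:j + nfft])
        acc += Xp * np.conj(Xm)
    S = dt * acc / (s * len(starts))
    return np.fft.fftfreq(nfft, dt), S

rng = np.random.default_rng(0)
M, dt, alpha0 = 1 << 17, 1.0, 1.0 / 16.0
t = np.arange(M) * dt
q = (1.0 + np.cos(2 * np.pi * alpha0 * t)) * rng.standard_normal(M)
f, S0 = ccsd_welch(q, dt, 0.0)                       # ordinary Welch CSD (k_alpha = 0)
_, Sa = ccsd_welch(q, dt, alpha0)                    # cyclic PSD at alpha0
_, Sn = ccsd_welch(rng.standard_normal(M), dt, alpha0)   # stationary noise: ~0
```

Note that the frequency shift is applied with the global time vector before blocking, so the modulation phase of every block is handled consistently; this is precisely why the algorithm shifts the full data matrix first.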
For efficient memory management, variables assigned with '←' can be deleted after each iteration of their respective loop. Similar to the Welch estimate of the CSD, the estimate of the CCSD suffers from the standard bias-variance trade-off, and caution should be taken to ensure sufficiently converged statistics. In the CCSD, a phenomenon similar to spectral leakage, called cyclic leakage (Gardner 1986a), is present and results in erroneous cycle frequencies. Using 67% overlap with a Hanning or Hamming window results in excellent cyclic-leakage minimization and variance reduction (Antoni 2007). To reduce the variance sufficiently, $T\Delta f \gg 1$ is required (Antoni 2009).

Algorithm 1: Algorithm to compute the CCSD using frequency-shifted data matrices.
1: for each data block $n = 1, 2, \cdots, N_b$ do
2:   compute the frequency-shifted block data matrices $\mathbf{Q}^{(n)}_{\pm\alpha/2} \leftarrow [\mathbf{q}_{1+(n-1)(N_f-N_0),\pm\alpha/2}, \mathbf{q}_{2+(n-1)(N_f-N_0),\pm\alpha/2}, \cdots, \mathbf{q}_{N_f+(n-1)(N_f-N_0),\pm\alpha/2}]$
3:   using a (windowed) fast Fourier transform, calculate and store the row-wise DFT of each frequency-shifted block data matrix, $\hat{\mathbf{Q}}^{(n)}_{\pm\alpha/2} = \mathrm{FFT}(\mathbf{Q}^{(n)}_{\pm\alpha/2}) = [\hat{\mathbf{q}}^{(n)}_{1,\pm\alpha/2}, \hat{\mathbf{q}}^{(n)}_{2,\pm\alpha/2}, \cdots, \hat{\mathbf{q}}^{(n)}_{N_f,\pm\alpha/2}]$; the column $\hat{\mathbf{q}}^{(n)}_{k,\pm\alpha/2}$ contains the $n$th realization of the Fourier mode at the $k$th discrete frequency $f_{k,\pm\alpha/2}$
4: end for
5: for each frequency $k = 1, 2, \cdots, N_f$ (or some subset of interest) do
6:   assemble the matrices of Fourier realizations from the $k$th column of each $\hat{\mathbf{Q}}^{(n)}_{\pm\alpha/2}$: $\hat{\mathbf{Q}}_{f_k,\pm\alpha/2} \leftarrow \sqrt{\kappa}\, [\hat{\mathbf{q}}^{(1)}_{k,\pm\alpha/2}, \hat{\mathbf{q}}^{(2)}_{k,\pm\alpha/2}, \cdots, \hat{\mathbf{q}}^{(N_b)}_{k,\pm\alpha/2}]$
7:   compute the CCSD at spectral frequency $f_k$ and cycle frequency $\alpha$: $\mathbf{S}_{f_k,\alpha} = \hat{\mathbf{Q}}_{f_k,\alpha/2}\, (\hat{\mathbf{Q}}_{f_k,-\alpha/2})^*$
8: end for

The objective of CS-SPOD is to find deterministic functions that best approximate, on average, a zero-mean stochastic process. For clarity, we derive CS-SPOD using an approach and notation analogous to the SPOD derivation presented in Towne et al. (2018) and refer the reader to Brereton & Kodal (1992), Towne et al.
(2018), and Schmidt & Colonius (2020) for detailed discussions of POD and SPOD. Like SPOD, we seek deterministic modes that depend on both space and time, such that we can optimally decompose the space-time statistics of the flow. Thus, we assume that each realization of the stochastic process belongs to a Hilbert space with inner product

$\langle \mathbf{q}_1, \mathbf{q}_2 \rangle_{x,t} = \int_{-\infty}^{\infty} \int_{\Omega} \mathbf{q}_2^*(\mathbf{x}, t)\, \mathbf{W}(\mathbf{x})\, \mathbf{q}_1(\mathbf{x}, t)\, d\mathbf{x}\, dt$, (3.1)

where $\mathbf{q}_1(\mathbf{x}, t)$, $\mathbf{q}_2(\mathbf{x}, t)$ are two realizations of the flow, $\mathbf{W}(\mathbf{x})$ is a positive-definite weighting tensor, and $\Omega$ denotes the spatial domain of interest. We then seek to maximize

$\lambda = \dfrac{E\{|\langle \mathbf{q}(\mathbf{x}, t), \boldsymbol{\phi}(\mathbf{x}, t) \rangle_{x,t}|^2\}}{\langle \boldsymbol{\phi}(\mathbf{x}, t), \boldsymbol{\phi}(\mathbf{x}, t) \rangle_{x,t}}$, (3.2)

which leads to

$\int_{-\infty}^{\infty} \int_{\Omega} \mathbf{R}(\mathbf{x}, \mathbf{x}', t, t')\, \mathbf{W}(\mathbf{x}')\, \boldsymbol{\phi}(\mathbf{x}', t')\, d\mathbf{x}'\, dt' = \lambda\, \boldsymbol{\phi}(\mathbf{x}, t)$, (3.3)

where $\mathbf{R}(\mathbf{x}, \mathbf{x}', t, t') \equiv E\{\mathbf{q}(\mathbf{x}, t)\, \mathbf{q}^*(\mathbf{x}', t')\}$ is the two-point space-time correlation tensor. Up to this stage, no assumption about the flow has been made, and the derivation is therefore identical to that of SPOD (Lumley 1967, 1970; Towne et al. 2018). Since cyclostationary flows persist indefinitely, they have infinite energy in the space-time norm (3.1). Consequently, the eigenmodes of (3.3) do not possess any of the useful properties relied upon in POD or SPOD. To solve this, a new eigenvalue decomposition is obtained in the spectral domain, from which modes with the desired properties are determined. We employ a solution ansatz of

$\boldsymbol{\phi}(\mathbf{x}, t) = \sum_{m \in A_m} \boldsymbol{\psi}(\mathbf{x}, \gamma + m\alpha_0)\, e^{i2\pi(\gamma + m\alpha_0)t}$. (3.4)

The set of frequencies present in the solution ansatz $\boldsymbol{\phi}(\mathbf{x}, t)$ is called the $\gamma$ set of solution frequencies, $\Omega_\gamma = \{\cdots, \gamma - 2\alpha_0, \gamma - \alpha_0, \gamma, \gamma + \alpha_0, \gamma + 2\alpha_0, \cdots\}$. In appendix A, we then use the theory from §2 to derive the infinite-dimensional CS-SPOD eigenvalue problem, written compactly as

$\int_{\Omega} \mathbf{S}(\mathbf{x}, \mathbf{x}', \gamma)\, \boldsymbol{\mathcal{W}}(\mathbf{x}')\, \boldsymbol{\Psi}(\mathbf{x}', \gamma)\, d\mathbf{x}' = \lambda\, \boldsymbol{\Psi}(\mathbf{x}, \gamma)$, (3.5)

where (spatial arguments $(\mathbf{x}, \mathbf{x}')$ are suppressed inside the matrix for compactness)

$\mathbf{S}(\mathbf{x}, \mathbf{x}', \gamma) = \begin{bmatrix} \ddots & \vdots & \vdots & \vdots & \\ \cdots & \mathbf{S}_0(\gamma - \alpha_0) & \mathbf{S}_{-\alpha_0}(\gamma - \tfrac{\alpha_0}{2}) & \mathbf{S}_{-2\alpha_0}(\gamma) & \cdots \\ \cdots & \mathbf{S}_{\alpha_0}(\gamma - \tfrac{\alpha_0}{2}) & \mathbf{S}_0(\gamma) & \mathbf{S}_{-\alpha_0}(\gamma + \tfrac{\alpha_0}{2}) & \cdots \\ \cdots & \mathbf{S}_{2\alpha_0}(\gamma) & \mathbf{S}_{\alpha_0}(\gamma + \tfrac{\alpha_0}{2}) & \mathbf{S}_0(\gamma + \alpha_0) & \cdots \\ & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$, (3.6a)

$\boldsymbol{\mathcal{W}}(\mathbf{x}) = \mathrm{diag}\left(\cdots, \mathbf{W}(\mathbf{x}), \mathbf{W}(\mathbf{x}), \mathbf{W}(\mathbf{x}), \cdots\right)$, (3.6b)

$\boldsymbol{\Psi}(\mathbf{x}, \gamma) = \left[\cdots,\ \boldsymbol{\psi}(\mathbf{x}, \gamma - \alpha_0)^T,\ \boldsymbol{\psi}(\mathbf{x}, \gamma)^T,\ \boldsymbol{\psi}(\mathbf{x}, \gamma + \alpha_0)^T,\ \cdots\right]^T$. (3.6c)

$\mathbf{S}(\mathbf{x}, \mathbf{x}', \gamma)$ is the CS-SPOD decomposition tensor, $\boldsymbol{\mathcal{W}}(\mathbf{x})$ is the concatenated weight tensor, and the $\boldsymbol{\Psi}(\mathbf{x}, \gamma)$ are the CS-SPOD eigenvectors. The CS-SPOD eigenvectors $\boldsymbol{\phi}(\mathbf{x}, t)$ have Fourier series coefficients, at each $f \in \Omega_\gamma$, of $\boldsymbol{\psi}(\mathbf{x}, f)$. This coupling of frequencies in CS-SPOD occurs because frequency components separated by $n\alpha_0$ are correlated with each other, as shown in (2.8). In contrast, stationary processes do not exhibit correlation between different frequencies, and thus each frequency can be solved independently via SPOD. Due to this coupling, CS-SPOD performed at $\gamma$ and at $\gamma + \alpha_0$ solves the same problem, i.e. $\Omega_\gamma = \Omega_{\gamma + z\alpha_0}$ with $z \in \mathbb{Z}$, meaning that CS-SPOD only has unique solutions for the frequency sets corresponding to $\gamma \in \Gamma$, where $\Gamma = (-\alpha_0/2, \alpha_0/2]$. In practice, the infinite-dimensional problem is not solved, and we restrict the solution frequencies by limiting $A_m$ to $a_1$ harmonics, giving $A_m = \{-a_1, -a_1 + 1, \cdots, 0, \cdots, a_1 - 1, a_1\}$ and $\Omega_\gamma = \{-a_1\alpha_0 + \gamma, (-a_1 + 1)\alpha_0 + \gamma, \cdots, \gamma, \cdots, (a_1 - 1)\alpha_0 + \gamma, a_1\alpha_0 + \gamma\}$. In addition, the flow may only exhibit cyclostationarity at $a_2$ harmonics of the fundamental cycle frequency, giving $A_n = \{-a_2, -a_2 + 1, \cdots, 0, \cdots, a_2 - 1, a_2\}$. We employ identical notation to restrict the harmonics used to compute various second-order tools, such as the Wigner-Ville spectrum. These limits result in $2a_1 + 1$ coupled equations, i.e. a $(2a_1 + 1) \times (2a_1 + 1)$ block eigensystem that is $(2a_2 + 1)$-banded-block-diagonal.
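The structure of the truncated operator can be sketched numerically. The snippet below (illustrative dummy blocks, not data or code from the paper) assembles a truncated CS-SPOD decomposition matrix for $a_1 = 2$, $a_2 = 1$, using the CCSD symmetry $S_{-\alpha}(\mathbf{x}, \mathbf{x}', f) = S_{\alpha}(\mathbf{x}', \mathbf{x}, f)^*$ that follows from (2.8); the assembled matrix is Hermitian and banded-block-diagonal as described.

```python
import numpy as np

rng = np.random.default_rng(1)
N, a1, a2 = 3, 2, 1                       # state size and truncation limits
alpha0, gamma = 1.0, 0.3

# hypothetical CCSD blocks: generate S_{k alpha0}(f) for k >= 0; negative k follows
# from the CCSD symmetry S_{-alpha}(x, x', f) = S_{alpha}(x', x, f)^*
cache = {}
def S_block(k, f):
    if k < 0:
        return S_block(-k, f).conj().T
    key = (k, round(f, 9))
    if key not in cache:
        B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
        if k == 0:                        # S_0(f) is an ordinary CSD matrix: Hermitian
            B = B + B.conj().T
        cache[key] = B
    return cache[key]

nb = 2 * a1 + 1
Sg = np.zeros((nb * N, nb * N), dtype=complex)
for m in range(-a1, a1 + 1):
    for n in range(-a1, a1 + 1):
        if abs(m - n) <= a2:              # flow cyclostationary only up to a2 harmonics
            fc = gamma + 0.5 * (m + n) * alpha0   # centre frequency of this block
            Sg[(m + a1) * N:(m + a1 + 1) * N,
               (n + a1) * N:(n + a1 + 1) * N] = S_block(m - n, fc)
```

Block $(m, n)$ holds $S_{(m-n)\alpha_0}$ evaluated at the centre frequency $\gamma + \tfrac{1}{2}(m+n)\alpha_0$, reproducing the banded pattern of the truncated operator; blocks with $|m - n| > a_2$ are zero.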
In practice, $a_1$ should be chosen such that $\Omega_\gamma$ encompasses all frequencies of interest, $a_2$ should be chosen to encompass all the cycle frequencies present in the flow, and $a_2 < a_1$. An example for $a_1 = 2$, $a_2 = 1$ is (for compactness, we drop the explicit dependence on $\mathbf{x}$)

$\mathbf{S}(\gamma) = \begin{bmatrix} \mathbf{S}_0(\gamma - 2\alpha_0) & \mathbf{S}_{-\alpha_0}(\gamma - \tfrac{3}{2}\alpha_0) & 0 & 0 & 0 \\ \mathbf{S}_{\alpha_0}(\gamma - \tfrac{3}{2}\alpha_0) & \mathbf{S}_0(\gamma - \alpha_0) & \mathbf{S}_{-\alpha_0}(\gamma - \tfrac{1}{2}\alpha_0) & 0 & 0 \\ 0 & \mathbf{S}_{\alpha_0}(\gamma - \tfrac{1}{2}\alpha_0) & \mathbf{S}_0(\gamma) & \mathbf{S}_{-\alpha_0}(\gamma + \tfrac{1}{2}\alpha_0) & 0 \\ 0 & 0 & \mathbf{S}_{\alpha_0}(\gamma + \tfrac{1}{2}\alpha_0) & \mathbf{S}_0(\gamma + \alpha_0) & \mathbf{S}_{-\alpha_0}(\gamma + \tfrac{3}{2}\alpha_0) \\ 0 & 0 & 0 & \mathbf{S}_{\alpha_0}(\gamma + \tfrac{3}{2}\alpha_0) & \mathbf{S}_0(\gamma + 2\alpha_0) \end{bmatrix}$. (3.7)

In the limiting case $a_2 = 0$, we obtain a block-diagonal CS-SPOD decomposition matrix in which each diagonal block is the standard SPOD eigenvalue problem.

CS-SPOD properties

Since $\mathbf{S}(\mathbf{x}, \mathbf{x}', \gamma)$ is compact and finite, Hilbert-Schmidt theory guarantees a number of properties analogous to those of POD and SPOD (Lumley 1967, 1970; Towne et al. 2018). There is a countably infinite set of eigenfunctions $\boldsymbol{\Psi}_j(\mathbf{x}, \gamma)$ at each unique frequency set $\Omega_\gamma$ that are orthogonal to all other modes at the same frequency set $\Omega_\gamma$ in the spatial inner product $\langle \mathbf{q}_1, \mathbf{q}_2 \rangle_x = \int_{\Omega} \mathbf{q}_2^*(\mathbf{x}, t)\, \mathbf{W}(\mathbf{x})\, \mathbf{q}_1(\mathbf{x}, t)\, d\mathbf{x}$, i.e. $\langle \boldsymbol{\Psi}_j(\mathbf{x}, \gamma), \boldsymbol{\Psi}_k(\mathbf{x}, \gamma) \rangle_x = \delta_{j,k}$. The following concatenated vector of each flow realization at the solution frequencies is optimally expanded as

$\hat{\mathbf{Q}}(\mathbf{x}, \gamma) = \left[\cdots,\ \hat{\mathbf{q}}(\mathbf{x}, \gamma - \alpha_0)^T,\ \hat{\mathbf{q}}(\mathbf{x}, \gamma)^T,\ \hat{\mathbf{q}}(\mathbf{x}, \gamma + \alpha_0)^T,\ \cdots\right]^T$, $\quad \hat{\mathbf{Q}}(\mathbf{x}, \gamma) = \sum_{j=1}^{\infty} a_j(\gamma)\, \boldsymbol{\Psi}_j(\mathbf{x}, \gamma)$, (3.8a, b)

where $\hat{\mathbf{q}}(\mathbf{x}, f)$ is the temporal Fourier decomposition of each flow realization $\mathbf{q}(\mathbf{x}, t)$ at frequency $f$ and $a_j(\gamma) = \langle \hat{\mathbf{Q}}(\mathbf{x}, \gamma), \boldsymbol{\Psi}_j(\mathbf{x}, \gamma) \rangle_x$ are the expansion coefficients, which are uncorrelated, i.e. $E\{a_j(\gamma)\, a_k^*(\gamma)\} = \lambda_j(\gamma)\, \delta_{j,k}$. $\mathbf{S}(\mathbf{x}, \mathbf{x}', \gamma)$ is positive semi-definite, meaning that it has the unique diagonal representation

$\mathbf{S}(\mathbf{x}, \mathbf{x}', \gamma) = \sum_{j=1}^{\infty} \lambda_j(\gamma)\, \boldsymbol{\Psi}_j(\mathbf{x}, \gamma)\, \boldsymbol{\Psi}_j^*(\mathbf{x}', \gamma)$, (3.9)

in which the CS-SPOD modes are its principal components.
This shows that CS-SPOD determines the modes that optimally reconstruct the second-order statistics, one frequency set $\Omega_\gamma$ at a time. CS-SPOD modes are optimal in terms of their total energy reconstruction of $\mathbf{S}(\mathbf{x}, \mathbf{x}', \gamma)$ only. Thus, although each of the CCSDs present in $\mathbf{S}(\mathbf{x}, \mathbf{x}', \gamma)$ has a diagonal representation, the individual components of $\boldsymbol{\Psi}_j(\mathbf{x}, \gamma)$ are, in general, not orthogonal in the spatial norm, i.e. $\langle \boldsymbol{\psi}_j(\mathbf{x}, f), \boldsymbol{\psi}_k(\mathbf{x}, f) \rangle_x \neq \delta_{j,k}$. One exception is stationary processes, where the correlation between different frequency components is zero, resulting in a block-diagonal matrix where $\boldsymbol{\Psi}_j(\mathbf{x}, \gamma)$ contains just a single non-zero component $\boldsymbol{\psi}_j(\mathbf{x}, \gamma)$, with $\langle \boldsymbol{\psi}_j(\mathbf{x}, \gamma), \boldsymbol{\psi}_k(\mathbf{x}, \gamma) \rangle_x = \delta_{j,k}$. Transforming the eigenvectors $\boldsymbol{\Psi}_j(\mathbf{x}, \gamma)$ back into the time domain, using the ansatz defined in (3.4), gives

$\boldsymbol{\phi}_{\gamma,j}(\mathbf{x}, t) = \sum_{m \in A_m} \boldsymbol{\psi}_j(\mathbf{x}, \gamma + m\alpha_0)\, e^{i2\pi(\gamma + m\alpha_0)t}$,

which are orthogonal in the space-time inner product integrated over a complete period. Thus, every mode occurring at each frequency set $\Omega_\gamma$ can be viewed as a unique space-time mode. The two-point space-time correlation tensor can be written as

$\mathbf{R}(\mathbf{x}, \mathbf{x}', t, t') = \int_{-\alpha_0/2}^{\alpha_0/2} \sum_{j=1}^{\infty} \lambda_j(\gamma)\, \boldsymbol{\phi}_{\gamma,j}(\mathbf{x}, t)\, \boldsymbol{\phi}_{\gamma,j}^*(\mathbf{x}', t')\, d\gamma$. (3.10)

Substituting in the frequency expansion of $\boldsymbol{\phi}_{\gamma,j}(\mathbf{x}, t)$ and applying $t' = t - \tau$ gives

$\mathbf{R}(\mathbf{x}, \mathbf{x}', t, \tau) = \int_{-\alpha_0/2}^{\alpha_0/2} \sum_{j=1}^{\infty} \lambda_j(\gamma) \sum_{m \in A_m} \sum_{m' \in A_m} \boldsymbol{\psi}_j(\mathbf{x}, \gamma + m\alpha_0)\, \boldsymbol{\psi}_j^*(\mathbf{x}', \gamma + m'\alpha_0)\, e^{i2\pi(m - m')\alpha_0 t}\, e^{i2\pi(\gamma + m'\alpha_0)\tau}\, d\gamma$, (3.11)

resulting in a reconstruction that is time-periodic due to $e^{i2\pi(m - m')\alpha_0 t}$, which is why the ansatz (3.4) was chosen. In summary, for cyclostationary flows, CS-SPOD leads to modes that oscillate at a set of frequencies $\Omega_\gamma$ and optimally represent the second-order space-time flow statistics.

Computing CS-SPOD modes in practice

We now detail how to compute CS-SPOD modes from data, along with a technique that reduces the cost and memory requirements to levels similar to those of SPOD.
Since the dimension of the CCSD is N × N, the overall eigensystem S_{γ_k} (the discrete approximation of S(x, x′, γ)) becomes (2a_1 + 1)N × (2a_1 + 1)N in size. For common fluid-dynamics problems, this can become a dense matrix of size O(10^6–10^9) × O(10^6–10^9), which is computationally intractable to store in memory, let alone to eigendecompose. This is also the dimension of the inversion required in the CSEOF methods of Kim et al. (1996) and Kim & North (1997). Thus, we derive a method-of-snapshots approach similar to the technique employed in POD (Sirovich 1987) and SPOD (Citriniti & George 2000; Towne et al. 2018) that reduces the size of the eigenvalue problem from (2a_1 + 1)N × (2a_1 + 1)N to (2a_1 + 1)N_b × (2a_1 + 1)N_b. Since N_b ≪ N, the method-of-snapshots technique makes the eigenvalue problem computationally tractable. To determine CS-SPOD from a finite amount of discrete data, we substitute the Welch computational procedure for the CCSD into each term of the frequency-limited version of (3.6a). We numerically evaluate this as

S_{γ_k} = Q_{γ_k} Q*_{γ_k},  Q_{γ_k} = [Q̃_{γ_k,−a_1α_0}; ⋯ ; Q̃_{γ_k}; ⋯ ; Q̃_{γ_k,a_1α_0}], (3.12a,b)

where

Q̃_{γ_k,mα_0} = √κ [q̂^(1)_{k,mα_0}, q̂^(2)_{k,mα_0}, ⋯, q̂^(N_b−1)_{k,mα_0}, q̂^(N_b)_{k,mα_0}] ∈ C^{N×N_b}. (3.13)

Q_{γ_k} is called the concatenated frequency-data matrix at the discrete set of solution frequencies Ω_{γ_k}, and q̂^(n)_{k,mα_0} is the k-th DFT component of the n-th block of the mα_0 frequency-shifted data matrix. As stated previously, the solution frequency sets are only unique for γ ∈ Γ; thus the corresponding DFT frequencies are

γ_k = (k − 1)/(N_f Δt) for k ≤ α_0 N_f Δt/2 + 1, and γ_k = (k − 1 − N_f)/(N_f Δt) for N_f − α_0 N_f Δt/2 + 1 < k ≤ N_f, (3.14)

which form the elements γ_k ∈ Γ_k. Expanding (3.12) gives

S_{γ_k} =
[ Q̃_{γ_k,−a_1α_0}Q̃*_{γ_k,−a_1α_0}  ⋯  Q̃_{γ_k,−a_1α_0}Q̃*_{γ_k}  ⋯  Q̃_{γ_k,−a_1α_0}Q̃*_{γ_k,a_1α_0} ]
[              ⋮                          ⋮                           ⋮              ]
[ Q̃_{γ_k}Q̃*_{γ_k,−a_1α_0}        ⋯  Q̃_{γ_k}Q̃*_{γ_k}        ⋯  Q̃_{γ_k}Q̃*_{γ_k,a_1α_0} ]
[              ⋮                          ⋮                           ⋮              ]
[ Q̃_{γ_k,a_1α_0}Q̃*_{γ_k,−a_1α_0}  ⋯  Q̃_{γ_k,a_1α_0}Q̃*_{γ_k}  ⋯  Q̃_{γ_k,a_1α_0}Q̃*_{γ_k,a_1α_0} ]. (3.15)

This expression shows that S_{γ_k} contains off-diagonal terms that represent spectral correlations that are not present in the process (i.e. not present in A_n). However, as N_b and N are increased together, this system converges and becomes a consistent estimate of (3.6a). Thus, all terms that represent spectral correlations not present in A_n converge to zero. Furthermore, the estimate is numerically positive semi-definite, resulting in CS-SPOD modes that inherit the desired properties. We note that the restriction of cycle frequencies to A_n is not required for the numerical computation; only a_1 is chosen. Equation (3.12) shows that the final eigenvalue problem can be compactly written as

S_{γ_k} W Ψ_{γ_k} = Λ_{γ_k} Ψ_{γ_k}, (3.16a)
Q_{γ_k} Q*_{γ_k} W Ψ_{γ_k} = Λ_{γ_k} Ψ_{γ_k}. (3.16b)

The spatial inner product

⟨q_1, q_2⟩_x = ∫_Ω q*_1(x, t) W(x) q_2(x, t) dx (3.17)

is approximated as ⟨q_1, q_2⟩_x = q*_1 W q_2, where W ∈ C^{N×N} is a positive-definite Hermitian matrix that accounts for both the weight and the numerical quadrature of the integral on the discrete grid, and W ∈ C^{(2a_1+1)N×(2a_1+1)N} is the block-diagonal matrix built from W (similar to (3.6b)). The CS-SPOD modes are then given by the columns of Ψ_{γ_k} and are ranked by their corresponding eigenvalues, given by the diagonal matrix Λ_{γ_k}. These discrete CS-SPOD modes hold properties analogous to all those previously discussed, including that they are discretely orthogonal, Ψ*_{γ_k} W Ψ_{γ_k} = I, and that they optimally decompose the estimated CS-SPOD decomposition matrix S_{γ_k} = Ψ_{γ_k} Λ_{γ_k} Ψ*_{γ_k} (i.e. the second-order statistics). At most min(N, N_b) non-zero eigenvalues can be obtained.
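The weighted eigenvalue problem above, and the method-of-snapshots reduction already mentioned, can be checked on a tiny random example: the N_b × N_b matrix Q*WQ shares its non-zero eigenvalues with the full problem QQ*W, and the full-size modes recovered from the small problem are W-orthonormal. The matrices below are random stand-ins for the concatenated frequency-data matrix, not flow data.

```python
import numpy as np

# Sketch: the small Hermitian problem Q* W Q Theta = Theta Lambda has the same
# non-zero eigenvalues as the large problem Q Q* W Psi = Psi Lambda, and
# Psi = Q Theta Lambda^{-1/2} gives W-orthonormal modes.
rng = np.random.default_rng(1)
N, Nb = 12, 4
Q = rng.standard_normal((N, Nb)) + 1j * rng.standard_normal((N, Nb))
W = np.diag(rng.uniform(0.5, 2.0, N))          # Hermitian positive-definite weights

lam_small, Theta = np.linalg.eigh(Q.conj().T @ W @ Q)        # Nb x Nb problem
lam_full = np.sort(np.linalg.eigvals(Q @ Q.conj().T @ W).real)[-Nb:]
assert np.allclose(np.sort(lam_small), lam_full)             # shared non-zero spectrum

Psi = Q @ Theta / np.sqrt(lam_small)                         # recover full-size modes
assert np.allclose(Psi.conj().T @ W @ Psi, np.eye(Nb), atol=1e-8)
```

The remaining N − N_b eigenvalues of the full problem are zero, consistent with the min(N, N_b) bound stated above.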
Thus, it is possible to show that the following N_b × N_b eigenvalue problem,

Q*_{γ_k} W Q_{γ_k} Θ_{γ_k} = Λ_{γ_k} Θ_{γ_k}, (3.18)

contains the same non-zero eigenvalues as (3.16). This approach is known as the method of snapshots (Sirovich 1987). The corresponding eigenvectors are exactly recovered as

Ψ_{γ_k} = Q_{γ_k} Θ_{γ_k} Λ^{−1/2}_{γ_k}. (3.19)

Other than the simple weighting matrix W, only the concatenated data matrix Q_{γ_k} must be determined, which is easily achieved by computing each term Q̃_{γ_k,mα_0} in Q_{γ_k} using algorithm 1. Once Q_{γ_k} is determined, one computes Q*_{γ_k} W Q_{γ_k} and then performs the eigenvalue decomposition. Typically, only the first few modes are of physical interest, which allows us to employ a truncated decomposition in which we determine a limited number of the most energetic CS-SPOD modes using randomized linear-algebra methods (Martinsson & Tropp 2020). The total energy can be efficiently evaluated by taking the trace of Q*_{γ_k} W Q_{γ_k}. Algorithm 2 implements CS-SPOD in a practical, but computationally inefficient, manner. The algorithm requires computing 2a_1 + 1 CCSDs, and thus its cost is approximately 2a_1 + 1 times that of SPOD. The memory requirement scales similarly. This can be prohibitive when analyzing large data sets. However, significant savings are realized since all the terms in Q_{γ_k} are of the form Q̃_{γ_k,mα_0}, which represents the k-th frequency component of the temporal Fourier transform of the mα_0 frequency-shifted data matrix. The temporal Fourier transform of the n-th realization of the mα_0 frequency-shifted data is given by

q̂^(n)_{k,mα_0} = (1/N_f) Σ_{j=1}^{N_f} w_j q^(n)_{j,mα_0} e^{−i2π(k−1)[(j−1)/N_f]}, (3.20a)
             = (1/N_f) Σ_{j=1}^{N_f} w_j q^(n)_j e^{−i2π(mα_0Δt)[(j−1)+(n−1)(N_f−N_0)]} e^{−i2π(k−1)[(j−1)/N_f]}, (3.20b)

where e^{−i2π(mα_0Δt)[(j−1)+(n−1)(N_f−N_0)]} is the frequency-shifting operation.

Algorithm 2: Naive algorithm to compute CS-SPOD.
1: for each data block, n = 1, 2, ⋯, N_b do
2:   Construct the block data matrix Q^(n) = [q_{1+(n−1)(N_f−N_0)}, q_{2+(n−1)(N_f−N_0)}, ⋯, q_{N_f+(n−1)(N_f−N_0)}]
3:   Construct the block time matrix T^(n) = [t_{1+(n−1)(N_f−N_0)}, t_{2+(n−1)(N_f−N_0)}, ⋯, t_{N_f+(n−1)(N_f−N_0)}]
4: end for
5: for each m ∈ A_m, where A_m = {−a_1, −a_1+1, ⋯, 0, ⋯, a_1−1, a_1} do
6:   for each data block, n = 1, 2, ⋯, N_b do
7:     Compute the frequency-shifted block data matrix Q^(n)_{mα_0} ← Q^(n) e^{−i2π(mα_0)T^(n)}
8:     Using a (windowed) fast Fourier transform, calculate and store the row-wise DFT of each frequency-shifted block data matrix: Q̂^(n)_{mα_0} = FFT(Q^(n)_{mα_0}) = [q̂^(n)_{1,mα_0}, q̂^(n)_{2,mα_0}, ⋯, q̂^(n)_{N_w,mα_0}], where the column q̂^(n)_{k,mα_0} contains the n-th realization of the Fourier mode at the k-th discrete frequency of the mα_0 frequency-shifted block data matrix
9:   end for
10: end for
11: for each γ_k ∈ Γ_k (or some subset of interest) do
12:   Assemble the concatenated frequency-data matrix for frequency set Ω_{γ_k}: Q_{γ_k} ← [Q̃_{γ_k,−a_1α_0}; ⋮ ; Q̃_{γ_k,0}; ⋮ ; Q̃_{γ_k,a_1α_0}], where Q̃_{γ_k,mα_0} ← √κ [q̂^(1)_{k,mα_0}, q̂^(2)_{k,mα_0}, ⋯, q̂^(N_b−1)_{k,mα_0}, q̂^(N_b)_{k,mα_0}] is the matrix of Fourier realizations corresponding to the k-th column of the mα_0 frequency-shifted block data matrix Q̂^(n)_{mα_0}
13:   Compute the matrix M_{γ_k} ← Q*_{γ_k} W Q_{γ_k}
14:   Compute the eigenvalue decomposition M_{γ_k} = Θ_{γ_k} Λ_{γ_k} Θ*_{γ_k}
15:   Compute and save the CS-SPOD modes Ψ_{γ_k} = Q_{γ_k} Θ_{γ_k} Λ^{−1/2}_{γ_k} and energies Λ_{γ_k} for the frequency set Ω_{γ_k}
16: end for

We separate these components into a phase-shifting component and a zero-phase-shift frequency-shifting component, by

q̂^(n)_{k,mα_0} = e^{−i2π(mα_0Δt)[(n−1)(N_f−N_0)]} (1/N_f) Σ_{j=1}^{N_f} w_j q^(n)_j e^{−i2π(mα_0ΔtN_f+k−1)[(j−1)/N_f]}, (3.21a)
q̂^(n)_{k,mα_0} = e^{−i2π(mα_0Δt)[(n−1)(N_f−N_0)]} q̂^(n)_{ℓ(k,m)}, (3.21b)

where ℓ(k, m) is the frequency index, a function of k and m.
This shows that the f_k discrete frequency of the mα_0-frequency-shifted data matrix (f_{k,mα_0}) can be exactly computed as a phase-shifted version of the f_{ℓ(k,m)} discrete frequency component of the non-frequency-shifted data matrix. To employ this method, we require mα_0ΔtN_f ∈ Z. This ensures that the change in frequency due to the applied frequency-shifting operator is equal to an integer change in the index of the frequency vector. Since α_0Δt = 1/N_θ, this gives mN_f/N_θ ∈ Z, which requires N_f = N_osc N_θ, N_osc ∈ Z. With this restriction, the frequency spectrum of the DFT of an N_f-length record is

f_k = (k − 1)α_0/N_osc for k ≤ N_osc N_θ/2, and f_k = (k − 1 − N_osc N_θ)α_0/N_osc for k > N_osc N_θ/2, (3.22)

and the unique frequency sets become

γ_k = (k − 1)α_0/N_osc for k ≤ N_osc/2 + 1, and γ_k = (k − 1 − N_osc N_θ)α_0/N_osc for N_f − N_osc/2 + 1 < k ≤ N_f. (3.23)

This demonstrates that a frequency shift of mα_0 corresponds to an integer change in the frequency index, i.e. the k-th frequency component of the mα_0-frequency-shifted data matrix corresponds to the phase-shifted version of the ℓ(k, m)-th frequency component (f_{ℓ(k,m)}) of the non-frequency-shifted data matrix, i.e. f_{k,mα_0} = f_{ℓ(k,m)}, where

ℓ(k, m) = k + mN_osc (m ≥ 0) or k + mN_osc + N_f (m < 0), for k ≤ N_osc/2 + 1;
ℓ(k, m) = k + mN_osc − N_f (m ≥ 0) or k + mN_osc (m < 0), for N_f − N_osc/2 + 1 < k ≤ N_f. (3.24)

This means that all the data required for CS-SPOD (for all frequency sets Ω_{γ_k}) is contained within the Fourier transform of the original data matrix. Algorithm 3 incorporates these savings and requires only a single DFT of the data matrix, making it similar in computational cost and memory requirement to SPOD. The memory usage to compute CS-SPOD for complex input data is ≈ (1/(1 − N_0/N_f) + 1) × mem(Q), which is the memory required to store the, possibly overlapping, block data matrix and the original data matrix.
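The index identity f_{k,mα_0} = f_{ℓ(k,m)} underlying these savings can be verified on a synthetic record: when mα_0ΔtN_f is an integer, the DFT of the frequency-shifted signal is just a circularly re-indexed copy of the DFT of the original signal, so one FFT suffices for all shifts. The sketch below uses illustrative parameter values.

```python
import numpy as np

# When m*alpha0*dt*Nf is an integer, multiplying a record by the shift
# exp(-i 2 pi m alpha0 dt j) moves DFT bin k to bin (k + m*N_osc) mod Nf,
# i.e. the shifted-matrix DFT is a re-indexed copy of the unshifted DFT.
rng = np.random.default_rng(2)
N_theta, N_osc = 25, 4
Nf, dt = N_osc * N_theta, 1.0          # Nf = N_osc * N_theta snapshots per block
alpha0 = 1.0 / (N_theta * dt)          # fundamental cycle frequency (alpha0*dt = 1/N_theta)
m = 3
j = np.arange(Nf)
x = rng.standard_normal(Nf) + 1j * rng.standard_normal(Nf)
y = x * np.exp(-2j * np.pi * m * alpha0 * dt * j)   # frequency-shifted record

shift = int(round(m * alpha0 * dt * Nf))            # = m * N_osc, an integer
assert np.allclose(np.fft.fft(y), np.roll(np.fft.fft(x), -shift))
```

This is the mechanism that lets algorithm 3 replace 2a_1 + 1 FFT passes with a single one.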
Additional memory is required to store the temporary matrix Q_{γ_k}, although the size of this matrix is minimal since typically 2a_1 + 1 ≪ N_f. In extreme cases where only a single snapshot can be loaded at a time, a streaming CS-SPOD algorithm could be developed analogous to the streaming SPOD method of Schmidt & Towne (2019).

Algorithm 3: Efficient algorithm to compute CS-SPOD.
1: for each data block, n = 1, 2, ⋯, N_b do
2:   Construct the block data matrix Q^(n) ← [q_{1+(n−1)(N_f−N_0)}, q_{2+(n−1)(N_f−N_0)}, ⋯, q_{N_f+(n−1)(N_f−N_0)}]
3:   Using a (windowed) fast Fourier transform, calculate and store the row-wise DFT of the block data matrix: Q̂^(n) = FFT(Q^(n)). Discard any frequency components that are not required to compute Q_{γ_k} (if one is not computing Q_{γ_k} over all γ_k ∈ Γ_k)
4: end for
5: for each γ_k ∈ Γ_k (or some subset of interest) do
6:   Assemble the concatenated frequency-data matrix for frequency set Ω_{γ_k}: Q_{γ_k} ← [Q̃_{γ_k,−a_1α_0}; ⋮ ; Q̃_{γ_k,0}; ⋮ ; Q̃_{γ_k,a_1α_0}], where Q̃_{γ_k,mα_0} ← √κ [q̂^(1)_{k,mα_0}, q̂^(2)_{k,mα_0}, ⋯, q̂^(N_b−1)_{k,mα_0}, q̂^(N_b)_{k,mα_0}] is the matrix of Fourier realizations corresponding to the k-th column of the mα_0 frequency-shifted block data matrix Q̂^(n)
7:   Compute the matrix M_{γ_k} ← Q*_{γ_k} W Q_{γ_k}
8:   Compute the eigenvalue decomposition M_{γ_k} = Θ_{γ_k} Λ_{γ_k} Θ*_{γ_k}
9:   Compute and save the CS-SPOD modes Ψ_{γ_k} = Q_{γ_k} Θ_{γ_k} Λ^{−1/2}_{γ_k} and energies Λ_{γ_k} for the frequency set Ω_{γ_k}
10: end for

Validation of our CCSD and CS-SPOD algorithms

We validate our implementation of the CCSD and CS-SPOD using a model problem that has an analytical solution. Let n(x, t) be a zero-mean, complex-valued, stationary random process with uniformly distributed phase (between 0 and 2π), normally distributed unit variance, and a covariance kernel c(x, x′) = E{n(x, t)n*(x′, t)} of

c(x, x′) = (1/(√(2π)σ_η)) exp(−(1/2)((x − x′)/σ_η)²) exp(−i2π(x − x′)/λ_η), (4.1)

where σ_η = 4 is the standard deviation of the envelope, λ_η = 20 is the wavelength of the filter, and x_0 = 1.5 is the center off-set distance. This covariance kernel is identical to the one used by Towne et al. (2018), as its structure is qualitatively similar to statistics present in real flows (e.g. a turbulent jet). The filtered process ñ(x, t) is defined as the convolution between a filter f(x, t) and n(x, t), given by

ñ(x, t) = f(x, t) ∗ n(x, t). (4.2)

We sinusoidally modulate ñ(x, t) to create a cyclostationary process

g(x, t) = ñ(x, t) cos(2πf_0t + φ_0), (4.3)

where f_0 = 0.5 is the modulation frequency and φ_0 = (1/3)2π is a phase offset. Using the theory developed in §2, the CCSD of g(x, t) is analytically determined as

S_g(x, x′, α, f) = (1/4) e^{±i2θ} S_ñ(x, x′, 0, f) for α = ±2f_0; (1/4) S_ñ(x, x′, 0, f + f_0) + (1/4) S_ñ(x, x′, 0, f − f_0) for α = 0; 0 otherwise, (4.4)

where S_ñ(x, x′, 0, f) is the CCSD of ñ(x, t) at cycle frequency α = 0 (thus equaling the CSD). The fundamental and only non-zero cycle frequency present is α_0 = ±2f_0, indicating that this process exhibits cyclostationarity. The CSD of ñ(x, t) is given by

S_ñ(x, x′, 0, f) = c(x, x′) F(x, f) F*(x′, f), (4.5)

where F(x, f) is the temporal Fourier transform of the filter f(x, t).
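A scalar analogue of the modulated process (4.3) makes its cyclostationarity easy to check numerically: for white n(t), the phase-averaged variance of g(t) = n(t) cos(2πf_0t + φ_0) is cos²(2πf_0t + φ_0), so the only non-zero cycle frequencies are α = 0 and α = ±2f_0. The sketch below uses the text's f_0 and φ_0 but an illustrative sampling.

```python
import numpy as np

# Scalar sketch of eq. (4.3): phase-averaging g^2 over many periods of
# alpha0 = 2*f0 recovers the analytical phase-dependent variance cos^2(theta).
rng = np.random.default_rng(3)
f0, phi0 = 0.5, 2 * np.pi / 3
N_theta, n_per = 20, 20000             # samples per period of alpha0, number of periods
T0 = 1.0 / (2 * f0)                    # period of the fundamental cycle frequency
dt = T0 / N_theta
t = np.arange(N_theta * n_per) * dt
g = rng.standard_normal(t.size) * np.cos(2 * np.pi * f0 * t + phi0)

var_phase = (g**2).reshape(n_per, N_theta).mean(axis=0)    # phase-averaged variance
t_phase = np.arange(N_theta) * dt
assert np.allclose(var_phase, np.cos(2 * np.pi * f0 * t_phase + phi0) ** 2, atol=0.05)
```

This mirrors the amplitude modulation of the sample paths shown in figure 1.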
The filter employed is a 5th-order finite-impulse-response filter with a cutoff frequency f_co that varies as a function of the spatial location, f_co = 0.2|x − x_0|/max(x) + 0.2. This results in a filter exhibiting a more rapid spectral decay at x_0 and a flatter spectrum moving away from this location. A domain x ∈ [−10, 10] is employed and is discretized using 2001 equispaced grid points, resulting in a grid spacing of Δx = 0.01. All estimates of the CCSD and CS-SPOD are performed using a Hamming window with L_w = 10N_θ and an overlap of 67%. Snapshots are saved in time with Δt = 0.04, resulting in N_θ = 25 time steps per period of the fundamental cycle frequency, T_0 = 1/α_0 = 1/(2f_0). Data is saved for t_end = 2000T_0, resulting in 50000 snapshots and 593 blocks (realizations) of the process. Sample paths of the process at x = 0, as a function of the phase of the fundamental cycle frequency, are shown in figure 1. As theoretically predicted, we observe a modulation in the amplitude of the process as a function of the phase. Since α_0 = 2f_0, the phase offset φ_0 = (1/3)2π applied to the sinusoidal modulation results in a phase offset of (1/6)2π in the sample paths. This modulation is observed in figure 2, where we plot the analytical WV spectrum, computed using (2.9) and (4.4), at x = x′ = 0. This shows the sinusoidal modulation of the PSD as a function of the phase and a decay in the amplitude of the spectrum with increasing |f| due to the applied filter. In figure 3, we compare the magnitude of the analytical and numerical CCSD at f = 0.1 and α = 0, ±2f_0. Here, we observe the aforementioned key structures of the covariance kernel along with the excellent agreement between the numerical and analytical CCSDs, which would further improve with an increasing number of realizations, thereby validating our CCSD implementation (algorithm 1).
Next, we validate our efficient algorithm to compute CS-SPOD (algorithm 3) and determine its convergence with increasing data by comparing the numerical results to the analytical results. The analytical solution is determined by forming the CS-SPOD eigensystem defined via (3.6a) through evaluating the analytical CCSDs (given by (4.4)) and then numerically evaluating the final eigenvalue problem. To encompass the range of relevant frequencies we use a 1 = 10 to construct A m , resulting in Ω γ = [−10, 10] + γ. Figure 4 shows a comparison of the analytical and numerical CS-SPOD eigenspectrums (averaged over 10000 realizations of the process), at γ = 0.2 for t end = 100T 0 , 400T 0 , and 2000T 0 , which corresponds to 27, 117, and 593 blocks, respectively. As the duration of the process increases, we observe an increasingly converged estimate of the eigenspectrum. This is reflected in the percentage error between the averaged numerical eigenvalues and the analytical eigenvalues for the three most dominant CS-SPOD modes, which we show in figure 5. We see that these eigenvalues linearly converge to the true value as the duration of the process increases, which is theoretically expected due to the linear reduction in the variance of the Welch estimate of the CCSD with increasing realizations (Antoni 2007). Overall, we obtain a consistent estimate of the CS-SPOD eigenvalues and conclude that our implementation of CS-SPOD is correct. Example problems Application to a modified linearized complex Ginzburg-Landau equation Our first example is the simple and well-understood linearized complex Ginzburg-Landau equation, which has been used as a model for a convectively unstable flow that exhibits non-modal growth (Chomaz et al. 1988;Cossu & Chomaz 1997;Hunt & Crighton 1991). 
It can be written in the form of a generic linear forced system

∂q(x, t)/∂t − L(x, t)q(x, t) = f(x, t), (5.1)

where q(x, t) and f(x, t) represent the state and forcing, respectively, with |q(x → ±∞, t)| → 0, and L(x, t) is the linear operator

L(x, t) = −ν_1 ∂/∂x + ν_2 ∂²/∂x² − µ(x, t). (5.2)

We use the commonly used form µ(x) = µ_0 − c_µ² + (µ_2/2)x². Following Franceschini et al. (2022), we construct periodic dynamics by using µ_0(t) = µ̄_0 + A_µ0 sin(2πf_0t), where µ̄_0 is the average value of µ_0, A_µ0 is the amplitude of the periodic modulation of µ_0, and f_0 is the frequency of the periodic modulation. For A_µ0 = 0 the system has time-invariant dynamics, while for |A_µ0| > 0 the system has time-periodic dynamics, resulting in a stationary and cyclostationary response, respectively. By varying A_µ0, we modify the degree to which the system is cyclostationary. We choose f_0 = 0.1, which is substantial compared to the frequencies of interest (≈ [−0.5, 0.5]), meaning that the quasi-steady approach of Franceschini et al. (2022) cannot be employed. Like Towne et al. (2018), we use µ̄_0 = 0.23, which for A_µ0 = 0 strongly amplifies external noise due to the non-normality of L(x, t) and results in a degree of low-rankness typically present in turbulent flows. As per Franceschini et al. (2022), we confirm the stability of the system using Floquet analysis (results not shown). To demonstrate the utility of CS-SPOD and to facilitate its interpretation, we compare CS-SPOD performed at several levels of cyclostationarity: A_µ0 = 0.0, 0.2, and 0.4. A pseudo-spectral approach utilizing Hermite polynomials is employed to discretize the equations (Chen & Rowley 2011), where the collocation points [x_1, x_2, ⋯, x_{N_H}] correspond to the first N_H Hermite polynomials with scaling factor R{(−µ_2/(2ν_2))^{1/4}}. For CS-SPOD, the value of the weighting matrix at x_i is determined as the distance between the midpoints of the neighbouring grid points.
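As a simplified illustration of the time-periodic operator (5.2), the sketch below builds a finite-difference version of L(x, t) with the sinusoidally modulated µ_0(t). The paper uses a Hermite pseudo-spectral discretization; here central differences are used instead, and the values of ν_1, ν_2, c_µ, and µ_2 are illustrative placeholders (only µ̄_0, A_µ0, and f_0 are quoted in the text).

```python
import numpy as np

# Finite-difference sketch of L(x,t) = -nu1 d/dx + nu2 d^2/dx^2 - mu(x,t),
# with mu(x,t) = mu0(t) - c_mu^2 + (mu2/2) x^2 and a T = 1/f0 periodic mu0(t).
N, xmax = 101, 10.0
x = np.linspace(-xmax, xmax, N)
dx = x[1] - x[0]
nu1, nu2 = 2.0 + 0.4j, 1.0 - 1.0j       # placeholder advection/diffusion coefficients
c_mu, mu2 = 0.2, -0.01                  # placeholder spatial-dependence parameters
mu0_bar, A_mu0, f0 = 0.23, 0.2, 0.1     # values quoted in the text

D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / dx**2

def L(t):
    """Time-periodic Ginzburg-Landau operator at time t (dense N x N matrix)."""
    mu = (mu0_bar + A_mu0 * np.sin(2 * np.pi * f0 * t)) - c_mu**2 + mu2 * x**2 / 2
    return -nu1 * D1 + nu2 * D2 - np.diag(mu)

# The operator inherits the T = 1/f0 periodicity of mu0(t)
assert np.allclose(L(0.3), L(0.3 + 1 / f0))
```

For A_mu0 = 0 the operator is time-invariant, matching the stationary case discussed in the text.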
Temporal integration is performed using the embedded 5th-order Dormand-Prince Runge-Kutta method (Dormand & Prince 1980; Shampine & Reichelt 1997). After the initial transients have decayed, a total of 40000 solution snapshots are saved with Δt = 0.5, giving a Nyquist frequency of f_Nyquist = 1. To mimic a turbulent system, similar to Towne et al. (2018), we force our system using spatially correlated band-limited noise. This is performed by constructing spatially correlated noise with the following covariance kernel

g(x, x′) = (1/(√(2π)σ_η)) exp(−(1/2)((x − x′)/σ_η)²) exp(−i2π(x − x′)/λ_η), (5.3)

where σ_η is the standard deviation of the envelope and λ_η is the wavelength of the filter. Spatial correlation is introduced by multiplying white noise by the Cholesky decomposition of the covariance kernel. The white noise has uniform phase and normally distributed amplitude with unit variance, and is generated as in Towne et al. (2018), where L = 60, p = 10. The spatially correlated noise is low-pass filtered using a 10th-order finite-impulse-response filter with a cutoff frequency equal to 0.6f_Nyquist. This results in a stationary forcing that is approximately constant in amplitude up to the cutoff frequency (−6 dB in amplitude at the cutoff frequency) but has non-zero spatial correlation as defined by (5.3). The forcing is then linearly interpolated to the temporal locations required by the temporal integration. To compute the WV spectrum, SPOD, and CS-SPOD, we employ a window length N_w = 10N_θ and an overlap of 67%, resulting in N_b = 595 blocks (realizations) of the process and a frequency discretization of Δf = 0.01. In analyzing the fabricated data, we must first determine those frequencies, if any, where the system exhibits cyclostationarity. To do this, we compute the CCSD and search over all possible values of α in the range of possible cycle frequencies α ∈ [−1, 1], noting the α discretization required, as discussed in §2, to ensure no possible cycle frequencies are missed.
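The correlated-noise construction described above can be sketched directly: multiplying complex white noise by the Cholesky factor of the kernel (5.3) yields samples whose covariance matches the kernel. A small diagonal jitter is added for numerical positive-definiteness (the kernel's eigenvalues decay rapidly); grid size and sample count below are illustrative.

```python
import numpy as np

# Spatially correlated noise via Cholesky: z = Lc w with E[w w^H] = I gives
# E[z z^H] = Lc Lc^H = C, the covariance kernel of eq. (5.3).
rng = np.random.default_rng(4)
N, R = 32, 100_000                      # grid points, number of noise samples
sigma, lam = 4.0, 20.0                  # envelope std dev and filter wavelength
x = np.linspace(-10, 10, N)
dx = x[:, None] - x[None, :]
C = (np.exp(-0.5 * (dx / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
     * np.exp(-2j * np.pi * dx / lam))  # Hermitian, positive semi-definite kernel
Lc = np.linalg.cholesky(C + 1e-10 * np.eye(N))   # jitter for numerical PD-ness

w = (rng.standard_normal((N, R)) + 1j * rng.standard_normal((N, R))) / np.sqrt(2)
z = Lc @ w                              # correlated samples
C_hat = (z @ z.conj().T) / R            # sample covariance -> C as R grows
assert np.allclose(C_hat, C, atol=5e-3)
```

In the paper's setup this correlated noise is additionally band-limited in time before being used as forcing.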
Figure 6 shows the CCSD and integrated CCSD for the three values of A µ0 at x = 0, and confirms that the system is cyclostationary when A µ0 > 0 as high values of the CCSD and the integrated CCSD are seen at α = 0, the modulation frequency (f 0 ), and an increasing number of harmonics as A µ0 is further increased. We show 100 realizations of the process for each A µ0 along with the WV spectrum at x = x = 0 as a function of the phase of α 0 in figure 7. The WV spectrum is computed using a 2 = 5 to encompass all cycle frequencies present. Figure 7 (a) shows that the statistics are almost constant as a function of phase for A µ0 = 0, which is expected given the time-invariant dynamics. The small degree of modulation observed is due to statistical error. In figures 7 (b, c), we observe increasing levels of modulation in the statistics as A µ0 increases. Furthermore, the peak value of the spectrum also increases due to the increasing non-normality of the system with increasing µ 0 . Given that the largest value of µ 0 occurs at θ = 0.5π and the peak of the WV spectrum occurs at θ ≈ 0.95π, there is a phase delay of ≈ 0.45π between when the dynamics of the system are the least stable and when the perturbations are, on average, the largest. Based on the preceding analysis and to ensure we encompass all frequencies of interest, we compute CS-SPOD using a 1 = 5, resulting in a frequency range of Ω γ = [−0.5, 0.5]+γ. We first consider the stationary process with A µ0 = 0.0. Although CS-SPOD modes are theoretically equivalent to SPOD for the stationary case, finite data length leads to differences. Figure 8 shows the SPOD eigenspectrum for A µ0 = 0.0. Note that the spectrum is not symmetric in f because the Ginzburg-Landau system is complex. We superpose on the SPOD spectra the set of frequencies f ∈ Ω γ for γ = 0.05, and mark and rank the 6 intersections with the highest energy. 
Based on the plot, we should find that the 4 most dominant CS-SPOD modes correspond to the dominant SPOD mode at frequencies of γ − α_0, γ, γ + α_0, and γ + 2α_0, respectively. Similarly, the 5th and 6th CS-SPOD modes should correspond to the first subdominant SPOD modes at frequencies of γ and γ + α_0, respectively. Figure 9 makes comparisons between SPOD and CS-SPOD (performed assuming a fundamental cycle frequency of α_0 = f_0) for the energy and eigenfunctions of each of these six modes. While the results are quite similar in each case, there are differences associated with the convergence of the statistics, and this, as expected, occurs when there is a small energy separation between two distinct modes (e.g. modes 5 and 6). In figure 10, we now compare the CS-SPOD eigenspectrum for all γ_k ∈ Ω_γ for the three different values of A_µ0. As A_µ0 increases, so does the energy, as the disturbances are increasingly amplified by the increasing non-normality of the linear operator at phases corresponding to positive A_µ0 sin(2πf_0t), consistent with the trend shown previously in figure 7. A large energy separation between the dominant and sub-dominant CS-SPOD modes is observed, which increases for greater A_µ0, indicating that the process is increasingly low-rank. In figure 11, for γ = 0.05, we show the fraction of the total energy (λ_T = Σ_j λ_j) that the first J CS-SPOD or SPOD modes recover. As theoretically expected, for A_µ0 = 0, CS-SPOD and SPOD result in almost identical energy distributions. In contrast, with increasing A_µ0, CS-SPOD captures an increasingly greater amount of energy than SPOD. For example, at A_µ0 = 0.4, the first CS-SPOD mode captures 64% of the total energy, while the first SPOD mode captures just 45%. Furthermore, the first three CS-SPOD modes capture 92% of the total energy, while seven SPOD modes are required to capture a similar amount of energy.
As theoretically expected, the energy captured by SPOD does not exceed the energy captured by CS-SPOD (since SPOD modes are a subset of CS-SPOD modes). Thus, as the statistics become increasingly cyclostationary (i.e. more phase-dependent), CS-SPOD is able to capture an increasingly larger fraction of the phase-dependent statistics present in the process, which SPOD, due to the fundamentally flawed assumption of statistical stationarity, is unable to achieve. We now investigate how A_µ0 modifies the dominant CS-SPOD modes at γ = 0.05 by showing the real component and the magnitude of the temporal evolution of the modes φ_j(x, t) in figures 12 and 13, respectively. We note that, due to the multiple frequency components Ω_γ present in φ_j(x, t), φ_j(x, t) can, unlike an SPOD mode, no longer be completely represented by a single snapshot and instead must be displayed as a function of time. Similarly, the amplitude of the mode is periodic in time with period T_0 = 1/α_0, unlike SPOD, where the amplitude is constant in time. Thus, the amplitude is displayed as a function of phase θ. Similar results are observed for other values of γ not shown here. Overall, across all values of A_µ0, the real component of the CS-SPOD modes shows a similar structure. However, as A_µ0 is increased, an additional modulation is seen that results in increasingly time/phase-dependent magnitudes. Finally, in figure 14, we investigate which frequency components are most energetic via the fractional energy of each frequency component f ∈ Ω_γ for each CS-SPOD mode, defined as E_{f,j} ≡ ψ*_j(x, f) W(x) ψ_j(x, f), where Σ_{f∈Ω_γ} E_{f,j} = 1. As A_µ0 increases, the CS-SPOD modes are constructed from an increased number of non-zero-energy frequency components and at higher energy levels. For example, at γ = 0.05, the dominant frequency component, f = 0.05, contains ≈ 100%, 83%, and 64% of the total energy of the corresponding CS-SPOD mode for A_µ0 = 0, 0.2, and 0.4, respectively.
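The fractional-energy diagnostic E_{f,j} can be sketched on a random eigenvector: the vector is stacked over the 2a_1 + 1 frequencies of Ω_γ, normalized to unit energy in the block-diagonal weight, and the per-frequency energies then sum to one. Sizes and weights below are illustrative.

```python
import numpy as np

# Sketch: E_{f,j} = psi_j(x,f)^* W(x) psi_j(x,f) for each frequency block of a
# unit-energy CS-SPOD eigenvector; the fractions sum to 1 over f in Omega_gamma.
rng = np.random.default_rng(5)
N, a1 = 20, 5
n_freq = 2 * a1 + 1
W = np.diag(rng.uniform(0.5, 2.0, N))                 # spatial quadrature weights
Psi = rng.standard_normal(n_freq * N) + 1j * rng.standard_normal(n_freq * N)
Wbig = np.kron(np.eye(n_freq), W)                     # block-diagonal weight matrix
Psi /= np.sqrt((Psi.conj() @ (Wbig @ Psi)).real)      # normalize: Psi^* W Psi = 1

blocks = Psi.reshape(n_freq, N)                       # one row per f in Omega_gamma
E_f = np.einsum("fi,ij,fj->f", blocks.conj(), W, blocks).real
assert np.allclose(E_f.sum(), 1.0)
assert np.all(E_f >= 0)
```

This is the quantity plotted in figure 14 for each mode index j.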
This occurs because of the increasing amount of correlation present between different frequency components as A_µ0 increases. Alternatively, this phenomenon can be understood as follows: as A_µ0 increases, the statistics become more time-dependent, and thus the amount of interaction between frequency components in Ω_γ increases, such that the summation of these frequency components results in CS-SPOD modes that capture the time-periodic modulation experienced by the flow.

Forced turbulent jet

We now consider a forced, turbulent, isothermal, subsonic jet for which data is available from a previous study (Heidt et al. 2021). The LES was computed using the Charles solver by Cascade Technologies, using a setup similar to previous, experimentally validated simulations of turbulent jets (Brès et al. 2017). The jet has a Mach number of M_j = U_j/c_j = 0.4 and a Reynolds number of Re_j = ρ_jU_jD/µ_j = 4.5 × 10^5, where ρ is the density, µ is the viscosity, U is the velocity, c is the speed of sound, D is the nozzle diameter, and the subscripts j and ∞ represent the jet and free-stream conditions, respectively. Frequencies are reported with respect to the Strouhal number St = fD/U_j, where f is the frequency. A schematic of the simulation setup is shown in figure 15. An acoustic forcing is applied at a frequency St_f = f_fD/U_j = 0.3 and amplitude a_0/U_j = 0.1. This forcing was chosen to roughly model the forced jet experiments of Crow & Champagne (1971), and we chose St_f = 0.3 to match what they observed as the frequency that led to the largest amplification by the flow (i.e. the jet preferred mode). We intentionally used a high amplitude of forcing, as we wanted to clearly establish cyclostationarity in the resulting turbulence. The forcing is applied in an annular region surrounding the jet up to r/D = 5.
The acoustic forcing inlet co-flow is defined by:

c(r) = 0.5[1 − erf(2(r − 5))], (5.4a)
u_f(r, t) = c(r) sin(2πf_f t), (5.4b)
u_x(r, t) = u_∞ + a_0 u_f(r, t), (5.4c)
u_r(r, t) = u_θ(r, t) = 0, (5.4d)
ρ(r, t) = ρ_∞ + ρ_∞(u_x(r, t) − u_∞)/a_∞, (5.4e)
p(r, t) = p_∞ + a_∞ρ_∞(u_x(r, t) − u_∞). (5.4f)

The simulation was run, post-transient, with a time-step of ΔtD/c_∞ = 0.001 for 480 periods of the forcing frequency (a total time of t_simD/c_∞ ≈ 4000), during which N_θ = 48 snapshots were saved over each cycle of the forcing. The unstructured LES data were interpolated onto a structured cylindrical grid (n_x × n_r × n_θ = 656 × 138 × 128) spanning x/D ∈ [0, 30], r/D ∈ [0, 6], and θ ∈ [0, 2π], which was employed in the subsequent analyses. For the stochastic estimates, we use a window length N_w = 6N_θ and an overlap of 67%, resulting in N_b = 237 blocks and a non-dimensional frequency discretization of ΔSt ≈ 0.05.

Figure 16: Top of each pair of images is u/U_j at θ = 0, π/2, π, 3π/2 for the forced Mach 0.4 turbulent jet. Bottom of each pair of images is u_x/U_j at a time instant corresponding to a forcing phase of θ = 0, π/2, π, 3π/2.

In figure 16, we plot the instantaneous and phase-averaged (2.14) velocity at four phases of one forcing cycle. Though not shown, we verified that the phase-averaged field is axisymmetric, consistent with the axisymmetric jet forcing. In the phase-averaged field, a large modulation in the axial velocity of the jet is observed, with a vortex roll-up occurring around x/D = 2.0. The fundamental-frequency fluctuation is primarily located in the potential-core region and drives the large-scale periodic modulation. In figure 17, we extract the first four frequency components (f = 0, 0.3, 0.6, 0.9) of the phase-averaged field. The total fluctuation level, i.e. 2 × R{û_{x,α}/U_j}, for each non-zero frequency is ≈ 40%, 15%, and 8%, thereby indicating that a substantial, nonlinear periodic modulation of the mean occurs.
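The annular forcing envelope in (5.4a, b) can be sketched directly: c(r) is ≈ 1 inside r/D < 5, rolls off smoothly through 0.5 at the edge r/D = 5, and vanishes beyond, with the sinusoidal modulation applied at St_f = 0.3. Time units here are illustrative (non-dimensional).

```python
import math

# Sketch of eq. (5.4a,b): smooth annular co-flow envelope and its sinusoidal
# modulation at the forcing Strouhal number St_f = 0.3.
def c(r):
    """Annular co-flow envelope, 0.5*(1 - erf(2*(r - 5))); r in units of D."""
    return 0.5 * (1.0 - math.erf(2.0 * (r - 5.0)))

def u_f(r, t, f_f=0.3):
    """Axial forcing velocity fluctuation; amplitude bounded by c(r)."""
    return c(r) * math.sin(2 * math.pi * f_f * t)

assert math.isclose(c(0.0), 1.0, abs_tol=1e-12)   # full amplitude on the axis
assert math.isclose(c(5.0), 0.5)                  # half amplitude at r/D = 5
assert math.isclose(c(10.0), 0.0, abs_tol=1e-12)  # negligible far from the jet
```

The remaining relations (5.4c–f) simply superpose this fluctuation on the free-stream state with amplitude a_0.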
Harmonic generation similarly peaks near x/D = 2, where the strong roll-up is occurring. Next, we analyze the second-order stochastic component to determine the cycle frequencies present in order to apply CS-SPOD. Similar to the previous example, to determine which cycle frequencies are present in the flow, we interrogate the CCSD and integrated CCSD for α = [−3, 3] (not shown), again noting the α discretization discussed in §2. We confirm that the only cycle frequencies present are harmonics of the forcing frequency (i.e. Zf_f). Figures 18 and 19 show the CCSD and corresponding WV spectrum, respectively, of the axisymmetric component of the axial velocity at x/D = 5, r/D = 0.75. For clarity, the CCSD is only shown for α/α_0 ∈ Z, since all other values of α are ≈ 0 (to within statistical convergence). A large modulation occurs for α/α_0 = 0, ±1, ±2. The WV spectrum shows this large modulation of the statistics, where the phase of the high-energy regions corresponds to when the high-velocity regions pass. Overall, it is clear that the forced turbulent jet exhibits cyclostationarity at frequencies equal to the harmonics of the forcing frequency. Finally, we demonstrate the utility of CS-SPOD on a forced turbulent jet. Recalling that both SPOD and CS-SPOD modes are decoupled amongst the azimuthal modes of the jet (owing to the statistical axisymmetry of the flow), we focus for brevity only on the axisymmetric m = 0 component of the fluctuations. We seek modes that are orthogonal in the Chu compressible energy norm (Chu 1965), which has been applied in previous SPOD studies,

⟨q_j, q_k⟩_E = ∫ q_j^H diag( T̄/(γ_g ρ̄ M²), ρ̄, ρ̄, ρ̄, ρ̄/(γ_g(γ_g − 1)T̄ M²) ) q_k r dx dr dθ = q*_j W q_k, (5.5)

where M is the Mach number, γ_g is the ratio of specific heats, and the matrix W accounts for both these weights and the numerical quadrature of the integral on the discrete grid. We show the CS-SPOD eigenspectrum for the turbulent jet in figure 20. A large energy separation between the first three CS-SPOD modes is observed.
Since CS-SPOD solves for multiple frequencies at a time, the energy separation will be smaller than with SPOD, in particular with a flatter spectrum. The spectrum peaks at γ_St = 0 and decays as |γ_St| → 0.15, which, because the smallest |St| ∈ F occurs at |γ_St|, is due to the decaying energy spectrum typically present in a turbulent jet. This low-rank behaviour, which is expected based on previous literature on natural turbulent jets (e.g. Schmidt et al. 2018), is observed in figure 21, where we show the fraction of the total energy captured by the first J SPOD and CS-SPOD modes. The first CS-SPOD mode captures 8% of the total energy present in the flow at the set of frequencies Ω_γ, 2 modes capture 13%, 10 modes capture 31%, and 50 modes capture 66%. At γ_St = 0 this increases to 16%, 29%, 56%, and 87% for 1, 2, 10, and 50 modes, respectively. Surprisingly, in contrast to the Ginzburg-Landau model, the energy separation between the most energetic CS-SPOD and SPOD modes is not large despite the high level of modulation present. However, despite this small difference, a large variation in the structure and temporal evolution of the most energetic SPOD and CS-SPOD modes is seen, which we explore next. We show the real and absolute value of the pressure component of the most energetic SPOD and CS-SPOD mode at γ_St = 0.15 in figure 22. The solid and dashed lines in these figures correspond to the contour lines of ũ_x/U_j = 0.25, 0.75. SPOD modes are shown at only a single time instance due to their time-invariant evolution, while CS-SPOD modes are shown at several time instances to show their temporal evolution. The most dominant SPOD mode is focused downstream at x/D ≈ 6−12, has a frequency St = 0.15, and has a structure typical of the so-termed "Orr modes" previously observed in unforced turbulent jets (Pickering et al. 2020).
By construction, the amplitude of the SPOD mode remains constant over time, and the region of maximum amplitude corresponds to x/D ≈ 6 − 12 and r/D ≈ 0 − 1. The real component of the most energetic CS-SPOD mode has a structure similar to the respective SPOD modes but with an additional modulation localized to the shear layer in regions of high velocity. This is also observed in amplitude contours, where the amplitude of the mode substantially varies as a function of phase in a region similar to the amplitude profile of SPOD, but the high-amplitude regions always follow the high-velocity regions of the jet. The CS-SPOD modes follow this region since it is where the greatest amount of shear occurs along with the vortex roll-up (as seen in figures 17 and 16). Figure 23 shows the same CS-SPOD mode in a zoomed-in region near the nozzle exit, plotted with lower contour levels since the fluctuation levels are smaller there. At t = 0 (i.e. θ = 0), a short wavelength Kelvin-Helmholtz (KH) mode that is located between the 25% and 75% velocity lines in the x/D = [0, 1] region is seen. The KH mode is angled towards the centerline due to the modulation of the mean flow. Next, at t = T 0 /4 the KH mode has propagated slightly downstream and has become significantly weaker due to the much thinner shear layer at this phase of the motion. From t = T 0 /4 to t = 3T 0 /4, the KH mode increases in strength as it continues to propagate downstream due to the increasing thickness of the boundary layer. The KH mode also rotates due to the roll-up induced by the forcing, as seen in figure 16. At t = 3T 0 /4, the KH mode is substantially stronger than at t = T 0 /4 and is a lower-frequency structure located around x/D = [0.6, 1] region and is angled away from the centerline. 
A corresponding interrogation of the SPOD mode shows no near-nozzle Kelvin-Helmholtz activity at this frequency, highlighting the ability of CS-SPOD to reveal potentially important dynamical effects that are slaved to the forcing frequency. Figure 24 shows the normalized energy as a function of phase for the three dominant modes at γSt = 0.15. The energy, despite the large phase-dependent modulation seen in figure 22, varies by just ±2% as a function of phase. This demonstrates that, despite the strong phase-dependent structure of the mode and of the statistics present in the jet, on average over the flow, the total energy contained within these modes is not strongly phase-dependent. Finally, in figure 25, we show the fractional energy of each frequency component f ∈ Ω_γ for the CS-SPOD modes. The large amount of frequency interaction previously observed is visible: for j = 1, the eight highest-energy frequency components are ±0.15, ±0.45, ±0.75, ±1.05, which contain 45.6%, 3.7%, 0.47%, and 0.11% of the energy, respectively. Thus, a large amount of interaction occurs between the frequency components in Ω_γ, which results in the large periodicity observed. It is important to note that although a frequency component may contain only a small fraction of the total energy in a CS-SPOD mode, in many cases it is still a physically important feature, such as the modulated KH mode discussed previously, and thus should be carefully studied. Overall, we see that the forcing clearly results in a large modulation of the KH and Orr modes present, an effect that SPOD is unable to capture. Thus, the utility of CS-SPOD to describe the coherent structures in a forced turbulent jet is demonstrated.

6. Harmonic resolvent analysis and its relationship to CS-SPOD

6.1. Harmonic resolvent analysis

Harmonic resolvent analysis (Padovan & Rowley 2022) extends resolvent analysis to time-periodic mean flows.
Starting with the nonlinear governing equations

∂g(t)/∂t = H(g(t)),  (6.1)

where H represents the time-independent continuity, momentum, and energy equations and g(t) ∈ C^N is the state vector of flow variables, we decompose the state as g(x, t) = ḡ(x, t) + g′(x, t), where ḡ(t) = ḡ(t + T₀) is the T₀-periodic mean-flow component (first-order component) and g′(t) is the turbulent component (second-order component). Since ḡ(t) is periodic, it can be expressed as a Fourier series, giving ḡ(t) = Σ_{n∈A_n} ĝ_n e^{i2π(nα₀)t}, where the ĝ_n are the harmonics (i.e. the Fourier-series components) of the fundamental frequency α₀ = 1/T₀ of the mean flow, T₀ is the period of oscillation of the mean flow, and A_n is defined as previously. The cycle frequencies, which in the context of linear analysis must be the frequencies present in the mean flow, are nα₀. Substituting this decomposition into (6.1), we obtain

∂g′(t)/∂t = D_g(H(ḡ(t))) g′(t) + f′(t),  (6.2)

where f′(t) contains higher-order terms in g′(t). The Jacobian A(t) = D_g(H(ḡ(t))) is also a periodic function of time which, following the discussion in Padovan & Rowley (2022), we assume is a differentiable function of time, thereby guaranteeing a unique solution of (6.2). Subsequently, it is also expanded as a Fourier series, A(t) = Σ_{n∈A_n} Â_{nα₀} e^{i2π(nα₀)t}. Inserting this expansion into (6.2) gives

∂g′(t)/∂t = Σ_{n∈A_n} Â_{nα₀} e^{i2π(nα₀)t} g′(t) + f′(t),  (6.3)

which, upon Fourier transforming in time, becomes

i2πγ ĝ_γ = Σ_{n∈A_n} Â_{nα₀} ĝ_{γ−nα₀} + f̂_γ,  (6.4)

where ĝ_γ and f̂_γ are the γ-frequency components of g′(t) and f′(t), respectively. Equation (6.4) represents a system of coupled equations where perturbations at frequency γ are coupled to perturbations at frequency γ − nα₀ through the nα₀ frequency component of the mean flow. In general, this results in an infinite-dimensional problem similar to the infinite-dimensional CS-SPOD eigenvalue problem. In practice, identically to CS-SPOD, we restrict the perturbation frequencies to [γ − a₁α₀, γ + a₁α₀] and thus seek time-periodic perturbations of the form

g′(t) = Σ_{m∈A_m} ĝ_{γ+mα₀} e^{i2π(γ+mα₀)t},  (6.5)

where A_m = {−a₁, ···, −1, 0, 1, ···, a₁}. This results in a solution frequency set of Ω_γ = {−a₁α₀ + γ, (−a₁ + 1)α₀ + γ, ···, γ, ···, (a₁ − 1)α₀ + γ, a₁α₀ + γ}. We also limit the mean-flow frequencies to [−a₂α₀, a₂α₀] with a₂ ⩽ 2a₁. The final problem is compactly written as

(i2πγI − T̂)Ĝ = F̂,  (6.6)

where

T̂ = [  ⋱        ⋱        ⋱
      ··· R̂_{−α₀}  Â_{−α₀}  Â_{−2α₀} ···
      ··· Â_{α₀}   R̂_{0}    Â_{−α₀}  ···
      ··· Â_{2α₀}  Â_{α₀}   R̂_{α₀}   ···
        ⋱        ⋱        ⋱       ],

Ĝ = [···, ĝ_{γ−α₀}, ĝ_γ, ĝ_{γ+α₀}, ···]ᵀ,   F̂ = [···, f̂_{γ−α₀}, f̂_γ, f̂_{γ+α₀}, ···]ᵀ,  (6.7a, b, c)

R̂_{kα₀} = (−i2πkα₀I + Â₀) ∈ C^{N×N}, and I ∈ R^{(2a₁+1)N×(2a₁+1)N} is the identity operator. The harmonic resolvent operator is then defined as Ĥ = (i2πγI − T̂)^{−1} ∈ C^{(2a₁+1)N×(2a₁+1)N}; it comprises (2a₁ + 1) coupled equations and is (2a₂ + 1) banded-block-diagonal due to the periodicity of the mean flow. If the flow is time-invariant, then all off-diagonal blocks are zero, i.e. there is no cross-frequency coupling, and the system becomes block-diagonal, where each diagonal block is the standard resolvent problem at frequency γ + kα₀, k ∈ Z. As detailed by Padovan & Rowley (2022), the singularity in the harmonic resolvent operator must be removed to avoid numerical difficulties. Similar to CS-SPOD, harmonic resolvent analysis is periodic in γ, and thus we need only solve over the range γ ∈ (−α₀/2, α₀/2]. We then seek the forcing mode F̂ that results in the most energetic response Ĝ, expressed as the following optimization problem

σ² = ⟨Ĝ, Ĝ⟩_G / ⟨F̂, F̂⟩_F,  (6.8)

where ⟨Ĝ_j, Ĝ_k⟩_G and ⟨F̂_j, F̂_k⟩_F are inner products on the output and input spaces, respectively, given by

⟨Ĝ_j, Ĝ_k⟩_G = ∫_Ω Ĝ_k*(x, f) W_G(x) Ĝ_j(x, f) dx,  (6.9a)

⟨F̂_j, F̂_k⟩_F = ∫_Ω F̂_k*(x, f) W_F(x) F̂_j(x, f) dx.  (6.9b)

The solution to this optimization problem is given by the singular value decomposition of the weighted harmonic resolvent operator

H̃ = W_G^{1/2} Ĥ W_F^{−1/2} = ŨΣṼ*,  (6.10)

where the diagonal matrix Σ = diag[σ₁², σ₂², ···] contains the ranked gains, and the columns of V̂ = W_F^{−1/2}Ṽ and Û = W_G^{−1/2}Ũ contain the forcing and response modes, respectively. These modes have a structure analogous to F̂ or Ĝ, and the jth forcing and response modes (V̂_j, Û_j) can be reconstructed in the time domain as

U_j(x, t) = Σ_{m∈A_m} û_{j,γ+mα₀} e^{i2π(γ+mα₀)t},  (6.11a)

V_j(x, t) = Σ_{m∈A_m} v̂_{j,γ+mα₀} e^{i2π(γ+mα₀)t},  (6.11b)

respectively. These modes are orthonormal in their respective spatial norms, ⟨Û_j, Û_k⟩_G = ⟨V̂_j, V̂_k⟩_F = δ_{j,k}, and the temporal modes are orthogonal in their respective space-time norms ⟨U_j, U_k⟩_{G(x,t)} and ⟨V_j, V_k⟩_{F(x,t)}, where

⟨U_j, U_k⟩_{G(x,t)} = ∫_Ω ∫ U_k*(x, t) W_G(x) U_j(x, t) dx dt,  (6.12a)

⟨V_j, V_k⟩_{F(x,t)} = ∫_Ω ∫ V_k*(x, t) W_F(x) V_j(x, t) dx dt.  (6.12b)

The decomposition is complete, allowing the output to be expanded as

Ĝ(x, γ) = Σ_{j=1}^∞ Û_j(x, γ) σ_j(γ) β_j(γ),  (6.13)

where

β_j(γ) = ⟨F̂(x, γ), V̂_j(x, γ)⟩_F.  (6.14)

A connection between harmonic resolvent analysis and CS-SPOD is obtained using an approach analogous to that of Towne et al. (2018) and is similar to the relationship between resolvent analysis and SPOD. In §2, it was shown that S(x, x′, α, f) can be compactly written as

S(x, x′, α, f) = E{q̂(x, f − α/2) q̂*(x′, f + α/2)},  (6.15)

where q̂(x, f) is the short-time Fourier transform of q(x, t). Similarly, the CS-SPOD decomposition tensor for the process q(x, t) can be written as

S(x, x′, γ) = E{Q(x, γ) Q*(x′, γ)}.  (6.16)

To develop a relationship between CS-SPOD and harmonic resolvent analysis, we equate the CS-SPOD and harmonic resolvent expansions of the CS-SPOD decomposition matrix and set all norms to be equal, i.e.
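The truncated problem (6.6)-(6.7) and the gain computation via the singular value decomposition (6.10) can be sketched numerically. The following is a minimal illustration, not the authors' implementation: the Jacobian harmonics in `A_hat`, the toy scalar system, and all function names are hypothetical, and unit weights W_G = W_F = I are assumed so that no weighting matrices appear.

```python
import numpy as np

def harmonic_resolvent_gains(A_hat, gamma, alpha0, a1):
    """Assemble the truncated block operator T-hat and return the ranked
    gains sigma_j^2 together with response and forcing modes.

    A_hat : dict mapping harmonic index n -> N x N Fourier block of the
            mean-flow Jacobian; blocks not supplied are treated as zero.
    """
    N = A_hat[0].shape[0]
    K = 2 * a1 + 1
    T = np.zeros((K * N, K * N), dtype=complex)
    for row, k in enumerate(range(-a1, a1 + 1)):          # output freq gamma + k*alpha0
        for col, m in enumerate(range(-a1, a1 + 1)):      # input  freq gamma + m*alpha0
            n = k - m                                     # coupling harmonic
            if n == 0:
                # diagonal block: R_{k alpha0} = -i 2 pi k alpha0 I + A_0
                T[row*N:(row+1)*N, col*N:(col+1)*N] = (
                    -1j * 2 * np.pi * k * alpha0 * np.eye(N) + A_hat[0])
            elif n in A_hat:
                T[row*N:(row+1)*N, col*N:(col+1)*N] = A_hat[n]
    H = np.linalg.inv(1j * 2 * np.pi * gamma * np.eye(K * N) - T)
    U, s, Vh = np.linalg.svd(H)                           # unit weights assumed
    return s**2, U, Vh.conj().T

# Toy example: stable scalar dynamics (N = 1) with one mean-flow harmonic.
A = {0: np.array([[-1.0 + 0j]]),
     1: np.array([[0.3 + 0j]]), -1: np.array([[0.3 + 0j]])}
gains, U, V = harmonic_resolvent_gains(A, gamma=0.1, alpha0=1.0, a1=2)
assert gains.shape == (5,)                                # (2*a1 + 1)*N gains
assert np.all(np.diff(gains) <= 1e-12)                    # ranked in descending order
```

Note how the off-diagonal blocks Â_{(k−m)α₀} couple the frequency components, so a time-invariant Jacobian (all harmonics except Â₀ zero) collapses the system to independent standard resolvent problems.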
⟨·,·⟩ = ⟨·,·⟩_G = ⟨·,·⟩_F = ⟨·,·⟩_x, giving

S(x, x′, γ) = Σ_{j=1}^∞ λ_j(γ) Ψ_j(x, γ) Ψ_j*(x′, γ)  (6.17a)

            = Σ_{j=1}^∞ Σ_{k=1}^∞ Û_j(x, γ) Û_k*(x′, γ) σ_j(γ) σ_k(γ) S_{β_jβ_k}(γ),  (6.17b)

where S_{β_jβ_k}(γ) = E{β_j(γ) β_k*(γ)} is the scalar CSD between the jth and kth expansion coefficients. Identical to Towne et al. (2018), the output harmonic resolvent modes and singular values were moved outside of the expectation operator since they are deterministic quantities. Conversely, the expansion coefficients depend on the forcing F̂(x, γ), which is stochastic due to the random nature of turbulent flows and thus is described by the CSD. In the case of a stationary process, S(x, x′, γ) is block-diagonal, meaning that Ψ_j(x, γ) and Û_j(x, γ) contain only a single non-zero frequency component per mode, and this relationship simplifies to that in Towne et al. (2018). For uncorrelated expansion coefficients, S_{β_jβ_k}(γ) = µ_j(γ)δ_{jk}, the relationship simplifies to

S(x, x′, γ) = Σ_{j=1}^∞ λ_j(γ) Ψ_j(x, γ) Ψ_j*(x′, γ)  (6.18a)

            = Σ_{j=1}^∞ Û_j(x, γ) Û_j*(x′, γ) σ_j²(γ) µ_j(γ).  (6.18b)

Since orthogonal diagonalizations are unique, this shows that CS-SPOD modes and harmonic resolvent modes are identical, and the kth most energetic CS-SPOD mode corresponds to the resolvent mode with the kth greatest σ_j²(γ)µ_j(γ). If µ_j = 1 for all j, then σ_j²(γ) = λ_j(γ) and Ψ_j(x, γ) = Û_j(x, γ), showing that the ranked CS-SPOD eigenvalues equal the ranked harmonic resolvent gains. To determine the conditions under which the expansion coefficients are uncorrelated, we perform manipulations identical to those of Towne et al. (2018) and show that

S_{β_jβ_k}(γ) = ⟨⟨S_{FF}(x, x′, γ), V̂_j(x, γ)⟩_F*, V̂_k(x′, γ)⟩_F*,  (6.19)

where S_{FF}(x, x′, γ) = E{F̂(x, γ) F̂*(x′, γ)} is the CS-SPOD decomposition tensor of F̂(x, γ). Since harmonic resolvent modes are orthogonal, if ⟨S_{FF}(x, x′, γ), V̂_j(x, γ)⟩* = µ_j(γ) V̂_j(x, γ), then S_{β_jβ_k}(γ) = µ_j(γ)δ_{jk}. This can be written as

∫_Ω S_{FF}(x, x′, γ) W_F(x′) V̂_j(x′, γ) dx′ = µ_j(γ) V̂_j(x, γ),  (6.20)

which is identical to the CS-SPOD eigenvalue problem of the input. One can then show that the expansion coefficients are uncorrelated if and only if the harmonic resolvent input modes correspond exactly with the CS-SPOD modes of the input. Thus, we conclude that the relationship between CS-SPOD and harmonic resolvent analysis is identical to that between SPOD and resolvent analysis. We can then specialize to µ_j = 1, giving

S_{FF}(x, x′, γ) W_F(x′) = Iδ(x − x′),  (6.21)

which for W_F(x′) = I results in S_{FF}(x, x′, γ) = Iδ(x − x′), i.e. the forcing is unit-amplitude white noise. This results in identical harmonic resolvent and CS-SPOD modes along with identical energies/gains, i.e. σ_j² = λ_j. We demonstrate this result by comparing the CS-SPOD and harmonic resolvent analysis results for the modified forced Ginzburg-Landau system with A_µ = 0.4. For both CS-SPOD and harmonic resolvent analysis, we employ a₁ = 5, resulting in a frequency range of Ω_γ = [−0.5, 0.5] + γ. To compute CS-SPOD, we force the system with unit-variance band-limited white noise. This is constructed similarly to the spatially correlated case previously considered in §5.1, without the step that introduces the spatial correlation. We employ computational parameters identical to those used in §5.1. As demonstrated in §6, because the forcing is white, the CS-SPOD modes and harmonic resolvent analysis modes are theoretically identical. Furthermore, since the inner product has unit weight, the CS-SPOD eigenvalues equal the harmonic resolvent analysis gains. Figure 26 shows the first six CS-SPOD eigenvalues and harmonic resolvent gains. Overall, excellent agreement is observed between the CS-SPOD eigenvalues and harmonic resolvent gains. The small amount of jitter present in the CS-SPOD eigenvalues is due to limited statistical convergence.
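Unit-variance band-limited white noise of the kind used to force the system can be generated in several ways. One simple sketch (an assumed construction, not necessarily the one used in the paper) filters Gaussian noise in Fourier space and rescales it:

```python
import numpy as np

def bandlimited_white_noise(n_steps, dt, f_max, seed=None):
    """Unit-variance, band-limited white-noise time series.

    Gaussian white noise is low-pass filtered by zeroing Fourier
    coefficients above f_max, then rescaled to unit variance.
    All parameter names are illustrative.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_steps)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n_steps, d=dt)
    X[freqs > f_max] = 0.0               # remove content above the band limit
    y = np.fft.irfft(X, n=n_steps)
    return y / y.std()                   # rescale to unit variance

f = bandlimited_white_noise(4096, dt=0.1, f_max=2.0, seed=0)
assert abs(f.std() - 1.0) < 1e-9
```

The flat spectrum below the cut-off is what makes the forcing (approximately) white over the resolved band, the condition under which CS-SPOD modes and harmonic resolvent modes coincide.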
The minor overshoot or undershoot is associated with spectral and cycle leakage, which can be reduced by increasing the frequency resolution of the estimate. As with any spectral estimate, increasing the length of the blocks reduces the number of blocks, leading to the well-known bias-variance tradeoff. Improved control over the bias-variance tradeoff in SPOD was achieved using multi-taper methods (Schmidt 2022) and could similarly be used for CS-SPOD. Figure 27 shows the magnitude of the time evolution of the three most energetic CS-SPOD and harmonic resolvent modes at γ = 0.05, which are almost indistinguishable. The similarity between the CS-SPOD and harmonic resolvent modes is quantified using the projection ξ_jk(γ) = ⟨ψ_j(γ), Û_k(γ)⟩_x and the harmonic-resolvent-mode expansion-coefficient CSD S_{β_jβ_k}(γ) given by (6.19). To compute S_{β_jβ_k}(γ), we take two inner products, with respect to Û_j(γ) and Û_k(γ), and then divide by σ_j(γ) and σ_k(γ), obtaining

S_{β_jβ_k}(γ) = Σ_{n=1}^∞ [λ_n(γ) / (σ_j(γ)σ_k(γ))] ξ_nj(γ) ξ_nk*(γ).  (6.22)

The projection ξ_jk and |S_{β_jβ_k}|/|S_{β_jβ_k}|_∞ are shown in figure 28 for γ = 0.05. |S_{β_jβ_k}| is, by construction, diagonal, and this should result in a diagonal ξ_jk. This is verified for the first eight modes, but for increasingly subdominant modes, off-diagonal terms become increasingly apparent, owing to a lack of full statistical convergence. Finally, to demonstrate the necessity of using harmonic resolvent analysis and CS-SPOD to model and educe structures for time-periodic mean flows, we compare our results with a naive application of SPOD and standard resolvent analysis to the time-periodic GL system. Figure 29 compares the (standard) resolvent gains and SPOD eigenvalues for A_µ = 0, 0.2, and 0.4. When A_µ = 0, the system is stationary and the resolvent gains and SPOD energies agree (as expected), but there are significant and growing discrepancies as A_µ is increased and the base flow is increasingly oscillatory.
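On a discrete grid, projections such as ξ_jk = ⟨ψ_j, Û_k⟩_x reduce to weighted inner products between two mode matrices stored column-wise. A minimal sketch with hypothetical names (W is the discrete quadrature/energy weight matrix):

```python
import numpy as np

def projection_matrix(Psi, U, W):
    """xi[j, k] = <psi_j, u_k>_x: the W-weighted spatial inner product
    between the j-th column of Psi and the k-th column of U."""
    return (U.conj().T @ W @ Psi).T

# If the two mode sets coincide and are W-orthonormal, xi is the identity.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
Q, _ = np.linalg.qr(A)                   # columns orthonormal under unit weight
xi = projection_matrix(Q, Q, np.eye(4))
assert np.allclose(xi, np.eye(3))
```

A strongly diagonal ξ therefore indicates that the two mode sets align one-to-one, which is the behaviour reported for the leading modes above.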
For systems with periodic statistics, CS-SPOD and harmonic resolvent analysis must therefore be used to analyze these flows.

7. Low-frequency and high-frequency forcing limits

In many flows, the frequency of the forcing may be either low or high with respect to the dynamics of interest. In both cases, simplifications can be made to the analysis. For low-frequency forcing, CS-SPOD and harmonic resolvent analysis tend towards systems that link all frequency components together, thereby making the analysis of the resulting system impractical. However, in many cases, we are interested in frequencies that are much larger than the forcing frequency. Franceschini et al. (2022) showed that high-frequency structures evolving on a low-frequency periodic motion can be analyzed using a quasi-steady approach, which they named phase-conditioned localized SPOD (PCL-SPOD) and quasi-steady (QS) resolvent analysis. These methods require f ≫ f₀ and that, at each fixed time (or phase) t, the cross-correlation tensor around that phase depends only on the time lag τ. At each phase, all standard SPOD and resolvent analysis properties are satisfied in PCL-SPOD and QS resolvent analysis, and we refer the reader to Franceschini et al. (2022) for a detailed discussion. Although PCL-SPOD was developed without reference to cyclostationary theory and computational methods, by employing a derivation similar to that of Franceschini et al. (2022), PCL-SPOD can be written as

∫_Ω W_V(x, x′, f, t) W(x′) ψ(x′, f, t) dx′ = λψ(x, f, t),  (7.1)

where W_V(x, x′, f, t) is the Wigner-Ville spectrum and the ψ(x, f, t) are the PCL-SPOD eigenvectors, which contain only a single frequency component f and are independent over time. This is analytically identical to the PCL-SPOD of Franceschini et al. (2022), but is numerically determined using a different computational procedure.
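At each fixed frequency and phase, (7.1) is a weighted Hermitian eigenvalue problem of the same form as SPOD. A minimal discrete sketch (hypothetical names; the weight W is assumed symmetric positive definite so a Cholesky factor exists):

```python
import numpy as np

def weighted_spectral_modes(S, W):
    """Solve S W psi = lambda psi for a Hermitian cross-spectral matrix S
    (e.g. a discretized Wigner-Ville spectrum at fixed f and t) and an SPD
    weight W. The problem is symmetrized as (L^H S L) y = lambda y with
    W = L L^H and psi = L^{-H} y, so a Hermitian eigensolver can be used.
    """
    L = np.linalg.cholesky(W)                    # W = L L^H
    lam, Y = np.linalg.eigh(L.conj().T @ S @ L)  # real eigenvalues, ascending
    order = np.argsort(lam)[::-1]                # rank energies descending
    Psi = np.linalg.solve(L.conj().T, Y[:, order])
    return lam[order], Psi

# Toy check: with unit weight and diagonal S, the eigenvalues are the
# ranked diagonal entries and the modes are unit vectors.
S = np.diag([3.0, 1.0, 2.0]).astype(complex)
lam, Psi = weighted_spectral_modes(S, np.eye(3))
assert np.allclose(lam, [3.0, 2.0, 1.0])
assert np.allclose(S @ Psi[:, 0], lam[0] * Psi[:, 0])
```

The symmetrization step mirrors the standard SPOD practice of absorbing the weight into the data before an eigendecomposition or SVD.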
QS resolvent analysis is similarly written as

(i2πf I − A(t)) ĝ_f = η̂_f,  (7.2)

where R(t) = (i2πf I − A(t))^{−1} is the QS resolvent operator, and the solution at each time instance t is independent of the solution at any other time instance. We then seek the forcing mode η̂_f that results in the most energetic response ĝ_f, which is determined via the singular value decomposition of the weighted QS resolvent operator

W_g^{1/2} R(t) W_η^{−1/2} = Ũ_† Σ_† Ṽ_†*,  (7.3)

where W_η and W_g are the norms on the input and output spaces, respectively, defined similarly to (6.9). The diagonal matrix Σ_† = diag[σ₁², σ₂², ···] contains the ranked gains, and the columns of V̂_† = W_η^{−1/2}Ṽ_† and Û_† = W_g^{−1/2}Ũ_† contain the forcing and response modes, respectively. Using (2.9), algorithm 1, and a procedure similar to that of regular SPOD, we compute PCL-SPOD of the Ginzburg-Landau system with white-noise forcing for several forcing frequencies f₀ = 0.01, 0.04, and 0.1 at A_µ = 0.2. Due to the substantially lower forcing frequency, 2 × 10⁵ snapshots are saved instead of 4 × 10⁴. We compare the PCL-SPOD and QS resolvent results in figure 30, where we see excellent agreement for small f₀. As f₀ increases, the PCL-SPOD and QS resolvent results increasingly deviate, as the two aforementioned assumptions are increasingly violated. Many physical systems exhibit some form of spectral peak. If the forcing frequency is sufficiently large, such that the energy contained at f + kα₀, k ∈ Z, k ≠ 0, is substantially lower than at f, the CS-SPOD and harmonic resolvent systems (given by (3.5) and (6.6), respectively) can be approximated by the block-diagonal term that corresponds to f (i.e. the most energetic component in Ω_γ).
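The quasi-steady procedure of (7.2)-(7.3) amounts to a standard resolvent SVD at each frozen phase. A minimal sketch with a hypothetical scalar Jacobian and unit weights:

```python
import numpy as np

def qs_resolvent_gains(A_of_t, times, f):
    """Leading QS resolvent gain sigma_1^2 at each frozen phase t.

    At each t the operator R(t) = (i 2 pi f I - A(t))^(-1) is formed and
    its largest singular value (squared) is recorded; each phase is
    treated independently. `A_of_t` returns the instantaneous Jacobian.
    """
    gains = []
    for t in times:
        A = A_of_t(t)
        R = np.linalg.inv(1j * 2 * np.pi * f * np.eye(A.shape[0]) - A)
        gains.append(np.linalg.svd(R, compute_uv=False)[0] ** 2)
    return np.array(gains)

# Toy periodic Jacobian: the damping oscillates over the forcing cycle.
A_of_t = lambda t: np.array([[-1.0 - 0.5 * np.cos(2 * np.pi * t)]])
g = qs_resolvent_gains(A_of_t, times=np.linspace(0, 1, 8, endpoint=False), f=0.0)
assert g.argmax() == 4   # least-damped phase (t = 0.5) gives the top gain
```

The phase dependence of the gain in this toy problem mimics the phase-dependent amplification contours shown for the Ginzburg-Landau system in figure 30.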
Furthermore, for many systems, the impact of high-frequency forcing on the low-frequency dynamics is not direct; instead, the low frequencies are modified as a result of nonlinear interactions that modify the mean flow. Thus, for a large forcing frequency, CS-SPOD and harmonic resolvent analysis approach SPOD and standard resolvent analysis, respectively. In figure 31, we show the SPOD and CS-SPOD eigenspectra of the white-noise-forced Ginzburg-Landau system at A_µ = 0.8 for f₀ = 0.1, 0.2, 0.4. To assess the convergence of CS-SPOD to SPOD for large forcing frequencies, the CS-SPOD modes have been mapped to the SPOD mode of greatest alignment (computed over the same set of frequencies Ω_γ). This is similar to what was performed in §5.1 during the comparison between SPOD and CS-SPOD modes. We see that as the forcing frequency increases, the CS-SPOD and SPOD eigenvalues begin to converge in the region where the energy at f + kα₀, k ∈ Z, k ≠ 0, is much smaller than at f.

8. Conclusions

In this paper, we have proposed CS-SPOD for the extraction of the most energetic coherent structures from complex turbulent flows whose statistics vary time-periodically (i.e. flows that have cyclostationary statistics). This is achieved by extending the one-dimensional technique developed by Kim et al. (1996) to large, high-dimensional data through the use of the method of snapshots, making the algorithm computationally feasible for large data. The orthogonality and optimality properties of the modes generated by CS-SPOD are shown: similar to SPOD analysis of stationary flows, CS-SPOD determines the set of orthogonal modes that optimally reconstruct the statistics of these flows in terms of the space-time norm.
In contrast to SPOD, where the modes oscillate at a single frequency and have a constant amplitude in time, CS-SPOD modes oscillate at a set of frequencies separated by the fundamental cycle frequency (typically the frequency of modulation), have a periodically varying amplitude in time, and optimally reconstruct the second-order statistics. We show that CS-SPOD naturally reduces to SPOD when analyzing a stationary process, allowing the CS-SPOD results to be interpreted in a familiar manner. Furthermore, we develop an efficient computational algorithm to compute CS-SPOD with a computational cost and memory requirement similar to SPOD, thus allowing CS-SPOD to be computed for a wide range of problems. Lastly, similar to the relationship that exists between SPOD and standard resolvent analysis, CS-SPOD modes are identical to harmonic resolvent modes in the case where the harmonic-resolvent-mode expansion coefficients are uncorrelated. We also discuss simplifications that can be made when forcing at a low or high frequency. We applied the CS-SPOD algorithm to two datasets. The first is data from a modified linearized complex Ginzburg-Landau equation with time-periodic dynamics, which represents a simple model of a flow exhibiting non-modal growth. As the amplitude of the imposed time-periodicity is increased, CS-SPOD yields modes that are increasingly phase-dependent. We demonstrated the inability of SPOD to capture these dynamics, shown through both an analysis of the temporal evolution of the modes and the ability of CS-SPOD to capture substantially more energy than SPOD. In addition, we show that when the system is forced with unit-variance white noise, the CS-SPOD modes from the data are identical (up to statistical convergence) to modes computed by harmonic resolvent analysis. For cyclostationary processes, we show that (standard) resolvent analysis cannot predict the time-averaged statistics even when the white-forcing conditions are met.
This shows that CS-SPOD and harmonic resolvent analysis should be used to correctly analyze and/or model flows with cyclostationary statistics. We next considered a forced, turbulent, high-Reynolds-number jet, demonstrating CS-SPOD on a turbulent flow for the first time. We identified coherent structures that differ in important ways from their SPOD-identified cousins in natural jets. In particular, CS-SPOD clarifies how the dynamics of the coherent structures are altered by the forcing. For example, the axisymmetric CS-SPOD structure at a low Strouhal number featured finer-scale axisymmetric Kelvin-Helmholtz roll-up in the near-nozzle region that is absent in natural jets at high Reynolds number. This roll-up waxed and waned at those phases of the forcing cycle where the initial shear layer was thinned and thickened, respectively. Overall, our results show that CS-SPOD successfully extends SPOD to flows with cyclostationary statistics. This allows us to study a wide range of flows with time-periodic statistics, such as turbomachinery, weather and climate, and flow control with harmonic actuation, as well as wake flows rendered cyclostationary through the (arbitrary) choice of a phase reference for the dominant shedding frequency. Although we focused on strictly cyclostationary processes, further generalizations are possible to almost-periodic flows and flows forced with several non-commensurate frequencies.

Acknowledgements. The authors gratefully acknowledge support from the United States Office of Naval Research under contract N00014-20-1-2311 with Dr. S. Martens as program manager and the Federal Aviation Administration under grant 13-C-AJFE-UI. This work was supported in part by high-performance computer time and resources from the DoD High Performance Computing Modernization Program. This work used Stampede2 at Texas Advanced Computing Center through allocation CTS120005 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.

Declaration of Interests. The authors report no conflict of interest.

Appendix A. To derive the eigenvalue problem given by (3.5), we rewrite R(x, x′, t, t′) → R(x, x′, t, τ) ≡ E{q(x, t + τ/2) q*(x′, t − τ/2)}, where τ = t − t′. Recalling that, for a cyclostationary process, the two-point space-time correlation density is a periodic function of time, it can be expressed as a Fourier series

R(x, x′, t, τ) = Σ_{n∈A_n} R_{nα₀}(x, x′, τ) e^{i2πnα₀t},  (A 1)

where the R_{nα₀}(x, x′, τ) are the cyclic autocorrelation functions of R(x, x′, t, τ) at cycle frequency nα₀ and A_n = {···, −1, 0, 1, ···} is, in general, the infinite set of harmonics of the fundamental cycle frequency present in the flow. One can also decompose the two-point space-time correlation density as the following phase-shifted Fourier series

R(x, x′, t, τ) = Σ_{n∈A_n} R̃_{nα₀}(x, x′, τ) e^{−iπnα₀τ} e^{i2πnα₀t},  (A 2)

where the two Fourier coefficients are related by

R_{nα₀}(x, x′, τ) e^{iπnα₀τ} = R̃_{nα₀}(x, x′, τ).  (A 3)

Although somewhat unusual, this simply applies a phase shift to the resulting Fourier-series coefficients that, after Fourier transforming, shifts the center frequency of the CCSD. This is identical to the phase shift that relates the symmetric and asymmetric definitions of the cyclic cross-correlation functions and CCSD. Because of this, one can derive CS-SPOD using the symmetric definitions and a phase shift, or using the asymmetric definition. We choose the former as it results in a simpler derivation later. This phase shift is required to ensure the resulting eigensystem is Hermitian and positive semidefinite. Substituting the cyclic Wiener-Khinchin relation from (2.7) into (A 2) and then into the Fredholm eigenvalue problem (3.3) results in

∫_{−∞}^{∞} ∫_Ω ∫_{−∞}^{∞} Σ_{n∈A_n} S_{nα₀}(x, x′, f) e^{i2πnα₀t} e^{i2π(f − ½nα₀)τ} W(x′) φ(x′, t′) df dx′ dt′ = λφ(x, t).  (A 4)

Since τ = t − t′, this leads to the following simplifications

∫_{−∞}^{∞} ∫_Ω ∫_{−∞}^{∞} Σ_{n∈A_n} S_{nα₀}(x, x′, f) e^{i2π(f + ½nα₀)t} e^{−i2π(f − ½nα₀)t′} W(x′) φ(x′, t′) dt′ df dx′ = λφ(x, t),  (A 5)

∫_Ω ∫_{−∞}^{∞} Σ_{n∈A_n} S_{nα₀}(x, x′, f) e^{i2π(f + ½nα₀)t} W(x′) φ̂(x′, f − ½nα₀) df dx′ = λφ(x, t),  (A 6)

where φ̂(x′, f) is the temporal Fourier transform of φ(x′, t′). Similar to SPOD, we must choose a solution ansatz. In SPOD, we can solve a single frequency at a time as there is no correlation between different frequency components. However, since cyclostationary processes have spectral components that are correlated, we are unable to solve for each frequency component separately. Instead, we solve multiple coupled frequencies together by choosing our solution ansatz as

φ(x, t) = Σ_{m∈A_m} ψ(x, γ + mα₀) e^{i2π(γ+mα₀)t},  (A 7)

giving

φ̂(x, f) = Σ_{m∈A_m} ψ(x, γ + mα₀) δ(f − (γ + mα₀)),  (A 8)

where A_m = {···, −1, 0, 1, ···} gives, in general, the infinite set of frequencies present in the solution (all separated by α₀). The frequency-shifted version of φ̂(x, f) is given by

φ̂(x, f − ½nα₀) = Σ_{m∈A_m} ψ(x, γ + mα₀) δ(f − (γ + (m + ½n)α₀)).  (A 9)

Substituting these expressions into (A 6) and integrating with respect to f results in

Σ_{n∈A_n} Σ_{m∈A_m} ∫_Ω S_{nα₀}(x, x′, γ + (m + ½n)α₀) e^{i2π(γ+(m+n)α₀)t} W(x′) ψ(x′, γ + mα₀) dx′ = λ Σ_{m′∈A_m} ψ(x, γ + m′α₀) e^{i2π(γ+m′α₀)t}.  (A 10)

For this equation to hold over all time, we perform a harmonic balance where each frequency component must hold separately. This gives γ + (m + n)α₀ = γ + m′α₀ → m + n = m′. An equation for each frequency component of our ansatz is formed as

Σ_{n∈A_n} Σ_{m∈A_m : m+n=m′} ∫_Ω S_{nα₀}(x, x′, γ + (m + ½n)α₀) W(x′) ψ(x′, γ + mα₀) dx′ = λψ(x, γ + m′α₀).  (A 11)

Substituting n = m′ − m, this expression simplifies to

Σ_{m∈A_m} ∫_Ω S_{(m′−m)α₀}(x, x′, γ + ½(m + m′)α₀) W(x′) ψ(x′, γ + mα₀) dx′ = λψ(x, γ + m′α₀),  (A 12)

where we ignore m′ − m ∉ A_n. Expanding (A 12) gives the final CS-SPOD eigenvalue problem (3.5).

Figure captions:

Figure 1: Model process sample paths at x = 0.
Figure 2: Analytical WV spectrum at x = x′ = 0 for the model process.
Figure 3: Plot of the magnitude of the CCSD for f = 0.1 of the dummy process for the analytical and numerically generated results.
Figure 4: Plot of the analytical and numerical CS-SPOD eigenspectrum at γ = 0.2 for the dummy problem at multiple signal durations.
Figure 5: Convergence of CS-SPOD eigenvalues as a function of the total signal duration of the dummy problem.

… (Bagheri et al. 2009; Chen & Rowley 2011; Towne et al. 2018), resulting in time-invariant dynamics. All constants in (5.1, 5.2), except for µ₀, use the values in Bagheri et al. (2009). Following Bagheri et al. (2009) and Towne et al. (2018), we use N_H = 221, leading to a computational domain x ∈ [−85.19, 85.19], which is large enough to mimic an infinite domain. The boundary conditions are implicitly satisfied through the use of Hermite polynomials …

Figure 6: CCSD (top) and integrated CCSD (bottom) for the Ginzburg-Landau system at x = 0.
Figure 7: Example Ginzburg-Landau sample paths (top) and WV spectrum at x = 0 (bottom).
Figure 8: SPOD eigenspectrum for the Ginzburg-Landau system at A_µ = 0.0 showing the three most energetic modes at each discrete frequency f. The 6 highest-energy modes occurring at the frequencies present in the CS-SPOD solution frequencies, i.e. f ∈ Ω_γ, are depicted with the red dots.
Figure 9: Comparison of SPOD (left) and CS-SPOD modes (right) for the Ginzburg-Landau system at A_µ = 0. From top to bottom are the six most dominant CS-SPOD modes and the six points identified in figure 8. The contour limits for the CS-SPOD eigenfunctions are set equal to the corresponding SPOD mode ±||ψ_j(x, t)||_∞.

… spatially restricted to an interior portion of the domain via the window exp[−(x/L)^p] …

Figure 10: CS-SPOD energy spectrum of the three Ginzburg-Landau systems.
Figure 11: Total fractional energy captured by a truncated set of CS-SPOD (CS) and SPOD (S) modes for the three Ginzburg-Landau systems at γ = 0.05.
Figure 12: Real component of the three dominant CS-SPOD modes at γ = 0.05 for the three Ginzburg-Landau systems. The contour limits for each CS-SPOD mode are ±|R{ψ_j(x, t)}|_∞.
Figure 13: Magnitude of the three dominant CS-SPOD modes at γ = 0.05 for the three Ginzburg-Landau systems. The contour limits for each CS-SPOD mode are [0, ||ψ_j(x, t)||_∞].
Figure 14: Fractional CS-SPOD modal energy, E_{f,j}, at γ = 0.05 for the Ginzburg-Landau systems.
Figure 15: Schematic of the forced Mach 0.4 turbulent jet, adapted from Brès et al. (2018).
Figure 17: R{û_{x,St}/U_j} at St = 0, 0.3, 0.6, and 0.9 for the forced Mach 0.4 turbulent jet.
Figure 18: Absolute value of the CCSD density of u′_x/U_j at x/D = 7, r/D = 0.75 for the forced Mach 0.4 turbulent jet.
Figure 19: WV spectrum of u′_x/U_j at x/D = 7, r/D = 0.75 for the forced Mach 0.4 turbulent jet.
Figure 20: CS-SPOD energy spectrum for the forced Mach 0.4 turbulent jet.
Figure 21: Total energy captured by a truncated set of CS-SPOD (CS) and SPOD (S) modes for the forced Mach 0.4 turbulent jet at γSt = 0.15.

… taking into account the energy and domain quadrature weights. To compute CS-SPOD, we choose a₁ = 10, resulting in a non-dimensional frequency range of Ω_γSt = [−3, 3] + γSt, which encompasses all frequencies of interest.

Figure 22: Comparison of the real and magnitude component of the dominant CS-SPOD mode to the dominant SPOD mode for γSt = 0.15 of the forced Mach 0.4 turbulent jet. All contours are set to ±0.75|R{φ_{p,1}(x, r, t)}|_∞ and [0, 0.75|φ_{p,1}(x, r, t)|_∞] for the real and magnitude contours, respectively.
Figure 23: Real component of the dominant CS-SPOD mode for γSt = 0.15 for the forced Mach 0.4 turbulent jet (zoomed into x/D = [0, 2], r/D = [0, 2]). All contours are set to ±0.25|R{φ_{p,1}(x = [0, 2], r = [0, 2], t)}|_∞.
Figure 24: Energy of the dominant CS-SPOD modes over the phase of the external forcing for γSt = 0.15 for the forced Mach 0.4 turbulent jet.
Figure 25: Fractional CS-SPOD eigenmode energy, by frequency, for γSt = 0.15 shown in log₁₀ scale for the forced Mach 0.4 turbulent jet.
Figure 26: Comparison of the first six harmonic resolvent gains σ²_j and CS-SPOD eigenvalues λ_j as a function of γ for the white-noise-forced Ginzburg-Landau system with A_µ = 0.4.
Figure 27: Comparison of the magnitude of the three most energetic CS-SPOD (left) and harmonic resolvent (right) modes at γ = 0.05 for the Ginzburg-Landau system at A_µ = 0.4. The contour limits for the CS-SPOD modes are set equal to the corresponding harmonic resolvent modes [0, |ψ_j(x, t)|_∞].
Figure 28: CS-SPOD and harmonic resolvent analysis mode projection coefficient (a) and magnitude of the normalized harmonic-resolvent-mode expansion-coefficient CSD (b) at γ = 0.05 for the Ginzburg-Landau system with white-noise forcing.
Figure 29: Comparison of the first three resolvent analysis gains σ²_j and SPOD eigenvalues λ_j as a function of frequency f for the white-noise-forced Ginzburg-Landau system at A_µ = 0.0, 0.2, and 0.4. For clarity, every second SPOD eigenvalue has been omitted.
Figure 30: Contours of the gain and weighted mode shapes of the white-noise-forced Ginzburg-Landau system at A_µ = 0.2 and f₀ = 0.01, 0.04, and 0.1. (a) Contours of QS resolvent gain (σ²_j) and PCL-SPOD energy (λ_j) as a function of frequency f and phase θ. (b) Weighted mode shapes in θ − x space of the dominant QS resolvent (σ_j(f)|û_j(x, f)|) and PCL-SPOD (√λ_j |ψ(x, f, t)|) mode at f = 0.1.
Figure 31: Comparison of the dominant SPOD and CS-SPOD eigenvalues λ₁ as a function of frequency f for the white-noise-forced Ginzburg-Landau system at A_µ = 0.8 for f₀ = 0.1, 0.2, 0.4. The SPOD eigenvalues for A_µ = 0 are overlaid to show the impact of the forcing on the spectrum.

References
On Ensemble Learning

Mark Stamp, Aniket Chandak, Gavin Wong, Allen Ye

In this paper, we consider ensemble classifiers, that is, machine learning based classifiers that utilize a combination of scoring functions. We provide a framework for categorizing such classifiers, and we outline several ensemble techniques, discussing how each fits into our framework. From this general introduction, we then pivot to the topic of ensemble learning within the context of malware analysis. We present a brief survey of some of the ensemble techniques that have been used in malware (and related) research. We conclude with an extensive set of experiments, where we apply ensemble techniques to a large and challenging malware dataset. While many of these ensemble techniques have appeared in the malware literature, previously there has been no way to directly compare results such as these, as different datasets and different measures of success are typically used. Our common framework and empirical results are an effort to bring some sense of order to the chaos that is evident in the evolving field of ensemble learning, both within the narrow confines of the malware analysis problem and in the larger realm of machine learning in general.

doi: 10.1007/978-3-030-62582-5_8
arXiv: 2103.12521 (https://arxiv.org/pdf/2103.12521v1.pdf)
1 Introduction

In ensemble learning, multiple learning algorithms are combined, with the goal of improved accuracy as compared to the individual algorithms. Ensemble techniques are widely used, and as a testament to their strength, ensembles have won numerous machine learning contests in recent years, including the KDD Cup [15], the Kaggle competition [14], and the Netflix prize [26]. Many such ensembles resemble Frankenstein's monster [33], in the sense that they are an agglomeration of disparate components, with some of the components being of questionable value; an "everything and the kitchen sink" approach clearly prevails.
This effect can be clearly observed in the aforementioned machine learning contests, where there is little (if any) incentive to make systems that are efficient or practical, as accuracy is typically the only criterion for success. In the case of the Netflix prize, the winning team was awarded $1,000,000, yet Netflix never implemented the winning scheme, since the improvements in accuracy "did not seem to justify the engineering effort needed to bring them into a production environment" [3]. In real-world systems, practicality and efficiency are necessarily crucial factors.

In this paper, we provide a straightforward framework for categorizing ensemble techniques. We then consider specific (and relatively simple) examples of various categories of such ensembles, and we show how these fit into our framework. For various examples of ensembles, we also provide experimental results, based on a large and diverse malware dataset. While many of the techniques that we consider have previously appeared in the malware literature, we are not aware of any comparable study focused on the effectiveness of various ensembles using a common dataset and common measures of success. While we believe that these examples are interesting in their own right, they also provide a basis for discussing various tradeoffs between measures of accuracy and practical considerations.

The remainder of this paper is organized as follows. In Section 2 we discuss ensemble classifiers, including our framework for categorizing such classifiers. Section 3 contains our experimental results. This section also includes a discussion of our dataset, scoring metrics, software used, and so on. Finally, Section 4 concludes the paper and includes suggestions for future work.

2 Ensemble Classifiers

In this section, we first give a selective survey of some examples of malware (and closely related) research involving ensemble learning. Then we provide a framework for discussing ensemble classifiers in general.
2.1 Examples of Related Work

The paper [18] discusses various ways to combine classifiers and provides a theoretical framework for such combinations. The focus is on straightforward combinations, such as a maximum, sum, product, majority vote, and so on. The work in [18] has clearly been influential, but it seems somewhat dated, given the wide variety of ensemble methods that are used today. The book [20] presents the topic of ensemble learning from a similar perspective as [18], but in much more detail. Perhaps not surprisingly, the more recent book [62] seems to have a somewhat more modern perspective with respect to ensemble methods, but retains the theoretical flavor of [20] and [18]. The brief blog at [35] provides a highly readable (if highly selective) summary of some of the topics covered in the books [20] and [62].

Here, we take an approach that is, in some sense, more concrete than that in [18, 20, 62]. Our objective is to provide a relatively straightforward framework for categorizing and discussing ensemble techniques. We then use this framework as a frame of reference for experimental results based on a variety of ensemble methods.

Table 1 provides a summary of several research papers where ensemble techniques have been applied to security-related problems. The emphasis here is on malware, but we have also included a few closely related topics. In any case, this represents a small sample of the many papers that have been published, and is only intended to provide an indication as to the types and variety of ensemble strategies that have been considered to date. On this list, we see examples of ensemble methods based on bagging, boosting, and stacking, as discussed below in Section 2.3.

Table 1: Examples of ensemble techniques in security-related research

    Paper                    Topic            Features              Technique
    Alazab et al. [2]        Detection        API calls             Neural networks
    Comar et al. [8]         Detection        Network traffic       Random forest
    Dimjaševic et al. [9]    Android          System calls          RF and SVM
    Guo et al. [10]          Detection        API calls             BKS
    Idrees et al. [12]       Android          Permissions, intents  RF and others
    Jain & Meena [13]        Detection        Byte n-grams          AdaBoost
    Khan et al. [17]         Detection        Network based         Boosting
    Kong & Yan [19]          Classification   Function call graph   Boosting
    Morales et al. [24]      Android          Permissions           Several
    Narouei et al. [25]      Detection        DLL dependency        Random forest
    Shahzad et al. [31]      Detection        Opcodes               Voting
    Sheen et al. [32]        Various          Detection efficiency  Pruning
    Singh et al. [34]        Detection        Opcodes               SVM
    Smutz & Stavrou [36]     Malicious PDF    Metadata              Random forest
    Toolan & Carthy [40]     Phishing         Various               C5.0, boosting
    Ye et al. [58]           Detection        API calls, strings    SVM, bagging
    Ye et al. [59]           Categorization   Opcodes               Clustering
    Yerima et al. [60]       Zero day         179 features          RF, regression
    Zhang et al. [61]        Detection        n-grams               Dempster-Shafer

2.2 A Framework for Ensemble Classifiers

In this section, we consider various means of constructing ensemble classifiers, as viewed from a high-level perspective. We then provide an equally high-level framework that we find useful in our subsequent discussion of ensemble classifiers in Sections 2.3 and, especially, in Section 2.4.

We consider ensemble learners that are based on combinations of scoring functions. In the general case, we assume the scoring functions are real valued, but the more restricted case of zero-one valued "scoring" functions (i.e., classifiers) easily fits into our framework. We place no additional restrictions on the scoring functions and, in particular, they do not necessarily represent "learning" algorithms, per se. Hence, we are dealing with ensemble methods broadly speaking, rather than ensemble learners in a strict sense. We assume that the ensemble method itself, as opposed to the scoring functions that comprise the ensemble, is for classification, and hence ensemble functions are zero-one valued.

Let S_1, S_2, . . ., S_n be training samples, and let x_i be a feature vector of length m, where the features that comprise x_i are extracted from sample S_i.
We collect the feature vectors for all training samples into an $m \times n$ matrix that we denote as

$$A = \begin{pmatrix} V_1 & V_2 & \cdots & V_n \end{pmatrix} \qquad (1)$$

where each $V_i$ is a column of the matrix $A$. Note that each row of $A$ corresponds to a specific feature type, while column $i$ of $A$ corresponds to the features extracted from the training sample $X_i$. Let $f : \mathbb{R}^m \to \mathbb{R}$ be a scoring function. Such a scoring function will be determined based on training data, where this training data is given by a feature matrix $A$, as in equation (1). A scoring function will generally also depend on a set of parameters that we denote as

$$\Lambda = \begin{pmatrix} \lambda_1 & \lambda_2 & \ldots & \lambda_k \end{pmatrix} \qquad (2)$$

The score generated by the scoring function $f$ when applied to sample $X$ is given by $f(X; A, \Lambda)$, where we have explicitly included the dependence on the training data $A$ and the function parameters $\Lambda$. For any scoring function $f$, there is a corresponding classification function that we denote as $\hat{f} : \mathbb{R}^m \to \{0, 1\}$. That is, once we determine a threshold to apply to the scoring function $f$, it provides a binary classification function that we denote as $\hat{f}$. As with $f$, we explicitly indicate the dependence on training data and the function parameters $\Lambda$ by writing $\hat{f}(X; A, \Lambda)$. For example, each training sample $X_i$ could be a malware executable file, where all of the $X_i$ belong to the same malware family. Then an example of an extracted feature would be the opcode histogram, that is, the relative frequencies of the mnemonic opcodes that are obtained when $X_i$ is disassembled. The scoring function $f$ could, for example, be based on a hidden Markov model that is trained on the feature matrix $A$ as given in equation (1), with the parameters $\Lambda$ in equation (2) being the initial values that are selected when training the HMM. In its most general form, an ensemble method for a binary classification problem can be viewed as a function $g : \mathbb{R}^{\ell} \to \{0, 1\}$ of the form

$$g\bigl(f_1(X; A_1, \Lambda_1), f_2(X; A_2, \Lambda_2), \ldots, f_{\ell}(X; A_{\ell}, \Lambda_{\ell})\bigr) \qquad (3)$$

That is, the ensemble method defined by the function $g$ produces a classification based on the scores $f_1, f_2, \ldots, f_{\ell}$, where scoring function $f_i$ is trained using the data $A_i$ and parameters $\Lambda_i$.

Classifying Ensemble Classifiers

From a high level perspective, ensemble classifiers can be categorized as bagging, boosting, stacking, or some combination thereof [20, 35, 62]. In this section, we briefly introduce each of these general classes of ensemble methods and give their generic formulation in terms of equation (3).

Bagging

In bootstrap aggregation (i.e., bagging), different subsets of the data or features (or both) are used to generate different scores. The results are then combined in some way, such as a sum of the scores, or a majority vote of the corresponding classifications. For bagging we assume that the same scoring method is used for all scores in the ensemble. For example, bagging is used when generating a random forest, where each individual scoring function is based on a decision tree structure. One benefit of bagging is that it reduces overfitting, which is a particular problem for decision trees. For bagging, the general equation (3) is restricted to

$$g\bigl(f(X; A_1, \Lambda), f(X; A_2, \Lambda), \ldots, f(X; A_{\ell}, \Lambda)\bigr) \qquad (4)$$

That is, in bagging, each scoring function is essentially the same, but each is trained on a different feature set. For example, suppose that we collect all available feature vectors into a matrix $A$ as in equation (1). Then bagging based on subsets of samples would correspond to generating $A_i$ by deleting a subset of the columns of $A$. On the other hand, bagging based on features would correspond to generating $A_i$ by deleting a subset of the rows of $A$. Of course, we can easily extend this to bagging based on both the data and features simultaneously, as in a random forest. In Section 2.4, we discuss specific examples of bagging.

Boosting

Boosting is a process whereby distinct classifiers are combined to produce a stronger classifier. Generally, boosting deals with weak classifiers that are combined in an adaptive or iterative manner so as to improve the overall classifier.
We restrict our definition of boosting to cases where the classifiers are closely related, in the sense that they differ only in terms of parameters. From this perspective, boosting can be viewed as "bagging" based on classifiers, rather than data or features. That is, all of the scoring functions are reparameterized versions of the same scoring technique. Under this definition of boosting, the general equation (3) becomes

$$g\bigl(f(X; A, \Lambda_1), f(X; A, \Lambda_2), \ldots, f(X; A, \Lambda_{\ell})\bigr) \qquad (5)$$

That is, the scoring functions differ only by re-parameterization, while the scoring data and features do not change. Below, in Section 2.4, we discuss specific examples of boosting; in particular, we discuss the most popular method of boosting, AdaBoost. In addition, we show that some other popular techniques fit our definition of boosting.

Stacking

Stacking is an ensemble method that combines disparate scores using a meta-classifier [35]. In this generic form, stacking is defined by the general case in equation (3), where the scoring functions can be (and typically are) significantly different. Note that from this perspective, stacking is easily seen to be a generalization of both bagging and boosting. Because stacking generalizes both bagging and boosting, it is not surprising that stacking based ensemble methods can outperform bagging and boosting methods, as evidenced by recent machine learning competitions, including the KDD Cup [15], the Kaggle competition [14], as well as the infamous Netflix prize [26]. However, this is not the end of the story, as efficiency and practicality are often ignored in such competitions, whereas in practice, it is virtually always necessary to consider such issues. Of course, the appropriate tradeoffs will depend on the specifics of the problem at hand. Our empirical results in Section 3 provide some insights into these tradeoff issues within the malware analysis domain.
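To make the distinction concrete, the following minimal sketch (ours, not from the paper's implementation; the toy scoring function `f` and the parameter `lam` are illustrative assumptions) shows how equations (3), (4), and (5) differ only in what varies across the ensemble:

```python
# Toy illustration of equations (3)-(5). The scoring function f and its
# parameter lam are invented; f scores a point x by its (negated, scaled)
# distance to the mean of the training data A.

def f(x, A, lam):
    mean = sum(A) / len(A)
    return -lam * abs(x - mean)

A = [1.0, 2.0, 3.0, 4.0, 5.0]  # "feature matrix" with one feature per sample

def bagged_score(x, subsets, lam=1.0):
    """Bagging, eq. (4): same f and parameters, different data subsets A_i."""
    return sum(f(x, A_i, lam) for A_i in subsets) / len(subsets)

def boosted_score(x, A, lams):
    """Boosting, eq. (5): same f and data, different parameters Lambda_i."""
    return sum(f(x, A, lam) for lam in lams) / len(lams)

def stacked_class(x, scorers, threshold=-1.0):
    """Stacking, eq. (3): disparate scorers combined by g (here, a max)."""
    return 1 if max(s(x) for s in scorers) >= threshold else 0
```

For instance, `bagged_score(3.0, [A[:3], A[2:]])` averages two models fit on overlapping halves of the data, while `boosted_score` re-parameterizes a single model fit on all of it.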
In the next section, we discuss concrete examples of bagging, boosting, and stacking techniques. Then in Section 3 we present our experimental results, which include selected bagging, boosting, and stacking architectures.

Ensemble Classifier Examples

Here, we consider a variety of ensemble methods and discuss how each fits into the general framework presented above. We begin with a few fairly generic examples, and then discuss several more specific examples.

Maximum

In this case, we have

$$g\bigl(f_1(X; A_1, \Lambda_1), f_2(X; A_2, \Lambda_2), \ldots, f_{\ell}(X; A_{\ell}, \Lambda_{\ell})\bigr) = \max_i\{f_i(X; A_i, \Lambda_i)\} \qquad (6)$$

Averaging

Averaging is defined by

$$g\bigl(f_1(X; A_1, \Lambda_1), f_2(X; A_2, \Lambda_2), \ldots, f_{\ell}(X; A_{\ell}, \Lambda_{\ell})\bigr) = \frac{1}{\ell} \sum_{i=1}^{\ell} f_i(X; A_i, \Lambda_i) \qquad (7)$$

Voting

Voting could be used as a form of boosting, provided that no bagging is involved (i.e., the same data and features are used in each case). Voting is also applicable to stacking, and is generally applied in such a mode, or at least with significant diversity in the scoring functions, since we want limited correlation when voting. In the case of stacking, a simple majority vote is of the form

$$g\bigl(\hat{f}_1(X; A_1, \Lambda_1), \hat{f}_2(X; A_2, \Lambda_2), \ldots, \hat{f}_{\ell}(X; A_{\ell}, \Lambda_{\ell})\bigr) = \operatorname{maj}\bigl(\hat{f}_1(X; A_1, \Lambda_1), \hat{f}_2(X; A_2, \Lambda_2), \ldots, \hat{f}_{\ell}(X; A_{\ell}, \Lambda_{\ell})\bigr)$$

where "maj" is the majority vote function. Note that the majority vote is well defined in this case, provided that $\ell$ is odd; if $\ell$ is even, we can simply flip a coin in case of a tie. As an aside, we note that it is easy to see why we want to avoid correlation when voting is used as a combining function. Consider the following example from [47], where classifications are written as vectors in which each 1 indicates a correct classification and each 0 an incorrect classification. Highly correlated classifiers gain little from a majority vote, whereas three relatively uncorrelated classifiers that are only 80%, 70% and 60% accurate, respectively, can yield the majority vote $g' = (1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 0\ 1)$, which is 90% accurate.

ML-Based Combination

Recall that the most general formulation of an ensemble classifier is given in equation (3).
In this formulation, we can select the function $g$ based on a machine learning technique, which is applied to the individual scores $f_i(X; A_i, \Lambda_i)$. In the remainder of this section, we consider specific ensemble examples involving machine learning techniques.

AdaBoost

Given a collection of (weak) classifiers $c_1, c_2, \ldots, c_{\ell}$, AdaBoost is an iterative algorithm that generates a series of (generally, stronger) classifiers $C_1, C_2, \ldots, C_{\ell}$, based on the classifiers $c_i$. Each classifier $C_k$ is determined from the previous classifier by the simple linear extension

$$C_k(X) = C_{k-1}(X) + \alpha_k c_k(X)$$

and the final classifier is given by $C = C_{\ell}$. Note that at each iteration, we include a previously unused classifier $c_k$ from the set of (weak) classifiers and determine a new weight $\alpha_k$. A greedy approach is used when selecting $c_k$, but it is not a hill climb, so that results might get worse at any step in the AdaBoost process. From this description, we see that the AdaBoost algorithm fits the form in equation (5).

SVM as Meta-Classifier

It is natural to use an SVM as a meta-classifier to combine scores [38]. For example, in [34], an SVM is used to generate a malware classifier based on several machine learning and statistical based malware scores. In [34], it is shown that the resulting SVM classifier consistently outperforms any of the component scores, and the differences are most pronounced in the most challenging cases. The use of SVM in this meta-classifier mode can be viewed as a general stacking method. Thus, this SVM technique is equivalent to equation (3), where the function $g$ is simply an SVM classifier based on the component scores $f_i(X; A_i, \Lambda_i)$, for $i = 1, 2, \ldots, \ell$.

HMM with Random Restarts

A hidden Markov model can be viewed as a discrete hill climb technique [37, 38]. As with any hill climb, when training an HMM we are only assured of a local maximum, and we can often significantly improve our results by executing the hill climb multiple times with different initial values, selecting the best of the resulting models.
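The benefit of restarts can be seen with a toy discrete hill climb (our illustrative stand-in for HMM training; the objective and neighborhood below are invented, not an HMM likelihood):

```python
# Toy hill climb over the integers 0..6, where the "score" plays the role
# of a model's likelihood. The objective has a local maximum at x = 1 and
# the global maximum at x = 5, so a single run can get stuck.

SCORES = {0: 1, 1: 3, 2: 2, 3: 0, 4: 5, 5: 9, 6: 4}

def objective(x):
    return SCORES.get(x, -1)

def hill_climb(start):
    """Greedily move to the best neighbor until no neighbor improves."""
    x = start
    while True:
        best = max((x - 1, x, x + 1), key=objective)
        if best == x:
            return x
        x = best

def best_of_restarts(starts):
    """The max-combiner of equation (6): keep the highest-scoring run."""
    return max((hill_climb(s) for s in starts), key=objective)
```

Starting only from 0 yields the local maximum at 1 (score 3), while restarting from several points, e.g. `best_of_restarts([0, 2, 4])`, recovers the global maximum at 5 (score 9).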
For example, in [51] it is shown that an HMM can be highly effective for breaking classic substitution ciphers and, furthermore, by using a large number of random restarts, we can significantly increase the success rate in the most difficult cases. The work in [51] is closely related to that in [7], where such an approach is used to analyze the unsolved Zodiac 340 cipher. From the perspective considered in this paper, an HMM with random restarts can be seen as a special case of boosting. If we simply select the best model, then the "combining" function is particularly simple, and is given by

$$g\bigl(f(X; A, \Lambda_1), f(X; A, \Lambda_2), \ldots, f(X; A, \Lambda_{\ell})\bigr) = \max_i\{f(X; A, \Lambda_i)\} \qquad (8)$$

Here, each scoring function is an HMM, where the trained models differ based only on different initial values. We see that equation (8) is a special case of equation (6). However, the "max" in equation (8) is the maximum over the HMM model scores, not the maximum over any particular set of input values. That is, we select the highest scoring model and use it for scoring. Of course, we could use other combining functions, such as an average or majority vote of the corresponding classifiers. In any case, since there is a score associated with each model generated by an HMM, any such combining function is well-defined.

Bagged Perceptron

Like a linear SVM, a perceptron will separate linearly separable data. However, unlike an SVM, a perceptron will not necessarily produce the optimal separation, in the sense of maximizing the margin. If we generate multiple perceptrons, each with different random initial weights, and then average these models, the resulting classifier will tend to be nearer to optimal, in the sense of maximizing the margin [21, 47]. That is, we construct a classifier

$$g\bigl(f(X; A, \Lambda_1), f(X; A, \Lambda_2), \ldots, f(X; A, \Lambda_{\ell})\bigr) = \frac{1}{\ell} \sum_{i=1}^{\ell} f(X; A, \Lambda_i) \qquad (9)$$

where $f$ is a perceptron and each $\Lambda_i$ represents a set of initial values. We see that equation (9) is a special case of the averaging example given in equation (7).
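A minimal sketch of this model averaging follows; the weight values are hypothetical, chosen only to illustrate averaging the parameters rather than the predictions:

```python
# Average the weight vectors and biases of several perceptrons into a
# single model, then classify with the averaged model.

def perceptron_predict(w, b, x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else 0

# Three perceptrons "trained" from different random initial weights
# (hypothetical (w, b) values):
models = [([1.0, -0.5], 0.2), ([0.8, -0.7], 0.0), ([1.2, -0.3], 0.1)]

# Average the parameters to obtain a single separating hyperplane.
w_avg = [sum(w[i] for w, _ in models) / len(models) for i in range(2)]
b_avg = sum(b for _, b in models) / len(models)
```

Note that `w_avg` and `b_avg` define one averaged hyperplane; classification, e.g. `perceptron_predict(w_avg, b_avg, [1.0, 1.0])`, happens only after the models are combined.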
Also, we note that in this sum, we are averaging the perceptron models, not the classifications generated by the models. Although this technique is sometimes referred to as "bagged" perceptrons [47], by our criteria, it is a boosting scheme. That is, the "bagging" here is done with respect to parameters of the scoring functions, which is our working definition of boosting.

Bagged Hidden Markov Model

Like the HMM with random restarts example given above, in this case, we generate multiple HMMs. However, here we leave the model parameters unchanged, and simply train each on a subset of the data. We could then average the model scores (for example) as a way of combining the HMMs into a single score, from which we can easily construct a classifier.

Bagged and Boosted Hidden Markov Model

Of course, we could combine the HMM with random restarts discussed in Section 2.4.7 with the bagging approach discussed in the previous section. This process would yield an HMM-based ensemble technique that combines both bagging and boosting.

Experiments and Results

In this section, we consider a variety of experiments that illustrate various ensemble techniques. These experiments involve malware classification, based on a challenging dataset that includes a large number of samples from a significant number of malware families.

Dataset and Features

Our dataset consists of samples from the 21 malware families listed in Table 2. These families represent various types of malware, including Trojans, worms, backdoors, password stealers, so-called VirTools, and so on. Each of the malware families in Table 2 is summarized below. Adload downloads an executable file, stores it remotely, executes the file, and disables proxy settings [41]. Agent downloads Trojans or other software from a remote server [42]. Allaple is a worm that can be used as part of a denial of service (DoS) attack [52].
BHO can perform a variety of actions, guided by an attacker [45]. Bifrose is a backdoor Trojan that enables a variety of attacks [4]. CeeInject uses advanced obfuscation to avoid being detected by antivirus software [48]. Cycbot connects to a remote server, exploits vulnerabilities, and spreads through backdoor ports [5]. FakeRean pretends to scan the system, notifies the user of supposed issues, and asks the user to pay to clean the system [53]. Hotbar is adware that shows ads on webpages and installs additional adware [1]. Injector loads other processes to perform attacks on its behalf [49]. OnLineGames steals login information of online games and tracks user keystroke activity [28]. Renos downloads software that claims the system has spyware and asks for a payment to remove the nonexistent spyware [43]. Rimecud is a sophisticated family of worms that performs a variety of activities and can spread through instant messaging [54]. Small is a family of Trojans that downloads unwanted software. This downloaded software can perform a variety of actions, such as installing a fake security application [44]. Toga is a Trojan that can perform a variety of actions of the attacker's choice [46]. VB is a backdoor that enables an attacker to gain access to a computer [6]. VBinject is a generic description of malicious files that are obfuscated in a specific manner [50]. Vobfus is a worm that downloads malware and spreads through USB drives or other removable devices [55]. Vundo displays pop-up ads and may download files. It uses advanced techniques to defeat detection [56]. Winwebsec displays alerts that ask the user for money to fix supposed issues [22]. Zbot is installed through email and shares a user's personal information with attackers. In addition, Zbot can disable a firewall [23]. From each available malware sample, we extract the first 1000 mnemonic opcodes using the reversing tool Radare2 (also known as R2) [29].
We discard any malware executable that yields fewer than 1000 opcodes, as well as a number of executables that were found to be corrupted. The resulting opcode sequences, each of length 1000, serve as the feature vectors for our machine learning experiments. Table 3 gives the number of samples (per family) from which we successfully obtained opcode feature vectors. Note that our dataset contains a total of 9725 samples from the 21 malware families and that the dataset is highly imbalanced; the number of samples per family varies from a low of 129 to a high of nearly 1000.

Metrics

The metrics used to quantify the success of our experiments are accuracy, balanced accuracy, precision, recall, and the F1 score. Accuracy is simply the ratio of correct classifications to the total number of classifications. In contrast, the balanced accuracy is the average accuracy per family. Precision, which is also known as the positive predictive value, is the number of true positives divided by the sum of the true positives and false positives. That is, the precision is the fraction of samples classified as positive that are actually positive. Recall, which is also known as the true positive rate or sensitivity, is computed by dividing the number of true positives by the number of true positives plus the number of false negatives. That is, the recall is the fraction of positive samples that are classified as such. The F1 score is computed as

$$\text{F1} = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$$

which is the harmonic mean of the precision and recall.

Software

The software packages used in our experiments include hmmlearn [11], XGBoost [57], Keras [16], TensorFlow [39], and scikit-learn [30], as indicated in Table 4. In addition, we use NumPy [27] for linear algebra and various tools available in the package scikit-learn (also known as sklearn) for general data processing. These packages are all widely used in machine learning.
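These metrics can be computed directly from classification counts; the following sketch uses made-up labels for a toy binary case (our example, not data from the paper):

```python
# Toy binary example computing the metrics of the Metrics section:
# accuracy, balanced accuracy, precision, recall, and F1 score.

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)                            # true positive rate
f1 = 2 * precision * recall / (precision + recall)
balanced_accuracy = (recall + tn / (tn + fp)) / 2  # mean per-class accuracy
```

Here the accuracy is 0.8 while the balanced accuracy is (0.75 + 5/6)/2, roughly 0.792, showing how the two can differ on imbalanced data.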
Overview of Experiments

For all of our experiments, we use opcode sequences of length 1000 as features. For CNNs, the sequences are interpreted as images. We consider three broad categories of experiments. First, we apply "standard" machine learning techniques. These experiments serve as a baseline for comparison for our subsequent experiments. Among other things, these standard experiments show that the malware classification problem that we are dealing with is challenging. We also conduct bagging and boosting experiments based on a subset of the techniques considered in our baseline standard experiments. These results demonstrate that both bagging and boosting can provide some improvement over our baseline techniques. Finally, we consider a set of stacking experiments, where we restrict our attention to simple voting schemes, all of which are based on architectures previously considered in this paper. Although these are very basic stacking architectures, they clearly show the potential benefit of stacking multiple techniques.

Standard Techniques

For our "standard" techniques, we test several machine learning methods that are typically used individually. Specifically, we consider hidden Markov models (HMM), convolutional neural networks (CNN), random forests, and long short-term memory (LSTM) networks. The parameters that we have tested in each of these cases are listed in Table 5, with those that gave the best results in boldface. From Table 5, we note that a significant number of parameter combinations were tested in each case. For example, in the case of our random forest model, we tested $5^3 \cdot 3 \cdot 6 = 2250$ different combinations of parameters. The confusion matrices for all of the experiments in this section can be found in the Appendix in Figure 2 (a) through Figure 2 (d). We present the results of all of these experiments, in terms of the metrics discussed previously (i.e., accuracy, balanced accuracy, precision, recall, and F1 score), in Section 3.9, below.
Bagging Experiments

Recall from our discussion above that we use the term bagging to mean a multi-model approach where the individual models are trained with the same technique and essentially the same parameters, but different subsets of the data or features. In contrast, we use boosting to refer to multi-model cases where the data and features are essentially the same and the models are of the same type, with the model parameters varied. We will use AdaBoost and XGBoost results to serve as representative examples of boosting. We also consider bagging experiments (in the sense described in the previous paragraph) involving each of the HMM, CNN, and LSTM architectures. The results of these three distinct bagging experiments, in the form of confusion matrices, are given in Figure 3 in the Appendix. In terms of the metrics discussed above, the results of these experiments are summarized in Section 3.9, below.

Boosting Experiments

As representative examples of boosting techniques, we consider AdaBoost and XGBoost. In each case, we experiment with a variety of parameters, as listed in Table 6. The parameter selections that yielded the best results are highlighted in boldface. Confusion matrices for these two boosting experiments are given in Figure 4 in the Appendix. The results of these experiments are summarized in Section 3.9, below, in terms of accuracy, balanced accuracy, and so on.

Voting Experiments

Since there exists an essentially unlimited number of possible stacking architectures, we have limited our attention to one of the simplest, namely, voting. These results serve as a lower bound on the results that can be obtained with stacking architectures. We consider six different stacking architectures. These stacking experiments can be summarized as follows. CNN consists of the plain and bagged CNN models discussed above. The confusion matrix for this experiment is given in Figure 5 (a). LSTM consists of the plain and bagged LSTM models discussed above.
The confusion matrix for this experiment is given in Figure 5 (b). Bagged neural networks combines our bagged CNN and bagged LSTM models. The confusion matrix for this experiment is given in Figure 5 (c). Classic techniques combines (via voting) all of the classic models considered above, namely, HMM, bagged HMM, random forest, AdaBoost, and XGBoost. The confusion matrix for this experiment is given in Figure 5 (d). All neural networks consists of all of the CNN and LSTM models, bagged and plain. The confusion matrix for this experiment is given in Figure 5 (e). All models combines all of the classic and neural network models into one voting scheme. The confusion matrix for this experiment is given in Figure 5 (f). In the next section, we present the results for each of the voting experiments discussed in this section in terms of our various metrics. These metrics enable us to directly compare all of our experimental results.

Table 7 summarizes the results of all of the experiments discussed above, in terms of the following metrics: accuracy, balanced accuracy, precision, recall, and F1 score. These metrics have been introduced in Section 3.1, above. In Table 7, the best result for each type of experiment is in boldface, with the best results overall also being boxed. We see that a voting strategy based on all of the bagged neural network techniques gives us the best result for each of the five statistics that we have computed.

Discussion

Since our dataset is highly imbalanced, we consider the balanced accuracy as the best measure of success. The balanced accuracy results in Table 7 are given in the form of a bar graph in Figure 1. Note that the results in Figure 1 clearly show that stacking techniques are beneficial, as compared to the corresponding "standard" techniques. Stacking not only yields the best results, but it dominates in all categories.
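For reference, the voting combiner underlying these experiments can be sketched as follows (our sketch; the tie-breaking rule, first model wins, is an assumption, as the paper does not specify one):

```python
from collections import Counter

# Multi-class majority vote over component model predictions: return the
# most common predicted family, breaking ties in favor of the first model.

def majority_vote(predictions):
    counts = Counter(predictions)
    top = max(counts.values())
    for p in predictions:  # earliest model among the tied labels wins
        if counts[p] == top:
            return p
```

For example, `majority_vote(["Zbot", "Zbot", "Vundo"])` returns "Zbot", and with an even split the first model's prediction is kept.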
We note that five of the six stacking experiments perform better than any of the standard, bagging, or boosting experiments. This is particularly noteworthy since we only considered a simple stacking approach. As a result, our stacking experiments likely provide a poor lower bound on stacking in general, and more advanced stacking techniques may improve significantly over the results that we have obtained.

Conclusion and Future Work

In this paper, we have attempted to impose some structure on the field of ensemble learning. We showed that combination architectures can be classified as either bagging, boosting, or in the more general case, stacking. We then provided experimental results involving a challenging malware dataset to illustrate the potential benefits of ensemble architectures. Our results clearly show that ensembles improve on standard techniques, with respect to our specific dataset. Of course, in principle, we expect such combination architectures to outperform standard techniques, but it is instructive to confirm this empirically, and to show that the improvement can be substantial. These results make it clear that there is a reason why complex stacking architectures win machine learning competitions. However, stacking models are not without potential pitfalls. As the architectures become more involved, training can become impractical. Furthermore, scoring can also become prohibitively costly, especially if large numbers of features are used in complex schemes involving extensive use of bagging or boosting. For future work, it would be useful to quantify the tradeoff between accuracy and model complexity. While stacking will generally improve results, marginal improvements in accuracy that come at great additional cost in training and scoring are unlikely to be of any value in real world applications.
More concretely, future work involving additional features would be very interesting, as it would allow for a more thorough analysis of bagging, and it would enable us to draw firmer conclusions regarding the relative merits of bagging and boosting. Of course, more complex classes of stacking techniques could also be considered.

Appendix: Confusion Matrices

Figure 1: Balanced accuracy results
Figure 2: Confusion matrices for standard techniques
Figure 5: Confusion matrices for voting ensembles

Table 1: Security research papers using ensemble classifiers (Authors, Application, Features, Ensemble)

Table 2: Type of each malware family

Index  Family           Type               Index  Family          Type
1      Adload [41]      Trojan Downloader  12     Renos [43]      Trojan Downloader
2      Agent [42]       Trojan             13     Rimecud [54]    Worm
3      Allaple [52]     Worm               14     Small [44]      Trojan Downloader
4      BHO [45]         Trojan             15     Toga [46]       Trojan
5      Bifrose [4]      Backdoor           16     VB [6]          Backdoor
6      CeeInject [48]   VirTool            17     VBinject [50]   VirTool
7      Cycbot [5]       Backdoor           18     Vobfus [55]     Worm
8      FakeRean [53]    Rogue              19     Vundo [56]      Trojan Downloader
9      Hotbar [1]       Adware             20     Winwebsec [22]  Rogue
10     Injector [49]    VirTool            21     Zbot [23]       Password Stealer
11     OnLineGames [28] Password Stealer

Table 3: Number of samples per malware family

Index  Family       Samples  Index  Family       Samples
1      Adload       162      12     Renos        532
2      Agent        184      13     Rimecud      153
3      Allaple      986      14     Small        180
4      BHO          332      15     Toga         406
5      Bifrose      156      16     VB           346
6      CeeInject    873      17     VBinject     937
7      Cycbot       597      18     Vobfus       929
8      FakeRean     553      19     Vundo        762
9      Hotbar       129      20     Winwebsec    837
10     Injector     158      21     Zbot         303
11     OnLineGames  210             Total        9725

Table 4: Software used in experiments

Technique      Software
HMM            hmmlearn
XGBoost        XGBoost
AdaBoost       sklearn
CNN            Keras, TensorFlow
LSTM           Keras, TensorFlow
Random Forest  sklearn

Table 5:
Parameters for standard techniques

Technique      Parameters         Values tested
HMM            n_components       [1, 2, 5, 10]
               n_iter             [50, 100, 200, 300, 500]
               tol                [0.01, 0.5]
CNN            learning rate      [0.001, 0.0001]
               batch size         [32, 64, 128]
               epochs             [50, 75, 100]
Random Forest  n_estimators       [100, 200, 300, 500, 800]
               min_samples_split  [2, 5, 10, 15, 20]
               min_samples_leaf   [1, 2, 5, 10, 15]
               max_features       [auto, sqrt, log2]
               max_depth          [30, 40, 50, 60, 70, 80]
LSTM           layers             [1, 3]
               directional        [uni-dir, bi-dir]
               learning rate      [0.01]
               batch size         [1, 16, 32]
               epochs             [20]

Table 6: Parameters for boosting techniques

Technique  Parameters     Values tested
AdaBoost   n_estimators   [100, 200, 300, 500, 800, 1000]
           learning_rate  [0.5, 1.0, 1.5, 2.0]
           algorithm      [SAMME, SAMME.R]
XGBoost    eta            [0.05, 0.1, 0.2, 0.3, 0.5]
           max_depth      [1, 2, 3, 4]
           objective      [multi:softprob, binary:logistic]
           steps          [1, 5, 10, 20, 50]

Table 7: Comparison of experimental results

Experiments  Case                    Accuracy  Balanced accuracy  Precision  Recall  F1 score
Standard     HMM                     0.6717    0.6336             0.7325     0.6717  0.6848
             CNN                     0.8211    0.7245             0.8364     0.8211  0.8104
             Random Forest           0.7549    0.6610             0.7545     0.7523  0.7448
             LSTM                    0.8410    0.7185             0.7543     0.7185  0.8145
Bagging      Bagged HMM              0.7168    0.6462             0.7484     0.7168  0.7165
             Bagged CNN              0.8910    0.8105             0.9032     0.8910  0.8838
             Bagged LSTM             0.8602    0.7754             0.8571     0.8602  0.8549
Boosting     AdaBoost                0.5378    0.4060             0.5231     0.5378  0.5113
             XGBoost                 0.7472    0.6636             0.7371     0.7472  0.7285
Voting       Classic                 0.8766    0.8079             0.8747     0.8766  0.8719
             CNN                     0.9260    0.8705             0.9321     0.9260  0.9231
             LSTM                    0.8560    0.7470             0.8511     0.8560  0.8408
             Bagged neural networks  0.9337    0.8816             0.9384     0.9337  0.9313
             All neural networks     0.9208    0.8613             0.9284     0.9208  0.9171
             All models              0.9188    0.8573             0.9249     0.9188  0.9154

References

[1] Adware:Win32/Hotbar. https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=Adware:Win32/Hotbar&threatId=6204.
[2] Mamoun Alazab, Sitalakshmi Venkatraman, Paul Watters, and Moutaz Alazab. Zero-day malware detection based on supervised learning algorithms of API call signatures. In Proceedings of the Ninth Australasian Data Mining Conference, volume 121 of AusDM '11, pages 171-182. Australian Computer Society, 2011.

[3] Xavier Amatriain and Justin Basilico. Netflix recommendations: Beyond the 5 stars (part 1). https://medium.com/netflix-techblog/netflix-recommendations-beyond-the-5-stars-part-1-55838468f429, 2012.

[4] Backdoor:Win32/Bifrose. https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=Backdoor:Win32/Bifrose&threatId=-2147479537.

[5] Backdoor:Win32/Cycbot.G. https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=Backdoor:Win32/Cycbot.G.

[6] Backdoor:Win32/VB. https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=Backdoor:Win32/VB&threatId=7275.

[7] Taylor Berg-Kirkpatrick and Dan Klein. Decipherment with a million random restarts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, pages 874-878, 2013.
Lei Prakash Mandayam Comar, Sabyasachi Liu, Pang-Ning Saha, Antonio Tan, Nucci, 2013 Proceedings IEEE INFOCOM. IEEEPrakash Mandayam Comar, Lei Liu, Sabyasachi Saha, Pang-Ning Tan, and Antonio Nucci. Com- bining supervised and unsupervised learning for zero-day malware detection. In 2013 Proceedings IEEE INFOCOM, pages 2022-2030. IEEE, 2013. Android malware detection based on system calls. Marko Dimjaševic, Simone Atzeni, Ivo Ugrina, Zvonimir Rakamaric, UUCS-15-003UtahSchool of Computing, University of Utah, Salt Lake CityTechnical ReportMarko Dimjaševic, Simone Atzeni, Ivo Ugrina, and Zvonimir Rakamaric. Android malware detection based on system calls. Technical Report UUCS-15-003, School of Computing, University of Utah, Salt Lake City, Utah, 2015. A malware detection algorithm based on multi-view fusion. Shanqing Guo, Qixia Yuan, Fengbo Lin, Fengyu Wang, Tao Ban, International Conference on Neural Information Processing, ICONIP 2010. SpringerShanqing Guo, Qixia Yuan, Fengbo Lin, Fengyu Wang, and Tao Ban. A malware detection algo- rithm based on multi-view fusion. In International Conference on Neural Information Processing, ICONIP 2010, pages 259-266. Springer, 2010. Pindroid: A novel android malware detection system using ensemble learning methods. Fauzia Idrees, Muttukrishnan Rajarajan, Mauro Conti, M Thomas, Yogachandran Chen, Rahulamathavan, Computers & Security. 68Fauzia Idrees, Muttukrishnan Rajarajan, Mauro Conti, Thomas M Chen, and Yogachandran Rahulamathavan. Pindroid: A novel android malware detection system using ensemble learning methods. Computers & Security, 68:36-46, 2017. Byte level -gram analysis for malware detection. Sachin Jain, Yogesh Kumar Meena, Computer Networks and Intelligent Computing. SpringerSachin Jain and Yogesh Kumar Meena. Byte level -gram analysis for malware detection. In Computer Networks and Intelligent Computing, pages 51-59. Springer, 2011. Welcome to Kaggle competitions. Kaggle, Kaggle. Welcome to Kaggle competitions. 
https://www.kaggle.com/competitions, 2018. . KDD Cup of fresh air. KDD Cup of fresh air. https://biendata.com/competition/kdd_2018/, 2018. Keras: The Python deep learning API. Keras: The Python deep learning API. https://keras.io/. Fractal based adaptive boosting algorithm for cognitive detection of computer malware. Sana Muhammad Salman Khan, Siddiqui, D Robert, Ken Mcleod, Witold Ferens, Kinsner, 15th International Conference on Cognitive Informatics & Cognitive Computing, ICCI*CC. IEEEMuhammad Salman Khan, Sana Siddiqui, Robert D McLeod, Ken Ferens, and Witold Kinsner. Fractal based adaptive boosting algorithm for cognitive detection of computer malware. In 15th International Conference on Cognitive Informatics & Cognitive Computing, ICCI*CC, pages 50- 59. IEEE, 2016. On combining classifiers. Josef Kittler, Mohamad Hatef, P W Robert, Jiri Duin, Matas, IEEE Transactions on Pattern Analysis and Machine Intelligence. 203Josef Kittler, Mohamad Hatef, Robert P. W. Duin, and Jiri Matas. On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(3):226-239, March 1998. Discriminant malware distance learning on structural information for automated malware classification. Deguang Kong, Guanhua Yan, Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '13. the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '13ACMDeguang Kong and Guanhua Yan. Discriminant malware distance learning on structural informa- tion for automated malware classification. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '13, pages 1357-1365. ACM, 2013. Combining Pattern Classifiers: Methods and Algorithms. I Ludmila, Kuncheva, Ludmila I. Kuncheva. Combining Pattern Classifiers: Methods and Algorithms. . Wiley, Hoboken, New JerseyWiley, Hoboken, New Jersey, 2004. 
https://pdfs.semanticscholar.org/453c/ Investigating machine learning methods in recommender systems. Marios Michailidis, Thesis. University College LondonMarios Michailidis. Investigating machine learning methods in recommender systems. Thesis, University College London, 2017. Microsoft malware protection center. Microsoft malware protection center, winwebsec. https://www.microsoft.com/security/ portal/threat/encyclopedia/entry.aspx?Name=Win32%2fWinwebsec. Symantec security response. Symantec security response, zbot. http://www.symantec.com/security_response/writeup. jsp?docid=2010-011016-3514-99. Native malware detection in smartphones with Android OS using static analysis, feature selection and ensemble classifiers. Salvador Morales-Ortega, Ponciano Jorge Escamilla-Ambrosio, Abraham Rodriguez-Mota, Lilian D Coronado-De-Alba , 11th International Conference on Malicious and Unwanted Software, MALWARE 2016. IEEESalvador Morales-Ortega, Ponciano Jorge Escamilla-Ambrosio, Abraham Rodriguez-Mota, and Lilian D Coronado-De-Alba. Native malware detection in smartphones with Android OS using static analysis, feature selection and ensemble classifiers. In 11th International Conference on Malicious and Unwanted Software, MALWARE 2016, pages 1-8. IEEE, 2016. Dllminer: structural mining for malware detection. Masoud Narouei, Mansour Ahmadi, Giorgio Giacinto, Hassan Takabi, Ashkan Sami, Security and Communication Networks8Masoud Narouei, Mansour Ahmadi, Giorgio Giacinto, Hassan Takabi, and Ashkan Sami. Dllminer: structural mining for malware detection. Security and Communication Networks, 8(18):3311-3322, 2015. . Netflix Prize, Netflix Prize. https://www.netflixprize.com, 2009. . Numpy, Numpy. https://numpy.org/. Pws:win32/onlinegames. Pws:win32/onlinegames. https://www.microsoft.com/en-us/wdsi/threats/malware- encyclopedia-description?Name=PWS%3AWin32%2FOnLineGames. Radare2: Libre and portable reverse engineering framework. 
Radare2: Libre and portable reverse engineering framework. https://rada.re/n/. Machine learning in Python. scikit-learn: Machine learning in Python. https://scikit-learn.org/stable/. Comparative analysis of voting schemes for ensemble-based malware detection. Khurram Raja, Niklas Shahzad, Lavesson, Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications. 4Raja Khurram Shahzad and Niklas Lavesson. Comparative analysis of voting schemes for ensemble-based malware detection. Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications, 4(1):98-117, 2013. Malware detection by pruning of parallel ensembles using harmony search. Shina Sheen, P Anitha, Sirisha, Pattern Recognition Letters. 3414Shina Sheen, R Anitha, and P Sirisha. Malware detection by pruning of parallel ensembles using harmony search. Pattern Recognition Letters, 34(14):1679-1686, 2013. Frankenstein or The Modern Prometheus. Dent. Mary Wollstonecraft Shelley, 1869Mary Wollstonecraft Shelley. Frankenstein or The Modern Prometheus. Dent, 1869. Support vector machines and malware detection. Tanuvir Singh, Fabio Di Troia, Aaron Visaggio, Thomas H Corrado, Mark Austin, Stamp, Journal of Computer Virology and Hacking Techniques. 124Tanuvir Singh, Fabio Di Troia, Visaggio Aaron Corrado, Thomas H. Austin, and Mark Stamp. Support vector machines and malware detection. Journal of Computer Virology and Hacking Techniques, 12(4):203-212, 2016. Ensemble learning to improve machine learning results. Vadim Smolyakov, Vadim Smolyakov. Ensemble learning to improve machine learning results. https://blog. statsbot.co/ensemble-learning-d1dcd548e936, 2017. Malicious pdf detection using metadata and structural features. Charles Smutz, Angelos Stavrou, Proceedings of the 28th Annual Computer Security Applications Conference, ACSAC 2012. the 28th Annual Computer Security Applications Conference, ACSAC 2012ACMCharles Smutz and Angelos Stavrou. 
Malicious pdf detection using metadata and structural features. In Proceedings of the 28th Annual Computer Security Applications Conference, ACSAC 2012, pages 239-248. ACM, 2012. A revealing introduction to hidden Markov models. Mark Stamp, Mark Stamp. A revealing introduction to hidden Markov models. https://www.cs.sjsu.edu/ stamp/RUA/HMM.pdf, 2004. Introduction to Machine Learning with Applications in Information Security. Mark Stamp, Hall/CRCBoca RatonMark Stamp. Introduction to Machine Learning with Applications in Information Security. Chap- man and Hall/CRC, Boca Raton, 2017. TensorFlow: An end-to-end open source machine learning platform. TensorFlow: An end-to-end open source machine learning platform. https://www.tensorflow. org/. Phishing detection using classifier ensembles. Fergus Toolan, Joe Carthy, eCRIME '09eCrime Researchers Summit. IEEEFergus Toolan and Joe Carthy. Phishing detection using classifier ensembles. In eCrime Re- searchers Summit, 2009, eCRIME '09, pages 1-9. IEEE, 2009. Trojandownloader:win32/adload. Trojandownloader:win32/adload. https://www.microsoft.com/en-us/wdsi/threats/malware- encyclopedia-description?Name=TrojanDownloader%3AWin32%2FAdload. Name=TrojanDownloader:Win32/Agent&ThreatID=14992. Trojandownloader:win32/agentTrojandownloader:win32/agent. https://www.microsoft.com/en-us/wdsi/threats/malware- encyclopedia-description?Name=TrojanDownloader:Win32/Agent&ThreatID=14992. Name=TrojanDownloader:Win32/Renos&threatId=16054. Trojandownloader:win32/renosTrojandownloader:win32/renos. https://www.microsoft.com/en-us/wdsi/threats/malware- encyclopedia-description?Name=TrojanDownloader:Win32/Renos&threatId=16054. Trojandownloader, Name=TrojanDownloader:Win32/Small&threatId=15508. Trojandownloader:win32/small. https://www.microsoft.com/en-us/wdsi/threats/malware- encyclopedia-description?Name=TrojanDownloader:Win32/Small&threatId=15508. Trojan, Name=Trojan:Win32/BHO&threatId=-2147364778. Trojan:win32/bho. 
https://www.microsoft.com/en-us/wdsi/threats/malware- encyclopedia-description?Name=Trojan:Win32/BHO&threatId=-2147364778. Trojan, Name=Trojan:Win32/Toga&threatId=-2147259798. Trojan:win32/toga. https://www.microsoft.com/en-us/wdsi/threats/malware- encyclopedia-description?Name=Trojan:Win32/Toga&threatId=-2147259798. Kaggle ensembling guide. Hendrik Jacob Van Veen, Le Nguyen The, Armando Dat, Segnini, Hendrik Jacob van Veen, Le Nguyen The Dat, and Armando Segnini. Kaggle ensembling guide. https://mlwave.com/kaggle-ensembling-guide/, 2015. Virtool:win32/ceeinject. Virtool:win32/ceeinject. https://www.microsoft.com/en-us/wdsi/threats/malware- encyclopedia-description?Name=VirTool%3AWin32%2FCeeInject. Name=VirTool:Win32/Injector&threatId=-2147401697. Virtool:win32/injectorVirtool:win32/injector. https://www.microsoft.com/en-us/wdsi/threats/malware- encyclopedia-description?Name=VirTool:Win32/Injector&threatId=-2147401697. Name=VirTool:Win32/VBInject&threatId=-2147367171. Virtool:win32/vbinjectVirtool:win32/vbinject. https://www.microsoft.com/en-us/wdsi/threats/malware- encyclopedia-description?Name=VirTool:Win32/VBInject&threatId=-2147367171. Classic cryptanalysis using hidden Markov models. Rohit Vobbilisetty, Fabio Di Troia, Richard M Low, Aaron Visaggio, Mark Stamp, Cryptologia. 411Rohit Vobbilisetty, Fabio Di Troia, Richard M. Low, Corrado Aaron Visaggio, and Mark Stamp. Classic cryptanalysis using hidden Markov models. Cryptologia, 41(1):1-28, 2017. . Win32/Fakerean, Win32/fakerean. https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia- description?Name=Win32/FakeRean. . / Win32, Rimecud, Win32/rimecud. https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia- description?Name=Win32/Rimecud&threatId=. . / Win32, Vobfus, Win32/vobfus. https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia- description?Name=Win32/Vobfus&threatId=. . / Win32, Vundo, Win32/vundo. 
https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia- description?Name=Win32/Vundo&threatId=. Sbmds: an interpretable string based malware detection system using svm ensemble with bagging. Yanfang Ye, Lifei Chen, Dingding Wang, Tao Li, Qingshan Jiang, Min Zhao, Journal in Computer Virology. 54283Yanfang Ye, Lifei Chen, Dingding Wang, Tao Li, Qingshan Jiang, and Min Zhao. Sbmds: an interpretable string based malware detection system using svm ensemble with bagging. Journal in Computer Virology, 5(4):283, 2009. Automatic malware categorization using cluster ensemble. Yanfang Ye, Tao Li, Yong Chen, Qingshan Jiang, Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10. the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10ACMYanfang Ye, Tao Li, Yong Chen, and Qingshan Jiang. Automatic malware categorization us- ing cluster ensemble. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10, pages 95-104. ACM, 2010. High accuracy android malware detection using ensemble learning. Y Suleiman, Sakir Yerima, Igor Sezer, Muttik, IET Information Security. 96Suleiman Y Yerima, Sakir Sezer, and Igor Muttik. High accuracy android malware detection using ensemble learning. IET Information Security, 9(6):313-320, 2015. Malicious codes detection based on ensemble learning. Boyun Zhang, Jianping Yin, Jingbo Hao, Dingxing Zhang, Shulin Wang, International Conference on Autonomic and Trusted Computing. SpringerBoyun Zhang, Jianping Yin, Jingbo Hao, Dingxing Zhang, and Shulin Wang. Malicious codes detection based on ensemble learning. In International Conference on Autonomic and Trusted Computing, ATC 2007, pages 468-477. Springer, 2007. Zhi-Hua Zhou, Ensemble Methods: Foundations and Algorithms. Boca Raton, FloridaCRC PressZhi-Hua Zhou. Ensemble Methods: Foundations and Algorithms. CRC Press, Boca Raton, Florida, 2012. 
http://www2.islab.ntua.gr/attachments/article/86/Ensemble%20methods% 20-%20Zhou.pdf.
[]
[ "PINNTOMO: SEISMIC TOMOGRAPHY USING PHYSICS-INFORMED NEURAL NETWORKS A PREPRINT", "PINNTOMO: SEISMIC TOMOGRAPHY USING PHYSICS-INFORMED NEURAL NETWORKS A PREPRINT" ]
[ "Umair Bin Waheed [email protected] ", "Tariq Alkhalifah ", "Ehsan Haghighat ", "Chao Song ", "Jean Virieux ", "\nDepartment of Geosciences King\nDepartment of Civil Engineering Massachusetts Institute of Technology\nPhysical Sciences and Engineering Division King Abdullah University of Science and Technology Thuwal 23955\nFahd University of Petroleum and Minerals Dhahran 31261\n02139MASaudi Arabia., Saudi Arabia., USA\n", "\nPhysical Sciences and Engineering Division King Abdullah University of Science and Technology Thuwal 23955\nISTERRE Université Grenoble Alpes\nSaint-Martin-d'Heres 38400Saudi Arabia., France\n" ]
[ "Department of Geosciences King\nDepartment of Civil Engineering Massachusetts Institute of Technology\nPhysical Sciences and Engineering Division King Abdullah University of Science and Technology Thuwal 23955\nFahd University of Petroleum and Minerals Dhahran 31261\n02139MASaudi Arabia., Saudi Arabia., USA", "Physical Sciences and Engineering Division King Abdullah University of Science and Technology Thuwal 23955\nISTERRE Université Grenoble Alpes\nSaint-Martin-d'Heres 38400Saudi Arabia., France" ]
[]
Seismic traveltime tomography using transmission data is widely used to image the Earth's interior from global to local scales. In seismic imaging, it is used to obtain velocity models for subsequent depth-migration or full-waveform inversion. In addition, cross-hole tomography has been successfully applied for a variety of applications, including mineral exploration, reservoir monitoring, and CO 2 injection and sequestration. Conventional tomography techniques suffer from a number of limitations, including the use of a smoothing regularizer that is agnostic to the physics of wave propagation. Here, we propose a novel tomography method to address these challenges using developments in the field of scientific machine learning. Using seismic traveltimes observed at seismic stations covering part of the computational model, we train neural networks to approximate the traveltime factor and the velocity fields, subject to the physics-informed regularizer formed by the factored eikonal equation. This allows us to better compensate for the ill-posedness of the tomography problem compared to conventional methods and results in a number of other attractive features, including computational efficiency. We show the efficacy of the proposed method and its capabilities through synthetic tests for surface seismic and cross-hole geometries. Contrary to conventional techniques, we find the performance of the proposed method to be agnostic to the choice of the initial velocity model.
null
[ "https://arxiv.org/pdf/2104.01588v1.pdf" ]
233,025,234
2104.01588
d26738fcbae608aaa430ca7f6ceab70f79595c82
PINNTOMO: SEISMIC TOMOGRAPHY USING PHYSICS-INFORMED NEURAL NETWORKS

A PREPRINT, April 6, 2021

Umair Bin Waheed (Department of Geosciences, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia), Tariq Alkhalifah (Physical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia), Ehsan Haghighat (Department of Civil Engineering, Massachusetts Institute of Technology, MA 02139, USA), Chao Song (Physical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia), Jean Virieux (ISTERRE, Université Grenoble Alpes, Saint-Martin-d'Heres 38400, France)

Keywords: Tomography · Inversion · Traveltimes · Neural networks · Machine learning

Abstract: Seismic traveltime tomography using transmission data is widely used to image the Earth's interior from global to local scales. In seismic imaging, it is used to obtain velocity models for subsequent depth-migration or full-waveform inversion. In addition, cross-hole tomography has been successfully applied for a variety of applications, including mineral exploration, reservoir monitoring, and CO₂ injection and sequestration. Conventional tomography techniques suffer from a number of limitations, including the use of a smoothing regularizer that is agnostic to the physics of wave propagation. Here, we propose a novel tomography method to address these challenges using developments in the field of scientific machine learning. Using seismic traveltimes observed at seismic stations covering part of the computational model, we train neural networks to approximate the traveltime factor and the velocity fields, subject to the physics-informed regularizer formed by the factored eikonal equation.
This allows us to better compensate for the ill-posedness of the tomography problem compared to conventional methods and results in a number of other attractive features, including computational efficiency. We show the efficacy of the proposed method and its capabilities through synthetic tests for surface seismic and cross-hole geometries. Contrary to conventional techniques, we find the performance of the proposed method to be agnostic to the choice of the initial velocity model.

Introduction

Seismic tomography has been used over the years as a pre-eminent tool for subsurface model building at various scales ranging from global and regional scales in earthquake seismology [1] to local scales in exploration seismology [2]. Building a velocity macro-model is a crucial step for the success of depth migration [3] and full-waveform inversion [4] for high-resolution imaging of the Earth's crust. First arrival traveltime tomography based on refraction data or diving waves has been successfully used to build such initial models [5]. In particular, the method is attractive for land seismic data processing with surface acquisition, since it is often difficult or even impossible to identify reflections in the data. Moreover, cross-hole seismic tomography, which uses direct arrival times, has been around for more than three decades in oil/gas [6] and mineral exploration [7]. It has also been successful for applications in void and tunnel detection [8], reservoir characterization [9], fracture detection [10], hydrological parameter estimation [11], geotechnical site investigations [12], and time-lapse studies related to carbon capture and sequestration [13].

Seismic tomography is typically solved as an inverse problem that minimizes the misfit between a set of observed arrival times on the receivers and those synthetically generated using an estimate of the velocity model.
Minimization of this misfit function requires a nonlinear optimization procedure. However, the conventional ray tomography approach linearizes the tomography operator, which requires the computation of the Fréchet derivatives. Then the linearized tomography operator is inverted iteratively. For modern seismic surveys, the requirement of explicitly computing the Fréchet derivatives is challenging to handle in terms of the computation cost and memory requirements. This gave way to the adjoint-state method [14,15] that formulates tomographic inversion as a nonlinear optimization process by directly computing the gradient of the misfit function. Nevertheless, these conventional methods still suffer from a number of limitations. Usually, they use some form of smoothing regularization to compensate for the ill-posedness of the problem. This ends up limiting the resolution of the inverted velocity model. In addition, these methods typically need an initial model with some general background features of the Earth represented, like a constant depth gradient. The choice of the initial model may affect the final solution and is usually not obvious prior to inversion. Furthermore, for models with irregular topography, considerable grid and algorithmic adaptions are needed to account for the free-surface topography [16].

Therefore, in this work, we propose a novel algorithm for the seismic tomography problem based on developments in the field of scientific machine learning. In particular, we use the emerging paradigm of physics-informed neural networks (PINNs) that overcomes the limitation of deep learning associated with sparse data by incorporating the governing partial differential equation (PDE) into the neural network's loss function. PINNs have already demonstrated success in solving a number of forward and inverse problems in other scientific disciplines [17,18].
Recently, PINNs have also shown remarkable success in overcoming limitations associated with conventional techniques in modeling seismic traveltimes [19,20] and wavefields [21,22]. Here, we develop a PINN-based tomography (PINNtomo) algorithm to invert for the velocity model. Given traveltimes at seismic stations covering part of the computational domain, we use neural networks to approximate the traveltime factor and the velocity fields, subject to the physics-informed regularizer based on the factored eikonal equation. Doing so allows us to better compensate for the poorly determined aspects of the velocity model compared to conventional physics-agnostic smoothing regularizers. Also, we find the performance of the method to be independent of the initial velocity model. Moreover, since the method is mesh-free, it is easily adaptable to models with irregular topography without modifications. Additional advantages of the method include ease of deployment across a variety of platforms (CPUs, GPUs) and architectures (desktops, clusters) without any modification. Through tests on realistic surface seismic and cross-hole geometries, we demonstrate the efficacy of the proposed algorithm in solving the tomography problem. This is done by obtaining a velocity model that produces traveltimes matching those observed at seismic stations while honoring the physics of wave propagation by minimizing the residual of the eikonal equation at selected grid points in the computational domain.

Theory

In an isotropic medium, the eikonal equation relates the gradient of the traveltime surfaces to the velocity of the wavefront through the relation:

$|\nabla T(x)|^2 = \frac{1}{v^2(x)}, \quad \forall\, x \in \Omega, \qquad T(x_s) = 0, \tag{1}$

where $\Omega$ is a domain in $\mathbb{R}^d$ with $d$ as the space dimension, $T(x)$ is the traveltime or Euclidean distance to any point $x$ from the point-source $x_s$, $v(x)$ is the velocity defined on $\Omega$, and $\nabla$ denotes the gradient operator.
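As a quick sanity check on equation (1): in a homogeneous medium with velocity c, the traveltime from a point source is T(x) = |x − x_s|/c, and its gradient magnitude equals 1/c everywhere. The short sketch below (illustrative only, not from the paper's code) verifies this with central finite differences:

```python
import math

# Homogeneous medium with velocity c: T(x, z) = |x - x_s| / c solves eq. (1).
c = 2.0              # km/s, assumed constant velocity
xs, zs = 0.0, 0.5    # point-source location (km)

def T(x, z):
    # Traveltime from the source in a homogeneous medium.
    return math.hypot(x - xs, z - zs) / c

# Central finite differences approximate the gradient of T at a test point.
h = 1e-6
x0, z0 = 0.4, 0.7
Tx = (T(x0 + h, z0) - T(x0 - h, z0)) / (2 * h)
Tz = (T(x0, z0 + h) - T(x0, z0 - h)) / (2 * h)

grad_sq = Tx**2 + Tz**2   # should match 1 / c**2
```

Away from the source, grad_sq agrees with 1/c² to finite-difference accuracy, which is exactly the constraint that the physics-informed regularizer enforces at collocation points.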
Since equation (1) contains a singularity at the point-source location, traveltime modeling studies [19,20] have shown that a factored form of the eikonal equation is easier to train using PINNs. Therefore, we factorize the traveltime $T(x)$ into two multiplicative functions [23], i.e.,

$T(x) = T_0(x)\, \tau(x), \tag{2}$

where $T_0(x)$ is the known function which is computed analytically, leaving $\tau(x)$ as the unknown traveltime factor. By substituting the above in equation (1), we get the factored eikonal equation:

$T_0^2\, |\nabla \tau|^2 + \tau^2\, |\nabla T_0|^2 + 2\, T_0\, \tau\, (\nabla T_0 \cdot \nabla \tau) = \frac{1}{v^2(x)}, \qquad \tau(x_s) = 1. \tag{3}$

The known traveltime $T_0$ is computed analytically using the expression:

$T_0(x) = \frac{|x - x_s|}{v(x_s)}, \tag{4}$

where $v(x_s)$ is the velocity at the source location. This ensures that $T_0$ captures the point-source singularity, leaving $\tau$ as a smooth function in the source neighborhood. We can re-write the factored eikonal equation in its residual form as:

$\mathcal{L}(x):\quad T_0^2\, |\nabla \tau|^2 + \tau^2\, |\nabla T_0|^2 + 2\, T_0\, \tau\, (\nabla T_0 \cdot \nabla \tau) - \frac{1}{v^2(x)} = 0. \tag{5}$

To invert for the unknown velocity model, we consider two multilayer neural networks, one to approximate the unknown traveltime factor for an arbitrary source location $x_s$, $\tau(x_s, x)$, and the other for the velocity, $v(x)$, i.e.,

$\tau(x_s, x) \approx \hat{\tau}(x_s, x) = \mathcal{N}_\tau(x_s, x;\, \theta_\tau), \qquad v(x) \approx \hat{v}(x) = \mathcal{N}_v(x;\, \theta_v), \tag{6}$

where $\mathcal{N}_\tau$ and $\mathcal{N}_v$ are the neural networks with trainable parameters $\theta_\tau$ and $\theta_v$, respectively. Since traveltime data from multiple sources are needed to obtain a reliable velocity model, the traveltime factor network also takes shot locations as input, in addition to the spatial coordinates. On the contrary, the velocity network only takes spatial coordinates as input since it does not vary with the source locations.
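Because T0 and its gradient are analytic, the residual in equation (5) is cheap to evaluate at any point. The sketch below (2-D, with illustrative names; not the paper's implementation) computes it from callables for τ and ∇τ, and checks that τ ≡ 1 solves the factored equation exactly in a homogeneous medium:

```python
import math

def factored_residual(x, z, xs, zs, v_src, v_pt, tau, grad_tau):
    """Residual L(x) of the factored eikonal equation (5) at a 2-D point.

    v_src is the velocity at the source (used by T0 in eq. 4), v_pt the
    velocity at (x, z); tau and grad_tau are callables for the factor and
    its gradient.  Names are illustrative, not from the paper's code.
    Assumes (x, z) is away from the source so r > 0.
    """
    dx, dz = x - xs, z - zs
    r = math.hypot(dx, dz)
    T0 = r / v_src                                    # eq. (4)
    gT0x, gT0z = dx / (r * v_src), dz / (r * v_src)   # analytic grad T0
    t = tau(x, z)
    gtx, gtz = grad_tau(x, z)
    dot = gT0x * gtx + gT0z * gtz
    return (T0**2 * (gtx**2 + gtz**2)
            + t**2 * (gT0x**2 + gT0z**2)
            + 2.0 * T0 * t * dot
            - 1.0 / v_pt**2)

# Homogeneous medium: tau = 1 (so grad tau = 0) makes the residual vanish,
# because |grad T0|^2 = 1 / v_src^2 and v_pt = v_src.
res = factored_residual(0.4, 0.7, 0.0, 0.5, 2.0, 2.0,
                        tau=lambda x, z: 1.0,
                        grad_tau=lambda x, z: (0.0, 0.0))
```

In the actual method, τ and its gradient come from the neural network and automatic differentiation rather than hand-supplied callables; the algebra of the residual is the same.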
Since both the traveltime factor and velocity are strictly positive quantities, we pass the output of the networks through a sigmoid function, $\sigma(\cdot)$, and multiply these by scaling coefficients, i.e.,

$\hat{\tau}(x_s, x) = \sigma\left(\mathcal{N}_\tau(x_s, x;\, \theta_\tau)\right) \tau_{\mathrm{peak}}, \qquad \hat{v}(x) = \sigma\left(\mathcal{N}_v(x;\, \theta_v)\right) v_{\mathrm{peak}}, \tag{7}$

where $\tau_{\mathrm{peak}}$ and $v_{\mathrm{peak}}$ are the peak values that can be obtained from the traveltime factor network and the velocity network, respectively. These scaling factors should be chosen such that they are larger than the expected maximum values to avoid clipping of the output. Finally, we use a single loss function to train both networks simultaneously. The loss function is given as:

$J(\theta_\tau, \theta_v) = \frac{1}{N_s N_r} \sum_{n=1}^{N_s} \sum_{i=1}^{N_r} \left( T_0(x_{n,i})\, \hat{\tau}(x_{n,i}) - T^{*}_{n,i} \right)^2 + \frac{1}{N_s} \sum_{n=1}^{N_s} \left( \hat{\tau}(x_{n,s}) - 1 \right)^2 + \frac{1}{N_s N_t} \sum_{n=1}^{N_s} \sum_{i=1}^{N_t} \left( \mathcal{L}(x_{n,i}) \right)^2, \tag{8}$

where $N_s$ denotes the total number of sources, $N_r$ is the total number of receivers, and $N_t$ is the number of training (collocation) points from the computational domain. The first term minimizes the misfit between traveltimes predicted by the neural network and the observed traveltimes $T^{*}_{n,i}$ over all sources and receivers. The second term enforces the boundary condition for all source positions, while the third term ensures that the outputs of the neural networks minimize the residual of the eikonal equation over all sources and training points. Figure 1 summarizes the proposed loss functions and the neural networks used. The network parameters $\theta_\tau$ and $\theta_v$ are then identified by solving the following minimization problem:

$\underset{\theta_\tau,\, \theta_v}{\arg\min}\; J(\theta_\tau, \theta_v). \tag{9}$

Numerical Tests

In this section, we test the proposed tomography workflow on cross-hole and surface acquisition geometries. For both tests, we use neural networks containing 10 hidden layers.
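The loss in equation (8) above is just three mean-squared terms; before the tests, here is a plain-Python sketch of that bookkeeping (list-based and illustrative only — the paper's training is done with SciANN/TensorFlow, where these reductions are tensor operations):

```python
def pinn_loss(T0, tau_hat, T_obs, tau_src, residual):
    """Loss J of eq. (8): data misfit + source boundary condition + PDE term.

    T0, tau_hat, T_obs are nested lists indexed [source][receiver];
    tau_src is the predicted factor at each source location;
    residual is indexed [source][collocation point].  Illustrative only.
    """
    Ns, Nr, Nt = len(T_obs), len(T_obs[0]), len(residual[0])
    # First term: misfit between predicted (T0 * tau_hat) and observed times.
    data = sum((T0[n][i] * tau_hat[n][i] - T_obs[n][i]) ** 2
               for n in range(Ns) for i in range(Nr)) / (Ns * Nr)
    # Second term: tau must equal 1 at every source location.
    bc = sum((tau_src[n] - 1.0) ** 2 for n in range(Ns)) / Ns
    # Third term: squared factored-eikonal residual at collocation points.
    pde = sum(r ** 2 for row in residual for r in row) / (Ns * Nt)
    return data + bc + pde

# A perfect prediction (exact traveltimes, tau = 1 at sources, zero residual)
# gives J = 0; any mismatch contributes quadratically.
J = pinn_loss(T0=[[1.0, 2.0]], tau_hat=[[1.0, 1.0]],
              T_obs=[[1.0, 2.0]], tau_src=[1.0], residual=[[0.0, 0.0]])
```

The three terms share equal unit weights here, as in equation (8); in practice one could also weight them, but the paper uses the plain sum.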
For the traveltime factor network, each layer contains 20 neurons, whereas, for the velocity network, each layer contains 10 neurons. We use a locally adaptive exponential linear unit (l-ELU) as the activation function for the hidden layers. Locally adaptive activation functions have been shown to achieve superior performance and convergence speed over base methods [24]. We train the networks first using the Adam optimizer with a batch size of 1024 for 500 epochs and then using the L-BFGS-B optimizer until convergence. These hyper-parameters are chosen based on some initial tests and kept fixed throughout the study to avoid the need for tuning. The PINN framework is implemented using the SciANN package [25], a high-level TensorFlow wrapper for scientific computations.

Example 1: Cross-hole tomography

First, we apply the tomography workflow on a 1 × 1 km² computational domain with the velocity model shown in Figure 2(a). We consider 11 sources on the left boundary of the model (x = 0 km) uniformly spaced with an interval of 100 m, and 51 receivers on the right boundary of the model (x = 1 km) with a uniform spacing of 20 m. The observed traveltimes are computed through a first-order factored eikonal solver using the fast sweeping method [23]. The initial velocity model for this test is shown in Figure 2(b). It is worth noting that, contrary to conventional tomography methods, here, the initial velocity model is automatically selected based on the initialization of the parameters of the velocity network. While the choice of the initial velocity model is critical to the success of ray tomography, we find our formulation to be agnostic to this choice. For illustration, the initial velocity model is plotted using the same color scale as the true model. Figure 3 shows the inverted velocity model, indicating that the long-wavelength features of the true model have been well-recovered, as expected from traveltime tomography.
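The network configuration described above — ELU-type hidden layers followed by the sigmoid-scaled output of equation (7) — can be sketched as a toy forward pass. This is a pure-Python illustration with tiny, made-up layer sizes and weights (the real networks are built with SciANN and use 10 hidden layers as stated above); its only point is that the scaled sigmoid keeps the predicted velocity strictly inside (0, v_peak):

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def elu(u):
    # Plain ELU; the paper uses a locally adaptive variant (l-ELU).
    return u if u > 0.0 else math.exp(u) - 1.0

def velocity_net(x, layers, v_peak):
    """Toy forward pass of a velocity network: ELU hidden layers, then a
    sigmoid output scaled by v_peak as in eq. (7), so 0 < v < v_peak.
    `layers` is a list of (weights, biases) pairs; sizes are illustrative."""
    h = x
    for W, b in layers[:-1]:
        h = [elu(sum(w * hj for w, hj in zip(row, h)) + bi)
             for row, bi in zip(W, b)]
    W, b = layers[-1]
    out = sum(w * hj for w, hj in zip(W[0], h)) + b[0]
    return sigmoid(out) * v_peak

# Two inputs (x, z), one hidden layer of 3 neurons, scalar output.
layers = [([[0.2, -0.1], [0.5, 0.3], [-0.4, 0.1]], [0.0, 0.1, -0.2]),
          ([[0.7, -0.3, 0.5]], [0.05])]
v = velocity_net([0.3, 0.7], layers, v_peak=5.0)   # always in (0, 5)
```

Whatever the weights are, the final sigmoid bounds the output, which is why v_peak (and likewise τ_peak) must be chosen above the largest velocity (or factor) expected in the model.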
Figure 4 compares the velocity profiles at x = 0.4 km and x = 0.8 km between the initial, true, and inverted velocities. We observe that the initial velocity model evolved significantly to match the long-wavelength trend of the true velocity model. Finally, in Figure 5 we show the final fit between the observed and the predicted traveltimes at all receivers for the source at (x_s, z_s) = (0 km, 0.5 km), indicating good agreement between the two.

Example 2: Surface tomography

Next, we test the performance of PINNtomo using a surface acquisition geometry. We consider a 1 × 5 km² model with the velocity distribution as shown in Figure 6(a). We consider 51 sources with a uniform spacing of 100 m and 126 receivers with a uniform spacing of 40 m, both on the surface of the model (z = 0 km). Like earlier, the observed traveltimes are computed through a first-order factored eikonal solver using the fast sweeping method. Again, the initial velocity is a consequence of the random initialization of the velocity network's parameters. For illustration, the initial velocity model is shown in Figure 6(b) using the same color scale as the true model. Figure 7 shows the inverted velocity model, indicating that the long-wavelength features, in particular the velocity bump between x = 2 and 3 km, have been well-recovered. We also observe a good fit between the observed and predicted traveltimes at the receivers, as shown in Figure 8. It is worth mentioning that for conventional tomography, a depth gradient for the initial velocity is necessary, and the choice of it may bear some consequences on the inverted velocities. However, as highlighted earlier, PINNtomo is agnostic to the initial velocity distribution and does not require an initial model with a depth gradient, even for surface tomography.

Discussion and conclusions

In this work, we presented a novel approach to the traveltime tomography problem.
To this end, we use neural networks to approximate the traveltime factor and the velocity fields, subject to the physics-informed regularization based on the eikonal equation. Through tests on cross-hole and surface acquisition geometries, we observe that the method is capable of reliably recovering the long wavelength features of the velocity field. Conventional techniques require an initial depth gradient for velocity that may affect the inversion result. On the contrary, the performance of the proposed approach is largely agnostic to the initial velocity model. The proposed approach enjoys several advantages compared to conventional tomography methods. Typically, conventional methods use some form of smoothing regularization to compensate for the poor illumination of the model space due to the acquisition geometry. These physics-agnostic regularizers end up limiting the resolution of the inverted velocity model. On the contrary, using the residual of the eikonal equation in the loss function, PINNtomo enforces a physics-informed regularizer, which uses the actual wave propagation physics to address the ill-posedness of the problem. Moreover, unlike conventional methods, the proposed approach is mesh-free, and therefore, can be easily used for models with irregular topography and can also accommodate source and receiver points that are not on a regular grid. Furthermore, through the use of transfer learning, the approach is well-suited to study temporal variations in the near-surface using surface seismic tomography or at larger depths using cross-hole tomography. The method also outperforms conventional techniques in terms of both memory and computational requirements. While the adjoint-state tomography method overcame the memory limitation of the ray method by avoiding the need to explicitly compute the Fréchet derivative matrix, it still depends on the size of the velocity model, which could still be cumbersome for large 3D surveys. 
Here, the memory required depends on the optimization batch size, which is usually much lower than the entire model size. Moreover, since PINNtomo uses TensorFlow at the backend, it allows easy deployment of computations across a variety of platforms (CPUs, GPUs) and architectures (desktops, clusters). Therefore, in this study, we used an NVIDIA Tesla P100 GPU that required only ∼6 minutes to invert the velocity model for the cross-hole example and ∼20 minutes for the surface tomography example.

Figure 1: Illustration of the PINNtomo algorithm. We use two neural networks to approximate the traveltime factor τ and the velocity v. The loss function used to train these networks minimizes the traveltime mismatch at the receivers (first term), honors the boundary condition (second term), and minimizes the residual of the eikonal equation (third term) at selected training points within the computational domain. These quantities are minimized for all source locations x_s = (x_s, z_s).

Figure 2: The true (a) and initial (b) velocity models used for the cross-hole tomography test. For this test, we use 11 equispaced sources on the left boundary (x = 0 km) of the model and 51 equispaced receivers on the right boundary (x = 1 km) of the model.

Figure 3: The inverted velocity model for the cross-hole tomography test, indicating reliable reconstruction of the long-wavelength features of the true velocity model.

Figure 4: A comparison of the velocity profiles at x = 0.4 km (a) and x = 0.8 km (b) between the initial (dotted blue), true (solid black), and inverted (dashed red) velocity models.

Figure 5: A comparison between the observed traveltimes (solid black) and those predicted using the neural network (dashed red) at the receivers located on the right boundary of the model for the source at (x_s, z_s) = (0 km, 0.5 km).

Figure 6: The true (a) and initial (b) velocity models used for the surface tomography test. For this test, we use 51 equispaced sources and 126 equispaced receivers on the surface (z = 0 km) of the model.

Figure 7: The inverted velocity model for the surface tomography test, indicating reliable reconstruction of the long-wavelength features of the true model.

Figure 8: A comparison between the observed traveltimes (solid black) and those predicted using the neural network (dashed red) at the receivers located on the surface for the source at x = 3 km.
The Serverless Computing Survey: A Technical Primer for Design Architecture

Zijun Li, Linsong Guo, Jiagan Cheng, Quan Chen, and Minyi Guo (Department of Computer Science and Engineering, Shanghai Jiao Tong University, China); Bingsheng He (Department of Computer Science, National University of Singapore, Singapore)

ACM Computing Surveys. DOI: 10.1145/3508360. arXiv:2112.12921 (https://arxiv.org/pdf/2112.12921v2.pdf).

Abstract: The development of cloud infrastructures inspires the emergence of cloud-native computing. As the most promising architecture for deploying microservices, serverless computing has recently attracted more and more attention in both industry and academia. Due to its inherent scalability and flexibility, serverless computing becomes attractive and more pervasive for ever-growing Internet services. Despite the momentum in the cloud-native community, the existing challenges and compromises still wait for more advanced research and solutions to further explore the potentials of the serverless computing model. As a contribution to this knowledge, this article surveys and elaborates the research domains in the serverless context by decoupling the architecture into four stack layers: Virtualization, Encapsule, System Orchestration, and System Coordination. We highlight the key implications and limitations of these works in each layer, and make suggestions for potential challenges to the field of future serverless computing.

Note: This paper has been accepted by ACM Computing Surveys (CSUR), and the current e-print version is our major revision. For a complete view, please visit ACM CSUR.
January 2022.

CCS Concepts: • Computer systems organization → Cloud computing; n-tier architectures • Networks → Cloud computing • Theory of computation → Parallel computing models

Additional Key Words and Phrases: serverless computing, architecture design, FaaS, Lambda paradigm

INTRODUCTION

Definition of Serverless Computing

Traditional Infrastructure-as-a-Service (IaaS) deployment demands a long-running server for sustainable service delivery. However, this exclusive allocation retains resources regardless of whether the user application is running or not. Consequently, it results in low resource utilization in current data centers, only about 10% on average, especially for online services with a diurnal pattern. This contradiction motivates the development of a platform-managed, on-demand service model that attains higher resource utilization and lower cloud computing costs. To this end, serverless computing was put forward, and most large cloud vendors such as Amazon, Google, Microsoft, IBM, and Alibaba already offer such elastic computing services. In the following, we first review the definition given in Berkeley View [65], and then give a broader definition. We believe that a narrow perception of the FaaS-based serverless model may weaken its advancement. So far, there is no formal definition of serverless computing. The commonly acknowledged definitions from Berkeley View [65] are presented as follows:

• Serverless Computing = FaaS (Function-as-a-Service) + BaaS (Backend-as-a-Service). One fallacy is that 'Serverless' is interchangeable with 'FaaS', which is revealed in a recent interview [78]. To be precise, both are essential to serverless computing: the FaaS model enables function isolation and invocation, while BaaS provides overall backend support for online services.

• In the FaaS model (aka the Lambda paradigm), an application is sliced into functions or function-level microservices [26,45,57,65,117,141].
The function identifier, the language runtime, the memory limit of one instance, and the function code blob URI (Uniform Resource Identifier) together define the existence of a function [94].

• BaaS covers a wide range of services: any backend service that an application relies on can be categorized into it, for example, cloud storage (Amazon S3 and DynamoDB), the message bus system for message passing (Google Cloud Pub/Sub), the message notification service (Amazon SNS), and DevOps tools (Microsoft Azure DevOps).

To depict the serverless computing model, we take the asynchronous invocation in Figure 1 as an example. The serverless system receives triggered API queries from the users, validates them, and invokes the functions by creating new sandboxes (aka the cold startup [15,28,65]) or reusing running warm ones (aka the warm startup). The isolation ensures that each function invocation runs in an individual container or virtual machine assigned from an access-control controller. Due to its event-driven and single-event processing nature, the serverless system can be triggered to provide on-demand isolated instances and scale them horizontally according to the actual application workload. Afterwards, each execution worker accesses a backend database to save execution results [23]. By further configuring triggers and bridging interactions, users can customize the execution for complex applications (e.g., building internal event calls in a function pipeline). In a broader scenario, we think that the serverless computing model should be identified by the following features:

• Auto-scaling. Auto-scalability should not be narrowed to the FaaS model only (e.g., container black boxes as scheduling units in OpenWhisk [134]). The indispensable factor in identifying a serverless system is performing horizontal and vertical scaling when accommodating workload dynamics. Allowing an application to scale its number of instances down to zero also introduces a worrisome challenge: cold startup.
When a function experiences a cold startup, its instance needs to start from scratch, initialize the software environment, and load application-specific code. These steps can significantly drag down the service response, leading to QoS (Quality-of-Service) violations.

• Flexible scheduling. Since the application is no longer bound to a specific server, the serverless controller dynamically schedules applications according to the resource usage in the cluster, while ensuring load balancing and performance guarantees. Moreover, the serverless platform also takes multi-region collaboration into account [154]. For a more robust and available serverless system, flexible scheduling allows workload queries to be distributed across a broader range of regions [119]. This avoids serious performance degradation or damage to service continuity in case of unavailable or crashed nodes.

• Event-driven. A serverless application is triggered by events, such as the arrival of RESTful HTTP queries, an update to a message queue, or new data arriving at a storage service. By binding events to functions with triggers and rules, the controller and functions can use the metadata encapsulated in context attributes. This makes the relationships between events and the system detectable, enabling different collaboration responses to different events. The Cloud-Native Computing Foundation (CNCF) serverless working group has also published the CloudEvents specification for describing event metadata in a common way, providing interoperability.

• Transparent development. On the one hand, managing underlying host resources is no longer a burden for application maintainers, because they are agnostic about the execution environment; at the same time, cloud vendors must ensure available physical nodes, isolated sandboxes, software runtimes, and computing power while keeping them transparent to maintainers. On the other hand, serverless computing should also integrate DevOps tools to help deploy and iterate more efficiently.
• Pay-as-you-go. The serverless billing model shifts the cost of computing power from a capital expense to an operating expense. This model eliminates the need for users to buy exclusive servers sized for the peak load. By sharing network, disk, CPU, memory, and other resources, the pay-as-you-go model charges only for the resources that applications actually use [1,2,26], no matter whether the instances are running or idle.

We regard an elastic computing model with the above five features incorporated as the key to the definition of serverless computing. Along with the emergence of serverless, application maintainers find it attractive that resource pricing is billed based on the actual processing events of an application rather than on pre-assigned resources [2]. Nowadays, serverless computing is commonly applied in backend scenarios for batch jobs, including data analytics (e.g., the distributed computing model in PyWren [64]), machine learning tasks (e.g., deep learning) [78,111], and event-driven web applications.

Survey Method by the Layered Serverless Architecture

Several surveys in serverless computing have discussed the characteristics of serverless generalization [15,52,65,112,116,144]. However, they only propose literature reviews from a high-level perspective while failing to provide enough architectural implications. As a result, researchers and serverless vendors may struggle to grasp and comprehend each issue in a real serverless architecture.
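The pay-as-you-go model described above can be made concrete as a GB-second cost computation in the style of commercial FaaS billing; the per-GB-second price below is illustrative, not a vendor quote:

```python
# Pay-as-you-go sketch: bill only actual execution time and memory, measured
# in GB-seconds. The price constant is illustrative, not a real tariff.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb, duration_ms, price=PRICE_PER_GB_SECOND):
    """Cost of a single invocation of a function sized at memory_mb."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * price

# One million invocations of a 128 MB function running 50 ms each; idle time
# between invocations costs nothing under this model.
total = 1_000_000 * invocation_cost(128, 50)
print(f"${total:.2f}")  # about $0.10 for all one million invocations
```

Under the IaaS model, by contrast, a server provisioned for the peak load would be billed for every idle second as well.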
As shown in Figure 2, we analyze its design architecture with a bottom-up logic and decouple the serverless computing architecture into four stack layers: Virtualization, Encapsule, System Orchestration, and System Coordination. Virtualization layer. The Virtualization layer enables function isolation within a performance and functionality secured sandbox. The sandbox serves as the runtime for application service code, runtime environment, dependencies, and system libraries. To prevent access to resources in the multi-application or multi-tenant scenarios, cloud vendors usually adopt containers/virtual machines to achieve isolation. Currently, the popular sandbox technologies are Docker [41], gVisor [49], Kata [67], Firecracker [3], and Unikernel [86]. Section 2 introduces these solutions to isolate functions and analyze their pros and cons. Encapsule layer. Various middlewares in the Encapsule layer enable customized function triggers and executions, and they also provide data metrics collection for communicating and monitoring. We call all these additional middlewares the sidecar. It separates the service's business logic and enables loose coupling between the functions and the underlying platform. Meanwhile, to speed up instance startup and initialization, the prewarm pool is commonly used in the Encapsule layer [44,97,104,105]. In addition, serverless systems may use prediction by analyzing the load pattern to prewarm each by one-to-one approach [118,146], or build a template for all functions to dynamically install requirements (REQs) according to the runtime characteristics by a one-for-all approach. Those concepts are introduced and compared in Section 3. System Orchestration layer. The System Orchestration layer allows users to configure triggers and bind rules, ensuring the high availability and stability of the user application by dynamically adjusting as load changes. 
Through the cloud orchestrator, the combination of online and offline scheduling can avoid resource contention, recycle idle resources, and ease the performance degradation of co-located functions. Such implementations are also typically integrated into container orchestration services (e.g., Google Kubernetes and Docker Swarm). In a serverless system, the resource monitor, controller, and load balancer are consolidated to resolve scheduling challenges [4,32,50,57,66,70,88,139]. They enable the serverless system to achieve scheduling optimizations at three different levels: resource level, instance level, and application level. Section 4 analyzes the scheduling methodology from these three angles in detail.

System Coordination layer. The System Coordination layer consists of a series of Backend-as-a-Service (BaaS) components that use unified APIs and SDKs to integrate backend services into functions. Distinctly, it differs from traditional middlewares that use local physical services outside the cloud. These BaaS services provide the storage, queue service [94,99], trigger binding [75,77], API gateway, data cache [6,7], DevOps tools [24,25,63,122], and other customized components to better meet the System Orchestration layer's flexibility requirements. Section 5 discusses these essential BaaS components in a serverless system.

Each stack layer plays an essential role in the serverless architecture. Based on the above hierarchy, we conclude the contributions of this survey as follows:
(1) Introduce the serverless definition and summarize its features.
(2) Elaborate the architecture design based on a four-layer hierarchy, and review the significant and representative works in each layer.
(3) Analyze the current serverless performance and its limitations.
(4) Explore the challenges, limitations, and opportunities in serverless computing.
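The prewarm pool mentioned for the Encapsule layer can be sketched as a toy sandbox cache: an invocation reuses an idle warm sandbox when one exists, and otherwise pays a cold-start penalty. The millisecond costs and sandbox names below are illustrative only:

```python
import collections

class WarmPool:
    """Toy prewarm pool: acquire() reuses an idle warm sandbox when possible,
    otherwise cold-starts a new one; release() returns it for reuse.
    Startup costs are illustrative numbers, not measurements."""
    COLD_MS, WARM_MS = 300, 5

    def __init__(self, prewarmed=1):
        self.idle = collections.deque(f"sb-{i}" for i in range(prewarmed))
        self.next_id = prewarmed

    def acquire(self):
        if self.idle:
            return self.idle.popleft(), self.WARM_MS   # warm startup: reuse
        sandbox = f"sb-{self.next_id}"
        self.next_id += 1
        return sandbox, self.COLD_MS                   # cold startup: create

    def release(self, sandbox):
        self.idle.append(sandbox)

pool = WarmPool(prewarmed=1)
a, cost_a = pool.acquire()   # warm: grabs the prewarmed instance (5 ms)
b, cost_b = pool.acquire()   # cold: pool drained by the concurrent call (300 ms)
pool.release(a); pool.release(b)
c, cost_c = pool.acquire()   # warm again: reuses a released instance (5 ms)
print(cost_a, cost_b, cost_c)  # 5 300 5
```

One-to-one prewarming corresponds to sizing such a pool per function from load predictions, while the one-for-all approach keeps a generic template pool and specializes instances on acquisition.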
The rest of the survey is organized as follows: Sections 2-5 introduce the four stack layers and elaborate the current research domains in serverless computing. Section 6 analyzes several factors that degrade performance and compares the current production serverless systems. The challenges, limitations, and opportunities of serverless computing are given in Sections 7-8. We conclude this paper in Section 9.

VIRTUALIZATION LAYER

Whenever a user function is invoked in serverless computing, it is loaded and executed within a virtualized sandbox. A function can either reuse a warm sandbox or create a new one, but it usually does not co-run with different user functions. Under this premise, the main concerns in virtualization are isolation, flexibility, and low startup latency. Isolation ensures that each application process runs in a demarcated resource space, so that a running process can avoid interference from others. Flexibility is demonstrated by the ability to test and debug, and by the additional support for extending the system. Low startup latency requires a fast response for sandbox creation and initialization. The current sandboxing mechanisms in the Virtualization layer fall into four representative categories: traditional VM (virtual machine), container, secure container, and Unikernel. Table 1 compares these mainstream approaches in several respects. In the table, "Startup latency" represents the response latency of a cold startup. "Isolation level" indicates the capacity of functions to run without interference from others. "OSkernel" shows whether the kernel in the GuestOS is shared. "Hotplug" allows the function instance to start with minimal resources (CPU, memory, virtio blocks) and add additional resources at runtime. "OCI supported" means whether it complies with the Open Container Initiative (OCI), an open governance structure for expressing container formats and runtimes.
Moreover, a check mark ("✓") in the tables of this survey means that the technique or strategy in question is used, and vice versa.

The traditional VM-based isolation adopts a VMM (virtual machine manager, e.g., a hypervisor), which provides virtualization capabilities to the GuestOS. It can also mediate access to all shared resources through the provided interfaces (or using QEMU/KVM). With snapshots, a VM shows high flexibility, allowing a quick failsafe when patches are performed on applications within each VM instance. Though the VM provides a strong isolation mechanism and flexibility, it lacks the benefit of low startup latency for user applications (usually >1000 ms). This tradeoff is fundamental in serverless computing, where functions are small while the relative overhead of the VMM and guest kernel is high.

Container customization: provide high flexibility and performance. Another common function isolation mechanism in serverless computing is using containers. The container engine leverages the Linux kernel to isolate resources, creating containers as different processes on the HostOS [19,92]. Each container shares the HostOS kernel with the read-only attribute, which typically includes binaries and libraries. High flexibility is also attached to the container through UnionFS (Union File System), which enables the combination of layered container images from read-only and read-write layers. Essentially, a container achieves isolation through namespaces, which let processes share the same system kernel, and Linux cgroups, which set resource limits. Without hardware isolation, container-based sandboxing shows lower startup latency than coarse-grained consolidation strategies [11,147] in hypervisor-based VMs. The representative container engine is Docker [41]. Docker packages software into a standardized RunC container adapted to the environment requirements, including libraries, system tools, code, and runtime. The Docker container has been widely applied to various serverless systems for its lightweight nature.
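As a rough stdlib-only analogy to the cgroup resource limits just described (this applies a per-process rlimit on Linux, not an actual cgroup or namespace), a parent process can fork a worker and cap its address space before running the workload:

```python
import os
import resource

def run_sandboxed(fn, mem_limit_bytes):
    """Fork a child, cap its address space (a crude analogue of a cgroup
    memory limit), run fn, and return the child's exit code."""
    pid = os.fork()
    if pid == 0:  # child: apply the limit, then run the workload
        resource.setrlimit(resource.RLIMIT_AS,
                           (mem_limit_bytes, mem_limit_bytes))
        try:
            fn()
            os._exit(0)
        except MemoryError:
            os._exit(1)
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

GB = 1024 ** 3
small = lambda: bytearray(1024 * 1024)  # 1 MB fits under the cap
big = lambda: bytearray(4 * GB)         # 4 GB exceeds the cap

print(run_sandboxed(small, 1 * GB))  # 0: allocation succeeded
print(run_sandboxed(big, 1 * GB))    # 1: MemoryError under the cap
```

Real container engines combine such limits (via cgroups) with namespaces for filesystem, network, and PID isolation, which is what separates a container from a merely rlimit-capped process.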
Some works further optimize the container runtime to better adapt to the application requirements of the serverless system. SOCK [101] proposes an integration solution for serverless RunC containers, in which the redundant mechanisms of Docker containers are discarded to obtain a lean container. By only constructing a root file system, creating communication channels, and imposing isolation boundaries, the SOCK container makes serverless systems run more efficiently in terms of startup latency and throughput. The startup latency of a SOCK container is reduced to 10-50 ms, compared to Docker containers, which usually take 50-500 ms. Rather than condensing redundancy into lean containers, and observing that additional tools (e.g., debuggers, editors, coreutils, a shell) enrich the container while increasing the image size, CNTR [130] splits the container image into "fat" and "slim" parts. A user can independently deploy the "slim" image and expand it with additional tools by dynamically attaching the "fat" image to it. The evaluation of CNTR shows that the proposed mechanism can significantly improve the overall performance and effectively reduce the image size when extensively applied in the data center.

Secure container: compromise flexibility with high security and performance. At the same time, security concerns arise in the Virtualization layer because of the relatively low isolation level of containers. Any process-based solution entails a relaxation of the security model, which is insufficient for mutually untrusted functions. It requires containers to prevent code vulnerabilities under a shared-kernel architecture. Side-channel attacks such as Meltdown [84], ZombieLoad [114], and Spectre [72] prompt mitigation approaches toward such vulnerabilities, especially for multi-tenancy in the serverless context. In this case, container isolation should be concerned with preventing privilege escalation as well as information and communication disclosure side channels [3].
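The layered image structure that UnionFS provides, and that CNTR's slim/fat split exploits, can be mimicked with a chain of dictionaries: read-only layers underneath, a per-container writable layer on top, and copy-on-write semantics for updates. The paths and contents here are invented for illustration:

```python
from collections import ChainMap

# Read-only image layers (lookup falls through top to bottom), topped by a
# per-container writable layer; writes always land in the top layer.
base = {"/bin/sh": "busybox", "/etc/os-release": "alpine"}
runtime = {"/usr/bin/python3": "cpython-3.11"}
writable = {}

fs = ChainMap(writable, runtime, base)  # union view of all layers

print(fs["/bin/sh"])               # resolved from the base layer: busybox
fs["/etc/os-release"] = "patched"  # copy-on-write: only the top layer changes
print(base["/etc/os-release"])     # base layer stays read-only: alpine
print(fs["/etc/os-release"])       # union view sees the new copy: patched
```

Attaching CNTR's "fat" tools image corresponds to inserting one more read-only map into the chain, without touching the deployed "slim" layers.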
The state-of-the-art solution to this issue is the Secure Container. For example, Microsoft proposes the Hyper-V Container for Windows [58]. Hyper-V offers enhanced security and broader compatibility: each instance runs inside a highly optimized MicroVM and does not share its kernel with others on the same host. However, it is still a heavyweight virtualization that can introduce more than 1000ms of startup latency. In Google gVisor [49], a user-space kernel acts as a non-privileged process to restrict the syscalls issued by the application. However, the overhead introduced by intercepting and processing syscalls in a sandbox is high, so gVisor is not well suited for applications with heavy syscall usage. To isolate different tenants at affordable overhead, Firecracker [3] creates MicroVMs by customizing the VMM for cloud-native applications. Each Firecracker sandbox runs in user space and is restricted by seccomp, cgroup, and namespace policies. With a container engine built into MicroVMs, Kata [67] adopts an agent that communicates with the kata-proxy located on the host through the hypervisor, thus achieving a secure environment in a lightweight manner. Both Firecracker and Kata containers significantly reduce startup latency and memory consumption: both need only 50ms-500ms to start a sandbox. Secure containers provide complete and strong isolation from the HostOS and other tenants, at the cost of limited flexibility in the condensed MicroVM. Meanwhile, they still suffer long instance startup latency due to additional application initialization, e.g., JVM or Python interpreter setup.

Specialized Unikernel: exchange flexibility for high security and performance. Another emerging virtualization technique is the Unikernel [86], which leverages a libraryOS, including the series of essential dependent libraries, to construct a specialized, single-address-space machine image.
Because the Unikernel runs as a built-in GuestOS, its compile-time invariance rules out runtime management, which significantly reduces its applicability and flexibility. However, unnecessary programs and tools are not included, so the image size of a Unikernel is smaller (e.g., 2MB for mirage-skeleton [95] compiled from Xen), the startup latency is much lower (e.g., startup within 10ms), and the security is stronger than that of containers. Building on this, LightVM [90] replaces the time-consuming XenStore and implements a split toolstack, separating functionality that runs periodically from that which must be carried out at VM creation time, thus improving efficiency and reducing VM startup latency. From the perspective of the software ecosystem, to address the challenge that traditional applications struggle to be ported to the Unikernel model [86,113], Olivier et al. propose HermiTux [102], a Unikernel compatible with the Linux Application Binary Interface while retaining the benefits of Unikernels. However, a Unikernel is not adaptable for developers once built, making it inherently inflexible for applications, let alone its poor DevOps experience. Furthermore, in heterogeneous clusters, the heterogeneity of the underlying hardware forces the Unikernel to be rebuilt as drivers change, making it the antithesis of the serverless philosophy.

Tradeoffs among security, performance, and flexibility. Finally, the indicatrix diagram in Figure 3 shows the tradeoffs among security, performance, and flexibility for these four technologies. To conclude, the hypervisor-based VM shows better isolation and flexibility, while the container makes instances start faster and is flexible in customizing the runtime environment. The Secure Container offers both high security and relatively low startup latency with a flexibility compromise.
The Unikernel demonstrates great potential in terms of performance and security, but it loses flexibility. Whichever virtualization mechanism provides images in the production environment, it is also critical to ensure that built images are signed and do not originate from an unsafe pedigree, e.g., by maintaining a continuous vulnerability assessment and remediation program [69,128].

ENCAPSULE LAYER

A cold startup in serverless computing may occur when a function fails to capture a warm running container, or experiences a bursty load. In the former case, the function is invoked for the first time, or scheduled with an invocation interval longer than the instance lifetime; the typical characteristic is that instances (or pods) must start from scratch. In the latter case of a bursty load, instances need to scale horizontally during a surge in user workloads, and function instances autoscale as the load changes to ensure adequate resource allocation. Besides the sandbox preparation in the Virtualization Layer, which takes less than one second, the initialization of the software environment (e.g., loading Python libraries) and of application-specific user code can dwarf the former [42,65,83,101,117]. While a more lightweight sandboxing mechanism can reduce the cold startup latency in the Virtualization Layer, state-of-the-art sandboxing mechanisms may not be perfectly compatible with the containers or VMs of existing serverless architectures. In response to this tradeoff between performance and compatibility, an efficient solution is to prewarm instances in the Encapsule Layer. This approach is known as the prewarm startup and has been widely researched. Representative work on instance prewarm is listed in Table 2. Before giving the detailed analysis and comparison, we first describe the taxonomy of each column. "Template" reflects whether the cold startup instance comes from a template.
"Static image" shows whether the VM/container image for prewarm disables dynamic updating at each cold startup. "Pool" indicates whether there is a prewarm pool for function cold startups. "Exclusive" and "Fixed-size" represent whether the prewarmed instances and their prewarm pool are exclusive and size-fixed. "Predict/Heuristic" points out whether a prediction algorithm or heuristic-based method is used to prewarm instances. "REQs" reflects whether runtime libraries and packages are dynamically loaded and updated in the prewarm instance. "C/R" reflects whether checkpoint and restore are supported to accelerate the startup. "Sidecar based" represents whether the relevant technique can be implemented or integrated into a sidecar. "Imp" indicates where it is implemented.

There are two common prewarm startup approaches: the one-to-one prewarm startup and the one-for-all prewarm startup. In the one-to-one prewarm startup, each function instance is prewarmed from a size-fixed pool, or by dynamic prediction based on historical workload traces. In the one-for-all prewarm startup, instances of all functions are prewarmed from cached sandboxes, which are pre-generated according to a common configuration file. When a cold startup occurs, the function only needs to specialize these pre-initialized sandboxes by importing the function-specific code blob URI and settings. For higher scalability and lower instance initialization latency, C/R (Checkpoint/Restore) is also combined with prewarmed instances in serverless systems. C/R is a technique that can freeze a running instance, write a checkpoint into a list of files, and later restore the running state of the instance at the frozen point. A common pattern in serverless implementations is to pause an instance when idle to save resources, and then recover it for reuse when invoked [55,94].

One-to-one prewarm by size-fixed pool: effective but resource-unfriendly.
The one-to-one strategy prewarms instances for each function, which makes it crucial to determine the warm-up time: a warm-up cycle that is too slow is less efficient at reducing cold startups, while one that is too quick leaves unnecessary prewarmed instances wasting a mass of resources. The current solution is to build an exclusive prewarm pool with a fixed size for each function to maximize service stability. For example, Azure Functions [105] warms up instances of each function by setting up a fixed-size prewarm pool. Once the always-ready instances are occupied, prewarmed instances become active and continue to buffer until reaching the limit. The open-sourced Fission [44] prewarms in a similar way: it introduces a dedicated component that manages a pool of generic instances with a fixed pool size and injects function code into idle instances to reduce cold start latency.

One-to-one prewarm by predictive warm-up: ways to become resource-friendly. The one-to-one strategy prewarms exclusive instances in each size-fixed prewarm pool and loads code whenever invocations arrive. This pattern is a safe strategy that introduces no additional security concerns; however, it produces massive idle instances in the background and makes the serverless system resource-unfriendly. This deficiency inspires researchers to propose more flexible prewarm strategies, such as prediction-based and heuristic-based methods. Xu et al. [146] design an AWU (Adaptive Warm-Up) strategy that leverages LSTM (Long Short-Term Memory) networks to discover dependence relationships in historical traces. It predicts the invoking time of each function to prewarm the instances, and initializes prewarmed containers according to the ACPS (Adaptive Container Pool Scaling) strategy once AWU fails. Shahrad et al. [118] propose a practical resource management policy for the one-to-one prewarm startup.
By characterizing FaaS workloads, it dynamically changes the lifetime of recycled and provisioned instances according to time-series prediction. CRIU (Checkpoint/Restore In Userspace) [39] is a software tool on Linux that implements checkpoint/restore functionality. Replayable Execution [140] improves on CRIU, using "mmap" to map checkpoint files into memory and leveraging the OS's copy-on-write to share cold data among multiple containers. By exploiting intensive-deflated execution characteristics, it reduces a container's cold startup time and memory usage.

One-for-all prewarm with caching-awareness: make the prewarm generalized and resource-friendly with privacy guaranteed. The one-for-all prewarm startup shares a similar mechanism with the Template method: templates are hatched with most of the bins/libs already pre-imported after being informed by the socket. When a new invocation arrives and requires a new instance, it only needs to be initialized or specialized from the templates. In this process, Catalyzer [42] optimizes the restore step of C/R by accelerating recovery on a retrenched critical path. It also proposes a sandbox fork that leverages a template sandbox with the specific function pre-loaded for state reuse. To reduce initialization during cold startup and flatten the startup latency, Mohan et al. [97] propose a self-evolving pause-container pool that pre-allocates virtual network interfaces and lazily binds them to pause containers. As performance improves, vulnerabilities arise: the security concerns of the one-for-all prewarm strategy usually amount to privacy concerns in the Encapsule Layer. In other words, it is essential to make private packages/libraries (REQs) inaccessible. For example, the well-known open-sourced Apache OpenWhisk [103] resolves this by allowing users to supply private packages in a ZIP or virtualenv to dynamically specialize the prewarmed container [104].
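Among the mechanisms above, the checkpoint/restore pattern (CRIU, Replayable Execution, Catalyzer) can be reduced to a toy sketch: serialize a running instance's state, pause it, and later resume from the frozen point instead of paying a cold start. Here pickle stands in for CRIU's process dump; the class and its fields are illustrative.

```python
import pickle

class FunctionInstance:
    """Illustrative stand-in for a warm function instance."""
    def __init__(self, runtime):
        self.runtime = runtime
        self.invocations = 0
    def invoke(self):
        self.invocations += 1
        return self.invocations

inst = FunctionInstance("python3.9")
inst.invoke()

# Checkpoint: freeze the instance state (CRIU dumps real process state to files)
checkpoint = pickle.dumps(inst)
del inst                               # instance paused/evicted to save resources

# Restore: resume at the frozen point, skipping runtime initialization
restored = pickle.loads(checkpoint)
assert restored.runtime == "python3.9"
assert restored.invoke() == 2          # continues where it left off
```

The saving comes from the restore path skipping environment initialization entirely, which is exactly what the C/R-based prewarm optimizations accelerate.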
The Zygote mechanism is a cache-based design used in Android for application instantiation. SOCK [101] generalizes the Zygote mechanism when creating prewarmed containers by considering the internal characteristics of workloads. Specifically, it designs a packages-aware caching model that dynamically adjusts the cached packages with the highest benefit-to-cost ratio. Because the metric allocation is performed with runtime sampling at the system level, malicious activities cannot reveal the sensitive packages/libraries of a specific function.

One-to-one and one-for-all prewarm: the challenging points. Both the one-to-one and the one-for-all prewarm startup can be beneficial for optimizing the cold startup in the Encapsule Layer of a serverless architecture, but their respective flaws are also apparent. The one-to-one prewarm startup achieves significantly lower initialization latency in exchange for memory resources. It meets the challenge that the right point in time for warm-up is usually hard to measure or predict while ensuring a reasonable allocation of memory resources, according to [118]. On the one hand, prediction-based and heuristic-based methods are particularly effective when historical data suffices to build an accurate model, but they degrade when traces are scarce. On the other hand, the prediction and iteration operations can introduce high CPU overhead when massive applications and function chains co-exist. The template mechanism of the one-for-all prewarm startup eases the high cost of function cold startups from scratch. In addition, maintaining a global prewarm pool introduces less additional memory consumption than the one-to-one prewarm startup. However, it still suffers from several challenges, including the huge template image size [8,51] and conflicts among the various pre-imported libraries.
It may also reveal information about co-located applications that share a similar profile and are widely deployed. It is very important to "suit the remedy to the case" for cold startups in different scenarios. For example, it is much more efficient to generate a template via the one-for-all prewarm startup when a function is invoked for the first time, or when trace analysis yields poor predictions. The one-to-one prewarm startup performs better for functions with regular invocation rules or diurnal patterns, and vice versa.

ORCHESTRATION LAYER

The main challenge in the System Orchestration Layer is friendly and elastic support for different services. Even though current serverless orchestrators are implemented differently, the challenges they encounter are much the same. As hundreds of functions coexist on a serverless node, scheduling massive functions with inextricable dependencies becomes challenging. Similar to traditional solutions [26,35,59,76,126], the serverless model is also concerned with the ability to predict on-demand computing resources and with an efficient scheduling strategy for services. As shown in Figure 4, researchers usually propose introducing load balancer and resource monitor components into the controller to resolve provisioning and scheduling challenges. The load balancer coordinates resource usage to avoid overloading any single resource, while the resource monitor keeps watching the resource utilization of each node and passes updated information to the load balancer. With the resource monitor and load balancer, a serverless controller can perform better scheduling strategies at three levels: resource-level, instance-level, and application-level. We summarize the hierarchy in Table 3.
Specifically, "Focused hierarchy" indicates the level at which an optimized method (i.e., resource adjusting) is designed on top of an essential resource auto-provision strategy, classified as "R" (resource-level), "I" (instance-level), or "A" (application-level). "Resource adjusting" shows whether the scheduling provides an adjustment of resource provision. "SLO" reflects whether SLO constraints are considered. "Intf" represents whether resource contention or interference is discussed. "Usage feedback" reflects whether feedback on resource usage from a physical node is considered. "Dynamic strategy" indicates whether it is a dynamic, runtime scheduling strategy. "Trace driven" indicates whether decisions depend on traces or collected data metrics. "Predict/Heuristic" reflects whether a prediction-based or heuristic-based method is used. "Implement" points out where it is implemented ("P" means it is a prototype). Finally, "Insight" summarizes its unique insight and key motivation.

Dynamic Adjustment of Resource Provision (Resource-level)

Resources such as CPU and memory serve as the basic scheduling objects in serverless computing. For isolation and stability, resources are configured by an orchestrator, and access to them is restricted. The serverless controller allocates resources for a new instance and isolates the execution environment when a cold startup occurs. Therefore, the key to building an efficient serverless controller is auto-scaling just the right amount of resources to satisfy elastic workloads. However, the controller component alone cannot adjust appropriately, because cluster resources are highly dynamic and default resource specifications are potentially inaccurate.

Make the resource provision of a container "just the right amount". The common solution for avoiding resource over-provisioning is to build feedback loops over historical traces.
For example, the works in [33,34] optimize original resource settings for VMs by mining trace-driven patterns. In serverless computing systems with more fine-grained functions, a real-time resource monitor can be employed to help the controller make dynamic resource adjustments, as shown in Figure 4(a).

(Fragment of Table 3: Work | Focused hierarchy | Implement | Insight:
Step Functions [43] | A | AWS Lambda | Health check
WUKONG [29,30] | A | AWS Lambda | Graph to seq
Viil et al. [136] | A&(R) | Pegasus | Partition
SAND [4] | A | P | Colocation
GlobalFlow [154] | A&(I) | AWS Lambda | Cross-regions
SONIC [129] | A&(R) | AWS Lambda | Hybrid exchange)

For example, Pigeon [82] builds a serverless framework that introduces a function-level resource scheduler and an oversubscribed static pool. The scheduler assigns containers with different resource configurations to queries based on node capacity and function requirements. However, its container pool is based on a static configuration, which may lead to resource fragmentation and low utilization. FlowCon [155] facilitates dynamic resource allocation for container-based DL training tasks by predicting their near-future progress and elastically resetting resource configurations. Though these works design dynamic auto-provision strategies based on monitor feedback, SLO constraints and resource interference are not considered further, which limits their flexibility and practicality in a real production environment. DRL (Deep Reinforcement Learning), which evolved from deep Q-learning, learns control strategies from high-dimensional perceptual inputs [96] and can be used to make resource provision decisions. For example, Wang et al. [139] propose a serverless scheduler based on DRL for ML training jobs. It can dynamically adjust the number of function instances needed and their memory size to balance high model quality against training cost.

The keys to making resource provision robust in performance.
Recent works take the SLA into account to ensure stability when functions are invoked in a shared-resource cloud. CherryPick [5] leverages Bayesian optimization, which estimates a confidence interval of an application's running time and cost, to help search for near-optimal resource configurations. Unlike static searching solutions, it builds a performance model to prune unnecessary iteration trials, thus accelerating convergence. However, CherryPick's performance model specifically targets big data applications and does not generalize to other applications. Similarly, Lin et al. [81] build an analytical model to help deploy general serverless applications. It can predict an application's end-to-end response time and average cost under a given configuration. They also propose a Probability Refined Critical Path Greedy algorithm (PRCP) based on transition probabilities, which recursively searches the critical path of the execution order. With PRCP, they can achieve the best performance for a specific configuration under budget constraints, or lower cost under QoS constraints. Besides the SLA, shared-resource contention should also be noticed in the multi-tenant environment. HoseinyFarahabady et al. [57] discuss this topic: their proposed MPC framework optimizes the serverless controller for predictive resource allocation. By introducing a set of cost functions, it reduces QoS violations, stabilizes CPU utilization, and avoids serious resource contention. However, such resource and workload estimations based on ML (Machine Learning) or AI (Artificial Intelligence) usually trade off an optimal global solution against performance that is robust to inaccurate workload information [26,31,60,133]. Whether they can avoid fragile robustness and improve resource utilization in the production environment is unknown and remains a critical avenue to explore.
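Before moving to instance-level scheduling, the "just the right amount" feedback idea above can be condensed into a toy controller that nudges a function's memory limit toward observed peak usage plus headroom. All numbers, including the 128-3008MB bounds, are illustrative; the cited systems use far richer models.

```python
def adjust_limit(current_limit, usage_samples, headroom=1.2,
                 floor=128, ceil=3008):
    """Toy feedback loop for 'just the right amount' provisioning: move a
    function's memory limit toward observed peak usage plus headroom.
    The 128-3008MB bounds mimic common FaaS memory configuration ranges."""
    target = max(usage_samples) * headroom
    # move only half-way toward the target to damp oscillation
    new_limit = current_limit + 0.5 * (target - current_limit)
    return max(floor, min(ceil, round(new_limit)))

# an over-provisioned function converges downward across feedback rounds
limit = 1024
for _ in range(10):
    limit = adjust_limit(limit, usage_samples=[200, 250, 230])
assert 128 <= limit < 400
```

Moving only half-way toward the target each round damps oscillation when usage samples are noisy, at the cost of slower convergence.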
Load Balancing for Instance Scheduling (Instance-level)

In addition to dynamic adjustment at the resource level, the most important part of a serverless system is instance-level scheduling. From the perspective of cloud vendors, the goals are higher throughput, higher resource utilization, or less energy consumption; users, in turn, prefer cheaper deployment costs and lower end-to-end invocation latency. To this end, instances from multiple functions or tenants should be carefully scheduled across the cluster to achieve the above targets. The mainstream solution is to leverage a load balancer, as shown in Figure 4(b). It is designed as a query router that helps schedule functions and achieve load balancing between nodes in the cluster. The strategies can be classified into two categories: hash-based and multi-objective-based methods. In the hash-based method, the controller uses a hash function to decide the home node (or executor) of a given function for default routing. It then uses a step size to recursively probe for an alternative if the home node is unavailable or resource-constrained, as determined by a health check on each physical node. Until a cloud provider has a full understanding of the characteristics of the workloads running in its serverless system and cluster, we recommend the hash-based method to implement a load balancer. In the multi-objective-based balancing method, the load balancer aims at multiple optimization targets, for example, throughput, response time, and resource utilization. It should therefore balance different factors to satisfy both cloud vendors and users.

Leverage the resource monitor and load balancer to make scheduling decisions. The resource monitor provides a global view of resource status in a cluster, which helps the load balancer make better scheduling decisions. Chang et al. [32] design a comprehensive monitoring mechanism for a Kubernetes-based system.
It can provide a variety of runtime information to the scheduler, including system resource utilization and the QoS performance of an application. The flaw of the study is that it does not provide a sophisticated resource scheduling algorithm. Kaffes et al. [66] propose a centralized and core-granular scheduler. Centralization gives the scheduler a global view of the cluster, eliminating heavyweight function migrations; core-granularity binds cores to functions, avoiding core-sharing among functions and promising performance stability. However, they only consider the scheduling of CPU resources and ignore other important resources such as memory. FnSched [127] regulates CPU shares to accommodate incoming application invocations by checking the available resources. A key advantage of its greedy algorithm is that fewer invoker instances are scheduled, by concentrating invocations in response to varying workloads. Though FnSched makes a tradeoff between scaling efficiency and acceptable response latencies, it is limited by the assumption that function execution times do not vary. Guan et al. [50] propose an AODC-based (Application Oriented Docker Container) resource allocation algorithm that considers both the available resources and the required libraries. They model container placement and task assignment as an optimization problem and use a linear programming solver to find a feasible solution. The Pallet Container performs the AODC algorithm, serving as both load balancer and resource monitor. The downside is that plenty of containers will occupy memory space as the number of functions increases.

Take performance interference and QoS constraints into consideration. While improving utilization, load balancing strategies also bring an interference challenge: sharing resources between instances may result in performance degradation and QoS violations.
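One simple way a balancer can moderate such interference is proportional CPU-cap rebalancing: redistribute a node's CPU quota among application groups according to their recent demand, so no group starves another. This is a toy sketch with made-up group names and numbers, not the cited systems' actual algorithms.

```python
def rebalance_cpu_quota(groups, total_quota=100):
    """Toy CPU-cap control: redistribute CPU quota among application groups
    in proportion to their recent demand, so each group's processes consume
    only up to their cap and contention is bounded."""
    total_demand = sum(g["demand"] for g in groups)
    for g in groups:
        g["quota"] = round(total_quota * g["demand"] / total_demand)
    return groups

groups = [
    {"name": "latency-sensitive", "demand": 60, "quota": 50},
    {"name": "batch",             "demand": 20, "quota": 50},
    {"name": "background",        "demand": 20, "quota": 0},
]
rebalance_cpu_quota(groups)
assert [g["quota"] for g in groups] == [60, 20, 20]
```

In a real system the quota would be written into each group's cgroup CPU limit; here the dictionary update merely models that step.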
Different functions' sensitivities to different resources may vary, which means that we should avoid the physical co-location of functions that are sensitive to the same resource (e.g., CPU-sensitive containers may cause serious CPU contention when co-located). The load balancer should notice and moderate such interference when scheduling containers. McDaniel et al. [93] manage the I/O of containers at both the cluster and node levels to reduce resource contention and effectively eliminate performance degradation. Based on a resource monitor in Docker Swarm, they refine container I/O management by providing a client-side API, thus enforcing proportional shares among containers under I/O contention. Kim et al. [70] present a fine-grained CPU cap control solution that automatically and distributedly adjusts the allocation of CPU capacity. Based on performance metrics, applications are grouped and allowed to make adjustment decisions, and the application processes of a group consume only up to their quota of CPU time. Hence, it minimizes response time skewness and improves the robustness of the controller against performance degradation. Smart Spread [88] proposes an ML-based function placement strategy that considers several resource utilization statistics. It can predictively find the best-performing instance while incurring the least performance degradation on that instance.

Data-driven Workflows for Application Deployment (Application-level)

At the application level, load balancing strategies can be categorized into two kinds: the spread strategy, which distributes the functions of an application across all physical nodes, and the bin-pack strategy, which tries to schedule the functions of an application onto the same node first [57]. Intuitively, the spread strategy seems to better balance the workloads on the nodes while avoiding serious resource contention.
However, it weakens data locality, which means that the spread strategy introduces more transmission overhead than bin-pack if functions are data-dependent. This is why scheduling also needs to be optimized from the perspective of the application.

Invocation patterns and workflow execution models. As shown in Figure 5(a), a function invoked from user queries via the RESTful API or other triggers follows the external invocation pattern. Instance-level load balancing can perform well in external invocation scenarios. However, emerging cloud applications may consist of several functions with data dependencies between them; for example, the implementation of a real-world social network consists of around 170 functions [2]. In this case, the functions of such an application are activated by various triggers, which may come from a user query or from another function. A function initialized or assigned by other functions follows the internal invocation pattern. Researchers have therefore turned to data-driven scheduling for internal invocations from the perspective of application-level topology. The workflow is the most common implementation of internal invocations, where functions are executed in a specified order to satisfy complex business logic. The execution models of these data-driven workflows fall into two approaches: the sequence-based workflow and the DAG-based workflow. As shown in Figure 5(b), in the sequence-based workflow functions are invoked in a pipeline through a registered dataflow. The sequence-based workflow is the basic and most common pattern in serverless workflows, and most cloud vendors provide such an execution mode for application definition. Obviously, there can be more than one sequenced workflow in a complex application, and the same functions can be executed in various sequences.
If we regard each function as a node and the dataflow between nodes as a directed edge, such an application with multiple interlaced sequenced workflows can be defined as a DAG (hence the name "DAG-based workflow"). Nowadays, a few cloud vendors provide services for application definition in the DAG (Directed Acyclic Graph) form, aka serverless workflows [1,21,27].

The scheduling overhead introduced in serverless sequences. With massive functions communicating with each other, scheduling dataflow dependencies introduces more complexity. However, existing serverless systems in the production environment commonly treat these workflows as a simple recursion of internal invocations. This raises the challenge of reducing the overhead in the System Orchestration Layer when scheduling function sequences [16]. The current policy for managing function sequences is quite simple: functions are triggered following the first-come-first-served algorithm [129]. However, as the length of a function sequence increases, cascading cold start overheads must be addressed to avoid serious end-to-end latency degradation of sequenced workflows [20,40]. To this end, Xanadu [40] combines the prewarm strategy with a most-likely-path (MLP) estimation of the workflow execution. It prewarms instances by a speculation-based strategy and makes just-in-time resource provisioning. However, a prediction miss introduces additional memory waste, especially in multi-branch or DAG scenarios. Moreover, serverless workflow engines prefer the Master-Worker architecture, where ready functions are identified by their state and invoked directly by the master without a queue [9,17,30,47,89], including AWS Step Functions [43] and Fission Workflows [46]. As shown in Figure 5(a), the deficiency is that additional overhead is introduced into the function workflow through unnecessary middleware (e.g., unnecessary storage in an internal invocation).
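A most-likely-path estimation of this kind can be sketched minimally: follow the highest-probability branch at each node of the workflow DAG and prewarm along that path. The workflow, branch probabilities, and function names below are hypothetical, and this sketch omits Xanadu's just-in-time provisioning logic.

```python
def most_likely_path(dag, start):
    """Toy most-likely-path (MLP) estimation for speculative prewarming:
    follow the highest-probability branch at each node of the workflow DAG.
    `dag` maps a function to its possible successors with branch probabilities."""
    path, node = [start], start
    while dag.get(node):
        node = max(dag[node], key=dag[node].get)  # take the likeliest branch
        path.append(node)
    return path

workflow = {
    "auth":   {"resize": 0.7, "reject": 0.3},
    "resize": {"store": 0.9, "retry": 0.1},
    "store":  {},
    "reject": {},
    "retry":  {},
}
# prewarm instances along the speculated path before invocations arrive
assert most_likely_path(workflow, "auth") == ["auth", "resize", "store"]
```

A miss (e.g., "auth" branching to "reject") leaves the speculatively prewarmed "resize" and "store" instances idle, which is exactly the memory-waste risk noted above for multi-branch DAGs.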
Enhance data locality for efficient serverless DAG executions. To help function workflows avoid undesired middleware, researchers usually co-locate functions into subgraphs to enhance data locality, as shown in Figure 4(c). For example, Viil et al. [136] use multilevel k-way graph partitioning to automatically provision and configure scientific workflows in multi-cloud environments. However, their partition algorithm may not match serverless applications well, where each node in the graph can auto-scale to multiple replicas in, e.g., foreach steps; in this case, the connections and edge weights become unpredictable. In the serverless context, WUKONG [29,30] implements a decentralized DAG engine based on AWS Lambda that combines static and dynamic scheduling. It divides the workflow of an application into subgraphs, before and during execution, thus improving parallelism and data locality simultaneously. However, WUKONG's co-location of multiple functions within a Lambda executor may introduce additional security vulnerabilities due to its weakened isolation. SAND [4] presents a new idea: grouping workflow functions into the same instance, so that libraries can be shared across forked processes to reduce initialization cost, and additional transmission in the workflow can be eliminated thanks to data locality. SAND performs better isolation than WUKONG by using process forking for function invocations; however, it ignores the co-location interference resulting from resource contention. For exchanging the intermediate data of DAGs, SONIC [87] proposes a VM-storage-based transmission strategy when functions are co-located on the same node. The optimal transfer choice depends on application-specific parameters such as the input size and node parallelism. By predicting such runtime metrics of the functions in a workflow, it dynamically performs data-passing selection with communication-aware function placement.
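Communication-aware data passing of this kind boils down to comparing transfer channels given the placement of producer and consumer. The selector below is a toy sketch with illustrative bandwidths; SONIC's real model predicts these parameters at runtime rather than fixing them.

```python
def choose_data_passing(src_node, dst_node, payload_mb,
                        local_bw=1000, remote_bw=100, storage_bw=50):
    """Toy data-passing selection: co-located functions exchange intermediate
    data through node-local storage; otherwise pick the cheaper of a direct
    network transfer or a remote-storage relay. Bandwidths (MB/s) are made up."""
    if src_node == dst_node:
        return "local", payload_mb / local_bw
    direct = payload_mb / remote_bw
    via_storage = 2 * payload_mb / storage_bw   # write to storage, then read back
    if direct <= via_storage:
        return "direct", direct
    return "storage", via_storage

# co-located producer/consumer: local exchange is an order of magnitude cheaper
assert choose_data_passing("n1", "n1", 200) == ("local", 0.2)
mode, secs = choose_data_passing("n1", "n2", 200)
assert mode == "direct" and secs == 2.0
```

The placement decision and the channel decision interact: co-locating two heavy-exchange functions flips the choice to "local", which is why the placement itself must be communication-aware.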
GlobalFlow [154] considers a geographically distributed scenario where functions reside in one region and data in another. It groups the functions in the same region into subgraphs and connects them with lightweight functions, thereby improving data locality and reducing transmission latency. As the authors state, combining local and cross-region strategies in a holistic manner can be further explored. Summary of the challenges in the scheduling of serverless workflows. Workflow scheduling is an NP-hard problem, and researchers have designed various strategies for it [1,91]. Such optimization aims to minimize the makespan, reduce the execution cost, and improve resource utilization while satisfying single or multiple constraints. To resolve the above challenges, leveraging enhanced data locality is one focus in serverless computing. The challenge is that the end-to-end latency of a workflow query can increase significantly due to frequent interactions with storage from different nodes. Resource volatility is another focus in serverless systems; it can become unpredictable as the number of functions grows in the production environment. This makes it harder to find an efficient workflow placement and scheduling strategy within a concise decision time (e.g., 10ms for load balancing). To evaluate the efficiency and performance of future workflow-based research, DAG-based or DG-based serverless benchmarks also urgently need to be published. They are better adapted from real applications than from simple micro-benchmarks [110] or function self-loops [81,148]. Keeping guaranteed QoS performance is also significant for applications in serverless computing, yet it has not been widely investigated.

Security Concerns in Orchestration Layer

In the Orchestration Layer, the most serious security concern is how to resolve unavailability. 
It usually refers more to performance security than functional security. Attackers may launch destructive behaviors at the resource, instance, or application level, resulting in unmatched in-memory footprints, concurrency exhaustion, or workflow exceptions. An efficient solution to these concerns is leveraging BaaS components to restrict access. In-memory footprint by unrestricted read-in. Contrary to intuition, serverless architecture can make programming more complicated because the decoupled microservices have higher requirements on normalized input data. Unrestricted memory read-in from input data may result in a timeout or breakdown due to an oversized memory footprint (e.g., a 300MB memory read-in within a 256MB-limit container). Function developers may overlook this vulnerability in a public cloud, where an attack can easily disguise itself as a legitimate invocation. On the premise that user code is fragile against this kind of attack, a serverless system needs to bind filtering rules to the event trigger to help avoid this security concern. Concurrency exhaustion by DDoS (Distributed Denial of Service) attack. When an application is decoupled into a mass of functions, its concurrency bottleneck depends on the maximum throughput across all function nodes. In this case, a DDoS attack on serverless is a more significant threat: with a wider attack surface, any unavailable function node can take down the entire application. 
Such an attack can cause seriously degraded QoS through invocation exhaustion at a single function node, or generate a large bill for the application account. The latter is known as a DoW (Denial of Wallet) attack in serverless computing. This security risk can be mitigated by setting an upper limit on invocation concurrency and an instance quota at function creation. Workflow exception by malicious dataflow. When triggering invocations, function input parameters and preferences are also passed within the dataflow. Therefore, an internal invocation is more fragile than an external one if the function maintainer fails to verify the query source. In addition, attackers can inject malicious data into queries to generate invocation exceptions, or drive the workflow execution order out of control. In a serverless architecture, all function nodes in the application need to verify access permissions and check whether the input data tampers with the current invocation.

ESSENTIAL BAAS COMPONENTS IN COORDINATION LAYER

We have examined the most critical implementations in Sections 2, 3, and 4; there are also other components in the System Coordination layer introduced to support or enhance the serverless system. We outline the relevant techniques and research in Figure 6. In terms of implementation, a serverless system needs to integrate six significant components or services: Storage, Queue, API gateway, Trigger, Data cache, and DevOps tools. Most of the literature focuses on the data cache, queue service, and function storage from an academic perspective. In contrast, cloud vendors mainly focus on products for trigger services and DevOps tools. We discuss each of these components in detail in the following text.

Storage Service

One of the key requirements of a serverless workload is efficiently sharing ephemeral data between functions or saving the results of asynchronous invocations. 
Therefore, a natural way for functions to communicate is to exchange data through a remote store. Different phases of storage during function execution. During a serverless invocation, there are three phases where the database service is required: Authentication, In-Function, and Log. Authentication is usually performed ahead of controller scheduling to avoid security issues, and it should get a fast response on access. Using an MMDB (Main-Memory DataBase) for the Authentication phase is recommended in a serverless system; an example is Redis, a high-performance key-value database. Calls to storage APIs during function execution make up the In-Function phase. Users can choose either a DRDB (Disk-Resident DataBase, e.g., MySQL) or an MMDB through different BaaS interfaces for ephemeral storage. The Log phase builds the bridge for returning invocation results to users, especially for functions invoked asynchronously. A detailed record in JSON format, including runtime, execution time, queue time, and state, is ephemerally or permanently stored and returned (e.g., via CouchDB in OpenWhisk). This phase is best designed as serverless storage, following the invocation pattern of paying only for the queries consumed by storage operations and the space consumed by logging. However, the throughput of existing storage is a major bottleneck due to frequent and vast function interactions [64,65]. Although current serverless systems support NAS (Network-Attached Storage) to help reduce storage API calls, these shared-access protocols are essentially still network-based data communication. I/O bottleneck in storage: modeling in the serverless context. Traditional solutions use predictive methods [38,100,137] and active storage [109,131,132,143,145,152] to automatically scale resources and optimize data locality on demand. For serverless storage, researchers explore hybrid methods to ease the I/O bottleneck. 
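A minimal sketch of such a hybrid tiering decision: small, latency-sensitive intermediate objects go to the fast in-memory tier while capacity lasts, and everything else falls back to the cheap, slow tier. The cutoff, the capacity check, and the tier names are illustrative assumptions, not values from any cited system:

```python
def choose_tier(object_size, fast_free_bytes, small_cutoff=1 << 20):
    """Pick a storage tier for an intermediate object.

    object_size and fast_free_bytes are in bytes; small_cutoff defaults
    to 1 MiB. Both the cutoff and the capacity check are illustrative.
    """
    if object_size <= small_cutoff and object_size <= fast_free_bytes:
        return "memory"   # e.g., a Redis-like MMDB tier
    return "disk"         # e.g., an object store or DRDB tier
```

A real system such as Locus additionally weighs the monetary cost of each tier against the predicted shuffle time; this sketch only captures the size/capacity part of that tradeoff.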
For example, Pocket [71] is strict about the separation of responsibilities across the control, metadata, and data planes. Using heuristics and combining several storage technologies, it dynamically rightsizes resources and strikes a balance between cost and performance efficiency. To alleviate the extremely inefficient execution of data analytics workloads on serverless platforms, Locus [106] models a mixture of cheap but slow storage and expensive but fast storage. It makes a cost-performance tradeoff to choose the most appropriate configuration variables and shuffle implementation. The middleware Zion [108] enables a data-driven serverless computing model for stateless functions. It improves elasticity and eases resource contention by injecting computations into data pipelines and running them on dataflows in a scalable manner. Due to the data-shipping architecture of serverless applications, current works usually focus on designing more elastic serverless storage and enhancing data locality to ease the I/O contention of function communication on the DB side. However, the potential heterogeneity of different functions still makes these technologies challenging in practice.

Specialized Queue

In various implementations of the serverless system, the queue is integrated into the System Orchestration layer by default, passing messages between different system components. For instance, Apache Kafka serves as a distributed message streaming platform that allows applications to write and subscribe to messages across different hosts. Interacting with the controller via node queues and function queues. The function queue sends messages between the controller and functions. In contrast, the node queue serves load balancing by scheduling functions to different nodes (e.g., a queue in the cluster manager). A representative adoption of the function queue design is OpenWhisk [103]. 
When the OpenWhisk controller receives an invocation query, it decides which invoker should execute the instance and then sends the query to the selected invoker via Kafka. To increase parallelism, it also leverages topic partitioning so that messages for the same consumer are written to the same partition. SAND [4] also follows this message queue design by introducing a two-level hierarchical message bus: a local message bus deployed on each host and a global message bus distributed across hosts. A local message bus is partitioned into different message queues such that every function on the host subscribes to messages from its own function queue. In this way, if a function and its successor run on the same host, the function can directly write its output into the local message queue subscribed to by its successor. Otherwise, the output is written to the global message bus. Other work dives into the shortcomings of queue-based mechanisms, which may reduce performance and availability in the serverless context. McGrath et al. [94] propose introducing a "cold queue" and a "warm queue" that assume different responsibilities for function queries. DORADO [99] also uses shared memory to mediate communication and persist data. By such means, queries can be routed to any replicated container. It is more convenient for developers to adopt scalable queues in serverless computing, as scaling is delegated to cloud vendors. For example, Amazon Simple Queue Service provides scalability by processing each buffered invocation independently, scaling transparently to handle bursty loads without provisioning instructions. The idea of the scalable queue also meets the requirements of serverless computing, such as pay-per-use, dependability, convenience, and flexibility.

API Gateway and Various Triggers

In serverless computing, instances are created on demand, and invocations are not bound to a static address. 
When deploying containers or VMs to the cluster, the system dynamically assigns addresses to services. In this case, containers on the same node in the default network can communicate by IP address, while containers across nodes need ports allocated for forwarding. Dynamic port allocation raises a management challenge that intensifies at scale. The API Gateway component provides a unique entry point to ensure accurate service addresses. When queries arrive at the API Gateway, the service registry is consulted and the queries are forwarded to available service instances according to the IP route. The gateway should also consider the availability and reliability of the serverless system, for example, a lazy reaction to services made incompatible by hardware heterogeneity. Meanwhile, serverless systems design various triggers to invoke functions in response to queries. A trigger defines how a function is invoked, and a binding rule represents a mapping between them. The trigger and binding rule together make up a probe for the detectable event and help avoid hardcoding access to other services [77]. Besides invoking an event-driven function, triggers can also provide a declarative way to connect data to code (e.g., storage services). The four most popular triggers are HTTP, Queue, Timer, and Event. The HTTP trigger is widely used [118] to handle external invocations, by which a function can easily be invoked once an HTTP query arrives. While the HTTP trigger simplifies external invocation, it is less efficient for internal invocations. An alternative way to handle internal events is the Queue trigger, by which functions are triggered whenever an invocation enqueues. For instance, Kubeless [74] provides a Kafka-based queue trigger bound to a Kafka topic so that users can invoke the function by writing messages to the topic. Specific purposes also require more extensive triggers. 
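Independent of which trigger type fires, the trigger-and-binding dispatch described above can be sketched as a small router that matches incoming events against binding rules. This is a hedged illustration; the event shapes and function names are hypothetical, not any platform's API:

```python
class TriggerRouter:
    """Map incoming events to functions via binding rules.

    A binding rule pairs an event predicate with a function name; the
    router stands in for the platform's trigger component.
    """

    def __init__(self):
        self.bindings = []  # list of (predicate, function_name)

    def bind(self, predicate, function_name):
        """Register a binding rule: fire function_name when predicate matches."""
        self.bindings.append((predicate, function_name))

    def dispatch(self, event):
        """Return the functions whose binding rules match this event."""
        return [fn for pred, fn in self.bindings if pred(event)]
```

For example, binding `lambda e: e.get("type") == "http" and e.get("path") == "/resize"` to a hypothetical `resize_image` function makes the router invoke it for matching HTTP events while queue or timer events fall through to their own rules.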
For example, a timer trigger in Kubernetes can invoke a function periodically. It creates a CronJob [75] object, written as a Cron expression representing the set of invocation times, to schedule a job accordingly. An Event trigger invokes a function in response to an event, the atomic piece of information describing something that happened in the system. A convincing example of such an implementation is Triggerflow [85], which maps a workflow by setting an event trigger on each edge.

Data Cache

To ensure graceful performance when the workload bursts and reaches a hard concurrency limit, the common practice among cloud-based applications is to utilize multiple levels of caches [12]. Data caching cuts out unnecessary roundtrips, reducing response time when queries traverse a full end-to-end invocation path. One common idea is caching at the API Gateway (e.g., a caching solution for the GET method [6]), or caching pages in front of the database so that only cache misses result in a storage I/O query (e.g., Amazon DAX [7] and Aurora [135] for database caching). Another common solution is caching in the system [87,107,138,149]. It frees maintainers from declaring caches inside the function by enabling the caching of static assets or large objects. However, the cached data is only available in the ephemeral container, which makes sharing across all short-term instances of a function challenging. And this approach may not be as effective as it seems: the first invocation in every container will result in cache misses. Image cache: on-demand loading and page sharing. The simplest and most popular method is to provide an image cache for acceleration. In a container-based serverless system, an image is composed of multiple layers and shared by numerous containers as needed. When functions are invoked on a host for the first time, images need to be downloaded and cached locally. For example, Slacker [51] builds a Docker storage driver and uses block-level COW (copy-on-write) to implement snapshots in a VMstore. 
Docker images are represented by VMstore's read-only snapshots, and the pull and push operations only involve sharing snapshot IDs rather than large network transfers. This makes it possible for Docker workers to fetch data lazily from shared storage as needed. DADI [79] also implements an image service that merges a sequence of block-based layers, and it caches recently used data blocks using an overlay with a tree-structured design. SEUSS [28] factors out common execution state shared in a snapshot stack, which expresses a lineage between snapshots. It uses CoW to capture into a snapshot only the pages that were modified, enabling fast deployment. Furthermore, SEUSS proposes anticipatory optimization to reduce the number of written pages captured in each snapshot. State cache: make serverless applications stateful. A state cache can be further combined with an active database to make function execution stateful. Beldi [151], extended from Olive [115], adopts a refined SSF (Stateful Serverless Function) instance with a built-in database. By saving a set of intent tables recording the SSF's state and function information, it provides fault-tolerant workflow execution. SNF [120] decouples functions into compute units and state units, and relaxes the constraint of communication between cooperating units. When processing a subsequent flowlet in the same flow, the function's internal state is cached in local memory; SNF can then proactively replicate ephemeral state among compute units. Cloudburst [125] packs a local cache with a function executor in each instance, and periodically publishes a snapshot of cached keys to the key-value store. By such means, Cloudburst enhances data locality via physical co-location of mutable caches and enables state management with a remote auto-scaling key-value database. Checkpoint cache: enable functions with fault tolerance. 
The demand for fault tolerance also inspires researchers to adapt relevant techniques to the serverless context, such as C/R-based (checkpoint/restore) [80,153] and log-based [85,142] approaches. One example of such an implementation in serverless computing is AFT [124], which interposes between a storage engine and a common serverless platform by providing an atomic fault-tolerance shim. It leverages the data cache and shared storage to guarantee the isolation of atomic reads, avoid storage lookups for frequent accesses, and prevent significant consistency anomalies. In addition to the implementations discussed above, other caching mechanisms can be explored and integrated into any layer of our proposed serverless architecture. Of course, they are all recommended to follow the pay-as-you-go mode based on the resources used. In summary, data caching remains an essential component for higher flexibility and better performance.

DevOps Tools

DevOps is a compound of development and operations that improves collaboration and productivity. Since the responsibility for managing the underlying resources and runtime environment is transferred to the cloud vendors under the serverless concept, developers only need to focus on the code logic. Operations teams are largely liberated from this process, in which they were previously required to check, compile, pack images, and test deployment after developers submit the code. There is thus a necessity to provide DevOps tools in the serverless system. We agree with the pipeline implementation [61] that groups DevOps into CI (Continuous Integration), CD (Continuous Delivery), and CM (Continuous Monitoring) categories. CI refers to the continuous merging of developed code with others' during software development, while ensuring automatic validation and building. 
CI should cover not only operations within a function (e.g., Jenkins [63] for integration tests, Honeycomb [56] and SonarQube [122] for inspection tests), but also the availability between functions in an application (e.g., IOpipe, a monitoring and debugging tool for workflows [24]). CD requires that the serverless system can automatically update application instances of the old version while keeping the services available. Nowadays the common solutions are Rolling-update, Red-Black (aka Blue-Green), and Canary deployment, which are adopted by Kubernetes [25,48]. Essentially, these deployment strategies share the same mechanism: keep part of the old-version instances serving and gradually replace them with new ones. The difference between them lies in the complexity of rolling back to the last available instances. CM enables the DevOps team to receive timely feedback on problems and errors arising during the above steps. It also provides a visualized interface for monitoring applications' runtime behavior and resource utilization. In fact, the feedback tool is already implemented in most CI/CD tools and runs through the whole DevOps lifecycle of the application. By enriching the CM component, visualizing activities, and monitoring resources [69], users can better understand their services running in the cloud. The concept of DevOps in serverless typically appears more frequently in production scenarios, and most platforms provide various such tools to ensure good compatibility and flexibility. However, the introduction of DevOps may bring new vulnerabilities threatening the security of instances, and serverless research should also provide stronger support for detecting vulnerable containers [22,69,73,128].

PERFORMANCE AND COMPARISON

This section first summarizes the performance of serverless computing with different VMs/containers, language runtimes, and resource limits. 
Then, we analyze current production serverless systems to show their preferences.

Performance Analysis

The runtime within instances built from different virtualization technologies can exhibit different cold startup performance. Besides, the language runtime is another factor that can seriously affect cold startup latency. For example, evaluations in Catalyzer [42] show the cold startup latencies with different VM/container and language runtimes. As shown in Figure 7, HyperContainer introduces the highest cold startup latency across language runtimes, while the process-based Docker runtime performs significantly better than the others. Generally speaking, interpreted languages (e.g., Python) incur a higher initial cost and make cold startup times up to 10× slower [4,101] than compiled languages (e.g., C). However, according to the performance tests from Jackson [62], which measure the startup and execution latency of different language runtimes, the relative performance of compiled and interpreted language runtimes also depends on the platform. For example, the cold startup latency of .NET C# on AWS Lambda is higher than that of Node.js, while the opposite holds on Azure Functions. This is because Azure Functions provides better support for C#, based on Microsoft's core .NET technology, and runs on Windows containers rather than the open-source .NET CLR (Common Language Runtime) on Linux containers.

Fig. 9. Cold startup latency with different container memory limits [117].

Besides the cold startup analysis of different language runtimes, SAND [4] also measures several sandbox isolation mechanisms for function executions; we show their results in Figure 8. Native executions (exec and fork) are the fastest methods, while a Unikernel (Xen MirageOS) performs similarly to a Docker container. 
Even disregarding the user code recycled in memory in a paused container, using the Docker client interface to start a warm function (Docker exec C) is much faster than a cold startup (Docker run C). Another significant factor that slows down cold startups in container-based serverless systems is the memory limit. The performance evaluation of memory allocation [117] is shown in Figure 9. The cold startup latency of each microbenchmark function increases when stepping to smaller memory limits. We can also see a significant decrease in container startup latency when stepping from 128MB to 256MB, while larger memory limits yield only diminishing returns. This also explains why most serverless systems set 256MB as the default memory limit of the function container. Complementing the factors above, Shahrad et al. [117] explore other factors that may affect function cold startup and execution time, such as MKPI (mispredictions per kilo-instruction), LLC (Last-Level Cache) size, and memory bandwidth. First, they find that longer execution times usually come with noticeably lower branch MKPI within a function. This is easy to understand: functions with short execution times spend most of their time on language runtime startup, so the branch predictor produces more misses before it is fully trained. Second, the LLC size is not a significant factor in cold startup latency and execution time. A larger LLC cannot bring better performance for serverless function execution because of this insensitivity; only when the LLC is very small (e.g., less than 2MB) does it become a bottleneck for function execution and cold startup. Cloud vendors usually set a default LLC size and pre-profile in the serverless system to avoid serious performance degradation. 
BabelFish [121] also finds that lazy page table management can result in heavy TLB stress in a containerized environment. To avoid the redundant kernel work produced during page table management, it shares translations across containers in the TLB and page tables.

Production Comparison

With more attempts to enable the rapid development of cloud-native applications, Wang et al. [141] evaluate the performance of three commercial serverless platforms by invoking measurement functions with stepwise memory limits to collect various system-level metrics. Lee et al. [77] also give a detailed comparison between Amazon Lambda, Google Cloud Functions, Microsoft Azure Functions, and IBM OpenWhisk. They demonstrate the differences in throughput, network bandwidth, I/O capacity, and computing performance. Based on these experiments, we summarize the metrics in Table 4, which offers a glimpse of the platforms' respective strengths and weaknesses. For example, AWS Lambda shows higher capacity and throughput for concurrent function invocations while performing poorly in trigger throughput. Microsoft Azure Functions enables fast read and write speeds when queries are invoked in sequence, but shows relatively higher function cold startup latency. Undoubtedly, all cloud vendors are aware of the challenges in serverless architecture and are actively optimizing function invocation performance and the relevant BaaS bottlenecks.

OTHER KEY LIMITATIONS AND CHALLENGES

The limitations of current works in each layer, together with the corresponding challenges, have already been discussed in the relevant sections. This section highlights other key limitations and challenges in the Encapsule, Orchestration, and Coordination layers, respectively, as an orthogonal supplement. We refer readers to other surveys [13,18] for more detailed and focused discussions of the Virtualization layer. 
Stateless within Encapsule Layer

An essential feature of serverless is that a service is loaded and executed on demand rather than deployed in a long-running instance. To prevent a large number of instances from occupying memory resources, the serverless controller sets an instance lifetime to recycle them automatically. Because short-lived functions within the application are no longer associated with a particular instance or server, there is no guarantee that consecutive queries are processed by the same function instance. In other words, the application's state cannot and will not be kept on the resumed instance [68]. The stateless nature weakens the generality of the serverless architecture, limiting its scope to stateless applications such as web applications, IoT (Internet of Things), and media processing. The extensions toward a stateful serverless architecture (see [120,125,151] in Section 5.4), which save state in object storage or key-value stores, fail to provide low latency and high throughput simultaneously, making them inferior to the regular sticky sessions that IaaS or PaaS provide.

Memory Fragmentation within Orchestration Layer

In a serverless architecture where multiple tenants co-exist, concurrent invocations are either processed in multiple containers, experiencing undesired cold startups in each one, or executed concurrently in one single container (e.g., OpenFaaS and OpenLambda). In the former, a container is allowed to execute only one invocation at a time for performance isolation. In this case, the memory footprint of massive sidecars prevents serverless containers from achieving high-density deployment and improved resource utilization [3]. The key to this challenge is slimming and condensing the container runtime by deduplication within the VMM and guest kernel, such as sharing the page cache across different instances on the host. In the latter, memory fragmentation becomes a top priority. 
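To make the fragmentation problem concrete, a toy first-fit-decreasing packer shows how placement strands free memory on partially filled nodes. The capacities and demands below are invented numbers for illustration, not any production balancer's policy:

```python
def first_fit_decreasing(demands_mb, node_capacity_mb):
    """Pack instance memory demands onto nodes, first-fit decreasing.

    Returns the free memory left on each node after placement -- the
    scheduling fragmentation stranded by the heuristic. A toy model,
    not what any production load balancer ships.
    """
    nodes = []  # free MB remaining on each opened node
    for demand in sorted(demands_mb, reverse=True):
        for i, free in enumerate(nodes):
            if free >= demand:
                nodes[i] -= demand
                break
        else:
            # No existing node fits: open a fresh node for this instance.
            nodes.append(node_capacity_mb - demand)
    return nodes
```

Packing demands of 512, 256, 256, and 128 MB onto 1024 MB nodes fills the first node exactly but strands 896 MB on the second: memory that is reserved yet serves no invocation until further instances arrive.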
Figure 10 depicts two common scenarios in which memory fragmentation may arise. Allocation fragmentation is usually due to the improper provisioning of a microVM: function executors cannot fully utilize the memory allocated. Scheduling fragmentation is inevitable and usually caused by instance-level load balancing strategies when auto-scaling with workload changes. Since the emergence of serverless, it has remained a challenge to develop an efficient methodology for finding high-density container deployment solutions.

API and Benchmark Lock-in within Coordination Layer

When people talk about serverless vendor lock-in, they are concerned about the portability of functions. However, the real crux of this problem lies in the APIs of other services rather than the function itself. Though efforts such as Apex [10] and Sparta [123] allow users to deploy functions to serverless platforms in languages that are not supported natively, the BaaS services from different platforms and their API definitions still differ. The challenge of API lock-in derives from the tight coupling between user functions and other BaaS components, which complicates code migration between FaaS platforms. The over-simplified benchmark is another problem related to API lock-in. Easy-to-build microbenchmarks are over-emphasized and used in 75% of current works [110]. We call for the establishment and open-sourcing of cross-platform, real-world application benchmarks beyond scientific workflows [64,89,119]. However, when a large service is decomposed into different functions with fine-grained node interconnections, the complexity of the application architecture makes the granularity of function decomposition hard to guide and determine.

OPPORTUNITIES IN SERVERLESS COMPUTING

Finally, we discuss some future opportunities for serverless computing and give some preliminary, constructive explorations toward solutions. 
Application-Level Optimization

Application-level optimization requires coordination between the different functions within an application instead of focusing on each function in isolation. Complex interconnections, like data dependences and caller-callee relations, may be concealed between functions. Future works could achieve application-level optimization in two ways: workflow support and workflow scheduling. Workflow support means general support for the interconnection among functions. We think the following supports are necessary:

• Better storage. In some cases, functions need to exchange large ephemeral files with each other. If we register intermediate storage, transfers between storage and functions will take up most of the I/O resources and significantly slow down the response. This consequence is exacerbated in the serverless workflow scenario. Therefore, better storage with higher priority for metadata exchange between functions within an application is in demand.

• Higher parallelism capacity. In an example of video processing, multiple recoding instances can be invoked simultaneously in a MapReduce manner to speed up transcoding. Distinctly, there is great potential in parallelism for optimizing end-to-end latency. However, higher parallelism is hard to implement due to resource utilization management considerations on physical nodes. A serverless system that could provide superior parallelism with sustainable resource overhead would further empower users. For example, it could allow multiple queries to be invoked concurrently within an instance under a guaranteed QoS, or optimize the guest kernel, container runtime, and cgroups to achieve lighter virtualization in high-concurrency, high-density scenarios.

Workflow scheduling calls for a scheduling strategy that takes functions' interconnections into account. We think the following considerations are missing in current works:

• Caller-callee relation. The caller-callee relation is common in a complex application. 
Usually the callee is invoked only after the caller finishes, as Figure 11(a) shows. This leaves room for exploration: under a dataflow architecture, the system can prewarm function instances and start executing them in advance on partial data. As shown in Figure 11(b), Functions B and C can start executing before Function A completes, because the dependency is on data rather than on function state. By providing optimized interfaces for canonical dataflow patterns that apply directly to functions, cloud vendors could enable applications to achieve higher parallelism and lower response latency via a data pipeline.

• Data locality. As mentioned above, metadata exchange may happen continually between the functions of an application. If two functions with a data dependency are scheduled on the same physical node, middleware can significantly reduce the data transmission. However, current serverless systems follow a data-shipping architecture, which sends data to the code's node for processing instead of sending code to the data's node. On the one hand, a serverless system cannot guarantee that the stored data and the scheduled workers end up on the same physical node; on the other hand, frequent code transfer should be avoided for security and privacy reasons. Improving data locality could effectively reform application design from a data-shipping architecture into a code-shipping one [54].

Robust Performance of Cold Startup Alleviation

Current works usually use predictive methods to reduce cold startups, but these all require the functions' historical traces or system-level metrics. Based on near-future predictions, the system enlarges the container pool or prewarms template containers. Nonetheless, it is impractical for every function to accumulate enough data to build an accurate prediction model.
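The dataflow idea of Figure 11(b) — a downstream function starting on partial data instead of waiting for the caller's final state — can be illustrated in miniature with a queue between two locally simulated functions. This is a toy sketch of the pattern, not a real FaaS runtime; the function names and chunk values are invented for the example:

```python
import concurrent.futures as cf
import queue

def function_a(out_q):
    """Simulated Function A: emits partial results as each chunk is ready."""
    for chunk in ("part-1", "part-2", "part-3"):
        out_q.put(chunk)          # downstream may start before A finishes
    out_q.put(None)               # end-of-stream marker
    return "A done"

def function_b(in_q, results):
    """Simulated Function B: consumes chunks as they arrive (dataflow)."""
    while (chunk := in_q.get()) is not None:
        results.append(chunk.upper())
    return "B done"

q, results = queue.Queue(), []
with cf.ThreadPoolExecutor(max_workers=2) as pool:
    fa = pool.submit(function_a, q)
    fb = pool.submit(function_b, q, results)
    fa.result(); fb.result()
# results now holds B's output, produced while A was still running
```

The same data-pipeline structure is what a vendor-provided dataflow interface would have to manage across instances instead of threads.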
As Shahrad et al. [118] show with the Azure trace, about 40% of registered functions and 30% of registered applications are invoked fewer than ten times per day. This also makes it harder to collect system-level information periodically for such services. For example, an LRU-based template cache can maximize cache hits for the startup of hotspot functions, whereas cold startups of non-hot functions cannot benefit from system-level cache updates. The current compromise is to reserve a container pool per function, at the cost of massive resource waste. It is therefore crucial to explore warm-up strategies whose performance is robust, especially for functions that are triggered sporadically or are latency-sensitive. This requires the serverless controller and load balancer to be general enough to alleviate cold startups or reduce the resulting performance degradation. They may base cold-startup prediction and alleviation on information inside the functions, such as the service category, the environment libraries used, and the context diagram. For example, a serverless system could build shared images and template containers for functions within the same category, or pack functions with similar environment configurations together and apply more fine-grained internal isolation mechanisms.

Accelerators in Serverless

Accelerators such as GPUs and FPGAs are widely used in applications such as databases [36,53] and graph processing [37,156]. They can significantly speed up specific tasks, such as image processing and machine-learning workloads. To satisfy the demand for accelerators, cloud vendors offer them in IaaS form (e.g., AWS EC2 P4 and F1 instances) and SaaS form (e.g., AWS SageMaker). However, the inflexibility of these offerings impedes their adoption in serverless computing.
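One way to make such a warm-up strategy concrete is a histogram-based keep-alive window in the spirit of the policy studied in Serverless in the Wild [118]. The sketch below is our simplified illustration of the idea, not the paper's actual algorithm: the distribution of a function's observed idle gaps decides how long its instance stays warm.

```python
from collections import Counter

class KeepAlivePolicy:
    """Track idle gaps (in minutes) for one function and derive a
    keep-alive window covering most observed gaps (simplified sketch)."""

    def __init__(self, coverage=0.9, bin_minutes=1):
        self.hist = Counter()        # histogram of idle-gap bins
        self.coverage = coverage     # fraction of gaps the window must cover
        self.bin = bin_minutes

    def observe(self, idle_minutes):
        self.hist[idle_minutes // self.bin] += 1

    def keep_alive_minutes(self):
        """Smallest window that covers `coverage` of observed idle gaps."""
        total = sum(self.hist.values())
        if total == 0:
            return 10  # default window when no history is available
        seen = 0
        for b in sorted(self.hist):
            seen += self.hist[b]
            if seen / total >= self.coverage:
                return (b + 1) * self.bin
        return (max(self.hist) + 1) * self.bin

p = KeepAlivePolicy(coverage=0.8)
for gap in [2, 3, 3, 4, 5, 30]:    # one rare long gap among short ones
    p.observe(gap)
window = p.keep_alive_minutes()     # window = 6: covers 5 of the 6 gaps
```

Note how the rare 30-minute gap is deliberately not covered: covering every outlier is exactly the reserved-pool compromise criticized above, while the histogram trades a few cold startups for far less idle memory.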
This leads to two obstacles: (1) it makes accelerator usage in the cloud less convenient and flexible, and (2) it limits the range of applications that serverless can support. We believe multiplexing accelerators in serverless is the key to overcoming these obstacles. Some works [98,150] integrate GPUs into serverless systems, and BlastFunction [14] makes FPGAs available in serverless, but current efforts remain insufficient. Future research could focus on the following points:

• Accelerator-aware scheduling. Accelerators can be treated as just another resource in serverless systems, except that they have more irreplaceable features than other resources. Latency-aware scheduling and on-demand scaling are more expensive on accelerators, which motivates the serverless controller to treat them distinctly. In such a situation, the scheduling strategy should be more conservative when placing multiple tasks on one accelerator.

• Accelerator virtualization. Virtualization is an essential technology in serverless systems, used for runtime-environment management, resource isolation, and strong security. However, accelerator virtualization has not been explored as thoroughly as CPU virtualization, which makes accelerators awkward to integrate into serverless systems. To better support accelerators in serverless, accelerator virtualization should be explored further.

• Automatic batching. Accelerators usually have tight I/O bandwidth restrictions. Batching queries is a common way to overcome these restrictions and make full use of an accelerator's compute capacity. However, batching adds extra end-to-end latency, so a serverless batching strategy that balances utilization against latency should be investigated in future research.

CONCLUSION

The rapid development of the cloud-native concept inspires developers to reorganize cloud applications into microservices.
Elastic serverless computing has become the best practice for these microservices. This survey explicates and reviews the fundamental aspects of serverless computing and provides a comprehensive depiction of its four-layered design architecture: the Virtualization, Encapsule, System Orchestration, and System Coordination layers. We elaborate on the responsibility and significance of each layer, enumerate relevant works, and give practical implications for adopting these state-of-the-art techniques. Serverless computing is still in its infancy, and much of its potential remains to be unlocked in the coming years.

Fig. 1. An example of an asynchronous invocation in serverless computing.
Fig. 2. General implementation of the serverless architecture.
Fig. 3. The flexibility, startup latency, and isolation level of four virtualization mechanisms.
Fig. 4. System logic and scheduling levels in the Orchestration layer.
Fig. 5. Two invocation patterns for functions and two execution models of workflows.
Fig. 6. Techniques and works on BaaS components in the System Coordination layer.
Fig. 7. Cold startup latency under different runtimes [42].
Fig. 8. Cold startup latency under different isolation mechanisms [4].
Fig. 10. Two scenarios where memory fragmentation arises.
Fig. 11. The dataflow architecture for serverless workflow.

Table 1. Techniques in the Virtualization layer (mechanism | startup latency | isolation level | OS kernel | backed by):
Traditional VM | >1000ms | Strong | unsharing | /
Docker [41] | 50ms-500ms | Weak | host-sharing | Docker
SOCK [101] | 10ms-50ms | Weak | host-sharing | /
Hyper-V [58] | >1000ms | Strong | unsharing | Microsoft
gVisor [49] | 50ms-500ms | Strong | unsharing | Google
Kata [67] | 50ms-500ms | Strong | unsharing | OpenStack
FireCracker [3] | 50ms-500ms | Strong | unsharing | Amazon
Unikernel [86] | 10ms-50ms | Strong | Built-in | Docker

Table 2.
Works in the Encapsule layer (representative work | implementation):
Pause container [55, 94] | /
Azure Functions [105] | AWS
Fission [44] | Kubernetes
Adaptive Warm-up [146] | Kubernetes
Serverless in the Wild [118] | OpenWhisk
Replayable Execution [140] | FaaS FW
Catalyzer [42] | gVisor-based
Mohan et al. [97] | OpenWhisk
Apache OpenWhisk [104] | /
SOCK [101] | OpenLambda

Table 3. Works by focused hierarchy in the System Orchestration layer (representative work | focused hierarchy | implementation | insight):
Pigeon [82] | R | Kubernetes | Static pool
FlowCon [155] | R&(I) | P | DL tasks
SIREN [139] | R&(I) | AWS Lambda | ML tradeoff
CherryPick [5] | R | P | Bayesian Opt
Lin et al. [81] | R&(A) | AWS Lambda | Profiling
MPC [57] | R | OpenWhisk | /
Chang et al. [32] | I&(R) | Kubernetes | /
Kaffes et al. [66] | I&(R) | P | /
FnSched [127] | I&(R) | OpenWhisk | /
Guan et al. [50] | I&(A) | P | Library
McDaniel et al. [93] | I | Docker Swarm | Two-tiered
Kim et al. [70] | I&(R) | P | CPU cap
Smart spread [88] | I | AWS Lambda | Profiling
Xanadu [40] | A&(R) | OpenWhisk | Profile&predict

Table 4.
Comparing metrics of four serverless vendors [77,94] ("CCI" means concurrent invocations). Values are listed as Amazon Lambda | Google Functions | Microsoft Azure Functions | IBM OpenWhisk:
GFLOPS per function: 19.63 | 4.35 | 2.15 | 3.19
TFLOPS at 3000 CCI: 66.30 | 13.04 | 7.94 | 12.30
Throughput at 1-5 CCI: 20-55 TPS | 1-25 TPS | 60-150 TPS | 1 TPS
Throughput at 2000 CCI: 400 TPS | 40 TPS | 120 TPS | 210 TPS
CCI tail latency: best | superior | worst | inferior
CI/CD performance: best | fails frequently | long latency | balanced
Read/Write at 1-100 CCI: 153/83 MB/s - 93/39.5 MB/s | 56/9.5 MB/s - 54/3.5 MB/s | 424/44 MB/s - NA | 68/8 MB/s - 34/0.5 MB/s
File I/O at 1-100 CCI: 2-3.5 s | 10-30 s | 3.5-NA s | 15-60 s
Object I/O at 1-100 CCI: 1.3-2.4 s | 5-8 s | 12-NA s | 1-30 s
Trigger throughput (HTTP-Object-DB): 55-25-860 | 20-25-NA | 145-250-NA | 50-NA-40
Language runtime overhead: balanced 0.05s avg | (-0.06) 0.22s (+0.1) | (-0.02) 0.22s (+0.03) | (-0.02) 0.17s (+0.02)
Dependencies overhead: (-0.5) 1.1s (+0.2) avg | (-0.5) 1.9s (+0.4) | (-1.3) 3.4s (NA) | NA
Maximum memory: 3008 MB | 2048 MB | 1536 MB | 512 MB
Execution timeout: 5 minutes | 9 minutes | 10 minutes | 5 minutes
Price per memory: $0.0000166/GB-s | $0.0000165/GB-s | $0.0000016/GB-s | $0.000017/GB-s
Price per execution: $0.2 per 1M | $0.4 per 1M | $0.2 per 1M | NA
Free tier: first 1M exec | first 2M exec | first 1M exec | free exec/40,000 GB-s
Idle instance lifetime: 5-7 min | 15 min | mostly 20-30 min | default 15 min

, Vol. 1, No. 1, Article. Publication date: January 2022.

Mainak Adhikari, Tarachand Amgoth, and Satish Narayana Srirama. 2019. A Survey on Scheduling Strategies for Workflows in Cloud Environment and Emerging Trends. ACM Comput. Surv. 52, 4 (2019), 68:1-68:36. https://doi.org/10.1145/3325097
Gojko Adzic and Robert Chatley. 2017. Serverless computing: economic and architectural impact. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2017, Paderborn, Germany, September 4-8, 2017, Eric Bodden, Wilhelm Schäfer, Arie van Deursen, and Andrea Zisman (Eds.). ACM, 884-889. https://doi.org/10.1145/3106237.3117767
Alexandru Agache, Marc Brooker, Alexandra Iordache, and Anthony Liguori. 2020. Firecracker: Lightweight Virtualization for Serverless Applications. In 17th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2020, Santa Clara, CA, USA, February 25-27, 2020, Ranjita Bhagwan and George Porter (Eds.). USENIX Association, 419-434. https://www.usenix.org/conference/nsdi20/presentation/agache
Istemi Ekin Akkus, Ruichuan Chen, Ivica Rimac, Manuel Stein, Klaus Satzke, Andre Beck, Paarijaat Aditya, and Volker Hilt. 2018. SAND: Towards High-Performance Serverless Computing. In 2018 USENIX Annual Technical Conference, USENIX ATC 2018, Boston, MA, USA, July 11-13, 2018, Haryadi S. Gunawi and Benjamin Reed (Eds.). USENIX Association, 923-935.
https://www.usenix.org/conference/atc18/presentation/akkus
Omid Alipourfard, Hongqiang Harry Liu, and Jianshu Chen. 2017. CherryPick: Adaptively Unearthing the Best Cloud Configurations for Big Data Analytics. In 14th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2017, Boston, MA, USA, March 27-29, 2017, Aditya Akella and Jon Howell (Eds.). USENIX Association, 469-482. https://www.usenix.org/conference/nsdi17/technical-sessions/presentation/alipourfard
Amazon. 2021. Enabling API caching to enhance responsiveness in AWS. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
Amazon DynamoDB Accelerator (DAX): A fully managed, highly available, in-memory cache service. 2021. https://aws.amazon.com/dynamodb/dax/
Ali Anwar, Mohamed Mohamed, Vasily Tarasov, Michael Littley, and Lukas Rupprecht. 2018. Improving Docker Registry Design Based on Production Workload Analysis. In 16th USENIX Conference on File and Storage Technologies, FAST 2018, Oakland, CA, USA, February 12-15, 2018, Nitin Agrawal and Raju Rangaswami (Eds.). USENIX Association, 265-278. https://www.usenix.org/conference/fast18/presentation/anwar
Lixiang Ao, Liz Izhikevich, Geoffrey M. Voelker, and George Porter. 2018. Sprocket: A Serverless Video Processing Framework. In Proceedings of the ACM Symposium on Cloud Computing, SoCC 2018, Carlsbad, CA, USA, October 11-13, 2018. ACM, 263-274. https://doi.org/10.1145/3267809.3267815
Apex: Serverless Architecture. 2021. https://apex.sh/
Vincent Armant, Milan De Cauwer, Kenneth N. Brown, and Barry O'Sullivan. 2018. Semi-online task assignment policies for workload consolidation in cloud computing systems. Future Gener. Comput. Syst. 82 (2018), 89-103. https://doi.org/10.1016/j.future.2017.12.035
Dulcardo Arteaga, Jorge Cabrera, Jing Xu, Swaminathan Sundararaman, and Ming Zhao. 2016. CloudCache: On-demand Flash Cache Management for Cloud Computing. In 14th USENIX Conference on File and Storage Technologies, FAST 2016, Santa Clara, CA, USA, February 22-25, 2016, Angela Demke Brown and Florentina I. Popovici (Eds.). USENIX Association, 355-369. https://www.usenix.org/conference/fast16/technical-sessions/presentation/arteaga
Naylor G. Bachiega, Paulo S. L. Souza, Sarita Mazzini Bruschi, and Simone do Rocio Senger de Souza. 2018. Container-Based Performance Evaluation: A Survey and Challenges. In 2018 IEEE International Conference on Cloud Engineering, IC2E 2018, Orlando, FL, USA, April 17-20, 2018, Abhishek Chandra, Jie Li, Ying Cai, and Tian Guo (Eds.). IEEE Computer Society, 398-403. https://doi.org/10.1109/IC2E.2018.00075
M. Bacis, R. Brondolin, and M. D. Santambrogio. 2020. BlastFunction: an FPGA-as-a-Service system for Accelerated Serverless Computing. In 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE). 852-857. https://doi.org/10.23919/DATE48585.2020.9116333
Ioana Baldini, Paul Castro, Kerry Chang, Perry Cheng, Stephen Fink, Vatche Ishakian, Nick Mitchell, Vinod Muthusamy, Rodric Rabbah, Aleksander Slominski, et al. 2017. Serverless computing: Current trends and open problems. In Research Advances in Cloud Computing. Springer, 1-20.
Ioana Baldini, Perry Cheng, Stephen J. Fink, and Nick Mitchell. 2017. The serverless trilemma: function composition for serverless computing. In Proceedings of the 2017 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, Onward! 2017, Vancouver, BC, Canada, October 23-27, 2017. ACM, 89-103. https://doi.org/10.1145/3133850.3133855
Bartosz Balis. 2016. HyperFlow: A model of computation, programming approach and enactment engine for complex distributed workflows. Future Gener. Comput. Syst. 55 (2016), 147-162. https://doi.org/10.1016/j.future.2015.08.015
Christian Bargmann and Marina Tropmann-Frick. 2019. A Survey On Secure Container Isolation Approaches for Multi-Tenant Container Workloads and Serverless Computing. In Proceedings of the Eighth Workshop on Software Quality Analysis, Monitoring, Improvement, and Applications, SQAMIA 2019, Ohrid, North Macedonia, September 22-25, 2019 (CEUR Workshop Proceedings, Vol. 2508), Zoran Budimac and Bojana Koteska (Eds.). CEUR-WS.org. http://ceur-ws.org/Vol-2508/paper-bar.pdf
S. Barlev, Z. Basil, S. Kohanim, R. Peleg, S. Regev, and Alexandra Shulman-Peleg. 2016. Secure yet usable: Protecting servers and Linux containers. IBM J. Res. Dev. 60, 4 (2016), 12. https://doi.org/10.1147/JRD.2016.2574138
David Bermbach, Ahmet-Serdar Karakaya, and Simon Buchholz. 2020. Using application knowledge to reduce cold starts in FaaS services. In SAC '20: The 35th ACM/SIGAPP Symposium on Applied Computing, online event, [Brno, Czech Republic], March 30 - April 3, 2020, Chih-Cheng Hung, Tomás Cerný, Dongwan Shin, and Alessio Bechini (Eds.). ACM, 134-143. https://doi.org/10.1145/3341105.3373909
Kahina Bessai, Samir Youcef, Ammar Oulamara, Claude Godart, and Selmin Nurcan. 2012. Bi-criteria Workflow Tasks Allocation and Scheduling in Cloud Computing Environments. In 2012 IEEE Fifth International Conference on Cloud Computing, Honolulu, HI, USA, June 24-29, 2012. IEEE Computer Society, 638-645. https://doi.org/10.1109/CLOUD.2012.83
Nilton Bila, Paolo Dettori, Ali Kanso, Yuji Watanabe, and Alaa Youssef. 2017. Leveraging the Serverless Architecture for Securing Linux Containers. In 37th IEEE International Conference on Distributed Computing Systems Workshops, ICDCS Workshops 2017, Atlanta, GA, USA, June 5-8, 2017, Aibek Musaev, João Eduardo Ferreira, and Teruo Higashino (Eds.). IEEE Computer Society, 401-404. https://doi.org/10.1109/ICDCSW.2017.66
Sol Boucher, Anuj Kalia, David G. Andersen, and Michael Kaminsky. 2018. Putting the "Micro" Back in Microservice. In 2018 USENIX Annual Technical Conference, USENIX ATC 2018, Boston, MA, USA, July 11-13, 2018, Haryadi S. Gunawi and Benjamin Reed (Eds.). USENIX Association, 645-650. https://www.usenix.org/conference/atc18/presentation/boucher
Mark Boyd. 2021. "Serverless: IOpipe Launches a Monitoring Tool for AWS Lambda". https://thenewstack.io/iopipe-launches-lambda-monitoring-tool-aws-summit/
Frank Budinsky. 2021. "Canary Deployments using Istio" about the Red-Black and the Blue-green deployment. https://istio.io/latest/blog/2017/0.1-canary/
Rajkumar Buyya, Satish Narayana Srirama, Giuliano Casale, and Rodrigo N. Calheiros. 2019. A Manifesto for Future Generation Cloud Computing: Research Directions for the Next Decade. ACM Comput. Surv. 51, 5 (2019), 105:1-105:38.
https://doi.org/10.1145/3241737
Rajkumar Buyya, Chee Shin Yeo, Srikumar Venugopal, James Broberg, and Ivona Brandic. 2009. Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Gener. Comput. Syst. 25, 6 (2009), 599-616. https://doi.org/10.1016/j.future.2008.12.001
James Cadden, Thomas Unger, Yara Awad, and Han Dong. 2020. SEUSS: skip redundant paths to make serverless fast. In EuroSys '20: Fifteenth EuroSys Conference 2020, Heraklion, Greece, April 27-30, 2020. ACM, 32:1-32:15. https://doi.org/10.1145/3342195.3392698
Benjamin Carver, Jingyuan Zhang, Ao Wang, Ali Anwar, Panruo Wu, and Yue Cheng. 2020. Wukong: A Scalable and Locality-Enhanced Framework for Serverless Parallel Computing. In Proceedings of the 11th ACM Symposium on Cloud Computing (Virtual Event, USA) (SoCC '20). Association for Computing Machinery, New York, NY, USA, 1-15. https://doi.org/10.1145/3419111.3421286
Benjamin Carver, Jingyuan Zhang, Ao Wang, and Yue Cheng. 2019.
In Search of a Fast and Efficient Serverless DAG Engine. CoRR abs/1910.05896 (2019). arXiv:1910.05896 http://arxiv.org/abs/1910.05896
Israel Casas, Javid Taheri, Rajiv Ranjan, and Albert Y. Zomaya. 2017. PSO-DS: a scheduling engine for scientific workflow managers. J. Supercomput. 73, 9 (2017), 3924-3947. https://doi.org/10.1007/s11227-017-1992-z
Chia-Chen Chang, Shun-Ren Yang, En-Hau Yeh, Phone Lin, and Jeu-Yih Jeng. 2017. A Kubernetes-Based Monitoring Platform for Dynamic Cloud Resource Provisioning. In 2017 IEEE Global Communications Conference, GLOBECOM 2017, Singapore, December 4-8, 2017. IEEE, 1-6. https://doi.org/10.1109/GLOCOM.2017.8254046
Liuhua Chen and Haiying Shen. 2017. Considering resource demand misalignments to reduce resource over-provisioning in cloud datacenters. In 2017 IEEE Conference on Computer Communications, INFOCOM 2017, Atlanta, GA, USA, May 1-4, 2017. IEEE, 1-9. https://doi.org/10.1109/INFOCOM.2017.8057084
Liuhua Chen, Haiying Shen, and Stephen Platt. 2016. Cache contention aware Virtual Machine placement and migration in cloud datacenters.
In 24th IEEE International Conference on Network Protocols, ICNP 2016, Singapore, November 8-11, 2016. IEEE Computer Society, 1-10. https://doi.org/10.1109/ICNP.2016.7784447
Shuang Chen, Christina Delimitrou, and José F. Martínez. 2019. PARTIES: QoS-Aware Resource Partitioning for Multiple Interactive Services. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2019, Providence, RI, USA, April 13-17, 2019, Iris Bahar, Maurice Herlihy, Emmett Witchel, and Alvin R. Lebeck (Eds.). ACM, 107-120. https://doi.org/10.1145/3297858.3304005
Xinyu Chen, Yao Chen, Ronak Bajaj, Jiong He, Bingsheng He, Weng-Fai Wong, and Deming Chen. 2020. Is FPGA Useful for Hash Joins?. In 10th Conference on Innovative Data Systems Research, CIDR 2020, Amsterdam, The Netherlands, January 12-15, 2020, Online Proceedings. www.cidrdb.org. http://cidrdb.org/cidr2020/papers/p27-chen-cidr20.pdf
Xinyu Chen, Hongshi Tan, Yao Chen, Bingsheng He, Weng-Fai Wong, and Deming Chen. 2021. ThunderGP: HLS-Based Graph Processing Framework on FPGAs. In The 2021 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (Virtual Event, USA) (FPGA '21). Association for Computing Machinery, New York, NY, USA, 69-80. https://doi.org/10.1145/3431920.3439290
Eli Cortez, Anand Bonde, and Alexandre Muzio. 2017. Resource Central: Understanding and Predicting Workloads for Improved Resource Management in Large Cloud Platforms. In Proceedings of the 26th Symposium on Operating Systems Principles, Shanghai, China, October 28-31, 2017. ACM, 153-167. https://doi.org/10.1145/3132747.3132772
CRIU: A utility to checkpoint/restore Linux tasks in userspace. 2021. https://github.com/checkpoint-restore/criu
Nilanjan Daw, Umesh Bellur, and Purushottam Kulkarni. 2020. Xanadu: Mitigating cascading cold starts in serverless function chain deployments. In Middleware '20: 21st International Middleware Conference, Delft, The Netherlands, December 7-11, 2020, Dilma Da Silva and Rüdiger Kapitza (Eds.). ACM, 356-370. https://doi.org/10.1145/3423211.3425690
Docker. 2021.
https://www.docker.com/ Catalyzer: Sub-millisecond Startup for Serverless Computing with Initialization-less Booting. Dong Du, Tianyi Yu, Yubin Xia, Binyu Zang, Guanglu Yan, Chenggang Qin, Qixuan Wu, Haibo Chen, ASPLOS '20: Architectural Support for Programming Languages and Operating Systems. Lausanne, Switzerland; James RDong Du, Tianyi Yu, Yubin Xia, Binyu Zang, Guanglu Yan, Chenggang Qin, Qixuan Wu, and Haibo Chen. 2020. Catalyzer: Sub-millisecond Startup for Serverless Computing with Initialization-less Booting. In ASPLOS '20: Architec- tural Support for Programming Languages and Operating Systems, Lausanne, Switzerland, March 16-20, 2020, James R. . Luis Larus, Ceze, 10.1145/3373376.3378512Karin StraussACMLarus, Luis Ceze, and Karin Strauss (Eds.). ACM, 467-481. https://doi.org/10.1145/3373376.3378512 Elastic Load Balancing Application Load Balancers. Elastic Load Balancing Application Load Balancers. 2021. https://docs.aws.amazon.com/elasticloadbalancing/latest/ application/elb-ag.pdf Execute mode in Fission. Execute mode in Fission 2021. https://docs.fission.io/docs/usage/executor/ Serverless is More: From PaaS to Present Cloud Computing. Erwin Van Eyk, Lucian Toader, Sacheendra Talluri, 10.1109/MIC.2018.053681358IEEE Internet Comput. 22Erwin Van Eyk, Lucian Toader, and Sacheendra Talluri. 2018. Serverless is More: From PaaS to Present Cloud Computing. IEEE Internet Comput. 22, 5 (2018), 8-17. https://doi.org/10.1109/MIC.2018.053681358 Fission Workflows: Fast, reliable and lightweight function composition for serverless functions 2021. Fission Workflows: Fast, reliable and lightweight function composition for serverless functions 2021. https: //github.com/fission/fission-workflows Encoding, Fast and Slow: Low-Latency Video Processing Using Thousands of Tiny Threads. 
Sadjad Fouladi, Riad S Wahby, Brennan Shacklett, Karthikeyan Balasubramaniam, William Zeng, Rahul Bhalerao, Anirudh Sivaraman, George Porter, Keith Winstein, 14th USENIX Symposium on Networked Systems Design and Implementation. Aditya Akella and Jon HowellBoston, MA, USAUSENIX AssociationSadjad Fouladi, Riad S. Wahby, Brennan Shacklett, Karthikeyan Balasubramaniam, William Zeng, Rahul Bhalerao, Anirudh Sivaraman, George Porter, and Keith Winstein. 2017. Encoding, Fast and Slow: Low-Latency Video Processing Using Thousands of Tiny Threads. In 14th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2017, Boston, MA, USA, March 27-29, 2017, Aditya Akella and Jon Howell (Eds.). USENIX Association, 363-376. https://www.usenix.org/conference/nsdi17/technical-sessions/presentation/fouladi Software engineer at Container Solutions. Deployment Strategies on KubernetesCloud Native Computing Foundations. 2021. "Deployment Strategies on Kubernetes", Software engineer at Container Solutions. https://www.cncf.io/wp-content/uploads/2020/08/CNCF-Presentation-Template-K8s-Deployment.pdf Google container runtime sandbox 2021. Google container runtime sandbox 2021. https://github.com/google/gvisor Application Oriented Dynamic Resource Allocation for Data Centers Using Docker Containers. Xinjie Guan, Xili Wan, Baek-Young Choi, Sejun Song, Jiafeng Zhu, 10.1109/LCOMM.2016.2644658IEEE Commun. Lett. 21Xinjie Guan, Xili Wan, Baek-Young Choi, Sejun Song, and Jiafeng Zhu. 2017. Application Oriented Dynamic Resource Allocation for Data Centers Using Docker Containers. IEEE Commun. Lett. 21, 3 (2017), 504-507. https: //doi.org/10.1109/LCOMM.2016.2644658 Slacker: Fast Distribution with Lazy Docker Containers. Tyler Harter, Brandon Salmon, Rose Liu, Andrea C Arpaci-Dusseau, Remzi H Arpaci-Dusseau, 14th USENIX Conference on File and Storage Technologies, FAST 2016. Santa Clara, CA, USAUSENIX AssociationTyler Harter, Brandon Salmon, Rose Liu, Andrea C. 
Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. 2016. Slacker: Fast Distribution with Lazy Docker Containers. In 14th USENIX Conference on File and Storage Technologies, FAST 2016, Santa Clara, CA, USA, February 22-25, 2016. USENIX Association, 181-195. https://www.usenix.org/conference/ fast16/technical-sessions/presentation/harter Survey on serverless computing. Hassan B Hassan, A Saman, Qusay I Barakat, Sarhan, 10.1186/s13677-021-00253-7J. Cloud Comput. 1039Hassan B. Hassan, Saman A. Barakat, and Qusay I. Sarhan. 2021. Survey on serverless computing. J. Cloud Comput. 10, 1 (2021), 39. https://doi.org/10.1186/s13677-021-00253-7 Relational Joins on Graphics Processors. Bingsheng He, Ke Yang, Rui Fang, Mian Lu, Naga Govindaraju, Qiong Luo, Pedro Sander, 10.1145/1376616.1376670Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data. the 2008 ACM SIGMOD International Conference on Management of DataVancouver, Canada; New York, NY, USAAssociation for Computing MachinerySIGMOD '08)Bingsheng He, Ke Yang, Rui Fang, Mian Lu, Naga Govindaraju, Qiong Luo, and Pedro Sander. 2008. Relational Joins on Graphics Processors. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data (Vancouver, Canada) (SIGMOD '08). Association for Computing Machinery, New York, NY, USA, 511-524. https://doi.org/10.1145/1376616.1376670 Serverless Computing: One Step Forward, Two Steps Back. Joseph M Hellerstein, Jose M Faleiro, Joseph Gonzalez, CIDR 2019, 9th Biennial Conference on Innovative Data Systems Research. Asilomar, CA, USAOnline Proceedings. www.cidrdb.orgJoseph M. Hellerstein, Jose M. Faleiro, and Joseph Gonzalez. 2019. Serverless Computing: One Step Forward, Two Steps Back. In CIDR 2019, 9th Biennial Conference on Innovative Data Systems Research, Asilomar, CA, USA, January 13-16, 2019, Online Proceedings. www.cidrdb.org. 
http://cidrdb.org/cidr2019/papers/p119-hellerstein-cidr19.pdf Scott Hendrickson, Stephen Sturdevant, Edward Oakes, Tyler Harter, Venkateshwaran Venkataramani, Andrea C Arpaci-Dusseau, Remzi H Arpaci-Dusseau, Serverless Computation with OpenLambda. login Usenix Mag. 414Scott Hendrickson, Stephen Sturdevant, Edward Oakes, Tyler Harter, Venkateshwaran Venkataramani, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. 2016. Serverless Computation with OpenLambda. login Usenix Mag. 41, 4 (2016). https://www.usenix.org/publications/login/winter2016/hendrickson Honeycomb: DevOps tool for code inspection 2021. Honeycomb: DevOps tool for code inspection 2021. https://www.honeycomb.io/ A Model Predictive Controller for Managing QoS Enforcements and Microarchitecture-Level Interferences in a Lambda Platform. M , Reza Hoseinyfarahabady, Albert Y Zomaya, Zahir Tari, 10.1109/TPDS.2017.2779502IEEE Trans. Parallel Distributed Syst. 29M. Reza HoseinyFarahabady, Albert Y. Zomaya, and Zahir Tari. 2018. A Model Predictive Controller for Managing QoS Enforcements and Microarchitecture-Level Interferences in a Lambda Platform. IEEE Trans. Parallel Distributed Syst. 29, 7 (2018), 1442-1455. https://doi.org/10.1109/TPDS.2017.2779502 Hyper-V for Windows containers. Hyper-V for Windows containers. 2021. https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage- containers/hyperv-container Accurate Resource Prediction for Hybrid IaaS Clouds Using Workload-Tailored Elastic Compute Units. Shigeru Imai, Thomas Chestna, Carlos A Varela, 10.1109/UCC.2013.40IEEE/ACM 6th International Conference on Utility and Cloud Computing. UCC; Dresden, GermanyIEEE Computer SocietyShigeru Imai, Thomas Chestna, and Carlos A. Varela. 2013. Accurate Resource Prediction for Hybrid IaaS Clouds Using Workload-Tailored Elastic Compute Units. In IEEE/ACM 6th International Conference on Utility and Cloud Computing, UCC 2013, Dresden, Germany, December 9-12, 2013. IEEE Computer Society, 171-178. 
https://doi.org/10. 1109/UCC.2013.40 Uncertainty-Aware Elastic Virtual Machine Scheduling for Stream Processing Systems. Shigeru Imai, Stacy Patterson, Carlos A Varela, 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGRID 2018. Washington, DC, USA; Sandra Gesing, Amy WEsam El-Araby, Dhabaleswar K. PandaShigeru Imai, Stacy Patterson, and Carlos A. Varela. 2018. Uncertainty-Aware Elastic Virtual Machine Scheduling for Stream Processing Systems. In 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGRID 2018, Washington, DC, USA, May 1-4, 2018, Esam El-Araby, Dhabaleswar K. Panda, Sandra Gesing, Amy W. . Volodymyr V Apon, Massimo Kindratenko, Cafaro, 10.1109/CCGRID.2018.00021Alfredo CuzzocreaIEEE Computer SocietyApon, Volodymyr V. Kindratenko, Massimo Cafaro, and Alfredo Cuzzocrea (Eds.). IEEE Computer Society, 62-71. https://doi.org/10.1109/CCGRID.2018.00021 Implementation of a DevOps Pipeline for Serverless Applications. Vitalii Ivanov, Kari Smolander, Product-Focused Software Process Improvement -19th International Conference. Marco Kuhrmann, Kurt Schneider, Dietmar Pfahl, Sousuke Amasaki, Marcus Ciolkowski, Regina Hebig, Paolo Tell, Jil Klünder, and Steffen KüpperWolfsburg, Germany11271Proceedings (Lecture Notes in Computer ScienceVitalii Ivanov and Kari Smolander. 2018. Implementation of a DevOps Pipeline for Serverless Applications. In Product-Focused Software Process Improvement -19th International Conference, PROFES 2018, Wolfsburg, Germany, November 28-30, 2018, Proceedings (Lecture Notes in Computer Science, Vol. 11271), Marco Kuhrmann, Kurt Schneider, Dietmar Pfahl, Sousuke Amasaki, Marcus Ciolkowski, Regina Hebig, Paolo Tell, Jil Klünder, and Steffen Küpper (Eds.). . Springer, 10.1007/978-3-030-03673-7_4Springer, 48-64. https://doi.org/10.1007/978-3-030-03673-7_4 An Investigation of the Impact of Language Runtime on the Performance and Cost of Serverless Functions. 
David Jackson, Gary Clynch, 10.1109/UCC-Companion.2018.00050IEEE/ACM International Conference on Utility and Cloud Computing Companion, UCC. Alan Sill and Josef SpillnerIEEEDavid Jackson and Gary Clynch. 2018. An Investigation of the Impact of Language Runtime on the Performance and Cost of Serverless Functions. In 2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion, UCC Companion 2018, Zurich, Switzerland, December 17-20, 2018, Alan Sill and Josef Spillner (Eds.). IEEE, 154-160. https://doi.org/10.1109/UCC-Companion.2018.00050 Jenkins: DevOps CI tool 2021. Jenkins: DevOps CI tool 2021. https://www.jenkins.io/ Occupy the cloud: distributed computing for the 99%. Eric Jonas, Qifan Pu, Shivaram Venkataraman, Ion Stoica, Benjamin Recht, 10.1145/3127479.3128601Proceedings of the 2017 Symposium on Cloud Computing. the 2017 Symposium on Cloud ComputingSanta Clara, CA, USAACMEric Jonas, Qifan Pu, Shivaram Venkataraman, Ion Stoica, and Benjamin Recht. 2017. Occupy the cloud: distributed computing for the 99%. In Proceedings of the 2017 Symposium on Cloud Computing, SoCC 2017, Santa Clara, CA, USA, September 24-27, 2017. ACM, 445-451. https://doi.org/10.1145/3127479.3128601 Cloud Programming Simplified: A Berkeley View on Serverless Computing. Eric Jonas, Johann Schleier-Smith, Vikram Sreekanti, Chia-Che Tsai, Anurag Khandelwal, Qifan Pu, Vaishaal Shankar, Joao Carreira, Karl Krauth, Jayant Neeraja, Joseph E Yadwadkar, Raluca Ada Gonzalez, Ion Popa, David A Stoica, Patterson, arXiv:1902.03383Eric Jonas, Johann Schleier-Smith, Vikram Sreekanti, Chia-che Tsai, Anurag Khandelwal, Qifan Pu, Vaishaal Shankar, Joao Carreira, Karl Krauth, Neeraja Jayant Yadwadkar, Joseph E. Gonzalez, Raluca Ada Popa, Ion Stoica, and David A. Patterson. 2019. Cloud Programming Simplified: A Berkeley View on Serverless Computing. CoRR abs/1902.03383 (2019). arXiv:1902.03383 http://arxiv.org/abs/1902.03383 Centralized Core-granular Scheduling for Serverless Functions. 
Kostis Kaffes, J Neeraja, Christos Yadwadkar, Kozyrakis, 10.1145/3357223.3362709Proceedings of the ACM Symposium on Cloud Computing. the ACM Symposium on Cloud ComputingSoCC; Santa Cruz, CA, USAACMKostis Kaffes, Neeraja J. Yadwadkar, and Christos Kozyrakis. 2019. Centralized Core-granular Scheduling for Serverless Functions. In Proceedings of the ACM Symposium on Cloud Computing, SoCC 2019, Santa Cruz, CA, USA, November 20-23, 2019. ACM, 158-164. https://doi.org/10.1145/3357223.3362709 Kata Containers: an open source container runtime, building lightweight virtual machines. Kata Containers: an open source container runtime, building lightweight virtual machines. 2021. https: //katacontainers.io/ Modified deep residual network architecture deployed on serverless framework of IoT platform based on human activity recognition application. Alireza Keshavarzian, Saeed Sharifian, Sanaz Seyedin, 10.1016/j.future.2019.06.009Future Gener. Comput. Syst. 101Alireza Keshavarzian, Saeed Sharifian, and Sanaz Seyedin. 2019. Modified deep residual network architecture deployed on serverless framework of IoT platform based on human activity recognition application. Future Gener. Comput. Syst. 101 (2019), 14-28. https://doi.org/10.1016/j.future.2019.06.009 Key characteristics of a container orchestration platform to enable a modern application. Asif Khan, 10.1109/MCC.2017.4250933IEEE cloud Computing. 4Asif Khan. 2017. Key characteristics of a container orchestration platform to enable a modern application. IEEE cloud Computing 4, 5 (2017), 42-48. https://doi.org/10.1109/MCC.2017.4250933 Automated Fine-Grained CPU Cap Control in Serverless Computing Platform. Young Ki Kim, M Reza Hoseinyfarahabady, Young Choon Lee, Albert Y Zomaya, 10.1109/TPDS.2020.2989771IEEE Trans. Parallel Distributed Syst. 31Young Ki Kim, M. Reza HoseinyFarahabady, Young Choon Lee, and Albert Y. Zomaya. 2020. Automated Fine-Grained CPU Cap Control in Serverless Computing Platform. IEEE Trans. 
Parallel Distributed Syst. 31, 10 (2020), 2289-2301. https://doi.org/10.1109/TPDS.2020.2989771 Pocket: Elastic Ephemeral Storage for Serverless Analytics. Ana Klimovic, Yawen Wang, Patrick Stuedi, Animesh Trivedi, 13th USENIX Symposium on Operating Systems Design and Implementation. Andrea C. Arpaci-Dusseau and Geoff VoelkerCarlsbad, CA, USAUSENIX AssociationAna Klimovic, Yawen Wang, Patrick Stuedi, and Animesh Trivedi. 2018. Pocket: Elastic Ephemeral Storage for Serverless Analytics. In 13th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2018, Carlsbad, CA, USA, October 8-10, 2018, Andrea C. Arpaci-Dusseau and Geoff Voelker (Eds.). USENIX Association, 427-444. https://www.usenix.org/conference/osdi18/presentation/klimovic Spectre attacks: exploiting speculative execution. Paul Kocher, Jann Horn, Anders Fogh, Daniel Genkin, Daniel Gruss, Werner Haas, Mike Hamburg, Moritz Lipp, Stefan Mangard, Thomas Prescher, Michael Schwarz, Yuval Yarom, 10.1145/3399742Commun. ACM. 63Paul Kocher, Jann Horn, Anders Fogh, Daniel Genkin, Daniel Gruss, Werner Haas, Mike Hamburg, Moritz Lipp, Stefan Mangard, Thomas Prescher, Michael Schwarz, and Yuval Yarom. 2020. Spectre attacks: exploiting speculative execution. Commun. ACM 63, 7 (2020), 93-101. https://doi.org/10.1145/3399742 Vulnerability Advisor -Secure your Dev + Ops across containers. Ricardo Koller, Alan Dawson, Ricardo Koller and Alan Dawson. 2021. Vulnerability Advisor -Secure your Dev + Ops across containers. https: //www.ibm.com/blogs/cloud-archive/2016/11/vulnerability-advisor-secure-your-dev-ops-across-containers/ Kubeless 2021. Kubeless 2021. https://kubeless.io/ . Kubernetes Cronjob, Kubernetes CronJob 2021. https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/ HyScale: Hybrid and Network Scaling of Dockerized Microservices in Cloud Data Centres. 
Anthony Kwan, Jonathon Wong, Hans-Arno Jacobsen, Vinod Muthusamy, 10.1109/ICDCS.2019.0001739th IEEE International Conference on Distributed Computing Systems, ICDCS 2019. Dallas, TX, USAIEEEAnthony Kwan, Jonathon Wong, Hans-Arno Jacobsen, and Vinod Muthusamy. 2019. HyScale: Hybrid and Network Scaling of Dockerized Microservices in Cloud Data Centres. In 39th IEEE International Conference on Distributed Computing Systems, ICDCS 2019, Dallas, TX, USA, July 7-10, 2019. IEEE, 80-90. https://doi.org/10.1109/ICDCS.2019. 00017 Evaluation of Production Serverless Computing Environments. Hyungro Lee, Kumar Satyam, Geoffrey C Fox, 10.1109/CLOUD.2018.0006211th IEEE International Conference on Cloud Computing. San Francisco, CA, USAIEEE Computer SocietyHyungro Lee, Kumar Satyam, and Geoffrey C. Fox. 2018. Evaluation of Production Serverless Computing Environments. In 11th IEEE International Conference on Cloud Computing, CLOUD 2018, San Francisco, CA, USA, July 2-7, 2018. IEEE Computer Society, 442-450. https://doi.org/10.1109/CLOUD.2018.00062 A mixed-method empirical study of Function-as-a-Service software development in industrial practice. Philipp Leitner, Erik Wittern, Josef Spillner, Waldemar Hummer, 10.1016/j.jss.2018.12.013J. Syst. Softw. 149Philipp Leitner, Erik Wittern, Josef Spillner, and Waldemar Hummer. 2019. A mixed-method empirical study of Function-as-a-Service software development in industrial practice. J. Syst. Softw. 149 (2019), 340-359. https: //doi.org/10.1016/j.jss.2018.12.013 DADI: Block-Level Image Service for Agile and Elastic Application Deployment. Huiba Li, Yifan Yuan, Rui Du, Kai Ma, Lanzheng Liu, Windsor Hsu, 2020 USENIX Annual Technical Conference, USENIX ATC 2020. Ada Gavrilovska and Erez ZadokUSENIX AssociationHuiba Li, Yifan Yuan, Rui Du, Kai Ma, Lanzheng Liu, and Windsor Hsu. 2020. DADI: Block-Level Image Service for Agile and Elastic Application Deployment. 
In 2020 USENIX Annual Technical Conference, USENIX ATC 2020, July 15-17, 2020, Ada Gavrilovska and Erez Zadok (Eds.). USENIX Association, 727-740. https://www.usenix.org/conference/ atc20/presentation/li-huiba Comparing Containers versus Virtual Machines for Achieving High Availability. Wubin Li, Ali Kanso, 10.1109/IC2E.2015.792015 IEEE International Conference on Cloud Engineering. Tempe, AZ, USAIEEE Computer Society2Wubin Li and Ali Kanso. 2015. Comparing Containers versus Virtual Machines for Achieving High Availability. In 2015 IEEE International Conference on Cloud Engineering, IC2E 2015, Tempe, AZ, USA, March 9-13, 2015. IEEE Computer Society, 353-358. https://doi.org/10.1109/IC2E.2015.79 Modeling and Optimization of Performance and Cost of Serverless Applications. Changyuan Lin, Hamzeh Khazaei, 10.1109/TPDS.2020.3028841IEEE Trans. Parallel Distributed Syst. 32Changyuan Lin and Hamzeh Khazaei. 2021. Modeling and Optimization of Performance and Cost of Serverless Applications. IEEE Trans. Parallel Distributed Syst. 32, 3 (2021), 615-632. https://doi.org/10.1109/TPDS.2020.3028841 Pigeon: A Dynamic and Efficient Serverless and FaaS Framework for Private Cloud. W Ling, L Ma, C Tian, Z Hu, 10.1109/CSCI49370.2019.002652019 International Conference on Computational Science and Computational Intelligence (CSCI). W. Ling, L. Ma, C. Tian, and Z. Hu. 2019. Pigeon: A Dynamic and Efficient Serverless and FaaS Framework for Private Cloud. In 2019 International Conference on Computational Science and Computational Intelligence (CSCI). 1416-1421. https://doi.org/10.1109/CSCI49370.2019.00265 . Guo Li, Cheng, Quan, Li, Guo, Cheng and Quan, et al. Don't Get Caught in the Cold. David Lion, Adrian Chu, Hailong Sun, Xin Zhuang, Nikola Grcevski, Ding Yuan, Warm Up Your JVM. login Usenix Mag. 421David Lion, Adrian Chu, Hailong Sun, Xin Zhuang, Nikola Grcevski, and Ding Yuan. 2017. Don't Get Caught in the Cold, Warm Up Your JVM. login Usenix Mag. 42, 1 (2017). 
https://www.usenix.org/publications/login/spring2017/lion Meltdown: reading kernel memory from user space. Moritz Lipp, Michael Schwarz, Daniel Gruss, Thomas Prescher, Werner Haas, Jann Horn, Stefan Mangard, Paul Kocher, Daniel Genkin, Yuval Yarom, 10.1145/3357033Commun. ACM. 63Moritz Lipp, Michael Schwarz, Daniel Gruss, Thomas Prescher, Werner Haas, Jann Horn, Stefan Mangard, Paul Kocher, Daniel Genkin, Yuval Yarom, Mike Hamburg, and Raoul Strackx. 2020. Meltdown: reading kernel memory from user space. Commun. ACM 63, 6 (2020), 46-56. https://doi.org/10.1145/3357033 Triggerflow: Trigger-Based Orchestration of Serverless Workflows. Pedro García López, Aitor Arjona, Josep Sampé, Aleksander Slominski, Lionel Villard, 10.1145/3401025.3401731Proceedings of the 14th ACM International Conference on Distributed and Event-Based Systems. the 14th ACM International Conference on Distributed and Event-Based SystemsMontreal, Quebec, Canada; New York, NY, USAAssociation for Computing MachineryDEBS '20)Pedro García López, Aitor Arjona, Josep Sampé, Aleksander Slominski, and Lionel Villard. 2020. Triggerflow: Trigger- Based Orchestration of Serverless Workflows. In Proceedings of the 14th ACM International Conference on Distributed and Event-Based Systems (Montreal, Quebec, Canada) (DEBS '20). Association for Computing Machinery, New York, NY, USA, 3-14. https://doi.org/10.1145/3401025.3401731 Unikernels: library operating systems for the cloud. Anil Madhavapeddy, Richard Mortier, Charalampos Rotsos, David J Scott, Balraj Singh, Thomas Gazagnaire, Steven Smith, Steven Hand, Jon Crowcroft, 10.1145/2451116.2451167Architectural Support for Programming Languages and Operating Systems, ASPLOS '13. Vivek Sarkar and Rastislav BodíkHouston, TX, USAACMAnil Madhavapeddy, Richard Mortier, Charalampos Rotsos, David J. Scott, Balraj Singh, Thomas Gazagnaire, Steven Smith, Steven Hand, and Jon Crowcroft. 2013. Unikernels: library operating systems for the cloud. 
In Architectural Support for Programming Languages and Operating Systems, ASPLOS '13, Houston, TX, USA -March 16 -20, 2013, Vivek Sarkar and Rastislav Bodík (Eds.). ACM, 461-472. https://doi.org/10.1145/2451116.2451167 SONIC: Application-aware Data Passing for Chained Serverless Applications. Ashraf Mahgoub, Karthick Shankar, Subrata Mitra, Ana Klimovic, Somali Chaterji, Saurabh Bagchi, 2021 USENIX Annual Technical Conference, USENIX ATC 2021. Irina Calciu and Geoff KuenningUSENIX AssociationAshraf Mahgoub, Karthick Shankar, Subrata Mitra, Ana Klimovic, Somali Chaterji, and Saurabh Bagchi. 2021. SONIC: Application-aware Data Passing for Chained Serverless Applications. In 2021 USENIX Annual Technical Conference, USENIX ATC 2021, July 14-16, 2021, Irina Calciu and Geoff Kuenning (Eds.). USENIX Association, 285-301. https: //www.usenix.org/conference/atc21/presentation/mahgoub Optimizing serverless computing: introducing an adaptive function placement algorithm. Nima Mahmoudi, Changyuan Lin, Hamzeh Khazaei, Marin Litoiu, https:/dl.acm.org/doi/abs/10.5555/3370272.3370294Proceedings of the 29th Annual International Conference on Computer Science and Software Engineering. Tima Pakfetrat, Guy-Vincent Jourdan, Kostas Kontogiannis, and Robert F. Enenkelthe 29th Annual International Conference on Computer Science and Software EngineeringMarkham, Ontario, CanadaACMNima Mahmoudi, Changyuan Lin, Hamzeh Khazaei, and Marin Litoiu. 2019. Optimizing serverless computing: introducing an adaptive function placement algorithm. In Proceedings of the 29th Annual International Conference on Computer Science and Software Engineering, CASCON 2019, Markham, Ontario, Canada, November 4-6, 2019, Tima Pakfetrat, Guy-Vincent Jourdan, Kostas Kontogiannis, and Robert F. Enenkel (Eds.). ACM, 203-213. https: //dl.acm.org/doi/abs/10.5555/3370272.3370294 Bartosz Balis, and Kamil Figiela. 2020. 
Serverless execution of scientific workflows: Experiments with HyperFlow, AWS Lambda and Google Cloud Functions. Maciej Malawski, Adam Gajek, Adam Zima, 10.1016/j.future.2017.10.029Future Gener. Comput. Syst. 110Maciej Malawski, Adam Gajek, Adam Zima, Bartosz Balis, and Kamil Figiela. 2020. Serverless execution of scientific workflows: Experiments with HyperFlow, AWS Lambda and Google Cloud Functions. Future Gener. Comput. Syst. 110 (2020), 502-514. https://doi.org/10.1016/j.future.2017.10.029 My VM is Lighter (and Safer) than your Container. Filipe Manco, Costin Lupu, Florian Schmidt, Jose Mendes, Simon Kuenzer, Sumit Sati, Kenichi Yasukata, Costin Raiciu, Felipe Huici, 10.1145/3132747.3132763Proceedings of the 26th Symposium on Operating Systems Principles. the 26th Symposium on Operating Systems PrinciplesShanghai, ChinaACMFilipe Manco, Costin Lupu, Florian Schmidt, Jose Mendes, Simon Kuenzer, Sumit Sati, Kenichi Yasukata, Costin Raiciu, and Felipe Huici. 2017. My VM is Lighter (and Safer) than your Container. In Proceedings of the 26th Symposium on Operating Systems Principles, Shanghai, China, October 28-31, 2017. ACM, 218-233. https://doi.org/10.1145/3132747. 3132763 Towards workflow scheduling in cloud computing: A comprehensive analysis. Mohammad Masdari, Sima Valikardan, Zahra Shahi, Sonay Imani Azar, 10.1016/j.jnca.2016.01.018J. Netw. Comput. Appl. 66Mohammad Masdari, Sima ValiKardan, Zahra Shahi, and Sonay Imani Azar. 2016. Towards workflow scheduling in cloud computing: A comprehensive analysis. J. Netw. Comput. Appl. 66 (2016), 64-82. https://doi.org/10.1016/j.jnca. 2016.01.018 Securing the infrastructure and the workloads of linux containers. Massimiliano Mattetti, Alexandra Shulman-Peleg, Yair Allouche, Antonio Corradi, Shlomi Dolev, Luca Foschini, 10.1109/CNS.2015.73468692015 IEEE Conference on Communications and Network Security. 
Florence, ItalyIEEEMassimiliano Mattetti, Alexandra Shulman-Peleg, Yair Allouche, Antonio Corradi, Shlomi Dolev, and Luca Foschini. 2015. Securing the infrastructure and the workloads of linux containers. In 2015 IEEE Conference on Communications and Network Security, CNS 2015, Florence, Italy, September 28-30, 2015. IEEE, 559-567. https://doi.org/10.1109/CNS. 2015.7346869 A Two-Tiered Approach to I/O Quality of Service in Docker Containers. Sean Mcdaniel, Stephen Herbein, Michela Taufer, 10.1109/CLUSTER.2015.772015 IEEE International Conference on Cluster Computing. Chicago, IL, USAIEEE Computer SocietySean McDaniel, Stephen Herbein, and Michela Taufer. 2015. A Two-Tiered Approach to I/O Quality of Service in Docker Containers. In 2015 IEEE International Conference on Cluster Computing, CLUSTER 2015, Chicago, IL, USA, September 8-11, 2015. IEEE Computer Society, 490-491. https://doi.org/10.1109/CLUSTER.2015.77 Serverless Computing: Design, Implementation, and Performance. M , Garrett Mcgrath, Paul R Brenner, 10.1109/ICDCSW.2017.3637th IEEE International Conference on Distributed Computing Systems Workshops, ICDCS Workshops. Ferreira, and Teruo HigashinoAtlanta, GA, USAIEEE Computer SocietyM. Garrett McGrath and Paul R. Brenner. 2017. Serverless Computing: Design, Implementation, and Performance. In 37th IEEE International Conference on Distributed Computing Systems Workshops, ICDCS Workshops 2017, Atlanta, GA, USA, June 5-8, 2017, Aibek Musaev, João Eduardo Ferreira, and Teruo Higashino (Eds.). IEEE Computer Society, 405-410. https://doi.org/10.1109/ICDCSW.2017.36 Playing Atari with Deep Reinforcement Learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin A Riedmiller, arXiv:1312.5602Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. 2013. Playing Atari with Deep Reinforcement Learning. CoRR abs/1312.5602 (2013). 
arXiv:1312.5602 http://arxiv.org/abs/1312.5602 Agile Cold Starts for Scalable Serverless. Anup Mohan, Harshad Sane, Kshitij Doshi, Saikrishna Edupuganti, 11th USENIX Workshop on Hot Topics in Cloud Computing. Christina Delimitrou and Dan R. K. PortsRenton, WA, USAUSENIX AssociationAnup Mohan, Harshad Sane, Kshitij Doshi, and Saikrishna Edupuganti. 2019. Agile Cold Starts for Scalable Server- less. In 11th USENIX Workshop on Hot Topics in Cloud Computing, HotCloud 2019, Renton, WA, USA, July 8, 2019, Christina Delimitrou and Dan R. K. Ports (Eds.). USENIX Association. https://www.usenix.org/conference/hotcloud19/ presentation/mohan Accelerated serverless computing based on GPU virtualization. Diana M Naranjo, Sebastián Risco, Carlos De Alfonso, Alfonso Pérez, Ignacio Blanquer, Germán Moltó, 10.1016/j.jpdc.2020.01.004J. Parallel and Distrib. Comput. 139Diana M. Naranjo, Sebastián Risco, Carlos de Alfonso, Alfonso Pérez, Ignacio Blanquer, and Germán Moltó. 2020. Accelerated serverless computing based on GPU virtualization. J. Parallel and Distrib. Comput. 139 (2020), 32 -42. https://doi.org/10.1016/j.jpdc.2020.01.004 State machine replication in containers managed by Kubernetes. Lau Hylson Vescovi Netto, Miguel Cheuk Lung, Aldelir Correia, Luciana Moreira Sá De Fernando Luiz, Souza, 10.1016/j.sysarc.2016.12.007J. Syst. Archit. 73Hylson Vescovi Netto, Lau Cheuk Lung, Miguel Correia, Aldelir Fernando Luiz, and Luciana Moreira Sá de Souza. 2017. State machine replication in containers managed by Kubernetes. J. Syst. Archit. 73 (2017), 53-59. https: //doi.org/10.1016/j.sysarc.2016.12.007 AGILE: Elastic Distributed Resource Scaling for Infrastructure-as-a-Service. Hiep Nguyen, Zhiming Shen, Xiaohui Gu, Sethuraman Subbiah, John Wilkes, 10th International Conference on Autonomic Computing, ICAC'13. Jeffrey O. Kephart, Calton Pu, and Xiaoyun ZhuSan Jose, CA, USAUSENIX AssociationHiep Nguyen, Zhiming Shen, Xiaohui Gu, Sethuraman Subbiah, and John Wilkes. 2013. 
[ "SENSAR: A Visual Tool for Intelligent Robots for Collaborative Human-Robot Interaction", "SENSAR: A Visual Tool for Intelligent Robots for Collaborative Human-Robot Interaction" ]
[ "Andre Cleaver \nDepartment of Mechanical Engineering\n\n", "Faizan Muhammad \nDepartment of Computer Science\n\n", "Amel Hassan \nDepartment of Computer Science\n\n", "Elaine Short \nDepartment of Computer Science\n\n", "Jivko Sinapov \nDepartment of Computer Science\n\n" ]
[ "Department of Mechanical Engineering\n", "Department of Computer Science\n", "Department of Computer Science\n", "Department of Computer Science\n", "Department of Computer Science\n" ]
[]
Establishing common ground between an intelligent robot and a human requires communication of the robot's intention, behavior, and knowledge to the human to build trust and assure safety in a shared environment. This paper introduces SENSAR (Seeing Everything iN Situ with Augmented Reality), an augmented reality robotic system that enables robots to communicate their sensory and cognitive data in context over the real world with rendered graphics, allowing a user to understand, correct, and validate the robot's perception of the world. Our system aims to support human-robot interaction research by establishing common ground where the perceptions of the human and the robot align.
null
[ "https://arxiv.org/pdf/2011.04515v1.pdf" ]
226,282,226
2011.04515
a9445bbed989db90a44e487ec989fcb1ec27015f
SENSAR: A Visual Tool for Intelligent Robots for Collaborative Human-Robot Interaction

Andre Cleaver (Department of Mechanical Engineering), Faizan Muhammad, Amel Hassan, Elaine Short, Jivko Sinapov (Department of Computer Science)

Establishing common ground between an intelligent robot and a human requires communication of the robot's intention, behavior, and knowledge to the human to build trust and assure safety in a shared environment. This paper introduces SENSAR (Seeing Everything iN Situ with Augmented Reality), an augmented reality robotic system that enables robots to communicate their sensory and cognitive data in context over the real world with rendered graphics, allowing a user to understand, correct, and validate the robot's perception of the world. Our system aims to support human-robot interaction research by establishing common ground where the perceptions of the human and the robot align.

INTRODUCTION

For intelligent robots to operate in the same environment as humans, they must communicate their intentions, behaviors, and knowledge in a way that is interpretable by humans. Robots rely on sensors and algorithms to support functions such as localization, obstacle avoidance, and object manipulation; however, robots can misidentify objects and communicate their misperceptions of the world, which can lead to undesired outcomes. As a result, humans may not understand what a robot is trying to do, with adverse effects on human-robot collaboration. One way to solve this problem is to allow the human collaborator to directly see a robot's perception to ensure the robot is performing adequately (Thorstensen, 2017). Augmented Reality (AR) renders spatially-grounded images over the real world.
By leveraging AR, robot data that would typically be "hidden" can be visualized by rendering graphic images over the real world using AR-supported devices (e.g., smartphones, tablets, or the Microsoft HoloLens) (Frank, Moorhead, and Kapila, 2017). For example, a robot's depth sensor data rendered directly over a workstation shows not only what the robot "sees" but also what the robot does not "see": that is, it allows the human to see what the robot could not detect. AR gives humans the advantage of placing the robot's sensory data in the context of the real world, unlike traditional visual programs (e.g., rviz).

Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: (Right) The physical world known by a human; robots are unaware of most objects within the environment. (Left) The virtual world accessed only by the robot, which includes the robot's algorithms and sensor readings. (Center) A shared reality world accessed by both humans and robots; the robot renders its path plan using markers, which is seen by the human.

In this paper, we introduce SENSAR (Seeing Everything iN Situ with Augmented Reality), a novel AR-mediated communication tool. We describe the design of our system, which uses ROS and Unity, along with the data processing routines to convert raw data to visuals. Our system can create an interactive reality space containing the robot's sensory data and cognitive output that a user can access to reach a "common ground" with the robot (see Figure 1). In a technical demonstration of the system, we show that visualizing sensors and intended actions can be beneficial for humans in areas such as debugging and education, and can serve as a tool for HRI studies.

RELATED WORK

Currently, there are frameworks that display robot data using external lights and projections (Fernandez et al., 2018; Chadalavada et al., 2015).
Although these methods have the advantage of not requiring the user to hold or wear a separate device, they are not without limitations. These solutions require modifying the robot with hardware and staging the environment with flat surfaces to view the projections. Occlusions are another inevitable problem with this approach if any objects or humans themselves are positioned between the display surface and the projection source. AR does not face this problem as graphical images are rendered over the real world, thus eliminating the need for any changes to the existing robot or environment. AR has been integrated with several robotic platforms with a focus on areas such as navigation, programming, and education (Walker et al., 2018; Gadre et al., 2019; Cheli et al., 2018). Work by Cheli et al. (2018) explored AR as an education tool for K-12 students. Middle school students were observed debugging their assigned robots (EV3 Kit) through tablets and initiated group discussions around sensor readings. The EV3 kit contained actuators and sensors such as touch, color detection, sonar, and gyro. Although their system was successful in visualizing the robot's data, the system can only function with the EV3 robot. We demonstrate SENSAR with a Turtlebot2 robot, a popular robotic platform for research; however, any ROS-based robot can work with the SENSAR system. In addition, SENSAR is highly modular: multiple sensors, including the types of sensors provided in the EV3 kit as well as depth and LIDAR sensors, can be connected to the robot. SENSAR is a continuation of prior work by Muhammad et al. (2019).

SYSTEM ARCHITECTURE AND DESIGN

SENSAR is an AR system enabling users to "see into the mind" of a robot. Data collected by an intelligent robot is processed through various filtering techniques (e.g., downsampling) to render the output in a visual form that is intuitive for a user.
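As a small illustration of the kind of filtering stage mentioned above, downsampling a point cloud can be sketched as a voxel-grid reduction that keeps one averaged point per occupied cell. This is a minimal sketch under assumed conventions (tuple-based points, a 5 cm grid), not SENSAR's actual implementation:

```python
def voxel_downsample(points, voxel_size=0.05):
    """Reduce a point cloud by keeping one averaged point per voxel.

    points: iterable of (x, y, z) tuples in meters.
    Returns a smaller list of points, one centroid per occupied voxel.
    """
    buckets = {}
    for x, y, z in points:
        # Quantize each coordinate to a voxel index to form the bucket key.
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        sx, sy, sz, n = buckets.get(key, (0.0, 0.0, 0.0, 0))
        buckets[key] = (sx + x, sy + y, sz + z, n + 1)
    # Emit the centroid of each voxel's points.
    return [(sx / n, sy / n, sz / n) for sx, sy, sz, n in buckets.values()]

# Example: 1000 points along a 1 m line collapse onto a 5 cm grid.
cloud = [(i * 0.001, 0.0, 0.0) for i in range(1000)]
small = voxel_downsample(cloud, voxel_size=0.05)
```

A reduction like this keeps the AR rendering responsive on a phone or tablet while preserving the overall shape of the sensed scene.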
The functional requirements of SENSAR included:
• Customizable for different sensors and common data structures within a robotics system

The robot's sensors include an Astra Camera depth sensor. The robotic sensors and cognitive data are extracted and filtered from ROS using Python and C++ scripts. For the AR device, we used a Microsoft Hololens, a Samsung Galaxy S9 Android smartphone device with a 12 MP camera, and an iPad to render visuals over the real world. Our system can be installed on any smartphone, iPhone, or tablet device that meets the hardware requirements to support AR applications. The AR device utilizes Unity 3 to create visualizations to project data around the robot. Vuforia 4 is used to establish physical tracking of the robot using a target image. Once the pose of the robot within the incoming video feed is detected, the data received from the robot can then be projected onto the camera feed. In ROS, ROSBridge provides a JSON interface to the topics native to ROS, whereas in Unity, ROS Sharp provides C# functions and classes to interact with ROSBridge through JSON. Through a WebSocket connection, ROS messages of any type from the robot come out as native C# data structures at the other end of the pipeline.

Visualization Options

Below is a list of the current visualizations included with the SENSAR system. SENSAR is an ongoing project and we plan to add additional visualizations in the future.

Localization Particles
This data type is based on the Monte Carlo localization algorithm, or particle filter localization, that robots use to localize themselves within a given map (Moore and Stouch, 2014). Figure 2.A shows markers in pink visualizing the robot's belief about its location within a given map.

Laser Scan
A 2D LIDAR reading is shown in Figure 2.B. Here, the laser scan detects the walls and objects with red markers.

Human Detection
A human detection algorithm leverages the Laser Scan data as input and compares the measurements against the profile of a pair of human legs.
Legs are used as a target feature due to the placement of the LIDAR sensor. A human avatar is rendered in place of a coordinate data point that represents a detected human. Figure 2.C shows the Human Detection visual, as seen by the yellow human avatar.

Occupancy Grid (Cost Map)
The Cost Map is a measure of occupancy within the surrounding area of a robot. The robot reports a planar grid consisting of cells using sensory data and information from a static map. Each cell represents a probability of occupancy [0-1]. The information for each cell influences the robot's path trajectory by selecting cells with a low probability of containing an obstacle. For visualization, we related the probability value of a cell to a corresponding color. Figure 2.D shows the Cost Map. Here, the robot detects the chairs in its surroundings and then renders red markers (high probability of occupancy) in their location and green markers for areas that are clear.

Path Trajectory
The Path Trajectory visual represents the robot's future trajectory to a selected destination point as a series of markers on the ground plane. The trajectory is generated by the robot's internal path-planner algorithm. Figure 3 (top) shows the robot's path trajectory around the obstacle (green plushie).

Turning Signal
The Turning Signal data type alerts a user to the robot's intended direction of navigation by flashing an arrow towards the respective direction. The Turning Signal mimics a motor vehicle's turning signal system. Figure 3 (bottom) shows the robot flashing an arrow towards the direction it will move.

Depth Camera Sensor (PointCloud)
PointCloud renders the depth sensor data as particles in 3D space. Filtering and segmentation techniques are applied for model detection algorithms.

APPLICATIONS

Conveying Navigational Intent
We are exploring how SENSAR can leverage the Path Trajectory and Turning Signal data types for users to interpret the robot's motion intent.
These navigation visuals will be evaluated using video clips. Participants will watch segments of the robot approaching an obstacle. The robot will travel either to the left or right around an obstacle, as depicted in Figure 3, and participants will select the direction they believe the robot will travel towards.

Robotic Education for Kids
We are exploring how an AR robotic platform performs as a teaching tool for robotic education for kids. We applied our SENSAR framework to work with an EV3 robot, as seen in Figure 5. Here, the EV3 robot contains additional sensors that can be projected to give kids a visual aid on key concepts of robots while interacting with their robot. The next steps are to deploy our system and observe how students in small groups interact with the novel system and collaborate with each other to accomplish learning objectives.

Robotic Debugging
Figure 4 shows a sequence of images from two cases in which the robot was navigating through a building to reach a specific destination. In the top set of images (A-D), the robot operated with a functioning LIDAR sensor, as seen by the red markers detecting the nearby wall. The path-planner algorithm incorporates the laser scan data with the map of the building to determine available routes to the destination while avoiding obstacles. Here, the robot successfully avoids the wall to continue down the hallway. In the bottom set of images (E-H), the robot operated with a malfunctioning LIDAR sensor, as evidenced by the absence of red markers in image E, indicating that the wall is not detected and therefore is not an obstacle for the robot. As a result, the robot re-calculates a new, shorter path and thus collides with the wall. SENSAR provided the user with insight into the reason for the collision, which would have been difficult to determine if the user had only visually observed the robot.
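As an example of how raw data maps to visuals, the occupancy-to-color rule for the Cost Map (described under Visualization Options) can be sketched as a linear blend from green (free) to red (occupied). The exact palette SENSAR uses is not specified in the text, so the blend below is an assumption for illustration.

```python
def occupancy_color(p):
    """Map an occupancy probability p in [0, 1] to an (R, G, B)
    marker color: green for free cells, red for occupied ones."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("occupancy probability must lie in [0, 1]")
    return (round(255 * p), round(255 * (1.0 - p)), 0)

# Clear floor renders green, a detected chair renders red.
free_cell = occupancy_color(0.0)      # (0, 255, 0)
occupied_cell = occupancy_color(1.0)  # (255, 0, 0)
```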
CONCLUSIONS

This paper presented SENSAR, a novel AR-based robotic tool that enables robots to visually communicate their cognitive and sensory data. The major significance of this project is introducing a new and attractive form of interaction in HRI. By translating complex data into intuitive visuals, we can enable humans of all levels of expertise to work with robots. We described SENSAR's open-source hardware and software components, which are available online through GitHub repositories. We then described domains that our system may benefit. We expect our system to become a useful tool for other HRI researchers investigating how and whether visualizing various robot data in AR can improve human-robot collaboration and enable effective learning and reasoning.

Figure 4: Planned path trajectory (yellow markers) with operating laser scan (red markers) resulting in obstacle avoidance (top row). Robot operating with an unexpectedly malfunctioning laser scan, leading to a wall collision (bottom row). The laser scan did not detect the wall as the robot planned a right turn in image E. The lack of red markers in image E, but their presence in the following image F, provided insight that the laser scan was the source of the robot's behavior.

Figure 5: The EV3 robot is projecting its ultrasonic sensor data by visualizing the blue cone that extends out towards the object detected by the robot's sensor. Other sensors can be visualized as seen on the right-hand panel.

Figure 2: This figure shows the first design of the robot and the types of visual data displayed with AR. (A) Localization, (B) Laser Scan, (C) People Detection, (D) Cost Map.

The robot is controlled through ROS (Quigley et al., 2009) running on Ubuntu 16.04 and is equipped with an RPLIDAR A2M8 Laser Scanner and an Astra Camera depth sensor.

Figure 3: Visualization projected by the robot to indicate the intended direction of motion around an obstacle. The top image is the path trajectory using yellow markers.
The bottom image is turn signals, which operate similarly to the turn signals of a motor vehicle.

• Operational without external tracking or cloud service
• Versatile to multiple AR-supported devices

Hardware and Software
SENSAR requires two components: an intelligent robot and an AR device. A Turtlebot2 served as the intelligent robot; it is controlled through the Robot Operating System (ROS).

1 http://wiki.ros.org/rviz
2 The SENSAR package repositories can be found at: https://github.com/tufts-ai-robotics-group/arfuros ros.git and https://github.com/tufts-ai-robotics-group/arfuros.git
3 https://unity.com/
4 https://developer.vuforia.com/

REFERENCES

Chadalavada, R. T.; Andreasson, H.; Krug, R.; and Lilienthal, A. J. 2015. That's on my mind! Robot to human intention communication through on-board projection on shared floor space. In 2015 ECMR, 1-6. IEEE.

Cheli, M.; Sinapov, J.; Danahy, E. E.; and Rogers, C. 2018. Towards an augmented reality framework for K-12 robotics education. In Proceedings of Int. Workshop on (VAM-HRI).

Fernandez, R.; John, N.; Kirmani, S.; Hart, J.; Sinapov, J.; and Stone, P. 2018. Passive demonstrations of light-based robot signals for improved human interpretability. In 2018 27th IEEE Int. Symp. (RO-MAN), 234-239. IEEE.

Frank, J. A.; Moorhead, M.; and Kapila, V. 2017.
Mobile mixed-reality interfaces that enhance human-robot interaction in shared spaces. Frontiers in Robotics and AI 4:20.

Gadre, S. Y.; Rosen, E.; Chien, G.; Phillips, E.; Tellex, S.; and Konidaris, G. 2019. End-user robot programming using mixed reality. In 2019 ICRA. IEEE.

Moore, T., and Stouch, D. 2014. A generalized extended Kalman filter implementation for the robot operating system. In Proceedings of the 13th Int. Con. on Intelligent Autonomous Sys. (IAS-13). Springer.

Muhammad, F.; Hassan, A.; Cleaver, A.; and Sinapov, J. 2019. Creating a Shared Reality with Robots. In 2019 14th ACM/IEEE Int. Con. on (HRI), 614-615. ISSN: 2167-2121.

Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; and Ng, A. Y. 2009. ROS: an open-source robot operating system. In ICRA workshop on open source software, volume 3, 5. Kobe, Japan.

Thorstensen, M. C. 2017. Visualization of Robotic Sensor Data with Augmented Reality.

Walker, M.; Hedayati, H.; Lee, J.; and Szafir, D. 2018. Communicating robot motion intent with augmented reality. In Proceedings of the 2018 ACM/IEEE Int. Con. on (HRI), 316-324.
[ "https://github.com/tufts-ai-robotics-group/arfuros", "https://github.com/tufts-ai-robotics-group/arfuros.git" ]
[ "DYNAMICS OF ROLLING DISK", "DYNAMICS OF ROLLING DISK" ]
[ "A V Borisov [email protected]:[email protected]:[email protected] \nInstitute of Computer Science Universitetskaya\n1 426034IzhevskRussia\n", "I S Mamaev \nInstitute of Computer Science Universitetskaya\n1 426034IzhevskRussia\n", "A A Kilin \nInstitute of Computer Science Universitetskaya\n1 426034IzhevskRussia\n" ]
[ "Institute of Computer Science Universitetskaya\n1 426034IzhevskRussia", "Institute of Computer Science Universitetskaya\n1 426034IzhevskRussia", "Institute of Computer Science Universitetskaya\n1 426034IzhevskRussia" ]
[]
In the paper we present the qualitative analysis of rolling motion without slipping of a homogeneous round disk on a horizontal plane. The problem was studied by S. A. Chaplygin, P. Appel and D. Korteweg who showed its integrability. The behavior of the point of contact on a plane is investigated and conditions under which its trajectory is finite are obtained. The bifurcation diagrams are constructed.
null
[ "https://export.arxiv.org/pdf/nlin/0502039v1.pdf" ]
16,680,369
nlin/0502039
ee5af7b1c6ab8ba8530f95d72bdf4131b62d9cb7
DYNAMICS OF ROLLING DISK

A. V. Borisov, I. S. Mamaev, A. A. Kilin
Institute of Computer Science, Universitetskaya 1, 426034 Izhevsk, Russia

Received August 22, 2002
arXiv:nlin/0502039v1 [nlin.CD] 18 Feb 2005
DOI: 10.1070/RD2003v008n02ABEH000237

Introduction

For the first time the motion of a heavy dynamically symmetric round disk on a horizontal absolutely rough plane was investigated by G. Slesser (1861) [28], N. Ferrers (1872) [9], K. Neumann (1886), and A. Firkandt (1892). These studies eventually (after unsuccessful attempts by Neumann and Lindelöf) led to the correct form of the equations of motion. This form differs from the usual (Lagrangian or Hamiltonian) equations of mechanics because of the nonholonomic constraint stating that the velocity of the point of contact of the disk with the plane is zero. We shall not discuss in detail the general forms of the equations of nonholonomic mechanics (they are presented, for example, in [23], [26]). Instead, we concentrate on a fairly transparent form of these equations obtained from a general principle of dynamics: the conservation law of the moment of momentum written in the disk-fixed axes. S. A. Chaplygin (1897) was the first to show the integrability of the problem of the rolling motion of a disk.
He presented the reduction of the problem to the analysis of hypergeometric quadratures in paper [6], where he also showed the integrability of the problem of the rolling motion of an arbitrary heavy dynamically symmetric body of rotation on a horizontal plane; in the latter case the problem is reduced to the integration of a linear differential second-order equation. The integration of the equations of motion of a disk in hyperelliptic functions was also performed in 1900, independently of each other and of Chaplygin, by P. Appel [2] and D. Korteweg [14]. Sometimes the problem of the rolling motion of a disk is referred to as the Appel-Korteweg problem (or simply the Appel problem), but this is, probably, not quite correct. In 1903 the same result was rediscovered by E. Gellop [10]; however, he used the Legendre functions.

Mathematics Subject Classification: 37J60, 37J35, 70G45

Despite the explicit hypergeometric quadratures, the various qualitative properties of the disk motion were not studied for a long time. There were mainly studies of stationary motions and of their stability (the corresponding bibliography is presented in book [23]). Some qualitative properties of the disk motion have been discussed only in the papers of S. N. Kolesnikov [13] and Yu. N. Fedorov [8]. The first paper shows that for almost all initial conditions the disk never falls onto the plane, and the second one presents a procedure for the investigation of the reduced system. Analogous results for the dynamically asymmetric disk and for a disk moving on an inclined plane (nonintegrable problems) were obtained in [1], [18]. Among the modern works analyzing the rolling motion of the disk we shall note the papers of O. M. O'Reily [27], R. Cushman, J. Hermans, D. Kemppainen [7], and A. S. Kuleshov [21], devoted to the study of bifurcations and stability of stationary motions of the disk. General results of a qualitative analysis of the rolling motion of a heavy body of rotation were obtained in the paper of N. K. Moshuk [24].
The paper includes a frequency analysis, an application of KAM theory, and the basic qualitative properties of the motion of the point of contact. It appears that the point of contact performs a composite bounded motion: it periodically traces some closed curve which rotates as a rigid body with some constant angular velocity about a fixed point. Thus the realization of some resonance relation between the frequencies makes possible the drift of the body of rotation to infinity. In this paper we develop these qualitative considerations and complement them with a computer analysis. We also present various types of trajectories which are traced by the point of contact in the body-fixed and relative frames of reference, since they have curious forms which are difficult to predict. Using computer modelling we explicitly investigate the hypothesis about the drift to infinity under the resonance conditions. We present the most general three-dimensional bifurcation diagram in the space of the first integrals and the complete atlas of its sections by various planes, constructed with the help of computer modelling. In this paper we also present a new method of reduction of the problem to a one-degree integrable Hamiltonian system and explicitly consider the existence of a Hamiltonian formulation for different variants of the equations of motion of the problem.

The rolling motion of a rigid body on a plane

Equations of motion and their integrals

Let a rigid body in an exterior field of force perform a rolling motion on a plane without sliding. In this case the equations of motion have the most convenient form in the body-fixed frame of reference whose axes are directed along the principal axes of inertia of the body and whose origin is situated at the center of mass. In the following text all vectors are assumed to be projected on these axes.
The condition of absence of slipping thus becomes

v + ω × r = 0, (2.1)

where v, ω are the velocity of the center of mass and the angular velocity of the body, and r is the vector directed from the center of mass to the point of contact (see Fig. 1). Let's denote the projections of the fixed basis vectors onto the moving axes by α, β, γ (the vector γ is perpendicular to the plane), and by (x, y) we shall denote the coordinates of the projection of the center of mass onto the plane in the fixed frame of reference. We assume that the field of force is potential, with a potential depending only on the orientation of the body, U = U(α, β, γ). The complete set of the equations of motion defining the given system can be represented in the form

Ṁ = M × ω + mṙ × (ω × r) + α × ∂U/∂α + β × ∂U/∂β + γ × ∂U/∂γ, (2.2)

α̇ = α × ω,  β̇ = β × ω,  γ̇ = γ × ω. (2.3)

The expression for the vector of moment of momentum with respect to the point of contact M can be written in the following form

M = Iω + mr × (ω × r), (2.5)

where I = diag(I1, I2, I3) is the tensor of inertia of the body. In turn, r can be uniquely expressed (for a convex body) through the normal to the plane γ from the equation

γ = −∇F(r)/|∇F(r)|, (2.6)

Here F(r) = 0 is the equation of the body's surface. Let's consider the motion of the point of contact on the plane. If we denote the position of the point of contact on the plane in the fixed frame of reference as (X, Y), then the equations of motion for the point of contact can be presented in the form

Ẋ = (ṙ, α),  Ẏ = (ṙ, β), (2.7)

where ṙ is determined from equations (2.2)-(2.6). Actually, Ẋ and Ẏ are projections of the velocity of the point of contact in the relative frame of reference onto the fixed axes. Equations of motion in a form similar to (2.2)-(2.3) are presented, for example, in book [11].
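As a quick numerical sanity check of the constraint (2.1) and the moment of momentum (2.5), the following sketch computes v and M; the inertia values, mass, ω and r are made up for the example.

```python
def cross(a, b):
    """Vector cross product a x b."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Illustrative parameters (not from the paper): principal inertias
# I = diag(I1, I2, I3), mass m, angular velocity omega and the
# vector r from the center of mass to the contact point.
I1, I2, I3 = 1.0, 2.0, 3.0
m = 2.0
omega = (0.0, 0.0, 1.0)
r = (1.0, 0.0, 0.0)

# Rolling without slipping (2.1): v + omega x r = 0.
wr = cross(omega, r)
v = (-wr[0], -wr[1], -wr[2])

# Moment of momentum about the contact point (2.5):
# M = I omega + m r x (omega x r).
rxwr = cross(r, wr)
M = (I1 * omega[0] + m * rxwr[0],
     I2 * omega[1] + m * rxwr[1],
     I3 * omega[2] + m * rxwr[2])
```

The m r × (ω × r) term is the contribution of the translating center of mass, which is why the effective inertia about the contact point exceeds I alone.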
They can also be obtained by means of the Poincaré-Chetaev formalism [3] with undetermined Lagrangian coefficients; these coefficients are then eliminated with the help of the constraint equations (2.1). The system (2.2)-(2.3) generally has seven independent integrals of motion, six of which are trivial geometrical integrals:

α² = 1,  β² = 1,  γ² = 1,  (α, β) = 0,  (β, γ) = 0,  (γ, α) = 0. (2.8)

The seventh is the integral of energy

(1/2)(M, ω) + U(α, β, γ) = h = const. (2.9)

Generally the given system has no other additional integrals, and the possibility of its integrability in concrete cases depends on the presence of additional tensor invariants (measure, fields of symmetry, integrals).

The rolling motion of a heavy disk

Let's consider the case of the rolling motion of an axially symmetric disk of radius R in the field of gravity. The field is, obviously, also axially symmetric, with the potential depending only on γ. Moreover, we suppose that the disk is dynamically symmetric, i.e. I1 = I2. The potential energy in this case has the following form

U = −mg(r, γ) = mgR√(1 − γ3²). (2.10)

The equation of the surface of the disk is F(r) = r1² + r2² − R². Substituting it into equation (2.6) and solving with respect to r we obtain

r1 = −Rγ1/√(1 − γ3²),  r2 = −Rγ2/√(1 − γ3²),  r3 = 0. (2.11)

As the potential energy depends only on γ, from the equations of motion (2.2)-(2.3) we get the separate system of six equations

Ṁ = M × ω + mṙ × (ω × r) + mgr × γ,  γ̇ = γ × ω. (2.12)

Expressing ω, r from relations (2.5), (2.11) we get a closed system for the variables M, γ which is similar in many aspects to the Euler-Poisson system in the Lagrange case; however, the obtained system is much more complicated than the latter. The equations (2.12) preserve the geometrical integral γ² and the energy (2.9); in addition they allow the standard invariant measure (with a constant density).
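A small numerical check of (2.11): for any unit vector γ with |γ3| < 1 the resulting contact point must lie on the rim, F(r) = r1² + r2² − R² = 0, and the potential (2.10) must equal −mg(r, γ). The values of R, m, g and γ below are illustrative.

```python
from math import sqrt

R, m, g = 0.5, 1.0, 9.8

def contact_point(gamma):
    """Contact point of the disk in body axes, eq. (2.11)."""
    g1, g2, g3 = gamma
    s = sqrt(1.0 - g3 * g3)   # sin(theta), theta the nutation angle
    return (-R * g1 / s, -R * g2 / s, 0.0)

# An arbitrary unit normal gamma (illustrative).
gamma = (0.6, 0.0, 0.8)
r = contact_point(gamma)

# The contact point lies on the rim: F(r) = r1^2 + r2^2 - R^2 = 0.
F = r[0] ** 2 + r[1] ** 2 - R ** 2

# Potential energy (2.10): U = -mg(r, gamma) = mgR sqrt(1 - gamma3^2).
U = -m * g * sum(ri * gi for ri, gi in zip(r, gamma))
```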
For the integrability (by Euler-Jacobi [17]) of these equations we need two additional integrals. In the following we describe the method of derivation of these integrals. The possibility of separation of the system (2.12) from the general system (2.2)-(2.3) is connected to the symmetry with respect to rotations about the vertical axis defined by the vector γ. The system (2.12) is invariant with respect to the field of symmetries

v_ψ = α1 ∂/∂β1 − β1 ∂/∂α1 + α2 ∂/∂β2 − β2 ∂/∂α2 + α3 ∂/∂β3 − β3 ∂/∂α3, (2.13)

commuting with the vector field of the problem. It is possible to show that the variables M, γ are integrals of the field (2.13), that is, v_ψ(M_i) = 0, v_ψ(γ_i) = 0, i = 1, 2, 3. According to the general Lie theory [19], the variables M, γ define the reduced system. For the classical Euler-Poisson equations the corresponding reduction is the Routh reduction with respect to the cyclic angle of precession.

In addition to the field of symmetries (2.13), the equations of motion (2.2)-(2.3) for an axially symmetric body allow one more field of symmetries, corresponding to the rotation about the axis of symmetry of the disk:

v_ϕ = M1 ∂/∂M2 − M2 ∂/∂M1 + γ1 ∂/∂γ2 − γ2 ∂/∂γ1 + α1 ∂/∂α2 − α2 ∂/∂α1 + β1 ∂/∂β2 − β2 ∂/∂β1. (2.14)

It is possible to show that the integrals of the field (2.14) are the projections of the moment and of the normal to the plane of the disk onto the fixed axes of coordinates:

N = ((M, α), (M, β), (M, γ)),  n = (α3, β3, γ3).

The equations of motion for these variables can be presented in the following form

Ṅ = m (dr̃/dt) × (ω̃ × r̃) + mg r̃ × n,  ṅ = ω̃ × n, (2.15)

where the symbols ω̃, r̃ denote the same vectors but projected onto the fixed axes (that is, ω̃1 = (ω, α), . . . , r̃1 = (r, α), . . .). The explicit expression for the components of the vector r̃ is

r̃ = ( Rα3γ3/√(1 − γ3²),  Rβ3γ3/√(1 − γ3²),  −R√(1 − γ3²) ). (2.16)

The vector N is expressed through ω̃ by the formula

N = I1ω̃ + (I3 − I1)(ω̃, n)n + m r̃ × (ω̃ × r̃). (2.17)
Remark 1. Such a reduction is also possible for an arbitrary body of rotation.

A reduction to the integrable one-degree Hamiltonian system

Let's describe the process of reduction of order with respect to both fields of symmetries (2.13) and (2.14). For that we shall choose the simultaneous integrals of these fields as the variables of the reduced system. According to [5], the most convenient algebraic set of such variables is

γ3,  K1 = M1γ1 + M2γ2 = N3 − γ3(N, n),  K2 = I1/(I3 + mR²) · M3 = I1/(I3 + mR²) · (N, n),  K3 = γ1M2 − γ2M1 = N1n2 − N2n1. (2.18)

The equations of motion in the new variables become

γ̇3 = K3/(I1 + mR²),
K̇1 = −I3/(I1(I3 + mR²)) · K3K2,
K̇2 = −mR²/(I1(I3 + mR²)) · K3K1/(1 − γ3²),
K̇3 = −γ3/(1 − γ3²) · (K1²/I1 + K3²/(I1 + mR²)) + (I1 + mR²)/I1² · K1K2 + mgRγ3√(1 − γ3²). (2.19)

The equations (2.19) preserve an invariant measure with density ρ. Dividing the second and third equations by the first and choosing a new independent variable, the angle of nutation θ = arccos γ3, we get the system of linear equations

dK1/dθ = I3(I1 + mR²)/(I1(I3 + mR²)) · sinθ · K2,  dK2/dθ = mR²(I1 + mR²)/(I1(I3 + mR²)) · K1/sinθ. (2.20)

The general solution of these equations can be presented in the form [23]

K1 = I3 sin²θ/(2(I1 + mR²)) · [C1 F(1 + ξ, 1 + η, 2, (1 − cosθ)/2) − C2 F(1 + ξ, 1 + η, 2, (1 + cosθ)/2)],
K2 = C1 F(ξ, η, 1, (1 − cosθ)/2) + C2 F(ξ, η, 1, (1 + cosθ)/2), (2.21)

where ξ and η are the solutions of the quadratic equation

x² − x + I3mR²/(I1(I3 + mR²)) = 0

and F(ξ, η, n, z) is the generalized hypergeometric function representable by the series

F(ξ, η, n, z) = Σ_{k=0}^∞ [Γ(ξ + k)Γ(η + k)Γ(n)] / [Γ(ξ)Γ(η)Γ(n + k)] · z^k/k!  (2.22)

Thus, the relations (2.21) define (implicitly) the integrals of motion. In this case they are the "constants" C1 and C2 expressed through K1, K2, θ. The quadrature for the angle of nutation can be obtained from the integral of energy written in the variables K1, K2, K3, θ:

θ̇² = 2P(θ)/(I1 + mR²),  P(θ) = h − K1²/(2I1 sin²θ) − K2²/(2I1) − mgR sinθ. (2.23)
Here we assume that the variables K1, K2 are expressed through the constants of the integrals and the angle θ according to formulas (2.21). In this case the function P(θ) (depending on the constants of the integrals) defines the analog of the gyroscopic function for the Lagrange top [3], [22]. Thus, equation (2.23) at fixed values of C1 and C2 defines a one-degree Hamiltonian system. The phase portraits of this system on the plane (θ, θ̇) are presented in Fig. 2. All the variables γ3, K1, K2, K3 are periodic functions of time with the period Tθ and the corresponding frequency ωθ.

Remark 2. According to [5], the system (2.19) is Hamiltonian with a degenerate Poisson bracket which has two Casimir functions expressed through hypergeometric functions.

Quadratures for the angles of proper rotation and precession

According to the general Lie theory [19], if the variables of the reduced system (2.18) are given functions of time, then all the variables of the initial system (2.12) (and accordingly (2.15)) can be obtained by one quadrature (if the fields v_ψ (2.13) and v_ϕ (2.14) are commuting). Indeed, using the equalities tg ϕ = γ1/γ2 (and correspondingly tg ψ = −n1/n2) for the angles ϕ and ψ, we obtain

ϕ̇ = −γ3/(1 − γ3²) · K1/I1 + K2/I1,  ψ̇ = −K1/(I1(1 − γ3²)). (2.24)

Thus, for each of the angles the dependence on time is defined as an integral of a periodic function with the frequency ωθ; hence it can be presented in the form (see, for example, [17], [24])

ϕ = ωϕ t + ϕ*(t),  ψ = ωψ t + ψ*(t), (2.25)

where ϕ*(t), ψ*(t) are periodic functions with frequency ωθ. Moreover, (2.24) and (2.25) also imply that all the frequencies ωθ, ωϕ, ωψ depend only on the constants of the first integrals.
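The series (2.22) for the hypergeometric function entering the general solution (2.21) converges for |z| < 1 and is easy to evaluate by accumulating terms: the Gamma-function ratios are rising factorials, so each term follows from the previous one. For ξ = η = n = 1 the coefficients all equal 1 and the series reduces to the geometric series 1/(1 − z), which gives a convenient check.

```python
def hyp_f(xi, eta, n, z, terms=200):
    """Partial sum of the series (2.22):
    F(xi, eta, n, z) = sum_k [Gamma(xi+k)Gamma(eta+k)Gamma(n) /
                              (Gamma(xi)Gamma(eta)Gamma(n+k))] z^k / k!,
    written with rising factorials to avoid Gamma overflow."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        # ratio of consecutive series terms
        term *= (xi + k) * (eta + k) / ((n + k) * (k + 1)) * z
    return total
```

Another classical identity, F(a, b, b, z) = (1 − z)^(−a), gives a second check below.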
Motion of the point of contact

Following papers [20], [24] we present the equation for the velocity of the point of contact in the form

Ż = R (γ3/√(1 − γ3²) · K1/I1 − K2/I1) e^{iψ}, (2.26)

where Z = X + iY and X, Y are the coordinates of the point of contact in the fixed frame of reference. Thus the coordinates of the point of contact are determined by quadratures of quasiperiodic two-frequency (with the frequencies ωψ, ωθ) functions of time.

The qualitative analysis and results

Let's perform the qualitative analysis of the dynamics of the disk motion. We will make a classification of all possible motions depending on the constants of the first integrals. Some features of the considered case essentially complicate this work in comparison with the case of the Lagrange top for the Euler-Poisson equations. For comparison we recommend studying such an analysis for the Lagrange case in book [3]. The complexity of the analysis is caused by the facts that the integrals of motion cannot be expressed in elementary functions (only in special ones) and that the system has no natural Hamiltonian presentation. Moreover, in addition to the motion of the apexes of the body (disk) we shall classify the trajectories of the point of contact, obtained by additional quadratures of quasiperiodic functions.

The bifurcation analysis of the reduced system

Possible types of motion for the axis of symmetry of the body are completely determined by the form of the gyroscopic function P(θ) and by the energy level. Critical values of the integrals of motion C1, C2, h are determined by the equations

P(θ) = 0,  dP(θ)/dθ = 0. (3.1)

In the three-dimensional space with coordinates C1, C2, h, equations (3.1) define a surface, the so-called surface of regular precessions [3] (see Fig. 3). This name is connected to the fact that at the given values of the integrals the coin performs a motion with the fixed angle θ = const, which is analogous to the precession of the Lagrange top [22].
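Treating K1 and K2 as fixed constants for a short illustration (in the full problem they depend on θ through (2.21)), the two conditions (3.1) for the gyroscopic function P(θ) of (2.23) can be solved in closed form: dP/dθ = cosθ · (K1²/(I1 sin³θ) − mgR) = 0 gives sin³θ0 = K1²/(I1·mgR) on the non-vertical branch, and P(θ0) = 0 then fixes the critical energy h. All numbers below are made up for the example.

```python
from math import asin, sin

# Illustrative constants (not from the paper).
I1, m, g, R = 1.0, 1.0, 9.8, 0.5
K1, K2 = 1.0, 1.0
mgR = m * g * R

def P(theta, h):
    """Gyroscopic function (2.23) with K1, K2 frozen."""
    return (h - K1 ** 2 / (2 * I1 * sin(theta) ** 2)
              - K2 ** 2 / (2 * I1) - mgR * sin(theta))

def dP(theta, h, eps=1e-6):
    """Central-difference derivative of P in theta."""
    return (P(theta + eps, h) - P(theta - eps, h)) / (2 * eps)

# Non-vertical critical angle: sin(theta0)^3 = K1^2 / (I1 m g R).
s0 = (K1 ** 2 / (I1 * mgR)) ** (1.0 / 3.0)
theta0 = asin(s0)

# Critical energy from P(theta0) = 0: this (theta0, h0) pair is a
# point of the surface of regular precessions for the frozen model.
h0 = K1 ** 2 / (2 * I1 * s0 ** 2) + K2 ** 2 / (2 * I1) + mgR * s0
```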
The full atlas of sections of the surface of regular precessions (bifurcation diagrams) by the planes C1 + C2 = const and C1 − C2 = const is presented in Figs. 4 and 5, respectively. In Figs. 6 and 7, for two different sections, we show the forms of the gyroscopic function P(θ) corresponding to various values of the integrals C1, C2, h. Using these figures (and the rule of signs) we can easily study the stability of the corresponding solutions located on the branches of the bifurcation diagram (branches corresponding to unstable solutions are represented on the diagram by a dotted line). In Figs. 6 and 7 vertical straight lines represent the cases C1 = 0 or C2 = 0. In these cases the disk motion corresponds to falling, and the planes determined by these equalities define in the space of the integrals C1, C2, h the two-dimensional manifold of fallings. Thus for almost all initial conditions the disk does not fall while performing the rolling motion on a plane. Other remarkable motions correspond to the cases C1 = C2, the rolling motion of the disk, and C1 = −C2, the rotation of the disk about its axis passing through a diameter. During the latter motion the declination of the disk with respect to the vertical remains constant.

Remark 3. The bifurcation diagram (Figs. 3, 4, 5) is different from the ones presented in papers [21], [27], since instead of the value of the energy we use the value of the angle of declination θ0 corresponding to the precession, and this function has no physical sense for other motions (when this angle is not preserved). Only the points on the surface of regular precessions have a physical sense. At the same time, each value of the constants C1, C2, h in the space of integrals in Fig. 3 corresponds to some motion, whether this point is situated on the surface of regular precessions or not, and this is important for the qualitative analysis.

Remark 4.
One of the sections of the three-dimensional diagram by a plane h = const and the corresponding gyroscopic functions are presented in paper [7].

The qualitative analysis of the motion of the apexes

The behavior of the angle of proper rotation ϕ and the angle of precession ψ, which together with θ determine the motion of the apexes, is defined by relations (2.25). An important feature of this problem is the two-frequency behavior of each of these angles, which is not usual for integrable systems. For example, for the Kovalevskaya top the angle ψ(t) is defined by three frequencies [3]. In our case this phenomenon is connected with the existence of two methods of reduction with respect to the symmetries of the system, (2.12) and (2.15). From the geometrical point of view, the whole space of the variables M, α, β, γ is foliated into three-dimensional tori defined as the joint level surfaces of the integrals C_1, C_2, h and the geometrical integrals. The motion represents a winding of the three-dimensional torus with frequencies ω_θ, ω_ϕ, ω_ψ [24]. (For the reduced systems (2.12) and (2.15) the corresponding tori are two-dimensional.) Since the frequencies depend only on the constants of the first integrals, all motions on a torus have identical frequencies, which is not evident for nonholonomic systems. Even for integrable nonholonomic systems on two-dimensional tori the motion can be a non-uniform rectilinear winding and, generally speaking, mixing is possible (see paper [4]). Practically, these arguments prove that the given system is Hamiltonian in the analytical sense (though the Hamiltonian function can differ from the energy (2.9)) [24]. Moreover, from the purely analytical point of view, near a nonsingular torus the system becomes Hamiltonian in an infinite number of ways [16]. The reduced system (2.19) is Hamiltonian with some algebraic nonlinear bracket (see [5]); however, the possibility of lifting this structure to the systems (2.12) and (2.15) has not yet been investigated.
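The two-frequency windings described here can be illustrated numerically. The sketch below (with illustrative frequencies and Fourier coefficients, not derived from the disk equations) integrates a velocity of the quasiperiodic form Ż = Σ_n v_n e^{i(ω_ψ + nω_θ)t} in closed form: every non-resonant term stays bounded, while a vanishing combined frequency ω_ψ + nω_θ = 0 produces a secular drift, which is exactly the dichotomy analyzed for the point of contact in the next subsection.

```python
import numpy as np

def contact_point(t, omega_psi, omega_theta, v):
    """Z(t) for dZ/dt = sum_n v[n] * exp(i*(omega_psi + n*omega_theta)*t).

    v: dict {n: v_n} of illustrative Fourier coefficients.
    A resonant term (omega_psi + n*omega_theta == 0) integrates to v_n * t.
    """
    z = np.zeros_like(t, dtype=complex)
    for n, vn in v.items():
        w = omega_psi + n * omega_theta
        if abs(w) < 1e-12:                    # resonance: secular drift
            z += vn * t
        else:                                 # bounded oscillatory term
            z += vn * (np.exp(1j * w * t) - 1) / (1j * w)
    return z

t = np.linspace(0.0, 200.0, 4001)
v = {-1: 0.3 + 0.1j, 0: 0.2, 1: 0.15j}

z_bounded = contact_point(t, omega_psi=1.0, omega_theta=0.37, v=v)  # no resonance
z_drift = contact_point(t, omega_psi=1.0, omega_theta=1.0, v=v)     # n = -1 resonant

print(np.abs(z_bounded).max(), np.abs(z_drift).max())
```

In the non-resonant case |Z| stays of order one for all time, while in the resonant case it grows linearly, i.e. the point of contact drifts away secularly.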
The analysis of the motion of the point of contact

For the analysis of the motion of the point of contact we decompose the velocity (2.26) into a Fourier series with respect to time. Then from (2.25) we get

Ż = Σ_{n∈Z} v_n e^{i(ω_ψ + nω_θ)t}.

Integrating with respect to time we obtain

Z(t) = Z_0 + e^{iω_ψ t} Σ_{n∈Z} (v_n / (i(ω_ψ + nω_θ))) e^{inω_θ t}.

Thus, if ω_ψ + nω_θ ≠ 0 for all n, then in the frame of reference rotating about the point Z_0 with the angular velocity ω_ψ the point of contact traces some closed curve (see [24], [20]). Various types of such closed curves, and the trajectories corresponding to them in the fixed space, are presented in figure 8. At a resonance ω_ψ + nω_θ = 0 we observe a secular drift of the point of contact. Graphs of the frequencies ω_ψ(h), ω_θ(h), ω_ϕ(h) at fixed values of the integrals C_1, C_2 are presented in fig. 9. They show that the relation ω_ψ + nω_θ = 0 can be fulfilled both in the case of existence of one regular precession and of three. Moreover, at the same energy some initial conditions lead to a secular drift while others do not (see fig. 8). Since all frequencies depend only on the values of the first integrals, the relation ω_ψ + nω_θ = 0 defines in the three-dimensional space of integrals a two-dimensional manifold corresponding to the unbounded trajectories of the disk.

Fig. 1. The equation (2.2) describes the evolution of the vector of moment of momentum of the body with respect to the point of contact M, and (2.3) the evolution of the fixed basis vectors in the body-fixed frame of reference. The motion of the center of mass can be obtained in quadratures from the solutions of these equations. Dividing the third equation by the first and choosing a new independent variable, the angle of nutation θ = arccos γ_3, we get a system of linear equations.
Fig. 2. Phase portraits of the system (2.23) at various values of C_1 and C_2. Left: the case of existence of three periodic solutions (C_1 = 0.05, C_2 = 0.01).
Right: the case of existence of one periodic solution (C_1 = 0.08, C_2 = −0.02).
Fig. 3. The surface of regular precessions. Parameters of the system are I_1 = 0.25, I_2 = 0.5, R = 1, m = 1, g = 1.
Fig. 4. Sections of the surface of regular precessions represented in fig. 3 by planes C_1 + C_2 = const.
Fig. 5. Sections of the surface of regular precessions represented in fig. 3 by planes C_1 − C_2 = const.
Fig. 6. Various types of the gyroscopic function for the section of the surface of regular precessions by the plane C_1 + C_2 = 0.08.
Fig. 7. Various types of the gyroscopic function for the section of the surface of regular precessions by the plane C_1 − C_2 = 0.08.
Fig. 8. Trajectories of the point of contact of the disk in the absolute space at various values of the integral of energy. Parameters of the system correspond to figure a). The closed trajectories in the frame of reference rotating with the angular velocity ω_ψ (see explanations in the text) are presented in the right upper corner of each figure (except the case of infinite motions). In figures a) and b) we present various types of motion of the disk at the energy h = 0.86. Figures c) and d) correspond to the energy h = 0.92217, when one of the motions becomes resonant (ω_θ = ω_ψ^(2)) and the secular drift (fig. d) is observed. The increase of the energy in figures e) and f) to h = 0.961 makes both types of motion bounded again. In figure g) the motion of the disk is presented at h = 1.1, after the merging of two domains of possible motions corresponding to various types of motion. The infinite motion in figure h) corresponds to the resonance ω_ψ = 2ω_θ at the energy h = 1.18169. In figure i) the motion of the point of contact of the disk is presented after a further increase of the energy up to h = 1.4.
Remark 5. N. K. Moschuk in [25] observed a related phenomenon while studying nonholonomic Chaplygin systems possessing a number of first integrals linear in the velocities.
Fig.
9. Dependencies of the frequencies ω_θ, ω_ψ and ω_ϕ on the energy at I_1 = 1 and various values of the integrals C_1, C_2. In figures a) and c) the areas marked by rectangles are separately presented at an increased scale. In the range of energies where two different types of motion are possible in the absolute space we denote the frequencies by ω_ψ^1, ω_ϕ^1 and ω_ψ^2, ω_ϕ^2. The resonance energies are marked on the graphs by thick dots, with the orders of the resonances indicated near the dots. The values of the integrals for the dependencies presented here are: a) C_1 = 0.04, C_2 = −0.02; b) C_1 = 0.09, C_2 = −0.07; c) C_1 = 0.065, C_2 = 0.055; d) C_1 = 0.09, C_2 = 0.03.

At the same time, the existence of a natural (algebraic) Poisson structure with a Hamiltonian defined by the energy (2.9) remains an open problem; A. V. Borisov and I. S. Mamaev showed that the reduced system admits an algebraic nonlinear Poisson bracket (see [5]). Thus, for almost all initial conditions (except the indicated manifold) all trajectories of the disk are bounded. We can consider this result to be opposite to the one obtained from the research of the dynamics of the point of contact for the Chaplygin ball on a horizontal plane (see [12]), where the majority of trajectories, on the contrary, were unbounded.

References

[1] A. A. Afonin, V. V. Kozlov. Problem on falling of disk moving on horizontal plane. Proc. of Russian Acad. of Sciences, Rigid body mech., no. 1, 1997, pp. 7-13.
[2] P. Appel. Sur l'intégration des équations du mouvement d'un corps pesant de révolution roulant par une arête circulaire sur un plan horizontal; cas particulier du cerceau. Rendiconti del circolo matematico di Palermo, v. 14, 1900, pp. 1-6.
[3] A. V. Borisov, I. S. Mamaev. Rigid body dynamics. Izhevsk: RCD publ., 2001, 384 pp.
[4] A. V. Borisov, I. S. Mamaev. Poisson structures and Lie algebras in Hamiltonian mechanics. Izhevsk: RCD publ., 1999, 464 pp.
[5] A. V. Borisov, I. S. Mamaev. The rolling of rigid body on a plane and sphere. Hierarchy of dynamics. Regular and Chaotic Dynamics, v. 7, no. 1, 2002, pp. 177-200.
[6] S. A. Chaplygin. On motion of heavy rigid body of revolution on horizontal plane. Proc. of the Physical Sciences' section of the Society of Amateurs of Natural Sciences, v. 9, no. 1, 1897, pp. 10-16.
[7] R. Cushman, J. Hermans, D. Kemppainen. The rolling disk. University of Calgary, Preprint, 1995, 51 pp.
[8] Yu. N. Fedorov. On disk rolling on absolutely rough surface. Proc. of USSR Acad. of Sciences, Rigid body mech., no. 4, 1987, pp. 67-75.
[9] N. M. Ferrers. Extension of Lagrange's equations. Quart. J. of pure and applied Mathematics, v. 12, no. 45, 1872, pp. 1-5.
[10] E. G. Gellop. On the rise of a Spinning Top. Proc. Cambr. Phylos. Soc., v. 19, pt. 3, 1904, pp. 356-373.
[11] A. V. Karapetyan. Stability of stationary motions. M.: Editorial URSS, 1998, 168 pp.
[12] A. A. Kilin. The Dynamics of Chaplygin Ball: the Qualitative and Computer Analysis. Regular and Chaotic Dynamics, v. 6, no. 3, 2001, pp. 291-306.
[13] S. N. Kolesnikov. On rolling of disk on horizontal plane. MGU Bull., Math. & Mech. Series, no. 2, 1985.
[14] D. Korteweg. Extrait d'une lettre à M. Appel. Rendiconti del circolo matematico di Palermo, v. 14, 1900, pp. 7-8.
[15] V. V. Kozlov. On integration theory of nonholonomic mechanics equations. Advances of mech., v. 8, no. 3, 1985, pp. 85-101.
[16] V. V. Kozlov. Liouvillian property of invariant measures of completely integrable systems and the Monge-Ampère equation. Math. notes, v. 53, no. 4, 1993.
[17] V. V. Kozlov. Methods of qualitative analysis in rigid body dynamics. Izhevsk: RCD publ., 2000, 256 pp.
[18] V. V. Kozlov. On motion of disk on an inclined plane. Proc. of Russian Acad. of Sciences, Rigid body mech., no. 5, 1996, pp. 29-35.
[19] V. V. Kozlov. Symmetries, topology, and resonances in Hamiltonian mechanics. Izhevsk: Udm. State Univ. publ., 1995.
[20] V. V. Kozlov, N. N. Kolesnikov. On theorems of dynamics. Appl. Math. & Mech., v. 42, no. 1, 1978.
[21] A. S. Kuleshov. On stationary rollings of disk on rough surface. Appl. Math. & Mech., v. 65, no. 1, 2001.
[22] K. Magnus. Gyroscope: theory and applications. M.: Mir, 1974, 526 pp.
[23] A. P. Markeev. Dynamics of body contacting hard surface. M.: Nauka, 1992, 336 pp.
[24] N. K. Moschuk. Qualitative analysis of motion of heavy rigid body of rotation on absolutely rough plane. Appl. Math. & Mech., v. 52, no. 2, 1988, pp. 203-210.
[25] N. K. Moschuk. On reduction of equations of motion of some nonholonomic Chaplygin systems to Lagrangian and Hamiltonian equations. Appl. Math. & Mech.
[26] Yu. I. Neimark, N. A. Fufaev. Dynamics of nonholonomic systems. M.: Nauka, 1967, 591 pp.
[27] O. M. O'Reilly. The dynamics of rolling disks and sliding disks. Nonlinear Dynamics, v. 10, 1996, pp. 287-305.
[28] G. M. Slesser. Notes on rigid dynamics. Quart. J. of mathematics, v. 4, 1861, pp. 65-67.
[]
[ "Automatic Modulation Recognition of PSK Signals with Sub-Nyquist Sampling Based on High Order Statistics" ]
[ "Zhengli Xing [email protected]", "Jie Zhou", "Jiangfeng Ye", "Jun Yan", "Jifeng Zou", "Lin Zou", "Qun Wan" ]
[ "Institute of Electronic Engineering, Academy of Engineering Physics, Mianyang, China", "School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu, China" ]
[]
The sampling rate required in the Nth Power Nonlinear Transformation (NPT) method is typically much greater than the Nyquist rate, which causes a heavy burden for the Analog to Digital Converter (ADC). Taking advantage of the sparse property of PSK signals' spectrum under NPT, we develop the NPT method for PSK signals with Sub-Nyquist rate samples. In this paper, combining the NPT method with Compressive Sensing (CS) theory, frequency spectrum reconstruction of the Nth power nonlinear transformation of PSK signals is presented, which can be further used for AMR and rough estimations of unknown carrier frequency and symbol rate.
10.1109/isspit.2014.7300563
[ "https://arxiv.org/pdf/1501.00158v1.pdf" ]
14,449,696
1501.00158
b9e7d8442c5ae2ff7aded0ca8fd45f29d8285002
Automatic Modulation Recognition of PSK Signals with Sub-Nyquist Sampling Based on High Order Statistics

Zhengli Xing [email protected], Jie Zhou, Jiangfeng Ye, Jun Yan, Jifeng Zou, Lin Zou, Qun Wan

Institute of Electronic Engineering, Academy of Engineering Physics, Mianyang, China
School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu, China
Correspondence: Mailbox 919-513, Mianyang, Sichuan Province 621900, China

Keywords: compressive sensing; PSK signals; modulation classification; Nth power nonlinear transformation

The sampling rate required in the Nth Power Nonlinear Transformation (NPT) method is typically much greater than the Nyquist rate, which causes
a heavy burden for the Analog to Digital Converter (ADC). Taking advantage of the sparse property of PSK signals' spectrum under NPT, we develop the NPT method for PSK signals with Sub-Nyquist rate samples. In this paper, combining the NPT method with Compressive Sensing (CS) theory, frequency spectrum reconstruction of the Nth power nonlinear transformation of PSK signals is presented, which can be further used for AMR and rough estimations of unknown carrier frequency and symbol rate.

I. INTRODUCTION

In cognitive radio (CR) applications, Automatic Modulation Recognition (AMR) is a basic task [1], which performs the classification of different types of modulation. As a preprocessing step for demodulation, AMR can be widely used for spectral monitoring and user identification in spectrum sensing. As an active research topic, various techniques, such as Wavelet/Fourier transforms and cumulants, have been proposed [1]-[3]. However, according to the Nyquist sampling rule, most of the typical methods require a high sampling rate, which brings a heavy burden for the ADC, especially for wideband signals. Furthermore, AMR and carrier frequency and symbol rate estimation extract only quite little information from the enormous amount of data.

Recently, Compressive Sensing (CS) has been introduced as a new sampling theory [4]-[6], which can recover signals from Sub-Nyquist rate samples. In CR, CS has been introduced to relieve the heavy burden on the ADC [9] by exploiting the sparsity of wireless signals. Currently, most applications of CS focus on reducing the sampling rate and reconstructing signals precisely, while little attention has been paid to AMR. Lim and Wakin [6] tried to estimate the Nth power spectrum without reconstruction.
However, according to the Compressive Signal Processing (CSP) theory proposed by Davenport, Wakin, et al. [7], this approach does not have strong anti-noise ability and only exploits the peak of the frequency spectrum, so it can only discriminate BPSK, QPSK and 8PSK signals. Similarly, Chai proposed a new method in [8] to estimate compressive higher-order cyclostationary statistics, which can be easily affected by noise; based on Tian's work [9], Zhou and others derived some other methods [10]-[11] by reconstructing the cyclic spectrum from Sub-Nyquist samples, but the calculations are quite complicated.

In general, the task of AMR consists of two main parts: feature extraction and classification. In this paper, after reconstructing the frequency spectrum of NPT signals, we extract the primary elements. Then, with the Support Vector Machine (SVM), we can implement the classification effectively.

The remainder of this paper is organized as follows. Section II analyzes the 2nd, 4th and 8th nonlinear transformations of PSK-type signals with uniform sampling in the time domain. Section III describes the relationship between Sub-Nyquist rate samples and the frequency spectrum of NPT signals. Section IV introduces the SVM and proposes the classification strategy based on the primary elements of the spectrum, as well as rough estimation methods for the carrier frequency and symbol rate. Simulation results are presented in Section V. Finally, conclusions are made in Section VI.

II. FEATURES OF NPT FOR PSK TYPE SIGNALS

Determining the number of discrete peaks in the frequency spectrum of signals that have undergone NPT is the key step. According to that, the modulation type can be ascertained. Besides, exploiting the locations of the peaks, we can get rough estimations of the carrier frequency f_c and the symbol rate R_s. Here, based on the previous work [4], we extend the analysis to 8PSK and OQPSK, and we summarize the spectrum features of NPT signals for PSK modulations.

A.
Signal Model

For MPSK (M = 2, 4, 8), the signal model is as follows [11]:

s_MPSK(t) = A Σ_{n=−∞}^{∞} g(t − nT_s) exp( j 2π(m_n − 1)/M + j 2π f_c t )   (1)

where A is the amplitude, T_s = 1/R_s is the symbol period, M ∈ {2, 4, 8} is the number of unique phases used, m_n is the n-th transmitted symbol, f_c is the carrier frequency, g(t) is the square-root raised cosine (SRRC) filter, and α is its roll-off factor.

For OQPSK, the signal model is [12]:

s_OQPSK(t) = A [ I(t) + j Q(t) ] exp( j 2π f_c t )   (2)

I(t) = Σ_n a_n g(t − nT_s),  Q(t) = Σ_n b_n g(t − nT_s − T_b)   (3)

where T_b = T_s/2 is the bit period, i.e. the quadrature branch is staggered by half a symbol.

For MSK, the signal model is [12]:

s_MSK(t) = A [ I + jQ ] e^{ j 2π f_c t }   (4)

with

I = Σ_n a_n rect( (t − (2n − 1)T_b) / (2T_b) ) cos( πt / (2T_b) ),
Q = Σ_n b_n rect( (t − 2nT_b) / (2T_b) ) sin( πt / (2T_b) )   (5)

where a_n, b_n ∈ {−1, 1} are i.i.d. (independently identically distributed) random sequences and rect(·) denotes the rectangular function.

B. Sparsity of Signals after NPT

As given in [4], we can extend the analysis method to 8PSK and OQPSK signals, whose spectrum features are shown in Table 1 and Table 2. According to Table 2, the number of discrete peaks differs among PSK signals, and the peaks' locations are determined by f_c and R_s. Thus, rough estimations of f_c and R_s can be obtained from those locations. As an example, Fig. 1(a) clearly shows that the spectrum of a BPSK signal raised to the power of 2 is sparse: only three prominent discrete peaks exist in the spectrum, which are in fact what we care about. Since the spectrum is approximately sparse, CS theory can be applied. What should also be considered is the influence of the roll-off factor, which is discussed in Section III-C below.

III. FEATURES FROM SUB-NYQUIST SAMPLING

A. CS Sampling

In CS theory, high-rate uniform sampling is replaced by low-rate random sampling, which can significantly reduce the number of samples [4]-[6]. Here, following CS theory, we design the measurement matrix in our application as a Gaussian random matrix, whose coherence with a fixed orthonormal basis is very low [4]-[6].
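The ideas of Sections II-III can be exercised end to end in a few lines of NumPy. The sketch below is illustrative only: it uses rectangular pulses instead of SRRC shaping, made-up rates, and a small orthogonal matching pursuit (OMP) loop as a stand-in for the l1/CVX solver used in the paper. With rectangular pulses a squared BPSK signal collapses to a single tone at 2f_c, so its DFT-domain vector is 1-sparse and can be recovered from M << L Gaussian random measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- BPSK signal and its 2nd power (Section II); all rates illustrative ---
fs, fc, Rs = 64.0, 8.0, 2.0
bits = rng.choice([-1.0, 1.0], 16)
b = np.repeat(bits, int(fs / Rs))        # rectangular pulses (not SRRC)
t = np.arange(b.size) / fs
z = b * np.exp(2j * np.pi * fc * t)      # complex BPSK at carrier fc
z2 = z**2                                # NPT: b**2 == 1, so one tone at 2*fc
L = z2.size

# --- Compressive sampling y = Phi @ z2 = Phi @ Psi @ f_2, as in eq. (7) ---
M = 64                                   # Sub-Nyquist sample count, M << L
Phi = rng.standard_normal((M, L)) / np.sqrt(M)
Psi = np.fft.ifft(np.eye(L), axis=0)     # DFT synthesis matrix: z2 = Psi @ fft(z2)
y = Phi @ z2

# --- Sparse recovery by OMP (stand-in for the paper's l1/CVX solver) ---
def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        corr = np.abs(A.conj().T @ residual)
        corr[support] = 0.0
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    f_hat = np.zeros(A.shape[1], dtype=complex)
    f_hat[support] = coef
    return f_hat

f_hat = omp(Phi @ Psi, y, k=1)           # the squared signal is 1-sparse here
peak_bin = int(np.argmax(np.abs(f_hat)))
freqs = np.fft.fftfreq(L, d=1.0 / fs)
print(freqs[peak_bin])                   # the recovered line sits at 2*fc = 16 Hz
```

With SRRC pulses the squared signal would instead carry the discrete peaks of Fig. 1(a), and k in the OMP call would be raised accordingly (3 or 5).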
Here, we define z to be a hypothetical vector of uniform Nyquist-rate samples of the received signal, and z_N to be the vector after NPT. Then we have:

z_N[l] = (z[l])^N,  l = 0, 1, ..., L − 1   (6)

Just as shown in Table 2, PSK signals raised to a certain power are sparse in the frequency domain. Thus, we choose the DFT synthesis matrix as the sparsifying matrix, and the compressive sampling procedure can be written as:

y = Φ z_N = Φ Ψ f_N   (7)

where Φ is the M × L measurement matrix (M < L), Ψ is the DFT synthesis matrix, f_N is the vector of DFT coefficients of z_N, and y is the vector of Sub-Nyquist samples. Thanks to the sparsity of f_N, it can be recovered with CS reconstruction methods.

B. Spectrum Reconstruction

The problem of recovering f_N from y is an NP-hard problem. It can be converted into an l1-norm optimization problem in CS [15] as:

f̂_N = arg min ||f_N||_1  s.t.  y = Φ z_N = Φ Ψ f_N   (8)

Here, we solve the convex problem by CVX [15]. In order to reduce the influence of noise on the frequency spectrum, a piecewise smoothing procedure is used:

f̄_N(i) = (1/(2l + 1)) ( f_N(i) + Σ_{j=1}^{l} [ f_N(i − j) + f_N(i + j) ] )   (9)

where f̄_N denotes the smoothing result of f_N and l is the number of smoothing points. The processing procedure of (9) can be rewritten as a matrix operation:

f̄_N = B f_N   (10)

where B is the L × L band matrix carrying out the averaging in (9): its i-th row has the entries 1/(2l + 1) in columns i − l, ..., i + l and zeros elsewhere, with the obvious truncation at the boundaries.   (11)

Table 2. Presence of discrete spectral peaks after NPT (Y = present, N = absent; k an integer).

Transform | Peak location        | BPSK | QPSK | 8PSK | OQPSK | MSK
(.)       | f_c + k R_s          |  N   |  N   |  N   |  N    |  N
(.)^2     | 2 f_c                |  Y   |  N   |  N   |  N    |  N
(.)^2     | 2 f_c + (k+0.5) R_s  |  N   |  N   |  N   |  N    |  Y
(.)^2     | 2 f_c + k R_s        |  Y   |  N   |  N   |  Y    |  N
(.)^4     | 4 f_c                |  Y   |  Y   |  N   |  Y    |  N
(.)^4     | 4 f_c + (k+0.5) R_s  |  N   |  N   |  N   |  N    |  N
(.)^4     | 4 f_c + k R_s        |  Y   |  Y   |  N   |  Y    |  Y

The number of dominant discrete peaks is 3 (or 5), depending on the modulation type.

With the smoothing procedure, the model of (8) can be adapted as:

f̂_N = arg min ||B f_N||_1  s.t.  y = Φ z_N = Φ Ψ f_N   (12)

For example, if N = 2, the model can be rewritten as:

f̂_2 = arg min ||B f_2||_1  s.t.  y = Φ z_2 = Φ Ψ f_2   (13)

By solving the model of (12), the smoothed frequency spectrum of signals after NPT can be obtained, which reduces the impact of Gaussian white noise. The spectrum of a BPSK signal raised to power 2 is shown in Fig. 1(a). Fig.
1(b) is the spectrum reconstructed by CVX. It is obvious that the three major discrete peaks can be completely recovered.

C. The Roll-off Factor Effect

Here, for analytical simplicity, we define the energy ratio as the ratio of the peaks' energy to that of the entire signal, which measures whether the peaks are high enough to be used for AMR and for CS reconstruction methods to be applied:

r_p = E_p / E_s    (14)

where E_p is the energy of the discrete peaks and E_s is the entire signal's energy. In our simulation, the BPSK signal contains 2048 symbols, with f_c = 0.5, R_s = 0.8, a Nyquist rate of 6.4, and a roll-off factor ranging in (0, 1). The r_p of the peaks at 2f_c and at 2f_c ± R_s of the BPSK signal raised to the power of 2 is shown in Fig. 2. As shown in Fig. 2, r_p increases as the roll-off factor rises. When α is too small, the r_p of the 2f_c ± R_s peaks becomes too small for AMR, and the reconstruction cannot work either.

IV. CLASSIFICATION AND ESTIMATION

Since the spectrum vector is of high dimension, we use the Support Vector Machine (SVM) to implement AMR efficiently.

A. SVM

SVM can be used both for linear and nonlinear classification with a corresponding kernel function. Supposing the original data are x_i, i = 1, ..., n, and the mapping function is Φ(·), then we are able to map the original data into a high-dimensional feature space, where the classification can be made. The function K(x_i, x_j) = Φ(x_i) · Φ(x_j) is called the kernel function. In SVM theory, there are various kernel functions, such as linear, polynomial and Gaussian RBF, which are shown in Table 3. Thanks to its good performance, the RBF kernel is used in our simulation.

Table 3. Formulation of commonly used kernel functions

Kernel        K(x_i, x_j)
Linear        x_i^T x_j
Polynomial    (γ x_i^T x_j + r)^d
Gaussian RBF  exp( −||x_i − x_j||^2 / (2σ^2) )

However, practical classification problems often contain more than two classes. A common way to construct a k-class SVM is to combine several binary classifiers together.
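For reference, the Gaussian RBF entry of Table 3 can be evaluated over a whole feature set at once. A small sketch with toy two-dimensional points (our own example, not the spectrum features of the paper):

```python
import numpy as np

def rbf_gram(X, sigma):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)   # pairwise squared distances
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

X = np.array([[0.0, 0.0], [3.0, 4.0]])
K = rbf_gram(X, sigma=5.0)
print(K[0, 1])      # ||x_0 - x_1||^2 = 25, so this is exp(-25/50) = exp(-0.5)
```

A valid Gram matrix like this is what kernel SVMs such as LIBSVM consume internally (or directly, via their precomputed-kernel mode).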
Some popular methods, such as 'one-against-one', 'one-against-all' and 'DDAGSVM', can be used to solve the k-class problem [19]. In our simulation below, we use the SVM toolbox LIBSVM, provided by Chang and Lin [20], which uses the 'one-against-one' strategy.

B. AMR Strategy

As shown in Fig. 1, the energy of the signals is concentrated in those discrete peaks. Furthermore, in order to decrease the computational complexity, we need to reduce the dimension of the data. There are many classical methods for this purpose, such as PCA and MDS [21]. Here, we propose a simple method to extract the main elements of the spectrum instead of those high-complexity methods. From Table 2, there are at most 5 discrete peaks in the spectrum, and what concerns us is the energy of the peaks relative to the other parts. That means, instead of using the whole spectrum vector, we only need to retain its largest elements, the number of which is a preset parameter (set to 20 in our simulation below).

Fig. 3. The proposed AMR scheme

Besides, to distinguish BPSK signals, we only need the spectrum of the signal raised to the power of 2. With the power-2 and power-4 spectra, we can distinguish QPSK, OQPSK and MSK. As for 8PSK, we then make full use of the power-2, power-4 and power-8 spectra. The proposed AMR framework is shown in Fig. 3. We use a random sampler at a sub-Nyquist rate to sample the signal after NPT; feature reduction is performed after frequency spectrum reconstruction, and the lower-dimensional feature is sent to the SVM, which makes the final decision as a classifier.

C. Estimation of f_c and R_s

As given in Table 1, for each kind of signal except 8PSK, the locations of the discrete peaks are determined by f_c and R_s, and can therefore be used to roughly estimate f_c and R_s. Here, we take the estimation of f_c and R_s for QPSK as an instance; other kinds of signals can be handled in a similar way. Denote by A_1 < A_2 < A_3 the locations of the three dominant peaks of the fourth-power spectrum of the QPSK signal. It can then be calculated that:

f̂_c = A_2 / 4  or  (A_1 + A_3) / 8    (15)

R̂_s = |A_3 − A_1| / 2    (16)

V.
SIMULATIONS

The proposed methods are tested and verified in this section. The simulation scenario is set as follows: the symbol number is 1024, α = 0.5, f_c = 0.5, R_s = 0.8. Here, the Nyquist rate of uniform sampling is 6.4 to avoid aliasing, which means L = 8192, and the uniform samples correspond to the "Nyquist rate" curves in Fig. 5. Besides, the number of sub-Nyquist samples is M, and we define the compression ratio as:

β = M / L    (17)

Here, we set β = 0.3. In Fig. 5(a) and Fig. 5(b), we report the rate of correct classification. What can be seen from Fig. 5(a) and Fig. 5(b) is that, for a given accuracy rate of classification, AMR using sub-Nyquist sampling requires about 2 dB (5 dB for BPSK) more SNR than uniform sampling. From Fig. 5(c), the gap is also about 2 dB for the estimation accuracy of the carrier frequency. As can be seen from the figures, when the SNR is low, the proposed method does not work as well as the "Nyquist rate" method. This is because when the noise energy is high, the spectrum is no longer sparse, and so CS theory no longer applies.

As discussed above, while reducing the computational load, the method proposed in [6] can only discriminate BPSK, QPSK and 8PSK, while the method proposed here extends the analysis to OQPSK and MSK. Some other signals not mentioned here, such as FSK and ASK, whose spectra are distinguishable [3], can also be classified with the proposed method. What is more, the statistical characteristics used here depend strongly on the symbol number: [6] used 5000 symbols in its simulation, while in this paper we only use 1024 symbols to finish the same task, and the SNR we need for the same rate of correct classification is much lower.

Fig. 1. Simulation results for BPSK signals after NPT.

Fig. 2. The effect of the roll-off factor on the energy ratio of peaks r_p.

Fig. 5(a) and Fig.
5(b) depict the rate of correct classification versus varying signal-to-noise ratio (SNR). Fig. 5(c) shows the estimation results of the carrier frequency of the QPSK signal.

Fig. 4. Simulation results of AMR and estimation of f_c.

VI. CONCLUSION

Combining CS with the NPT method, we fulfill AMR and rough estimation of the carrier frequency and symbol rate. Simulation results show the effectiveness of the proposed method.

Table 1. Peak lines in different nonlinearities for PSK-type signals (N: none, Y: exists; k ∈ Z)

Nonlinearity  Frequency            BPSK  QPSK  8PSK  OQPSK  MSK
None          f_c                  N     N     N     N      N
None          f_c + kR_s           N     N     N     N      N
(.)^2         2f_c                 Y     N     N     N      N
(.)^2         2f_c + (k+0.5)R_s    N     N     N     N      Y
(.)^2         2f_c + kR_s          Y     N     N     Y      N
(.)^4         4f_c                 Y     Y     N     Y      N
(.)^4         4f_c + (k+0.5)R_s    N     N     N     N      N
(.)^4         4f_c + kR_s          Y     Y     N     Y      Y

Table 2. Number of discrete peaks for PSK-type signals

Nonlinearity  BPSK      QPSK      8PSK      OQPSK     MSK
None          0         0         0         0         0
(.)^2         3         0         0         2         2
(.)^4         3 (or 5)  3 (or 5)  0         3 (or 5)  2
(.)^8         3 (or 5)  3 (or 5)  3 (or 5)  --        --

J. Mitola and G. Q. Maguire, Jr., "Cognitive radio: making software radios more personal," IEEE Personal Communications, vol. 6, pp. 13-18, 1999.

O. A. Dobre, A. Abdi, and Y. Bar-Ness, "Survey of automatic modulation classification techniques: classical approaches and new trends," IET Communications, pp. 137-156, 2007.

J. Reichert, "Automatic classification of communication signals using higher order statistics," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), San Francisco, California, USA, 1992.

E. J. Candes and M. B. Wakin, "An Introduction to Compressive Sampling," IEEE Signal Processing Magazine, vol.
25, pp. 21-30, 2008.

M. F. Duarte, M. A. Davenport, M. B. Wakin, and R. G. Baraniuk, "Sparse Signal Detection from Incoherent Projections," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Toulouse, France, 2006.

C. W. Lim and M. B. Wakin, "Automatic modulation recognition for spectrum sensing using nonuniform compressive samples," in Proc. IEEE Int. Conf. Communications (ICC), Ottawa, Canada, 2012.

M. A. Davenport, P. T. Boufounos, M. B. Wakin, and R. G. Baraniuk, "Signal Processing With Compressive Measurements," IEEE Journal of Selected Topics in Signal Processing, vol. 4, pp. 445-460, 2010.

C. W. Lim and M. B. Wakin, "CHOCS: A Framework for Estimating Compressive Higher Order Cyclostationary Statistics," in Proc. SPIE Defense, Security, and Sensing (DSS), Baltimore, Maryland, USA, 2012.

T.
Zhi, "Cyclic Feature Based Wideband Spectrum Sensing Using Compressive Sampling," in Proc. IEEE Int. Conf. Communications (ICC), Kyoto, Japan, 2011.

Z. Lei and M. Hong, "Wavelet Cyclic Feature Based Automatic Modulation Recognition Using Nonuniform Compressive Samples," in Proc. IEEE Vehicular Technology Conference (VTC Fall), pp. 1-6, 2013.

Z. Lingchen, L. Chenchi, and J. H. McClellan, "Cyclostationarity-based wideband spectrum sensing using random sampling," in Proc. IEEE Vehicular Technology Conference (VTC Fall), Las Vegas, USA, 2013.

L. Hong and K. C. Ho, "Modified CRLB on the modulation parameters of OQPSK signal and MSK signal," in Proc. IEEE Wireless Communications and Networking Conference (WCNC), Chicago, IL, USA, 2000.

E. J. Candes and T. Tao, "Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?" IEEE Transactions on Information Theory, vol. 52, pp. 5406-5425, 2006.

M. Rudelson and R. Vershynin, "On sparse reconstruction from Fourier and Gaussian measurements," in Proc. Conf. on Information Sciences and Systems, 22-24 March 2006.
E. J. Candes, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics, vol. 59, pp. 1207-1223, 2006.

CVX Research, Inc., "CVX: Matlab software for disciplined convex programming, version 2.0 beta," software available at http://cvxr.com/cvx, Sept. 2012.

V. Vapnik, The Nature of Statistical Learning Theory. Springer, 1995.

C. Latry, C. Panem, and P. Dejean, "Cloud detection with SVM technique," in Proc. IEEE Int. Geoscience and Remote Sensing Symposium (IGARSS), Barcelona, Spain, 2007.

L. Hua and X. Z. Yong, "An algorithm of soft fault diagnosis for analog circuit based on the optimized SVM by GA," in Proc. IEEE Int. Conf. Electronic Measurement & Instruments (ICEMI), Beijing, China, 2009.

C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm

S. Chauhan and K. V. Prema, "Effect of dimensionality reduction on performance in artificial neural network for user authentication," in Proc. IEEE Int.
Advance Computing Conference (IACC), Ghaziabad, India, 2013.
[]
[ "On Decentralized Policies for the Stochastic k-Server Problem", "On Decentralized Policies for the Stochastic k-Server Problem" ]
[ "Randy Cogill ", "Sanjay Lall " ]
[]
[]
In this paper we study a dynamic resource allocation problem which we call the stochastic k-server problem. In this problem, requests for some service to be performed appear at various locations over time, and we have a collection of k mobile servers which are capable of servicing these requests. When servicing a request, we incur a cost equal to the distance traveled by the dispatched server. The goal is to find a strategy for choosing which server to dispatch to each incoming request which keeps the average service cost as small as possible. In the model considered in this paper, the locations of service requests are drawn according to an IID random process. We show that, given a statistical description of this process, we can compute a simple decentralized state-feedback policy which achieves an average cost within a factor of two of the cost achieved by an optimal state-feedback policy. In addition, we demonstrate similar results for several extensions of the basic stochastic k-server problem.

Introduction

Recently, there has been great interest in the study of coordination strategies for teams of Unmanned Aerial Vehicles (UAVs). In particular, many researchers have focused on methods for designing efficient mission plans, under which a series of tasks can be carried out by a team of vehicles. A common high-level formulation of this type of problem consists of a series of waypoints that must be visited by the vehicles, with the goal of designing a strategy for visiting each of the waypoints in a manner which minimizes some measure of the overall travel time. When the set of locations to visit is known ahead of time, it is possible to plan the mission offline, and each vehicle can perform its own tasks without requiring communication among the vehicles [6, 9]. In a dynamic environment, where waypoints may appear as the system is in operation, the mission cannot be planned entirely ahead of time. Such a formulation is considered in [5].
However, due to limited computational and communication resources, it is generally not feasible to consider coordination strategies which require complete communication among the vehicles during system operation. The general problem considered in this paper is motivated by the problem of multi-vehicle coordination in a dynamic environment.
null
[ "https://export.arxiv.org/pdf/math/0605188v1.pdf" ]
16,542,227
math/0605188
3e341207a1d6f7f630eb4fee027dab782fb59d02
On Decentralized Policies for the Stochastic k-Server Problem

8 May 2006

Randy Cogill and Sanjay Lall

In this paper we study a dynamic resource allocation problem which we call the stochastic k-server problem. In this problem, requests for some service to be performed appear at various locations over time, and we have a collection of k mobile servers which are capable of servicing these requests. When servicing a request, we incur a cost equal to the distance traveled by the dispatched server. The goal is to find a strategy for choosing which server to dispatch to each incoming request which keeps the average service cost as small as possible. In the model considered in this paper, the locations of service requests are drawn according to an IID random process. We show that, given a statistical description of this process, we can compute a simple decentralized state-feedback policy which achieves an average cost within a factor of two of the cost achieved by an optimal state-feedback policy. In addition, we demonstrate similar results for several extensions of the basic stochastic k-server problem.

Introduction

Recently, there has been great interest in the study of coordination strategies for teams of Unmanned Aerial Vehicles (UAVs). In particular, many researchers have focused on methods for designing efficient mission plans, under which a series of tasks can be carried out by a team of vehicles. A common high-level formulation of this type of problem consists of a series of waypoints that must be visited by the vehicles, with the goal of designing a strategy for visiting each of the waypoints in a manner which minimizes some measure of the overall travel time. When the set of locations to visit is known ahead of time, it is possible to plan the mission offline, and each vehicle can perform its own tasks without requiring communication among the vehicles [6, 9].
In a dynamic environment, where waypoints may appear as the system is in operation, the mission cannot be planned entirely ahead of time. Such a formulation is considered in [5]. However, due to limited computational and communication resources, it is generally not feasible to consider coordination strategies which require complete communication among the vehicles during system operation. The general problem considered in this paper is motivated by the problem of multi-vehicle coordination in a dynamic environment. The well-known k-server problem is a natural model for dynamic task assignment problems with distance-based costs. Roughly speaking, the k-server problem is as follows. We are given a set of locations, and requests for services to be performed originate sequentially from these locations. We have a collection of k mobile servers which are capable of servicing these requests. At each point in time we must choose a server to serve the current request, and we incur a cost equal to the distance traveled by the dispatched server. The goal is to find a strategy for choosing which servers to dispatch to each incoming request which keeps the average service cost as small as possible. The k-server problem has been well studied for the problem formulation where the demand sequence may be arbitrary. Most of the literature on the k-server problem has focused on the competitive analysis of online algorithms. An online algorithm is a strategy which makes decisions based only on the knowledge of present and past requests, and competitive analysis seeks to compare the performance of specific online algorithms with the performance of an optimal strategy which knows the entire request sequence. The best known results for the k-server problem show that a particular online algorithm (which requires intensive computation to implement) achieves an overall cost which is essentially within a factor of 2k − 1 of optimal [7].
The reader is referred to [4] for a survey on the k-server problem, online algorithms and competitive analysis. In this paper we consider a variation of the k-server problem, where the locations of the service requests are drawn at random according to an IID random process. With this stronger assumption on the demand sequence, it is possible to show that a simple, practical strategy can achieve performance comparable to an optimal state-feedback strategy. Specifically, we show that, given a statistical description of the request sequence, we can compute a simple decentralized state-feedback policy which achieves an overall cost within a factor of two of the cost achievable by an optimal state-feedback policy. A decentralized policy has the property that, once the policy is determined, no communication between the servers is required for its implementation. In addition, we demonstrate similar results for several extensions of the basic stochastic k-server problem.

Problem formulation

In this section we give a precise formulation of the stochastic k-server problem. In our formulation, the servers are positioned and the requests originate at points in some finite set S. The set S is equipped with a metric d : S × S → R+. At each time step t ∈ Z+, service at some point x̃(t) ∈ S is requested and the k servers reside at the points x_1(t), . . . , x_k(t) ∈ S. Exactly one server must be chosen to service the request at x̃(t). If server u(t) is chosen, then a cost of d(x_{u(t)}(t), x̃(t)) is incurred and server u(t) is relocated to the point x̃(t). That is, x_{u(t)}(t + 1) = x̃(t) and x_i(t + 1) = x_i(t) for all 1 ≤ i ≤ k such that i ≠ u(t). The next service request x̃(t + 1) is then randomly chosen. In our model, each x̃(t) is drawn according to the probability mass function p : S → [0, 1], and is independent of x̃(τ) for all τ ≠ t.
The goal of the problem is to determine a strategy for assigning servers to service requests which keeps the average cost incurred in each time step as small as possible. This is illustrated in Figure 1. The problem described in the previous paragraph can be formulated as a finite state Markov decision process with average cost criteria (see, for example, [8]). In general, finite state Markov decision processes have a finite state space X and a finite set of actions U available at each time step. Taking action u ∈ U when in state y ∈ X incurs a cost r(y, u). After taking action u in state y, the system state in the next time period is x ∈ X with probability Pr(X(t + 1) = x | X(t) = y, U(t) = u).

Figure 1: Gray squares represent server locations and the black square represents the location of the service request. One server will move to the request location and incur a cost equal to the distance traveled.

A static state-feedback policy is a decision rule in which each u(t) is chosen according to a function µ : X → U of the current state x(t). The steady-state average per-period cost under the policy µ is

J(µ, x(0)) = lim_{t→∞} (1 / (t + 1)) Σ_{k=0}^{t} E[r(X(k), µ(X(k))) | X(0) = x(0)].

We denote a policy which minimizes this cost by µ*. The obvious formulation of the stochastic k-server problem as a Markov decision process has the state at time t given by x(t) = (x̃(t), x_1(t), . . . , x_k(t)), the current service request location together with the set of current server locations. The state space is a subset of S^{k+1} since we may exclude, without loss of generality, all states which have more than one server assigned to a particular location. The action u(t) taken at time t is the index of the chosen server, and the action space is U = {1, . . . , k}. The cost incurred at time t is r(x(t), u(t)) = d(x_{u(t)}(t), x̃(t)), the distance from the dispatched server to the current service request.
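To make the average-cost criterion concrete, the sketch below simulates the request process and estimates J for a simple (generally suboptimal) greedy policy that always dispatches the nearest server. The point set, metric, and request distribution are toy choices of ours, not from the paper:

```python
import numpy as np

def average_cost_greedy(points, p, k, T, seed=0):
    """Monte Carlo estimate of the average per-period cost of the greedy
    (nearest-server) state-feedback policy for the stochastic k-server problem."""
    rng = np.random.default_rng(seed)
    servers = np.array(points[:k], dtype=float)     # initial server locations
    total = 0.0
    for _ in range(T):
        req = points[rng.choice(len(points), p=p)]  # IID request location
        dists = np.abs(servers - req)               # points on a line: d(a, b) = |a - b|
        u = int(np.argmin(dists))                   # dispatch the nearest server
        total += dists[u]
        servers[u] = req                            # dispatched server relocates
    return total / T

points = [0.0, 1.0, 10.0, 11.0]                     # two well-separated clusters
p = [0.25, 0.25, 0.25, 0.25]
cost = average_cost_greedy(points, p, k=2, T=20000)
print(round(cost, 2))
```

On this instance greedy quickly settles into one server per cluster, so its long-run average cost approaches the within-cluster expected distance.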
Under a static state-feedback policy, the state evolves according to a Markov chain, since x̃ is an IID random process and, for each t, x_1(t), . . . , x_k(t) depends only on the previous state. Although algorithms exist for determining an optimal state-feedback policy for average cost Markov decision processes, they are generally not practical for this problem. One reason is that, under the formulation above, the system has |S|^{k+1} discrete states. Numerical computation of an optimal policy will be intractable even for relatively small values of |S| and k. Also, even if the optimal policy could be computed, this policy may not lend itself to practical implementation. In particular, the optimal policy may be structured so that the decision u(t) must be made based on the knowledge of all server locations at time t. This means that all servers would be required to communicate their current locations to all other servers before each decision could be made. In the next section, we will show that a fairly simple decentralized strategy can achieve an average per-period cost within a factor of two of an optimal centralized strategy.

Main result

In this section we will consider decentralized policies for the k-server problem. After introducing decentralized policies, we will show that there is a decentralized policy that can achieve performance close to that of an optimal policy. In a general state-feedback policy, the decision of which server to dispatch to a request depends on the location of the request as well as the current location of all servers. In contrast, a decentralized policy is a policy in which each server makes a decision to serve the current request without knowledge of the locations of other servers. Given that one and only one server must respond to each request, it is necessary that decentralized policies have a special 'partition' structure. That is, decentralized policies partition the set S into k disjoint sets S_1, . . .
, S_k, and server i serves location x̃ if and only if x̃ ∈ S_i. This is illustrated in Figure 2. It turns out that there is always a decentralized policy for any instance of the stochastic k-server problem that can achieve an average cost comparable to the optimal centralized cost. This policy, which we will call µ_d, is constructed as follows:

1. Compute the m*_1, . . . , m*_k which minimize Σ_{s∈S} p(s) min_{i∈U} {d(m_i, s)}.

2. Construct the disjoint partitions S_1, . . . , S_k, where S_i = {s ∈ S | d(m*_i, s) ≤ d(m*_j, s) for j = 1, . . . , k}.

3. Let µ_d(x) = i if x̃ ∈ S_i.

Performance of this policy relative to an optimal policy is characterized in the following theorem, which is the main result of this paper.

Theorem 1. The cost of the decentralized policy µ_d satisfies J(µ_d, x(0)) ≤ 2J(µ*, x(0)) for all x(0) ∈ X.

In order to prove Theorem 1, we will employ a result which allows one to generate performance bounds for general Markov decision processes. This result is proven in [3] for the case of general measurable state spaces, and is presented here for the finite state space case.

Lemma 2. Consider a finite state Markov decision process with average cost criteria. For any state-feedback policy µ : X → U and any function h_U : X → R,

J(µ, x(0)) ≤ sup_{x∈X} {r(x, µ(x)) + ∆_U(x)} for all x(0) ∈ X,

where ∆_U(x) = E[h_U(X(t + 1)) | X(t) = x, U(t) = µ(x)] − h_U(x). Moreover, for any function h_L : X → R,

J(µ*, x(0)) ≥ inf_{x∈X, u∈U} {r(x, u) + ∆_L(x, u)} for all x(0) ∈ X,

where ∆_L(x, u) = E[h_L(X(t + 1)) | X(t) = x, U(t) = u] − h_L(x).

Given the result in Lemma 2, we can now prove Theorem 1.

Proof of Theorem 1. First we will find a lower bound on J(µ*, x(0)) using Lemma 2 with h_L(x) = min_{i∈U} {d(x_i, x̃)}. For this choice of h_L,

∆_L(x, u) = Σ_{s∈S} p(s) min_{i∈U} {d(x_i(t + 1), s)} − min_{i∈U} {d(x_i, x̃)},

where x_i(t + 1) = x̃ if i = u, and x_i(t + 1) = x_i otherwise.
Since d(x_u, x̃) ≥ h_L(x) for all x ∈ X and u ∈ U, by Lemma 2 we have

J(µ*, x(0)) ≥ min_{m∈S^k} Σ_{s∈S} p(s) min_{i∈U} {d(m_i, s)}    (1)

for all x(0) ∈ X. Let m* denote the minimizing m in (1). Recall that the decentralized policy µ_d divides the set S into disjoint partitions S_1, . . . , S_k, where S_i = {s ∈ S | d(m*_i, s) ≤ d(m*_j, s) for j = 1, . . . , k}. We will find an upper bound on J(µ_d, x(0)) using Lemma 2 with

h_U(x) = 2 min_{i∈U} {d(m*_i, x̃)} + Σ_{i=1}^{k} d(x_i, m*_i).

For this choice of h_U,

r(x, µ_d(x)) + ∆_U(x) = 2 Σ_{s∈S} p(s) min_{i∈U} {d(m*_i, s)} + d(x_{µ_d(x)}, x̃) − d(x̃, m*_{µ_d(x)}) − d(x_{µ_d(x)}, m*_{µ_d(x)}).

Since d is a metric, d(x_{µ_d(x)}, x̃) ≤ d(x_{µ_d(x)}, m*_{µ_d(x)}) + d(m*_{µ_d(x)}, x̃), so the last three terms above sum to at most zero. By Lemma 2 and (1) we therefore have J(µ_d, x(0)) ≤ 2 Σ_{s∈S} p(s) min_{i∈U} {d(m*_i, s)} ≤ 2J(µ*, x(0)).

Computing decentralized policies

It was shown in the last section that finding a decentralized policy which achieves an average cost within a factor of two of optimal reduces to finding the m*_1, . . . , m*_k minimizing Σ_{s∈S} p(s) min_{i∈U} {d(m_i, s)}. In other words, a decentralized policy for our dynamic problem can be determined by solving a static combinatorial optimization problem. This static problem has been well studied, and is known as the k-median problem. The number of possible solutions to the k-median problem is |S|^k. Unfortunately, there are no known algorithms for finding an optimal solution with computational requirements that scale well with k. However, much study has been devoted to efficient approximation algorithms for this problem. In this section we will show that the result of the previous section can be combined with known results on approximation algorithms for the k-median problem to obtain efficient algorithms for computing decentralized policies for the stochastic k-server problem. Suppose m̂_1, . . . , m̂_k is a suboptimal solution to the k-median problem. Let µ̂_d be the decentralized policy constructed with the disjoint partitions S_1, . . . , S_k, where S_i = {s ∈ S | d(m̂_i, s) ≤ d(m̂_j, s) for j = 1, . .
. , k}. The following lemma relates the performance of the policy µ̂_d to the quality of the suboptimal k-median solution m̂_1, . . . , m̂_k.

Lemma 3. Suppose Σ_{s∈S} p(s) min_{i∈U} {d(m̂_i, s)} ≤ ρ Σ_{s∈S} p(s) min_{i∈U} {d(m*_i, s)} for some ρ ≥ 1. Then J(µ̂_d, x(0)) ≤ 2ρJ(µ*, x(0)) for all x(0) ∈ X.

Proof. We can find an upper bound on J(µ̂_d, x(0)) using Lemma 2 with h_U(x) = 2 min_{i∈U} {d(m̂_i, x̃)} + Σ_{i=1}^{k} d(x_i, m̂_i). Proceeding exactly as in the proof of Theorem 1, we obtain

J(µ̂_d, x(0)) ≤ 2 Σ_{s∈S} p(s) min_{i∈U} {d(m̂_i, s)} ≤ 2ρ Σ_{s∈S} p(s) min_{i∈U} {d(m*_i, s)} ≤ 2ρJ(µ*, x(0)).

In other words, an approximation algorithm which produces factor ρ suboptimal solutions to the k-median problem leads to a method for computing factor 2ρ suboptimal decentralized policies for the stochastic k-server problem. One particularly attractive approximation algorithm for the k-median problem is the local search heuristic of [1]. This algorithm is particularly simple to implement and capable of achieving an approximation ratio of 3 + ε for any ε > 0, where there is a tradeoff between computational requirements and approximation ratio.

Extensions

In this section we will discuss several extensions of the basic stochastic k-server problem and show that results analogous to Theorem 1 can be established.

Server-dependent processing costs

The first extension we consider generalizes the k-server model to the case where the servers are not equal in their processing capabilities. In particular, we model the cost of serving a job at location x̃ by server u at location x_u as r(x, u) = d_u(x_u, x̃) + c_u(x̃). The amount of resources consumed (time, fuel, etc.) by moving from location x_u to location x̃ depends on the server, and is modeled by the metric d_u if server u is chosen. Once the server arrives at the service location, an additional cost of c_u(x̃) ≥ 0 is incurred when processing the job at location x̃ by server u. As before, decentralized policies partition the state space and assign exactly one server to each partition.
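Combining Lemma 3 with local search gives a concrete recipe: approximately solve the k-median problem, then read off the partition policy. Below is a small self-contained single-swap sketch on a toy one-dimensional instance of our own (the 3 + ε guarantee quoted above requires the multi-swap version of [1]; a plain single-swap search carries a weaker constant):

```python
import itertools

def kmedian_cost(points, p, medians):
    """Expected distance from a random request to its nearest median."""
    return sum(pi * min(abs(s - m) for m in medians) for s, pi in zip(points, p))

def local_search_kmedian(points, p, k):
    """Single-swap local search: swap one median for one non-median point
    whenever the swap strictly lowers the k-median cost."""
    medians = list(points[:k])
    improved = True
    while improved:
        improved = False
        for i, cand in itertools.product(range(k), points):
            trial = medians[:i] + [cand] + medians[i + 1:]
            if kmedian_cost(points, p, trial) < kmedian_cost(points, p, medians) - 1e-12:
                medians, improved = trial, True
    return medians

points = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]       # two clusters on the line
p = [1 / 6.0] * 6                                 # uniform request distribution
medians = sorted(local_search_kmedian(points, p, k=2))
print(medians)                                    # locally optimal medians
```

The resulting medians induce the partition S_i of nearest points, i.e. the decentralized policy µ̂_d of Lemma 3.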
We have the following theorem regarding decentralized policies for the case of server-dependent processing costs.

Theorem 4. For the problem with server-dependent processing costs, there exists a decentralized policy µ_d such that J(µ_d, x(0)) ≤ 2J(µ*, x(0)) for all x(0) ∈ X.

Proof. Similar to the proof of Theorem 1, we will find a lower bound on J(µ*) using Lemma 2 with h_L(x) = min_{i∈U} {d_i(x_i, x̄) + c_i(x̄)}. For this choice of h_L, we obtain the lower bound

J(µ*, x(0)) ≥ min_{m∈S^k} Σ_{s∈S} p(s) min_{i∈U} {d_i(m_i, s) + c_i(s)}   (2)

for all x(0) ∈ X. Note that, unlike the proof of Theorem 1, the order in which m_1, . . . , m_k are indexed affects the lower bound in (2). Let m* denote the minimizing m in (2). The decentralized policy µ_d divides the set S into disjoint partitions S_1, . . . , S_k where S_i = {s ∈ S | d_i(m*_i, s) + c_i(s) ≤ d_j(m*_j, s) + c_j(s) ∀j}. We will find an upper bound on J(µ_d, x(0)) using Lemma 2 with h_U(x) = 2 min_{i∈U} {d_i(m*_i, x̄) + c_i(x̄)} + Σ_{i=1}^{k} d_i(x_i, m*_i). Denoting u_d = µ_d(x), we have

r(x, u_d) + ∆_U(x) = 2 Σ_{s∈S} p(s) min_{i∈U} {d_i(m*_i, s) + c_i(s)} − c_{u_d}(x̄) + d_{u_d}(x_{u_d}, x̄) − d_{u_d}(x̄, m*_{u_d}) − d_{u_d}(x_{u_d}, m*_{u_d}).

Since d_{u_d} is a metric, d_{u_d}(x_{u_d}, x̄) ≤ d_{u_d}(x_{u_d}, m*_{u_d}) + d_{u_d}(m*_{u_d}, x̄). Since c_{u_d}(x̄) ≥ 0, we have J(µ_d, x(0)) ≤ 2 Σ_{s∈S} p(s) min_{i∈U} {d_i(m*_i, s) + c_i(s)} ≤ 2J(µ*, x(0)).

Multiple requests per period

Next we consider the case when some fixed number n ≤ k of requests is generated and must be served in each time step. Specifically, at time step t, service is requested at some set of points x̄_1(t), . . . , x̄_n(t) ∈ S, and exactly n servers must be chosen to service these requests. Here the state at time t is given by x(t) = (x_1(t), . . . , x_k(t), x̄_1(t), . . . , x̄_n(t)). Let u_j(t) denote the index of the server chosen to service request j. For this case the action at time t is u(t) = (u_1(t), . . .
, u_n(t)) and the action space is U = {u ∈ {1, . . . , k}^n | u_i ≠ u_j for i ≠ j}. At time t, a cost of Σ_{j=1}^{n} d(x_{u_j(t)}(t), x̄_j(t)) is incurred. Server u_j(t) is then relocated to the point x̄_j(t), and the next set of requests is drawn according to some probability mass function p : S^n → [0, 1]. Decentralized policies for this case are a natural extension of the partition policies for the single request case. We will analyze the performance of the decentralized policy µ_d which is constructed as follows. Let

µ_d(x) = argmin_{u∈U} Σ_{j=1}^{n} d(m*_{u_j}, x̄_j).

In this policy, the server at point x_i is always associated with the median at point m*_i. When a new batch of requests arrives, each request is matched to one of the medians. No two requests are matched to the same median. If the request at point x̄_j is matched to the median at point m*_i, then this request is served by the server at point x_i. Note that, unlike the single request case, servers may move between partitions associated with several medians. This is because multiple requests may appear in the same partition, and must be served by multiple servers. Analysis of this case is much like that of the single request case, and is presented in the following theorem.

Theorem 5. The cost of the decentralized policy µ_d satisfies J(µ_d, x(0)) ≤ 2J(µ*, x(0)) for all x(0) ∈ X.

Proof. The lower bound on J(µ*, x(0)) is determined using Lemma 2 with h_L(x) = min_{u∈U} Σ_{j=1}^{n} d(x_{u_j}, x̄_j). For this choice of h_L, we obtain

J(µ*, x(0)) ≥ min_{m∈S^k} Σ_{s∈S^n} p(s) min_{u∈U} Σ_{j=1}^{n} d(m_{u_j}, s_j) = Σ_{s∈S^n} p(s) Σ_{j=1}^{n} d(m*_{µ_d(s)_j}, s_j)

for all x(0) ∈ X. The upper bound on J(µ_d, x(0)) is determined using Lemma 2 with h_U(x) = 2 min_{u∈U} Σ_{j=1}^{n} d(m*_{u_j}, x̄_j) + Σ_{i=1}^{k} d(x_i, m*_i).
Let x_i(t + 1) = x̄_j if i = µ_d(x)_j, and x_i(t + 1) = x_i otherwise. For this choice of h_U,

r(x, µ_d(x)) + ∆_U(x, µ_d(x)) = 2 Σ_{s∈S^n} p(s) Σ_{j=1}^{n} d(m*_{µ_d(s)_j}, s_j) + Σ_{j=1}^{n} d(x_{µ_d(x)_j}, x̄_j) + Σ_{i=1}^{k} d(x_i(t + 1), m*_i) − Σ_{i=1}^{k} d(x_i, m*_i) − 2 Σ_{j=1}^{n} d(m*_{µ_d(x)_j}, x̄_j)

= 2 Σ_{s∈S^n} p(s) Σ_{j=1}^{n} d(m*_{µ_d(s)_j}, s_j) + Σ_{j=1}^{n} d(x_{µ_d(x)_j}, x̄_j) + Σ_{j=1}^{n} d(m*_{µ_d(x)_j}, x̄_j) − Σ_{j=1}^{n} d(x_{µ_d(x)_j}, m*_{µ_d(x)_j}) − 2 Σ_{j=1}^{n} d(m*_{µ_d(x)_j}, x̄_j)

= 2 Σ_{s∈S^n} p(s) Σ_{j=1}^{n} d(m*_{µ_d(s)_j}, s_j) + Σ_{j=1}^{n} [ d(x_{µ_d(x)_j}, x̄_j) − d(x_{µ_d(x)_j}, m*_{µ_d(x)_j}) − d(m*_{µ_d(x)_j}, x̄_j) ].

Since d is a metric, d(x_{µ_d(x)_j}, x̄_j) ≤ d(x_{µ_d(x)_j}, m*_{µ_d(x)_j}) + d(m*_{µ_d(x)_j}, x̄_j) for all j, so every term of the last sum is nonpositive and therefore J(µ_d, x(0)) ≤ 2 Σ_{s∈S^n} p(s) Σ_{j=1}^{n} d(m*_{µ_d(s)_j}, s_j) ≤ 2J(µ*, x(0)).

It is worth noting that for the two extensions presented in this section, computing decentralized policies requires solving generalizations of the k-median problem. Whether any of the existing approximation algorithms for the k-median problem can be extended to these generalizations is not clear, and is a topic for further research.

Conclusion

In this paper we presented the stochastic k-server problem, and showed that a simple decentralized state-feedback policy achieves an average cost within a factor of two of the cost achieved by an optimal state-feedback policy. These results were then extended to several variations of the basic stochastic k-server problem. In this paper, we presented a formulation where the set of possible locations to be served is finite. We have focused on this formulation because low complexity algorithms for computing decentralized policies exist in this case. In fact, it is straightforward to use the results of [3] to show that the results of this paper hold in infinite bounded metric spaces as well.

Figure 2: Locations are separated into disjoint partitions. Servers only serve locations in their partition.
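For the multiple-requests extension, the dispatch rule of Theorem 5 picks, at every step, the injective assignment of medians to the n requests with least total distance. A small brute-force sketch follows (the 1-D metric and the instance values are made up; at scale one would replace the permutation scan with a min-cost bipartite matching, e.g. the Hungarian method):

```python
from itertools import permutations

def dispatch(medians, requests, d):
    """Policy mu_d for n simultaneous requests: choose distinct median
    indices u_1..u_n minimising sum_j d(m*_{u_j}, xbar_j); request j is
    then served by the server associated with median u_j."""
    k, n = len(medians), len(requests)
    best_u, best_cost = None, float("inf")
    for u in permutations(range(k), n):   # u[j] = median index for request j
        cost = sum(d(medians[u[j]], requests[j]) for j in range(n))
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u, best_cost

d = lambda a, b: abs(a - b)               # a 1-D metric, for illustration only
medians = [0, 5, 10]
requests = [9, 1]
assignment, cost = dispatch(medians, requests, d)
print(assignment, cost)
```

Note how no two requests share a median, matching the action space U = {u with distinct components}; the chosen median then determines which server actually travels.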
References

[1] V. Arya, N. Garg, R. Khandekar, A. Meyerson, K. Mungala, and V. Pandit. Local search heuristics for k-median and facility location problems. SIAM Journal of Computing, 33(3):544-562, 2004.
[2] D. Bertsimas and G. van Ryzin. Stochastic and dynamic vehicle routing in the Euclidean plane with multiple capacitated vehicles. Operations Research, 41(1):60-76, 1993.
[3] R. Cogill and S. Lall. Suboptimality bounds in stochastic control: A queueing example. To appear in the Proceedings of the 2006 American Control Conf., 2006.
[4] A. Floratos and R. Boppana. The on-line k-server problem. Technical Report TR1997-732, New York University, 1997.
[5] E. Frazzoli and F. Bullo. Decentralized algorithms for vehicle routing in a stochastic time-varying environment. Proceedings of the IEEE Conf. on Decision and Control, pages 3357-3363, 2004.
[6] A. Gil, K. Passino, and A. Sparks. Cooperative scheduling of tasks for networked uninhabited autonomous vehicles. Proceedings of the IEEE Conf. on Decision and Control, pages 522-527, 2003.
[7] E. Koutsoupias and C. Papadimitriou. On the k-server conjecture. Proceedings of the 26th ACM Symposium on Theory of Computing, pages 507-511, 1994.
[8] M. Puterman. Markov Decision Processes. John Wiley and Sons, New York, 1994.
[9] A. Richards, J. Bellingham, M. Tillerson, and J. How. Coordination and control of multiple UAVs. Proceedings of the AIAA Conf. on Guidance, Navigation, and Control, 2002.
[]
[ "Causal Intervention for Weakly-Supervised Semantic Segmentation", "Causal Intervention for Weakly-Supervised Semantic Segmentation" ]
[ "Dong Zhang \nSchool of Computer Science and Engineering\nNanjing University of Science and Technology\n\n", "Hanwang Zhang \nNanyang Technological University\n\n", "Jinhui Tang \nSchool of Computer Science and Engineering\nNanjing University of Science and Technology\n\n", "Xiansheng Hua \nDamo Academy\nAlibaba Group\n\n", "Qianru Sun \nSingapore Management University\n\n" ]
[ "School of Computer Science and Engineering\nNanjing University of Science and Technology\n", "Nanyang Technological University\n", "School of Computer Science and Engineering\nNanjing University of Science and Technology\n", "Damo Academy\nAlibaba Group\n", "Singapore Management University\n" ]
[]
We present a causal inference framework to improve Weakly-Supervised Semantic Segmentation (WSSS). Specifically, we aim to generate better pixel-level pseudomasks by using only image-level labels -the most crucial step in WSSS. We attribute the cause of the ambiguous boundaries of pseudo-masks to the confounding context, e.g., the correct image-level classification of "horse" and "person" may be not only due to the recognition of each instance, but also their co-occurrence context, making the model inspection (e.g., CAM) hard to distinguish between the boundaries. Inspired by this, we propose a structural causal model to analyze the causalities among images, contexts, and class labels. Based on it, we develop a new method: Context Adjustment (CONTA), to remove the confounding bias in image-level classification and thus provide better pseudo-masks as ground-truth for the subsequent segmentation model. On PASCAL VOC 2012 and MS-COCO, we show that CONTA boosts various popular WSSS methods to new state-of-the-arts.
null
[ "https://arxiv.org/pdf/2009.12547v2.pdf" ]
221,970,958
2009.12547
cff4d87fcd98c65b352e9dabe1e6f444d99e6aad
Causal Intervention for Weakly-Supervised Semantic Segmentation Dong Zhang School of Computer Science and Engineering Nanjing University of Science and Technology Hanwang Zhang Nanyang Technological University Jinhui Tang School of Computer Science and Engineering Nanjing University of Science and Technology Xiansheng Hua Damo Academy Alibaba Group Qianru Sun Singapore Management University Causal Intervention for Weakly-Supervised Semantic Segmentation We present a causal inference framework to improve Weakly-Supervised Semantic Segmentation (WSSS). Specifically, we aim to generate better pixel-level pseudomasks by using only image-level labels -the most crucial step in WSSS. We attribute the cause of the ambiguous boundaries of pseudo-masks to the confounding context, e.g., the correct image-level classification of "horse" and "person" may be not only due to the recognition of each instance, but also their co-occurrence context, making the model inspection (e.g., CAM) hard to distinguish between the boundaries. Inspired by this, we propose a structural causal model to analyze the causalities among images, contexts, and class labels. Based on it, we develop a new method: Context Adjustment (CONTA), to remove the confounding bias in image-level classification and thus provide better pseudo-masks as ground-truth for the subsequent segmentation model. On PASCAL VOC 2012 and MS-COCO, we show that CONTA boosts various popular WSSS methods to new state-of-the-arts. Figure 1 : The prevailing pipeline for training WSSS. Our contribution is to improve the Classification Model, which is the foundation for better pseudo-masks. Semantic segmentation aims to classify each image pixel into its corresponding semantic class [37]. It is an indispensable computer vision building block for scene understanding applications such as autonomous driving [60] and medical imaging [20]. 
However, the pixel-level labeling is expensive, e.g., it costs about 1.5 man-hours for one 500 × 500 daily life image [14]. Therefore, to scale up, we are interested in Weakly-Supervised Semantic Segmentation (WSSS), where the "weak" denotes a much cheaper labeling cost at the instance-level [10,33] or even at the image-level [26,63]. In particular, we focus on the latter as it is the most economic way -only a few man-seconds for tagging an image [31]. The prevailing pipeline for training WSSS is depicted in Figure 1. Given training images with only image-level class labels, we first train a multi-label classification model. Second, for each image, we infer the class-specific seed areas, e.g., by applying Classification Activation Map (CAM) [74] to the above trained model. Finally, we expand them to obtain the Pseudo-Masks [22,63,65], which are used as the pseudo ground-truth for training a standard supervised semantic segmentation model [9]. You might be concerned, there is no free lunch -it is essentially ill-posed to infer pixel-level masks from only image-level labels, especially when the visual scene is complex. Although most previous works have noted this challenge [1,22,63], as far as we know, no one answers the whys and wherefores. In this paper, we contribute a formal answer based on causal inference [42] and propose a principled and fundamental solution. As shown in Figure 2, we begin with illustrating the three basic problems that cause the complications in pseudo-mask generation: Object Ambiguity: Objects are not alone. They usually co-occur with each other under certain contexts. For example, if most "horse" images are about "person riding horse", a classification model will wrongly generalize to "most horses are with people" and hence the generated pseudo-masks are ambiguous about the boundary between "person" and "horse". Incomplete Background: Background is composed of (unlabeled) semantic objects. 
Therefore, the above ambiguity also holds due to the co-occurrence of foreground and background objects, e.g., some parts of the background "floor" are misclassified as the foreground "sofa". Incomplete Foreground: Some semantic parts of the foreground object, e.g., the "window" of "car", co-vary with different contexts, e.g., the window reflections of the surroundings. Therefore, the classification model resorts to using the less context-dependent (i.e., discriminative) parts to represent the foreground, e.g., the "wheel" part is the most representative of "car". So far, we can see that all the above problems are due to the context prior in dataset. Essentially, the context is a confounder that misleads the image-level classification model to learn spurious correlations between pixels and labels, e.g., the inconsistency between the CAM-expanded pseudomasks and the ground-truth masks in Figure 2. More specifically, although the confounder is helpful for a better association between the image pixels X and labels Y via a model P (Y |X), e.g., it is likely a "sofa" when seeing a "floor" region, P (Y |X) mistakenly 1) associates non-causal but positively correlated pixels to labels, e.g., the "floor" region wrongly belongs to "sofa", 2) disassociates causal but negatively correlated ones, e.g., the "window" region is wrongly classified as "non-car". To this end, we propose to use P (Y |do(X)) instead of P (Y |X) to find what pixels truly cause the labels, where the do-operation denotes the pursuit of the causality between the cause X and the effect Y without the confounding effect [44]. The ideal way to calculate P (Y |do(X)) is to "physically" intervene X (a.k.a., randomised controlled trial [8]) -if we could have photographed any "sofa" under any context [13], then P (sof a|do(X)) = P (sof a|X). Intrigued, you are encouraged to think about the causal reason why P (car|X) can robustly localize the "wheel" region in Figure 2? 
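The seed areas referred to above come from CAM [74]: with a global-average-pooled classifier, the class-specific map is the classifier-weighted sum of the last convolutional features. A minimal numpy sketch of that step follows; the 0.2 threshold and the toy feature volume are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def class_activation_map(features, class_weights, threshold=0.2):
    """CAM for one class: cam(i, j) = sum_k w_k * f_k(i, j), rectified and
    max-normalised; the seed area keeps pixels above a fraction of the peak."""
    cam = np.tensordot(class_weights, features, axes=([0], [0]))  # -> (H, W)
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam, cam >= threshold

# Toy feature volume: one channel that fires on the left half of a 4x4 map,
# standing in for a feature that responds to a discriminative object part.
features = np.zeros((1, 4, 4))
features[0, :, :2] = 1.0
cam, seed = class_activation_map(features, np.array([1.0]))
print(seed.astype(int))
```

In real WSSS pipelines the thresholded seed is then expanded into a pseudo-mask, which is exactly where the context bias described above leaks in.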
In Section 3.1, we formulate the causalities among pixels, contexts, and labels in a unified Structural Causal Model [41] (see Figure 3 (a)). Thanks to the model, we propose a novel WSSS pipeline called Context Adjustment (CONTA). CONTA is based on the backdoor adjustment [42] for P(Y|do(X)). Instead of the prohibitively expensive "physical" intervention, CONTA performs a practical "virtual" one from only the observational dataset (the training data per se). Specifically, CONTA is an iterative procedure that generates high-quality pseudo-masks. We achieve this by proposing an effective approximation for the backdoor adjustment, which fairly incorporates every possible context into the multi-label classification, generating better CAM seed areas. In Section 4.3, we demonstrate that CONTA can improve pseudo-masks by 2.0% mIoU on average and overall achieves a new state-of-the-art: 66.1% mIoU on the val set and 66.7% mIoU on the test set of PASCAL VOC 2012 [14], and 33.4% mIoU on the val set of MS-COCO [35].

2 Answer: the "wheel" was photographed in every "car" under any context by the dataset creator.

Related Work

Weakly-Supervised Semantic Segmentation (WSSS). To address the problem of expensive labeling cost in fully-supervised semantic segmentation, WSSS has been extensively studied in recent years [1,65]. As shown in Figure 1, the prevailing WSSS pipeline [26] with only the image-level class labels [2,63] mainly consists of the following two steps: pseudo-mask generation and segmentation model training. The key is to generate the pseudo-masks as perfect as possible, where "perfect" means that the pseudo-mask can reveal the entire object areas with accurate boundaries [1]. To this end, existing methods mainly focus on generating better seed areas [30,63,65,64] and expanding these seed areas [1,2,22,26,61]. In this paper, we also follow this pipeline and our contribution is to propose an iterative procedure to generate high-quality seed areas.
Visual Context. Visual context is crucial for recognition [13,50,59]. The majority of WSSS models [1,22,63,65] implicitly use context in the backbone network by enlarging the receptive fields with the help of dilated/atrous convolutions [70]. There is a recent work that explicitly uses contexts to improve the multi-label classifier [55]: given a pair of images, it encourages the similarity of the foreground features of the same class and the contrast of the rest. In this paper, we also explicitly use the context, but in a novel framework of causal intervention: the proposed context adjustment. Causal Inference. The purpose of causal inference [44,48] is to empower models the ability to pursue the causal effect: we can remove the spurious bias [6], disentangle the desired model effects [7], and modularize reusable features that generalize well [40]. Recently, there is a growing number of computer vision tasks that benefit from causality [39,45,57,58,62,69,71]. In our work, we adopt the Pearl's structural causal model [41]. Although the Rubin's potential outcome framework [47] can also be used, as the two are fundamentally equivalent [18,43], we prefer Pearl's because it can explicitly introduce the causality in WSSS -every node in the graph can be located and implemented in the WSSS pipeline. Nevertheless, we encourage readers to explore Rubin's when some causalities cannot be explicitly hypothesized and modeled, such as using the prospensity scores [3]. Context Adjustment Recall in Figure 1 that the pseudo-mask generation is the bottleneck of WSSS, and as we discussed in Section 1, the inaccurate CAM-generated seed areas are due to the context confounder C that misleads the classification model between image X and label Y . In this section, we will use a causal graph to fundamentally reveal how the confounder C hurts the pseudo-mask quality (Section 3.1) and how to remove it by using causal intervention (Section 3.2). 
Structural Causal Model We formulate the causalities among pixel-level image X, context prior C, and image-level labels Y , with a Structural Causal Model (SCM) [41]. As illustrated in Figure 3 (a), the direct links denote the causalities between the two nodes: cause → effect. Note that the newly added nodes and links other than X → Y 3 are not deliberately imposed on the original image-level classification; in contrast, they are the ever-overlooked causalities. Now we detail the high-level rationale behind the SCM and defer its implementation in Section 3.2. C → X. Context prior C determines what to picture in image X. By "context prior", we adopt the general meaning in vision: the relationships among objects in a visual scene [38]. Therefore, C tells us where to put "car", "road", and "building" in an image. Although building a generative model for C → X is extremely challenging for complex scenes [24], fortunately, as we will introduce later in Section 3.2, we can avoid it in causal intervention. C → M ← X. M is an imagespecific representation using the contextual templates from C. For example, a car image can be delineated by using a "car" context template filled with detailed attributes, where the template is the prototypical shape and location of "car" (foreground) in a scene (background). Note that this assumption is not ad hoc in our model, in fact, it underpins almost every concept learning method from the classic Deformable Part Models [15] to modern CNNs [17], whose cognitive evidence can be found in [29]. A plausible realization of M and C used in Section 3.2 is illustrated in Figure 3 (c). X → Y ← M . A general C cannot directly affect the labels Y of an image. Therefore, besides the conventional classification model X → Y , Y is also the effect of the X-specific mediation M . M → Y denotes an obvious causality: the contextual constitution of an image affects the image labels. 
It is worth noting that even if we do not explicitly take M as an input for the classification model, M → Y still holds. The evidence lies in the fact that visual contexts will emerge in higher-level layers of CNN when training image classifiers [72,74], which essentially serve as a feature map backbone for modern visual detection that highly relies on contexts, such as Fast R-CNN [16] and SSD [36]. To think conversely, if M / → Y in Figure 3 (a), the only path left from C to Y : C → X → Y , is cut off conditional on X, then no contexts are allowed to contribute to the labels by training P (Y |X), and thus we would never uncover the context, e.g., the seed areas. So, WSSS would be impossible. So far, we have pinpointed the role of context C played in the causal graph of image-level classification in Figure 3 (a). Thanks to the graph, we can clearly see how C confounds X and Y via the backdoor path X ← C → M → Y : even if some pixels in X have nothing to do with Y , the backdoor path can still help to correlate X and Y , resulting the problematic pseudo-masks in Figure 2. Next, we propose a causal intervention method to remove the confounding effect. Causal Intervention via Backdoor Adjustment We propose to use causal intervention: P (Y |do(X)), as the new image-level classifier, which removes the confounder C and pursues the true causality from X to Y so as to generate better CAM seed areas. As the "physical" intervention -collecting objects in any context -is impossible, we apply the backdoor adjustment [44] to "virtually" achieve P (Y |do(X)). The key idea is to 1) cut off the link C → X in Figure 3 (b), and 2) stratify C into pieces C = {c}. Formally, we have: P (Y |do(X)) = c P (Y |X, M = f (X, c)) P (c),(1) where f (⋅) is a function defined later in Eq. (3). As C is no longer correlated with X, the causal intervention makes X have a fair opportunity to incorporate every context c into Y 's prediction, subject to a prior P (c). 
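Eq. (1) can be sanity-checked on a small discrete toy model (all probabilities below are invented for illustration, not taken from the paper): conditioning on X leaves the backdoor path through C open, whereas the adjustment weights each context stratum by its prior P(c).

```python
# Toy confounded model: context C influences both the image content X and,
# together with X, the label Y (hypothetical numbers).
pC = {0: 0.8, 1: 0.2}                                       # P(C = c)
pX1_given_C = {0: 0.1, 1: 0.8}                              # P(X = 1 | C = c)
pY1 = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.6, (1, 1): 0.9}  # P(Y = 1 | X, C)

def observational(x):
    """P(Y = 1 | X = x): marginalises C *given* X, so the backdoor is open."""
    joint = {c: (pX1_given_C[c] if x == 1 else 1 - pX1_given_C[c]) * pC[c]
             for c in pC}
    z = sum(joint.values())
    return sum(pY1[(x, c)] * joint[c] / z for c in pC)

def backdoor(x):
    """P(Y = 1 | do(X = x)) = sum_c P(Y = 1 | x, c) P(c), as in Eq. (1)."""
    return sum(pY1[(x, c)] * pC[c] for c in pC)

obs, adj = observational(1), backdoor(1)
print(round(obs, 4), round(adj, 4))  # the two disagree: C confounds X and Y
```

The gap between the two numbers is exactly the spurious correlation that the confounding context contributes to the ordinary classifier P(Y|X).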
However, C is not observable in WSSS, let alone stratifying it. To this end, as illustrated in Figure 3 (c), we use the class-specific average mask in our proposed Context Adjustment (CONTA) to approximate the confounder set C = {c_1, c_2, ..., c_n}, where n is the class size in the dataset and c_i ∈ R^{h×w} corresponds to the h × w average mask of the i-th class images.

[Figure 4: overview of CONTA. Training Data 1 (images with class labels) feeds the multi-label classification model on [X, M_t] (Step 1); CAM selection and expansion produce pseudo-masks (Step 2, giving Training Data 2); a segmentation model is trained on them (Step 3); its masks yield M_{t+1} (Step 4); iterate t = t + 1 and output the segmentation model if t = T.]

M is the X-specific mask which can be viewed
(1) is: P (Y |do(X); Θ t ) = n i=1 1 i∈Y 1 1 + exp(−s i ) + 1 i∉Y 1 1 + exp(s i ) ,(2)where 1 is 1/0 indicator, s i = f (X, M t ; θ i t ) is the i-th class score function, consisting of a classshared convolutional network on the channel-wise concatenated feature maps [X, M t ], followed by a class-specific fully-connected network (the last layer is based on a global average pooling [34]). Overall, Eq. (2) is a joint probability over all the n classes that encourages the ground-truth labels i ∈ Y and penalizes the opposite i ∉ Y . In fact, the negative log-likelihood loss of Eq. (2) is also known as the multi-label soft-margin loss [49]. Note that the expectation ∑ c is absorbed in M t , which will be detailed in Step 4. Step 2. Pseudo-Mask Generation. For each image, we can calculate a set of class-specific CAMs [74] using the trained classifier above. Then, we follow the conventional two post-processing steps: 1) We select hot CAM areas (subject to a threshold) for seed areas [2,63]; and 2) We expand them to be the final pseudo-masks [1,26]. Step 3. Segmentation Model Training. Each pseudo-mask is used as the pseudo ground-truth for training any standard supervised semantic segmentation model. If t = T , this is the model for delivery; otherwise, its segmentation mask can be considered as an additional post-processing step for pseudo-mask smoothing. For fair comparisons with other WSSS methods, we adopt the classic DeepLab-v2 [9] as the supervised semantic segmentation model. Performance boost is expected if you adopt more advanced ones [32]. Step 4. Computing M t+1 . We first collect the predicted segmentation mask X m of every training image from the above trained segmentation model. Then, each class-specific entry c in the confounder set C is the averaged mask of X m within the corresponding class and is reshaped into a hw × 1 vector. So far, we are ready to calculate Eq. (1). However, the cost of the network forward pass for all the n classes is expensive. 
Fortunately, under practical assumptions (see Appendix 2), we can adopt the Normalized Weighted Geometric Mean [68] to move the outer sum ∑ c P (⋅) into the feature level: ∑ c P (Y |X, M )P (c) ≈ P (Y |X, M = ∑ c f (X, c)P (c)) , thus, we only need to feed-forward the network once. We have: M t+1 = n i=1 α i c i P (c i ), α i = sof tmax (W 1 X m ) T (W 2 c i ) n ,(3) where α i is the normalized similarity (softmax over n similarities) between X m and the i-th entry c i in the confounder set C. To make CONTA beyond the dataset statistics per se, P (c i ) is set as the uniform 1/n. W 1 , W 2 ∈ R n×hw are two learnable projection matrices, which are used to project X m and c i into a joint space. n is a constant scaling factor that is used as for feature normalization as in [62]. Experiments We evaluated the proposed CONTA in terms of the model performance quantitatively and qualitatively. Below we introduce the datasets, evaluation metric, and baseline models. We demonstrate the ablation study, show the effectiveness of CONTA on different baselines, and compare it to the state-of-the-arts. Further details and results are given in Appendix. Settings Datasets. PASCAL VOC 2012 [14] contains 21 classes (one background class) which includes 1,464, 1,449 and 1,456 images for training, validation (val) and test, respectively. As the common practice in [1,63], in our experiments, we used an enlarged training set with 10,582 images, where the extra images and labels are from [19]. MS-COCO [35] contains 81 classes (one background class), 80k, and 40k images for training and val. Although pixel-level labels are provided in these benchmarks, we only used image-level class labels in the training process. Evaluation Metric. We evaluated three types of masks: CAM seed area mask, pseudo-mask, and segmentation mask, compared with the ground-truth mask. 
The standard mean Intersection over Union (mIoU) was used on the training set for evaluating CAM seed area mask and pseudo-mask, and on the val and test sets for evaluating segmentation mask. Baseline Models. To demonstrate the applicability of CONTA, we deployed it on four popular WSSS models including one seed area generation model: SEAM [63], and three seed area expansion models: IRNet [1], DSRG [22], and SEC [26]. Specially, DSRG requires the extra saliency mask [23] as the supervision. General architecture components include a multi-label image classification model, a pseudo-mask generation model, and a segmentation model: DeepLab-v2 [9]. Since the experimental settings of them are different, for fair comparison, we adopted the same settings as reported in the official codes. The detailed implementations of each baseline + CONTA are given in Appendix 3. [14] in mIoU (%). "*" denotes our re-implemented results. "Seg. Ablation Study Mask" refers to the segmentation mask on the val set. "-" denotes that it is N.A. for the fully-supervised models. Our ablation studies aim to answer the following questions. Q1: Does CONTA merely take the advantage of the mask refinement? Is M t indispensable? We validated these by concatenating the segmentation mask (which is more refined compared to the pseudo-mask) with the backbone feature map, fed into classifiers. Then, we compared the newly generated results with the baseline ones. improvement. Q4: What is in the confounder set? We compared the effectiveness of using the pseudo-mask and the segmentation mask to construct the confounder set C. Due to page limit, we only showed ablation studies on the state-of-the-art WSSS model: SEAM [63], and the commonly used dataset -PASCAL VOC 2012; other methods on MS-COCO are given in Appendix 4. We treated the performance of the fully-supervised DeepLab-v2 [9] as the upperbound. 
Table 1 (Q1) show that using the segmentation mask instead of the proposed M t (concatenated to block-5) is even worse than the baseline. Therefore, the superiority of CONTA is not merely from better (smoothed) segmentation masks and M t is empirically indispensable. A1: Results in A2: Here, [X, M t ] was applied to block-5, and the segmentation masks were used to establish the confounder set C. From Table 1 (Q2), we can observe that the performance starts to saturated at round 3. In particular, when round = 3, CONTA can achieve the unanimously best mIoU on CAM, pseudo-mask, and segmentation mask. Therefore, we set #round = 3 in the following CONTA experiments. We also visualized some qualitative results of the pseudo-masks in Figure 5. We can observe that CONTA can gradually segment clearer boundaries when compared to the baseline results, e.g., person's leg vs. horse, person's body vs. sofa, chair's leg vs. background, and horse's leg vs. background. A3: In addition to [X, M t ] on various backbone blocks, we also reported a dense result, i.e., [X, M t ] on block-2 to block-5. In particular, [X, M t ] was concatenated to the last layer of each block. Before the feature map concatenation, the map size of M t should be down-sampled to match the corresponding block. Results in Table 1 (Q3) show that the performance at block-2/-3 are similar, and block-4/-5 are slightly higher. In particular, when compared to the baseline, block-5 has the most mIoU gain by 1.1% on CAM, 2.3% on pseudo-mask, and 1.8% on segmentation mask. One possible reason is that feature maps at block-5 contain higher-level contexts (e.g., bigger parts, and more complete boundaries), which are more consistent with M t , which are essential contexts. Therefore, we applied [X, M t ] on block-5. [14] dataset in mIoU (%). "*" denotes our reimplemented results. "Seg. Mask" refers to the segmentation mask on the val set. 
A4: From Table 1 (Q4), we can observe that establishing C with either the pseudo-mask or the segmentation mask (C Pseudo-Mask and C Seg. Mask) boosts the performance compared to the baseline. In particular, the segmentation mask yields a larger gain. The reason may be that the trained segmentation model smooths the pseudo-mask, so using higher-quality masks to approximate the unobserved confounder set is better.

Effectiveness on Different Baselines

To demonstrate the applicability of CONTA, in addition to SEAM [63], we also deployed CONTA on IRNet [1], DSRG [22], and SEC [26]. In particular, the number of rounds was set to 3 for SEAM, IRNet, and SEC, and to 2 for DSRG. Experimental results on PASCAL VOC 2012 are shown in Table 2. We can observe that deploying CONTA on different WSSS models improves all of their performances, with average mIoU improvements of 0.9% on CAM, 2.0% on pseudo-mask, and 2.0% on segmentation mask. In particular, CONTA deployed on SEAM achieves the best performance of 56.2% on CAM and 66.1% on segmentation mask. Besides, CONTA deployed on IRNet achieves the best performance of 67.9% on the pseudo-mask. The above results demonstrate the applicability and effectiveness of CONTA.

Table 3: Comparison with state-of-the-arts in mIoU (%). "*" denotes our re-implemented results. The best and second best performance under each set are marked with corresponding formats.

Figure 6 shows qualitative segmentation mask comparisons between SEAM+CONTA and SEAM [63]. From the first four columns, we can observe that CONTA makes more accurate predictions of object location and boundary, e.g., person's leg, dog, car, and cow's leg. Besides, we also show two failure cases of SEAM+CONTA in the last two columns, where bicycle and plant are not well predicted. One possible explanation is that the segmentation mask is directly obtained from the 8x down-sampled feature maps, so some complex-contour objects cannot be accurately delineated.
This problem may be alleviated by using an encoder-decoder segmentation model, e.g., SegNet [4] or U-Net [46]. More visualization results are given in Appendix 5.

Comparison with State-of-the-arts

Table 3 lists the overall WSSS performances. On PASCAL VOC 2012, CONTA deployed on IRNet with ResNet-50 [21] achieves a very competitive 65.3% and 66.1% mIoU on the val and test sets. Based on the stronger backbone ResNet-38 [67] (with fewer layers but wider channels), CONTA deployed on SEAM achieves the state-of-the-art 66.1% and 66.7% mIoU on the val and test sets, surpassing the previous best model by 1.2% and 1.0%, respectively. On MS-COCO, CONTA deployed on SEC with VGG-16 [54] achieves 23.7% mIoU on the val set, surpassing the previous best model by 1.3% mIoU. Besides, on stronger backbones and WSSS models, CONTA also boosts the performance by 0.9% mIoU on average.

Conclusion

We started by summarizing the three basic problems in existing pseudo-masks for WSSS. Then, we argued that these problems are due to the context prior, which is a confounder in our proposed causal graph. Based on the graph, we used causal intervention to remove the confounder. As the confounder is unobserved, we devised a novel WSSS framework, Context Adjustment (CONTA), based on the backdoor adjustment. CONTA promotes all the prevailing WSSS methods to new state-of-the-arts. Thanks to the causal inference framework, we clearly know the limitation of CONTA: the approximation of the context confounder, which is proven to be ill-posed [11]. Therefore, moving forward, we are going to 1) develop more advanced confounder set discovery methods and 2) incorporate observable expert knowledge into the confounder.

Broader Impact

The positive impacts of this work are two-fold: 1) it improves the fairness of the weakly-supervised semantic segmentation model, which can prevent potential discrimination by deep models, e.g., an unfair AI could blindly cater to the majority, causing gender, racial, or religious discrimination; 2) it allows some objects to be accurately segmented without extensive multi-context training images, e.g., to segment a car on the road with our proposed method, we do not need to photograph cars under every context. Negative impacts could occur if the proposed weakly-supervised semantic segmentation technique falls into the wrong hands, e.g., it could be used to segment minority groups for malicious purposes. Therefore, we have to make sure that the weakly-supervised semantic segmentation technique is used for the right purposes.
This appendix includes the derivation of the backdoor adjustment for the proposed structural causal model (Section 1), the normalized weighted geometric mean (Section 2), the detailed implementations of the different baseline models (Section 3), supplementary ablation studies (Section 4), and more visualization results of segmentation masks (Section 5).

Derivation of Backdoor Adjustment for the Proposed Causal Graph

In the main paper, we used the backdoor adjustment [44] to perform the causal intervention. In this section, we show the derivation of the backdoor adjustment for the proposed causal graph (Figure 3(b) of the main paper) by leveraging the following three do-calculus rules [41]. Given an arbitrary causal directed acyclic graph G with four nodes X, Y, Z, and W, let G_{\bar{X}} denote the intervened causal graph in which all incoming arrows to X are deleted, and G_{\underline{X}} the intervened causal graph in which all outgoing arrows from X are deleted. We use the lower cases x, y, z, and w to represent the respective values of the nodes: X = x, Y = y, Z = z, and W = w. For any interventional distribution compatible with G, we have the following three rules:

Rule 1 (insertion/deletion of observations):
P(y|do(x), z, w) = P(y|do(x), w), if (Y ⫫ Z | X, W) in G_{\bar{X}}.  (A1)

Rule 2 (action/observation exchange):
P(y|do(x), do(z), w) = P(y|do(x), z, w), if (Y ⫫ Z | X, W) in G_{\bar{X}\underline{Z}}.  (A2)

Rule 3 (insertion/deletion of actions):
P(y|do(x), do(z), w) = P(y|do(x), w), if (Y ⫫ Z | X, W) in G_{\bar{X}\bar{Z}(W)},  (A3)

where Z(W) is the subset of Z-nodes that are not ancestors of any W-node in G_{\bar{X}}.
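Before the formal derivation, a toy numeric example makes the point of the backdoor adjustment concrete: the interventional quantity P(Y | do(X)) = Σ_c P(Y | X, c) P(c) differs from the confounded conditional P(Y | X). All probability tables below are made up for illustration:

```python
# Toy binary world: a confounder C influences both X and Y.
# All numbers are made up for illustration only.
P_c = {0: 0.7, 1: 0.3}                       # P(C = c)
P_x_given_c = {0: 0.2, 1: 0.9}               # P(X = 1 | C = c)
P_y_given_xc = {(0, 0): 0.1, (0, 1): 0.4,    # P(Y = 1 | X = x, C = c)
                (1, 0): 0.3, (1, 1): 0.8}

# Backdoor adjustment: P(Y=1 | do(X=1)) = sum_c P(Y=1 | X=1, c) P(c)
p_do = sum(P_y_given_xc[(1, c)] * P_c[c] for c in P_c)

# Confounded observational conditional: P(Y=1 | X=1)
# = sum_c P(Y=1 | X=1, c) P(c | X=1), with P(c | X=1) proportional to P(X=1|c) P(c)
joint = {c: P_x_given_c[c] * P_c[c] for c in P_c}
z = sum(joint.values())
p_cond = sum(P_y_given_xc[(1, c)] * joint[c] / z for c in P_c)

print(round(p_do, 4), round(p_cond, 4))  # the two differ under confounding
```

Because C = 1 makes X = 1 much more likely, conditioning on X = 1 over-weights the high-Y context; intervening with do(X = 1) weights contexts by the prior P(c) instead.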
Based on these three rules, we can derive the interventional distribution P(Y|do(X)) for our proposed causal graph (Figure 3(b) of the main paper) as:

P(Y|do(X)) = ∑_c P(Y|do(X), c) P(c|do(X))        (A4)
           = ∑_c P(Y|do(X), c) P(c)              (A5)
           = ∑_c P(Y|X, c) P(c)                  (A6)
           = ∑_c P(Y|X, c, M) P(M|X, c) P(c)     (A7)
           = ∑_c P(Y|X, c, M = f(X, c)) P(c)     (A8)
           = ∑_c P(Y|X, M = f(X, c)) P(c),       (A9)

where Eq. A4 and Eq. A7 follow from the law of total probability. Eq. A5 is obtained via Rule 3, since c ⫫ X in G_{\bar{X}}; Eq. A6 is obtained via Rule 2, which changes the intervention term into an observation, since Y ⫫ X | c in G_{\underline{X}}. Eq. A8 holds because, in our causal graph, M is an image-specific context representation given by the function f(X, c), and Eq. A9 is essentially equal to Eq. A8.

Normalized Weighted Geometric Mean

This is the appendix to Section 3.2 "Step 4. Computing M_{t+1}". In Section 3.2 of the main paper, we used the Normalized Weighted Geometric Mean (NWGM) [68] to move the outer sum ∑_c P(·) into the feature level: ∑_c P(Y|X, M) P(c) ≈ P(Y|X, M = ∑_c f(X, c) P(c)). Here, we show the detailed derivation. Formally, our implementation for the positive term (i.e., 1_{i∈Y} in Eq. (2) of the main paper) can be derived as:

P(Y|do(X)) = ∑_c [exp(s_1(c)) / (exp(s_1(c)) + exp(s_2(c)))] P(c)                                  (A10)
           = ∑_c Softmax(s_1(c)) P(c)                                                               (A11)
           ≈ NWGM(Softmax(s_1(c)))                                                                  (A12)
           = ∏_c [exp(s_1(c))]^{P(c)} / ( ∏_c [exp(s_1(c))]^{P(c)} + ∏_c [exp(s_2(c))]^{P(c)} )     (A13)
           = exp(∑_c s_1(c) P(c)) / ( exp(∑_c s_1(c) P(c)) + exp(∑_c s_2(c) P(c)) )                 (A14)
           = exp(E_c[s_1(c)]) / ( exp(E_c[s_1(c)]) + exp(E_c[s_2(c)]) )                             (A15)
           = Softmax(E_c[s_1(c)]),                                                                  (A16)

where s_1(·) denotes the positive predicted score for a class label that is indeed associated with the input image, and s_2(c) = 0 under this condition. Eq. A10 follows from our implementation of the multi-label image classification model; Eq. A11 and Eq. A16 follow from the definition of the softmax function; Eq. A12 can be obtained via the results in [5].
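The equalities A13-A16 can be checked numerically: for a two-class softmax, the NWGM of the per-context probabilities equals the softmax of the expected scores exactly (only the step A11 to A12 is an approximation). A small sketch, with made-up scores and prior:

```python
import math

# Made-up class-conditional scores s1(c), s2(c) and a prior P(c) over 3 contexts.
s1 = [2.0, 0.5, -1.0]
s2 = [0.3, 1.2, 0.0]
pc = [0.5, 0.3, 0.2]

# Left side: NWGM of the per-context softmax probabilities (Eq. A13).
num = math.prod(math.exp(a) ** w for a, w in zip(s1, pc))
den = num + math.prod(math.exp(b) ** w for b, w in zip(s2, pc))
nwgm = num / den

# Right side: softmax of the expected scores (Eq. A16).
e1 = sum(a * w for a, w in zip(s1, pc))
e2 = sum(b * w for b, w in zip(s2, pc))
softmax_of_mean = math.exp(e1) / (math.exp(e1) + math.exp(e2))

print(abs(nwgm - softmax_of_mean) < 1e-9)  # True: the two sides agree
```

This is why the expectation can be pushed inside the classifier: a weighted geometric mean of exponentials is the exponential of the weighted arithmetic mean of the scores.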
Eq. A13 to Eq. A15 follow the derivation in [68]. Since s_1(·) in our implementation is a linear model, we can use Eq. (3) in the main paper to compute M_{t+1}. In addition to the positive term, the derivation for the negative term (i.e., 1_{i∉Y} in Eq. (2) of the main paper) follows the same process as above.

More Implementation Details

This is the appendix to Section 4.1 "Settings". In Section 4.1 of the main paper, we deployed CONTA on four popular WSSS models: SEAM [63], IRNet [1], DSRG [22], and SEC [26]. In this section, we show the detailed implementations of these four models.

Implementation of SEAM+CONTA

Backbone. ResNet-38 [67] was adopted as the backbone network. It was pre-trained on ImageNet [12], and the convolution layers of its last three blocks were replaced by dilated convolutions [70] with a common input stride of 1 and adjusted dilation rates, such that the backbone network returns a feature map of stride 8, i.e., the output size of the backbone network is 1/8 of the input.

Setting. The input images were randomly re-scaled in the range [448, 768] by the longest edge and then cropped to a fixed size of 448 x 448 using zero padding if needed.

Training Details. The initial learning rate was set to 0.01, following the poly policy lr_itr = lr_init (1 - itr/max_itr)^ρ with ρ = 0.9 for decay. Online hard example mining [53] was employed on the training loss to preserve only the top 20% pixel losses. The model was trained with batch size 8 for 8 epochs using the Adam optimizer [25]. We deployed the same data augmentation strategy (i.e., horizontal flip, random cropping, and color jittering [28]) as in AffinityNet [2] in our training process.

Hyper-parameters. The hard threshold parameter for CAM was set to 16 by default and changed to 4 and 24 to amplify and weaken background activation, respectively.
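The poly learning-rate policy quoted above can be sketched directly. The values below use the SEAM initial rate and ρ from the text; max_itr is illustrative:

```python
def poly_lr(lr_init, itr, max_itr, rho=0.9):
    """Poly decay: lr = lr_init * (1 - itr / max_itr) ** rho."""
    return lr_init * (1.0 - itr / max_itr) ** rho

print(poly_lr(0.01, 0, 1000))     # full rate at the start
print(poly_lr(0.01, 500, 1000))   # roughly halved mid-way
print(poly_lr(0.01, 1000, 1000))  # decays to zero at the last iteration
```

With ρ = 0.9 the decay is nearly linear but slightly slower early on, which is the usual choice in the DeepLab line of work.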
The fully-connected CRF [27] was used to refine the CAM, pseudo-mask, and segmentation mask, with the default parameters of the public code. For seed area expansion, AffinityNet [2] was used with the search radius γ = 5, the hyper-parameter in the Hadamard power of the affinity matrix β = 8, and the number of random walk iterations t = 256.

Implementation of IRNet+CONTA

Backbone. ResNet-50 [21] was used as the backbone network (pre-trained on ImageNet [12]). Adjusted dilated convolutions [70] were used in the last two blocks with a common input stride of 1, such that the backbone network returns a feature map of stride 16, i.e., the output size of the backbone network is 1/16 of the input.

Setting. The input image was cropped to a fixed size of 512 x 512 using zero padding if needed.

Training Details. Stochastic gradient descent was used for optimization with 8,000 iterations. The learning rate was initially set to 0.1 and decreased using the polynomial decay lr_itr = lr_init (1 - itr/max_itr)^ρ with ρ = 0.9 at every iteration. The batch size was set to 16 for the image classification model and 32 for the inter-pixel relation model. The same data augmentation strategy (i.e., horizontal flip, random cropping, and color jittering [28]) as in AffinityNet [2] was used in the training process.

Hyper-parameters. The fully-connected CRF [27] was used to refine the CAM, pseudo-mask, and segmentation mask with the default parameters given in the original code. The hard threshold parameter for CAM was set to 16 by default and changed to 4 and 24 to amplify and weaken the background activation, respectively. The radius γ that limits the search space of pixel pairs was set to 10 during training and reduced to 5 at inference (conservative propagation in inference). The number of random walk iterations t was fixed to 256. The hyper-parameter β in the Hadamard power of the affinity matrix was set to 10.

Implementation of DSRG+CONTA

Backbone.
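The AffinityNet-style seed expansion referenced above (a Hadamard power β applied to the affinity matrix, then t random-walk steps) can be illustrated on a toy 3-pixel graph. This is our own sketch of the idea with made-up affinity values, not the paper's code:

```python
import numpy as np

def random_walk_refine(cam, affinity, beta=8, t=4):
    """Toy AffinityNet-style propagation: the Hadamard (element-wise) power
    sharpens the affinities, row normalization turns them into a transition
    matrix, and t random-walk steps diffuse the CAM scores along it."""
    a = affinity ** beta                        # element-wise (Hadamard) power
    trans = a / a.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
    out = cam.copy()
    for _ in range(t):
        out = trans @ out
    return out

# 3 "pixels": pixels 0 and 1 are strongly affine, pixel 2 is nearly isolated.
aff = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.1],
                [0.1, 0.1, 1.0]])
cam = np.array([1.0, 0.0, 0.0])   # seed activation only on pixel 0
out = random_walk_refine(cam, aff, beta=8, t=4)
print(out.round(3))  # activation spreads to pixel 1 far more than to pixel 2
```

Raising the affinities to a power before normalizing is what keeps the walk from leaking across weak (boundary-crossing) edges.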
ResNet-101 [21] was used as the backbone network (pre-trained on ImageNet [12]), where dilated convolutions [70] were used in the last two blocks, such that the backbone network returns a feature map of stride 16, i.e., the output size of the backbone network is 1/16 of the input.

Setting. The input image was cropped to a fixed size of 321 x 321 using zero padding if needed.

Training Details. Stochastic gradient descent with mini-batches was used for network optimization with 10,000 iterations. The momentum and the weight decay were set to 0.9 and 0.0005, respectively. The batch size was set to 20, and the dropout rate was set to 0.5. The initial learning rate was set to 0.0005 and decreased by a factor of 10 every 2,000 iterations.

Hyper-parameters. For seed generation, pixels with the top 20% activation values in the CAM were considered as foreground (objects), as in [74]. For saliency masks, the model in [23] was used to produce the background localization cues with the normalized saliency value 0.06. For the similarity criteria, the foreground threshold and the background threshold were set to 0.99 and 0.85, respectively. The fully-connected CRF [27] was used to refine the pseudo-mask and the segmentation mask with the default parameters of the public code.

Implementation of SEC+CONTA

Backbone. VGG-16 [54] was used as the backbone network (pre-trained on ImageNet [12]), where the last two fully-connected layers were substituted with randomly initialized convolutional layers with 1024 output channels and kernels of size 3, such that the output size of the backbone network is 1/8 of the input.

Setting. The input image was cropped to a fixed size of 321 x 321 using zero padding if needed.

Training Details. The weights of the last (prediction) layer were randomly initialized from a normal distribution with mean 0 and variance 0.01.
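The seed-generation rule used by DSRG (and later by SEC), keeping the pixels with the top 20% CAM activations as foreground, can be sketched as follows; the function name and toy activation map are ours:

```python
import numpy as np

def seed_from_cam(cam, top_ratio=0.2):
    """Mark the top `top_ratio` fraction of CAM activations as foreground
    seeds (True); everything else stays unlabeled (False)."""
    thresh = np.quantile(cam, 1.0 - top_ratio)
    return cam >= thresh

cam = np.linspace(0.0, 1.0, 100).reshape(10, 10)  # toy activation map
seeds = seed_from_cam(cam, top_ratio=0.2)
print(int(seeds.sum()))  # about 20 of the 100 pixels become seeds
```

Using a per-map quantile rather than a fixed threshold keeps the seed fraction stable across images with very different activation scales.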
Stochastic gradient descent was used for the network optimization with 8,000 iterations, the batch size was set to 15, the dropout rate was set to 0.5, and the weight decay parameter was set to 0.0005. The initial learning rate was 0.001 and it was decreased by a factor of 10 every 2,000 iterations.

Hyper-parameters. For seed generation, pixels with the top 20% activation values in the CAM were considered as foreground (objects), as in [74]. The fully-connected CRF [27] was used to refine the pseudo-mask and the segmentation mask; the spatial distance was multiplied by 12 to reflect the fact that the original image was down-scaled to match the size of the predicted segmentation mask, and the other parameters are consistent with the public code.

More Ablation Study Results

This is the appendix to Section 4.2 "Ablation Study". In Section 4.2 of the main paper, we showed the ablation study results of SEAM [63]+CONTA on PASCAL VOC 2012 [14]. Table A1, Table A2, and Table A3 show the ablation results of IRNet+CONTA, DSRG+CONTA, and SEC+CONTA on PASCAL VOC 2012, respectively. We can observe that IRNet+CONTA and SEC+CONTA achieve their best performance at round = 3, and DSRG+CONTA achieves its best mIoU at round = 2. In addition to the results of SEAM+CONTA in the main paper, IRNet+CONTA achieves the second best mIoU results: 48.8% on CAM, 67.9% on pseudo-mask, and 65.3% on segmentation mask.
More Visualizations

This is the appendix to Section 4.4 "Comparison with State-of-the-arts". More segmentation results are visualized in Figure A1. We can observe that most of the resulting masks are of high quality. The segmentation masks predicted by SEAM+CONTA are more accurate and have better integrity, e.g., for the cow, the horse, the bird, the person lying next to the dog, and the person standing next to the cows. In particular, SEAM+CONTA works better at predicting the edges of thin objects or object parts, e.g., the tail (or the head) of the bird, the car, and the person in the car.

Figure A1: More visualization results. Samples are from PASCAL VOC 2012 [14]. Red rectangles highlight the improved regions predicted by SEAM [63]+CONTA.

Figure 2: Three basic problems in existing pseudo-masks [63] (dataset: PASCAL VOC 2012 [14]): (a) Object Ambiguity, (b) Incomplete Background, (c) Incomplete Foreground. They usually combine to cause other complications. The context (mean image per class) may provide clues for the reasons.

Figure 3: (a) The proposed Structural Causal Model (SCM) for the causality of the multi-label classifier in WSSS; (b) the intervened SCM for the causality of the multi-label classifier in WSSS; (c) the realization of each component in CONTA.

Figure 4: Overview of our proposed Context Adjustment (CONTA). M_t is an empty set when t = 0.
Figure 5: Visualization of pseudo-masks (baseline: SEAM [63], dataset: PASCAL VOC 2012 [14]). Columns: image, ground truth, baseline, and CONTA at rounds 1, 2, and 3.

Figure 6: Visualization of segmentation masks; the last two columns show two failure cases (dataset: PASCAL VOC 2012 [14]). The red rectangle highlights the better areas for SEAM+CONTA.

Table A1: Ablations of IRNet [1]+CONTA on PASCAL VOC 2012 [14] in mIoU (%). "*" denotes our re-implemented results. "Seg. Mask" refers to the segmentation mask of the val set. "-" denotes that the result is N.A. for the fully-supervised model.

Setting                CAM   Pseudo-Mask  Seg. Mask
Upperbound [37]        -     -            72.3
Baseline* [1]          48.3  65.9         63.0
(A1) M_t ← Seg. Mask   48.1  65.5         62.1
(A2) Round = 1         48.5  66.9         64.2
     Round = 2         48.7  67.6         65.0
     Round = 3         48.8  67.9         65.3
     Round = 4         48.6  67.2         64.9
(A3) Block-2           48.3  66.2         63.4
     Block-3           48.4  66.6         63.8
     Block-4           48.7  67.3         64.6
     Block-5           48.8  67.9         65.3
     Dense             48.7  67.6         65.1
(A4) C Pseudo-Mask     48.6  67.4         65.0
     C Seg. Mask       48.8  67.9         65.3
In this section, we show the results of IRNet [1]+CONTA, DSRG [22]+CONTA, and SEC [26]+CONTA on PASCAL VOC 2012. Besides, we also show the results of SEAM+CONTA, IRNet+CONTA, DSRG+CONTA, and SEC+CONTA on MS-COCO [35]. Table A4, Table A5, Table A6, and Table A7 show the respective ablation results of SEAM+CONTA, IRNet+CONTA, DSRG+CONTA, and SEC+CONTA on MS-COCO. We can see that SEAM+CONTA, IRNet+CONTA, and SEC+CONTA achieve the top mIoU at round = 3, and DSRG+CONTA achieves the best performance at round = 2. In particular, the mIoU scores of IRNet+CONTA are the best on MS-COCO: 28.7% on CAM, 35.2% on pseudo-mask, and 33.4% on segmentation mask.

Table A2: Ablations of DSRG [22]+CONTA on PASCAL VOC 2012 [14] in mIoU (%). "*" denotes our re-implemented results. "Seg. Mask" refers to the segmentation mask of the val set. "-" denotes that the result is N.A. for the fully-supervised model.

Setting                CAM   Pseudo-Mask  Seg. Mask
Upperbound [37]        -     -            77.7
Baseline* [22]         47.3  62.7         61.4
(A1) M_t ← Seg. Mask   47.0  61.9         61.1
(A2) Round = 1         47.7  63.5         62.2
     Round = 2         48.0  64.0         62.8
     Round = 3         47.8  63.8         62.5
     Round = 4         47.4  63.5         62.1
(A3) Block-2           47.4  62.9         61.7
     Block-3           47.6  63.2         62.1
     Block-4           47.9  63.7         62.6
     Block-5           48.0  64.0         62.8
     Dense             47.8  63.8         62.7
(A4) C Pseudo-Mask     47.8  63.6         62.5
     C Seg. Mask       48.0  64.0         62.8

Table A3: Ablations of SEC [26]+CONTA on PASCAL VOC 2012 [14] in mIoU (%). "*" denotes our re-implemented results. "Seg. Mask" refers to the segmentation mask of the val set. "-" denotes that the result is N.A. for the fully-supervised model.

Setting                CAM   Pseudo-Mask  Seg. Mask
Upperbound [37]        -     -            71.6
Baseline* [26]         46.5  53.4         50.7
(A1) M_t ← Seg. Mask   46.4  53.1         50.3
(A2) Round = 1         47.1  54.3         51.7
     Round = 2         47.6  55.1         52.6
     Round = 3         47.9  55.7         53.2
     Round = 4         47.7  55.6         53.0
(A3) Block-2           46.8  53.9         51.2
     Block-3           47.1  54.5         51.5
     Block-4           47.6  55.1         52.4
     Block-5           47.9  55.7         53.2
     Dense             47.8  55.6         53.0
(A4) C Pseudo-Mask     47.7  55.3         52.9
     C Seg. Mask       47.9  55.7         53.2
Table A4: Ablation results of SEAM [63]+CONTA on MS-COCO [35] in mIoU (%). "*" denotes our re-implemented results. "Seg. Mask" refers to the segmentation mask of the val set. "-" denotes that the result is N.A. for the fully-supervised model.

Setting                CAM   Pseudo-Mask  Seg. Mask
Upperbound* [37]       -     -            44.8
Baseline* [63]         25.1  31.5         31.9
(A1) M_t ← Seg. Mask   24.8  31.1         31.4
(A2) Round = 1         25.7  31.9         32.4
     Round = 2         26.2  32.2         32.7
     Round = 3         26.5  32.5         32.8
     Round = 4         26.3  32.1         32.6
(A3) Block-2           25.7  32.0         32.3
     Block-3           25.9  32.1         32.4
     Block-4           26.3  32.4         32.6
     Block-5           26.5  32.5         32.8
     Dense             26.5  32.4         32.5
(A4) C Pseudo-Mask     26.4  32.0         32.6
     C Seg. Mask       26.5  32.5         32.8

Table A5: Ablation results of IRNet [1]+CONTA on MS-COCO [35] in mIoU (%). "*" denotes our re-implemented results. "Seg. Mask" refers to the segmentation mask of the val set. "-" denotes that the result is N.A. for the fully-supervised model.

Setting                CAM   Pseudo-Mask  Seg. Mask
Upperbound* [37]       -     -            42.5
Baseline* [1]          27.4  34.0         32.6
(A1) M_t ← Seg. Mask   27.1  33.5         32.3
(A2) Round = 1         28.0  34.3         32.9
     Round = 2         28.4  34.8         33.2
     Round = 3         28.7  35.2         33.4
     Round = 4         28.5  35.0         33.2
(A3) Block-2           27.7  34.3         32.8
     Block-3           27.9  34.5         32.9
     Block-4           28.4  34.9         33.2
     Block-5           28.7  35.2         33.4
     Dense             28.6  35.2         33.1
(A4) C Pseudo-Mask     28.5  35.0         33.2
     C Seg. Mask       28.7  35.2         33.4

Table A6: Ablation results of DSRG [22]+CONTA on MS-COCO [35] in mIoU (%). "*" denotes our re-implemented results. "Seg. Mask" refers to the segmentation mask of the val set. "-" denotes that the result is N.A. for the fully-supervised model.

Setting                CAM   Pseudo-Mask  Seg. Mask
Upperbound [37]        -     -            45.0
Baseline* [22]         19.8  26.1         25.6
(A1) M_t ← Seg. Mask   19.5  25.9         25.5
(A2) Round = 1         20.5  26.9         26.1
     Round = 2         20.9  27.5         26.4
     Round = 3         20.7  27.2         26.2
     Round = 4         20.4  26.9         26.0
(A3) Block-2           20.1  26.8         25.9
     Block-3           20.2  27.0         26.0
     Block-4           20.5  27.2         26.2
     Block-5           20.9  27.5         26.4
     Dense             20.8  27.3         26.1
(A4) C Pseudo-Mask     20.7  27.2         26.1
     C Seg. Mask       20.9  27.5         26.4

Table A7: Ablation results of SEC [26]+CONTA on MS-COCO [35] in mIoU (%). "*" denotes our re-implemented results. "Seg. Mask" refers to the segmentation mask of the val set. "-" denotes that the result is N.A. for the fully-supervised model.

Setting                CAM   Pseudo-Mask  Seg. Mask
Upperbound [37]        -     -            41.0
Baseline* [26]         18.7  24.0         22.4
(A1) M_t ← Seg. Mask   18.1  23.5         21.2
(A2) Round = 1         20.1  24.4         23.0
     Round = 2         21.2  24.7         23.4
     Round = 3         21.8  24.9         23.7
     Round = 4         21.4  24.5         23.5
(A3) Block-2           19.5  24.2         22.7
     Block-3           19.9  24.4         22.9
     Block-4           20.6  24.7         23.5
     Block-5           21.8  24.9         23.7
     Dense             21.8  24.6         23.5
(A4) C Pseudo-Mask     21.5  24.7         23.4
     C Seg. Mask       21.8  24.9         23.7
Footnote: Some studies [51] show that the label causes the image (X ← Y). We believe that such an anti-causal assumption only holds when the label is as simple as the disentangled causal mechanisms [40, 56] (e.g., the 10 digits in MNIST).

Acknowledgements. The authors would like to thank all the anonymous reviewers for their constructive comments and suggestions.

Appendix for "Causal Intervention for Weakly-Supervised Semantic Segmentation"

References

Jiwoon Ahn, Sunghyun Cho, and Suha Kwak. Weakly supervised learning of instance segmentation with inter-pixel relations. In CVPR, 2019.
Jiwoon Ahn and Suha Kwak. Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation. In CVPR, 2018.
Peter C. Austin. An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behavioral Research, 46(3):399-424, 2011.
Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. TPAMI, 39(12):2481-2495, 2017.
Pierre Baldi and Peter Sadowski. The dropout learning algorithm. Artificial Intelligence, 210:78-122, 2014.
Elias Bareinboim and Judea Pearl. Controlling selection bias in causal inference. In Artificial Intelligence and Statistics, 2012.
Michel Besserve, Rémy Sun, and Bernhard Schölkopf. Counterfactuals uncover the modular structure of deep generative models. In ICLR, 2020.
Thomas C. Chalmers, Harry Smith Jr., Bradley Blackburn, Bernard Silverman, Biruta Schroeder, Dinah Reitman, and Alexander Ambroz. A method for assessing the quality of a randomized control trial. Controlled Clinical Trials, 2(1):31-49, 1981.
Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. TPAMI, 40(4):834-848, 2017.
Jifeng Dai, Kaiming He, and Jian Sun. BoxSup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In ICCV, 2015.
Alexander D'Amour. On multi-cause causal inference with unobserved confounding: Counterexamples, impossibility, and alternatives. In AISTATS, 2019.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
Nikita Dvornik, Julien Mairal, and Cordelia Schmid. Modeling visual context is key to augmenting object detection datasets. In ECCV, 2018.
Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL visual object classes challenge: A retrospective. IJCV, 111(1):98-136, 2015.
Pedro F. Felzenszwalb, Ross B. Girshick, David McAllester, and Deva Ramanan. Object detection with discriminatively trained part-based models. TPAMI, 32(9):1627-1645, 2009.
Ross Girshick. Fast R-CNN. In ICCV, 2015.
Ross Girshick, Forrest Iandola, Trevor Darrell, and Jitendra Malik. Deformable part models are convolutional neural networks. In CVPR, 2015.
Ruocheng Guo, Lu Cheng, Jundong Li, P. Richard Hahn, and Huan Liu. A survey of learning causality with data: Problems and methods. CSUR, 53(4):1-37, 2020.
Bharath Hariharan, Pablo Arbeláez, Lubomir Bourdev, Subhransu Maji, and Jitendra Malik. Semantic contours from inverse detectors. In ICCV, 2011.
Mohammad Havaei, Axel Davy, David Warde-Farley, Antoine Biard, Aaron Courville, Yoshua Bengio, Chris Pal, Pierre-Marc Jodoin, and Hugo Larochelle. Brain tumor segmentation with deep neural networks. Medical Image Analysis, 35(3):18-31, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
Zilong Huang, Xinggang Wang, Jiasi Wang, Wenyu Liu, and Jingdong Wang. Weakly-supervised semantic segmentation network with deep seeded region growing. In CVPR, 2018.
Huaizu Jiang, Jingdong Wang, Zejian Yuan, Yang Wu, Nanning Zheng, and Shipeng Li. Salient object detection: A discriminative regional feature integration approach. In CVPR, 2013.
Justin Johnson, Agrim Gupta, and Li Fei-Fei. Image generation from scene graphs. In CVPR, 2018.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Alexander Kolesnikov and Christoph H. Lampert. Seed, expand and constrain: Three principles for weakly-supervised image segmentation. In ECCV, 2016.
Philipp Krähenbühl and Vladlen Koltun. Efficient inference in fully connected CRFs with Gaussian edge potentials. In NeurIPS, 2011.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NeurIPS, 2012.
Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
Jungbeom Lee, Eunji Kim, Sungmin Lee, Jangho Lee, and Sungroh Yoon. FickleNet: Weakly and semi-supervised semantic image segmentation using stochastic inference. In CVPR, 2019.
Qizhu Li, Anurag Arnab, and Philip H. S. Torr. Weakly- and semi-supervised panoptic segmentation. In ECCV, 2018.
Yanwei Li, Lin Song, Yukang Chen, Zeming Li, Xiangyu Zhang, Xingang Wang, Jian Sun, CVPR. Yanwei Li, Lin Song, Yukang Chen, Zeming Li, Xiangyu Zhang, Xingang Wang, and Jian Sun. Learning dynamic routing for semantic segmentation. In CVPR, 2020. 5 Scribblesup: Scribble-supervised convolutional networks for semantic segmentation. Di Lin, Jifeng Dai, Jiaya Jia, Kaiming He, Jian Sun, CVPR. Di Lin, Jifeng Dai, Jiaya Jia, Kaiming He, and Jian Sun. Scribblesup: Scribble-supervised convolutional networks for semantic segmentation. In CVPR, 2016. 1 Network in network. Min Lin, Qiang Chen, Shuicheng Yan, ICLR. Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2014. 5 Microsoft coco: Common objects in context. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, C Lawrence Zitnick, ECCV. 1920Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014. 3, 6, 8, 17, 19, 20 Ssd: Single shot multibox detector. Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C Berg, ECCV. Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. Ssd: Single shot multibox detector. In ECCV, 2016. 4 Fully convolutional networks for semantic segmentation. Jonathan Long, Evan Shelhamer, Trevor Darrell, CVPR. 1820Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. 1, 6, 17, 18, 19, 20 Vision: A computational investigation into the human representation and processing of visual information. David Marr, MIT Press45David Marr. Vision: A computational investigation into the human representation and processing of visual information. MIT Press, 1982. 4, 5 Counterfactual vqa: A cause-effect look at language bias. 
Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, Ji-Rong Wen, arXiv, 2020. 3Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, and Ji-Rong Wen. Counterfactual vqa: A cause-effect look at language bias. In arXiv, 2020. 3 Learning independent causal mechanisms. Giambattista Parascandolo, Niki Kilbertus, Mateo Rojas-Carulla, Bernhard Schölkopf, ICML. Giambattista Parascandolo, Niki Kilbertus, Mateo Rojas-Carulla, and Bernhard Schölkopf. Learning independent causal mechanisms. In ICML, 2018. 3 Causality: Models, Reasoning and Inference. Judea Pearl, Springer214Judea Pearl. Causality: Models, Reasoning and Inference. Springer, 2000. 2, 3, 14 Interpretation and identification of causal mediation. Judea Pearl, Psychological Methods. 1943Judea Pearl. Interpretation and identification of causal mediation. Psychological Methods, 19(4):459-481, 2014. 2, 3 Causal inference in statistics: An overview. Judea Pearl, Statistics surveys. 33Judea Pearl et al. Causal inference in statistics: An overview. Statistics surveys, 3:96-146, 2009. 3 Causal inference in statistics: A primer. Judea Pearl, Madelyn Glymour, Nicholas P Jewell, John Wiley & Sons14Judea Pearl, Madelyn Glymour, and Nicholas P Jewell. Causal inference in statistics: A primer. John Wiley & Sons, 2016. 2, 3, 4, 14 Two causal principles for improving visual dialog. Jiaxin Qi, Yulei Niu, Jianqiang Huang, Hanwang Zhang, CVPR. 2020Jiaxin Qi, Yulei Niu, Jianqiang Huang, and Hanwang Zhang. Two causal principles for improving visual dialog. In CVPR, 2020. 3 U-net: Convolutional networks for biomedical image segmentation. Olaf Ronneberger, Philipp Fischer, Thomas Brox, MICCAI. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015. 8 Causal inference using potential outcomes: Design, modeling, decisions. Donald B Rubin, Journal of the American Statistical Association. 100469Donald B Rubin. 
Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100(469):322-331, 2005. 3 Essential concepts of causal inference: a remarkable history and an intriguing future. Donald B Rubin, Biostatistics & Epidemiology. 31Donald B Rubin. Essential concepts of causal inference: a remarkable history and an intriguing future. Biostatistics & Epidemiology, 3(1):140-155, 2019. 3 Moon: A mixed objective optimization network for the recognition of facial attributes. M Ethan, Manuel Rudd, Terrance E Günther, Boult, ECCV. Ethan M Rudd, Manuel Günther, and Terrance E Boult. Moon: A mixed objective optimization network for the recognition of facial attributes. In ECCV, 2016. 5 Built-in foreground/background prior for weaklysupervised semantic segmentation. Fatemehsadat Saleh, Mohammad Sadegh Aliakbarian, Mathieu Salzmann, Lars Petersson, Stephen Gould, Jose M Alvarez, ECCV. 3Fatemehsadat Saleh, Mohammad Sadegh Aliakbarian, Mathieu Salzmann, Lars Petersson, Stephen Gould, and Jose M Alvarez. Built-in foreground/background prior for weakly- supervised semantic segmentation. In ECCV, 2016. 3, 8 On causal and anticausal learning. Bernhard Schölkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, Joris Mooij, ICML. Bernhard Schölkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, and Joris Mooij. On causal and anticausal learning. In ICML, 2012. 3 Self-supervised difference detection for weakly-supervised semantic segmentation. Wataru Shimoda, Keiji Yanai, ICCV. Wataru Shimoda and Keiji Yanai. Self-supervised difference detection for weakly-supervised semantic segmentation. In ICCV, 2019. 8 Training region-based object detectors with online hard example mining. Abhinav Shrivastava, Abhinav Gupta, Ross Girshick, CVPR. 15Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick. Training region-based object detectors with online hard example mining. In CVPR, 2016. 
15 Very deep convolutional networks for large-scale image recognition. Karen Simonyan, Andrew Zisserman, ICLR. 816Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 8, 16 Mining cross-image semantics for weakly supervised semantic segmentation. Guolei Sun, Wenguan Wang, Jifeng Dai, Luc Van Gool, ECCV. 2020Guolei Sun, Wenguan Wang, Jifeng Dai, and Luc Van Gool. Mining cross-image semantics for weakly supervised semantic segmentation. In ECCV, 2020. 3 Robustly disentangled causal mechanisms: Validating deep representations for interventional robustness. Raphael Suter, Djordje Miladinovic, Bernhard Schölkopf, Stefan Bauer, ICML. Raphael Suter, Djordje Miladinovic, Bernhard Schölkopf, and Stefan Bauer. Robustly disen- tangled causal mechanisms: Validating deep representations for interventional robustness. In ICML, 2019. 3 Long-tailed classification by keeping the good and removing the bad momentum causal effect. Kaihua Tang, Jianqiang Huang, Hanwang Zhang, NeurIPS. 2020Kaihua Tang, Jianqiang Huang, and Hanwang Zhang. Long-tailed classification by keeping the good and removing the bad momentum causal effect. In NeurIPS, 2020. 3 Unbiased scene graph generation from biased training. Kaihua Tang, Yulei Niu, Jianqiang Huang, Jiaxin Shi, Hanwang Zhang, CVPR. 2020Kaihua Tang, Yulei Niu, Jianqiang Huang, Jiaxin Shi, and Hanwang Zhang. Unbiased scene graph generation from biased training. In CVPR, 2020. 3 Learning to compose dynamic tree structures for visual contexts. Kaihua Tang, Hanwang Zhang, Baoyuan Wu, Wenhan Luo, Wei Liu, CVPR. Kaihua Tang, Hanwang Zhang, Baoyuan Wu, Wenhan Luo, and Wei Liu. Learning to compose dynamic tree structures for visual contexts. In CVPR, 2019. 3 Speeding up semantic segmentation for autonomous driving. 
Michael Treml, José Arjona-Medina, Thomas Unterthiner, Rupesh Durgesh, Felix Friedmann, Peter Schuberth, Andreas Mayr, Martin Heusel, Markus Hofmarcher, Michael Widrich, NeurIPS. Michael Treml, José Arjona-Medina, Thomas Unterthiner, Rupesh Durgesh, Felix Friedmann, Peter Schuberth, Andreas Mayr, Martin Heusel, Markus Hofmarcher, Michael Widrich, et al. Speeding up semantic segmentation for autonomous driving. In NeurIPS, 2016. 1 Learning random-walk label propagation for weaklysupervised semantic segmentation. Paul Vernaza, Manmohan Chandraker, CVPR. Paul Vernaza and Manmohan Chandraker. Learning random-walk label propagation for weakly- supervised semantic segmentation. In CVPR, 2017. 3 Visual commonsense r-cnn. Tan Wang, Jianqiang Huang, Hanwang Zhang, Qianru Sun, CVPR. 36Tan Wang, Jianqiang Huang, Hanwang Zhang, and Qianru Sun. Visual commonsense r-cnn. In CVPR, 2020. 3, 6 Self-supervised equivariant attention mechanism for weakly supervised semantic segmentation. Yude Wang, Jie Zhang, Meina Kan, Shiguang Shan, Xilin Chen, CVPR. 1921Yude Wang, Jie Zhang, Meina Kan, Shiguang Shan, and Xilin Chen. Self-supervised equivariant attention mechanism for weakly supervised semantic segmentation. In CVPR, 2020. 1, 2, 3, 5, 6, 7, 8, 15, 17, 19, 21 Object region mining with adversarial erasing: A simple classification to semantic segmentation approach. Yunchao Wei, Jiashi Feng, Xiaodan Liang, Ming-Ming Cheng, Yao Zhao, Shuicheng Yan, CVPR. Yunchao Wei, Jiashi Feng, Xiaodan Liang, Ming-Ming Cheng, Yao Zhao, and Shuicheng Yan. Object region mining with adversarial erasing: A simple classification to semantic segmentation approach. In CVPR, 2017. 3 Revisiting dilated convolution: A simple approach for weakly-and semi-supervised semantic segmentation. Yunchao Wei, Huaxin Xiao, Honghui Shi, Zequn Jie, Jiashi Feng, Thomas S Huang, CVPR. 13Yunchao Wei, Huaxin Xiao, Honghui Shi, Zequn Jie, Jiashi Feng, and Thomas S Huang. 
Revisiting dilated convolution: A simple approach for weakly-and semi-supervised semantic segmentation. In CVPR, 2018. 1, 3 On the convergence properties of the em algorithm. Jeff Cf, Wu, The Annals of Statistics. 11CF Jeff Wu. On the convergence properties of the em algorithm. The Annals of Statistics, 1(1):95-103, 1983. 5 Wider or deeper: Revisiting the resnet model for visual recognition. Zifeng Wu, Chunhua Shen, Anton Van Den, Hengel, Pattern Recognition. 90115Zifeng Wu, Chunhua Shen, and Anton Van Den Hengel. Wider or deeper: Revisiting the resnet model for visual recognition. Pattern Recognition, 90(1):119-133, 2019. 8, 15 Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, ICML. 515Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015. 5, 15 Deconfounded image captioning: A causal retrospect. Xu Yang, Hanwang Zhang, Jianfei Cai, arXiv, 2020. 3Xu Yang, Hanwang Zhang, and Jianfei Cai. Deconfounded image captioning: A causal retrospect. In arXiv, 2020. 3 Multi-scale context aggregation by dilated convolutions. Fisher Yu, Vladlen Koltun, ICLR. 1516Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016. 3, 15, 16 Interventional few-shot learning. Zhongqi Yue, Hanwang Zhang, Qianru Sun, Xiansheng Hua, NeurIPS. 2020Zhongqi Yue, Hanwang Zhang, Qianru Sun, and Xiansheng Hua. Interventional few-shot learning. In NeurIPS, 2020. 3 Neural motifs: Scene graph parsing with global context. Rowan Zellers, Mark Yatskar, Sam Thomson, Yejin Choi, CVPR. Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi. Neural motifs: Scene graph parsing with global context. In CVPR, 2018. 
4 Reliability does matter: An end-to-end weakly supervised semantic segmentation approach. Bingfeng Zhang, Jimin Xiao, Yunchao Wei, Mingjie Sun, Kaizhu Huang, AAAI. Bingfeng Zhang, Jimin Xiao, Yunchao Wei, Mingjie Sun, and Kaizhu Huang. Reliability does matter: An end-to-end weakly supervised semantic segmentation approach. In AAAI, 2020. 8 Learning deep features for discriminative localization. Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba, CVPR. 1617Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In CVPR, 2016. 1, 4, 5, 16, 17
PettingZoo: A Standard API for Multi-Agent Reinforcement Learning

J. K. Terry, Benjamin Black, Nathaniel Grammel, Mario Jayakumar, Ananth Hari, Ryan Sullivan, Luis Santos, Rodrigo Perez, Caroline Horsch, Clemens Dieffendahl, Niall L. Williams, Yashas Lokesh, Praveen Ravi

arXiv:2009.14471 (https://arxiv.org/pdf/2009.14471v7.pdf)

Abstract

This paper introduces the PettingZoo library and the accompanying Agent Environment Cycle ("AEC") games model. PettingZoo is a library of diverse sets of multi-agent environments with a universal, elegant Python API. PettingZoo was developed with the goal of accelerating research in Multi-Agent Reinforcement Learning ("MARL"), by making work more interchangeable, accessible and reproducible, akin to what OpenAI's Gym library did for single-agent reinforcement learning. PettingZoo's API, while inheriting many features of Gym, is unique amongst MARL APIs in that it's based around the novel AEC games model. We argue, in part through case studies on major problems in popular MARL environments, that the popular game models are poor conceptual models of games commonly used in MARL and accordingly can promote confusing bugs that are hard to detect, and that the AEC games model addresses these problems.
1 Introduction

Multi-Agent Reinforcement Learning (MARL) has been behind many of the most publicized achievements of modern machine learning: AlphaGo Zero [Silver et al., 2017], OpenAI Five [OpenAI, 2018], and AlphaStar [Vinyals et al., 2019]. These achievements motivated a boom in MARL research, with Google Scholar indexing 9,480 new papers discussing multi-agent reinforcement learning in 2020 alone. Despite this boom, conducting research in MARL remains a significant engineering challenge.
A large part of this is because, unlike single agent reinforcement learning, which has OpenAI's Gym, no de facto standard API exists in MARL for how agents interface with environments. This makes the reuse of existing learning code for new purposes require substantial effort, consuming researchers' time and preventing more thorough comparisons in research. This lack of a standardized API has also prevented the proliferation of learning libraries in MARL. While a massive number of Gym-based single-agent reinforcement learning libraries and code bases exist (as a rough measure, 669 pip-installable packages depend on it at the time of writing [GitHub, 2021]), only 5 MARL libraries with large user bases exist [Lanctot et al., 2019, Weng et al., 2020, Liang et al., 2018, Samvelyan et al., 2019, Nota, 2020]. The proliferation of these Gym-based learning libraries has proved essential to the adoption of applied RL in fields like robotics and finance, and without them the growth of applied MARL is a significantly greater challenge.

Motivated by this, this paper introduces the PettingZoo library and API, which was created with the goal of making research in MARL more accessible and serving as a multi-agent version of Gym. Prior to PettingZoo, the numerous single-use MARL APIs almost exclusively inherited their design from the two most prominent mathematical models of games in the MARL literature: Partially Observable Stochastic Games ("POSGs") and Extensive Form Games ("EFGs"). During our development, we discovered that these common models of games are not conceptually clear for multi-agent games implemented in code and cannot form the basis of APIs that cleanly handle all types of multi-agent environments.

To solve this, we introduce a new formal model of games, Agent Environment Cycle ("AEC") games, that serves as the basis of the PettingZoo API. We argue that this model is a better conceptual fit for games implemented in code, and is uniquely suitable for general MARL APIs.
We then prove that any AEC game can be represented by the standard POSG model, and that any POSG can be represented by an AEC game. To illustrate the importance of the AEC games model, this paper further covers two case studies of meaningful bugs in popular MARL implementations. In both cases, these bugs went unnoticed for a long time. Both stemmed from using confusing models of games, and both would have been made impossible by using an AEC games based API.

The PettingZoo library can be installed via pip install pettingzoo, the documentation is available at https://www.pettingzoo.ml, and the repository is available at https://github.com/Farama-Foundation/PettingZoo.

2 Background and Related Works

Here we briefly survey the state of modeling and APIs in MARL, beginning by briefly looking at Gym's API (Figure 1). This API is the de facto standard in single agent reinforcement learning, has largely served as the basis for subsequent multi-agent APIs, and will be compared to later.

    import gym

    env = gym.make('CartPole-v0')
    observation = env.reset()
    for _ in range(1000):
        action = policy(observation)  # policy supplied by the user
        observation, reward, done, info = env.step(action)
    env.close()

Figure 1: An example of basic usage of Gym
This model is very similar to, and strictly more general than, multi-agent MDPs [Boutilier, 1996], Dec-POMDPs [Bernstein et al., 2002], and Stochastic ("Markov") games [Shapley, 1953]). In a POSG, all agents step together, observe together, and are rewarded together. The full formal definition is presented in Appendix C.1 This model of simultaneous stepping naturally translates into Gym-like APIs, where the actions, observations, rewards, and so on are lists or dictionaries of individual values for agents. This design choice has become the standard for MARL outside of strictly turn-based games like poker, where simultaneous stepping would be a poor conceptual fit [Lowe et al., 2017, Zheng et al., 2017, Gupta et al., 2017, Liu et al., 2019, Liang et al., 2018, Weng et al., 2020. One example of this is shown in Figure 2 with the multi-agent API in RLlib [Liang et al., 2018], where agent-keyed dictionaries of actions, observations and rewards are passed in a simple extension of the Gym API. This model has made it much easier to apply single agent RL methods to multi-agent settings. However, there are two immediate problems with this model: 1. Supporting strictly turn-based games like chess requires constantly passing dummy actions for non-acting agents (or using similar tricks). 2. Changing the number of agents for agent death or creation is very awkward, as learning code has to cope with lists suddenly changing sizes. OpenSpiel and Extensive Form Games In the cases of strictly turn based games where POSG models are poorly suited (e.g. Chess), MARL researchers generally mathematically model the games as Extensive Form Games ("EFG"). The EFG represents games as a tree, explicitly representing every possible sequence of actions as a root to leaf path in the tree. Stochastic aspects of a game (or MARL environment) are captured by adding a "Nature" player (sometimes also called "Chance") which takes actions according to some given probability distribution. 
For a full definition of EFGs, we refer the reader to Osborne and Rubinstein [1994] The EFG model has been successfully used for solving problems involving theory of mind with methods like game theoretic analysis and tree search. However, for application in general MARL problems, three immediate concerns arise with the EFG model: 1. The model, and the corresponding API, is very complex compared to that of POSGs, and isn't suitable for beginners the way Gym is-this environment API is much more complicated than Gym's API or RLLib's POSG API for example. Furthermore, due to the complexity of the EFG model, reinforcement learning researchers don't ubiquitously use it as a mental model of games in the same way that they use the POSG or POMDP model. 2. The formal definition only includes rewards at the end of games, while reinforcement learning often requires frequent rewards. While this is possible to work around in the API implementation, it is not ideal. 3. The OpenSpiel API does not handle continuous actions (a common and important case in RL), though this was a choice that is not inherent to the EFG model. It's also worth briefly noting that some simple strictly turn based games are modeled with the single-agent Gym API, with the environment alternating which agent is controlled, [Ha, 2020]. This approach is unable to reasonably scale beyond two agents due to the difficulties of handling changes in agent order (e.g. Uno), agent death, and agent creation. PettingZoo Design Goals Our development of PettingZoo both as a general library and an API centered around the following goals. Be like Gym In PettingZoo, we wanted to leverage Gym's ubiquity, simplicity and universality. 
This created two concrete goals for us: • Make the API look and feel like Gym, and relatedly make the API pythonic and simple • Include numerous reference implementations of games with the main package Reusing as many design metaphors from Gym as possible will help its massive existing user base to almost instantly understand PettingZoo's API. Similarly, for an API to become standardized, it must support a large collection of useful environments to attract users and for adoption to begin, similar to what Gym did. Be a Universal API If there is to be a Gym-like API for MARL, it has to be able to support all use cases and types of environments. Accordingly, several technically difficult cases exist that have to be carefully considered: • Environments with large numbers of agents • Environments with agent death and creation • Environments where different agents can be chosen to participate in each episode • Learning methods that require access to specialty low level features Two related softer design goals for universal design are ensuring the API is simple enough for beginners to easily use, and making the API easily changeable if the direction of research in the field dramatically changes. Case Studies of Problems With The POSG Model in MARL To supplement the description of the problems with the POSG models described in Section 2.1, we overview problems with basing APIs around these models that could theoretically occur in software games, and then examine real cases of those problems occurring in popular MARL environments. We specifically focus on POSGs here because EFG based APIs are extraordinarily rare (OpenSpiel is the only major one), while POSG based ones are almost universal. POSGs Don't Allow Access To Information You Should Have Another problem with modeling environments using simultaneous actions in the POSG model is that all of an agent's rewards (from all sources) are summed and returned all at once. 
In a multi-agent game though, this combined reward is often the composite reward from the actions of other agentss and the environment. Similarly, you might want to be able to attribute the source of this reward for various learning reasons, or for debugging purposes to find out the origin of your rewards. However, in thinking about reward origins, having all rewards emitted at once proves to be very confusing because rewards from different sources are all combined. Accessing this information via an API modeled after a POSG requires deviating from the model. This would come in the form of returning a 2D array of rewards instead of a list, which would be difficult to standardize and inconvenient for learning code to parse. A notable case where this caused an issue in practice is in the popular pursuit gridworld environment from Gupta et al. [2017], shown in Figure 4. In it, 8 red controllable pursuer must work together to surround and capture 30 randomly moving blue evaders. The action space of each pursuer is discrete (cardinal directions or do nothing), and the observation space is a 7 × 7 box centered around a pursuer (depicted by the orange box). When an evader is surrounded on all sides by pursuers or the game boundaries, each contributing pursuer gets a reward of 5. In pursuit, pursuers move first, and then evaders move randomly, before it's determined if an evader is captured and rewards are emitted. Thus an evader that "should have" been captured is not actually captured. Having the evaders move second isn't a bug, it's just way of adding complexity to the classic genre of pursuer/evader multi-agent environments [Vidal et al., 2002], and is representative of real problems. When pursuit is viewed as an AEC game, we're forced to attribute rewards to individual steps, and the breakdown becomes pursuers receiving deterministic rewards from surrounding the evader, and then random reward due to the evader moving after. 
Removing this random component of the reward (the part caused by the evaders action after the pursuers had already moved), should then lead to superior performance. In this case the problem was so innocuous that fixing it required switching two lines of code where their order made no obvious difference. We experimentally validate this performance improvement in Appendix A.1, showing that on average this change resulted in up to a 22% performance in the expected reward of a learned policy. Bugs of this family could easily happen in almost any MARL environment, and analyzing and preventing them is made much easier when using the POSG model. Because every agent's rewards are summed together in the POSG model, this specific problem when looking at the code was extraordinarily non-obvious, whereas when forced to attribute the reward of individual agents this becomes clear. Moreover if an existing environment had this problem, by exposing the actual sources of rewards to learning code researchers are able to remove differing sources of reward to more easily find and remove bugs like this, and in principle learning algorithms could be developed that automatically differently weighted different sources of reward. POSGs Based APIs Are Not Conceptually Clear For Games Implemented In Code Introducing race conditions is a very easy mistake to make in MARL code in practice, and this occurs because simultaneous models of multi-agent games are not representative of how game code normally executes. This stems from a very common scenario in multi-agent environments where two agents are able to take conflicting actions (i.e. moving into the same space). This discrepancy has to be resolved by the environment (i.e. collision handling); which we call "tie-breaking." Consider an environment with two agents, Alice and Bob, in which Alice steps first and tie-breaking is biased in Alice's favor. 
If such an environment were assumed to have simultaneous actions, then observations for both agents would be taken before either acted, causing the observation Bob acts on to no longer be an accurate representation of the environment if a conflict with biased tie-breaking occurs. For example, if both agents tried to step into the same square and Alice got the square because she was first in the list, Bob's observation before acting was effectively inaccurate and the environment was not truly parallel. This behavior is a true race condition-the result of stepping through the environment can inadvertently differ depending on the internal resolution order of agent actions. In any environment that's even slightly complex, a tremendous number of instances where tie-breaking must be handled will typically occur. In any cases where a single one is missed, the environment will have race conditions that your code will attempt to learn. While finding these will always be important, a valuable tool to mitigate these possibilities is to use an API that treats each agent as acting sequentially, returning new observations afterwards. This entirely prevents the opportunity for introducing race conditions. Moreover, this entire problem stems from the fact that using APIs that model agents as updating sequentially for software MARL environments generally makes more conceptual sense than modeling the updates as simultaneous-unless the authors of environments use very complex parallelization, the environments will actually be updated one agent at a time. It is worth mentioning that this race condition cannot occur in an environment simulated in the physical world with continuous time or a simulated environment with a sufficient amount of observation delay (though most actively researched environment in MARL do not currently have any observation delay). 
In Appendix A.1 we go through a case study of a race condition like this happening in the open source implementation of the social sequential dilemma game environments [Vinitsky et al., 2019]. These are popular multi-agent grid world environments intended to study emergent behaviors for various forms of resource management, and has imperfect tie-breaking in a case where two agents try to act on resources in the same grid while using a simultaneous API. This bug in particular illustrates how extraordinarily difficult making all tie-breaking truly unbiased is in practice even for fairly simple environments. We defer this to the appendix as explaining the specific origin requires a large amount of exposition and diagrams about the rules of the environment. The Agent Environment Cycle Games Model Motivated by the problems with applying the POSG and EFG models to MARL APIs, we developed the Agent Environment Cycle ("AEC") Game. In this model, agents sequentially see their observation, agents take actions, rewards are emitted from the other agents, and the next agent to act is chosen. This is effectively a sequentially stepping form of the POSG model. Modeling multi-agent environments sequentially for APIs has numerous benefits: • It allows for clearer attribution of rewards to different origins, allowing for various learning improvements, as described in Section 4.1. • It prevents developers adding confusing and easy-to-introduce race conditions, as described in Section 4.2. • It more closely models how computer games are executed in code, as described in Section 4.2. • It formally allows for rewards after every step as is required in RL, but is not generally a part of the EFG model, as discussed in Section 2.2. • It is simple enough to serve as a mental model, especially for beginners, unlike the EFG model as discussed in Section 2.2 and illustrated in the definition in Appendix C.2. 
• Changing the number of agents for agent death or creation is less awkward, as learning code does not have to account for lists constantly changing sizes, as discussed in Section 2.1.
• It is the least bad option for a universal API, compared to simultaneous stepping, as alluded to in Section 2.1. Simultaneous stepping requires the use of no-op actions if not all agents can act, which are very difficult to deal with, whereas sequentially stepping agents that could all act simultaneously and queuing up their actions is not especially inconvenient.

In Appendix C.3 we mathematically formalize the AEC games model; however, understanding the formalism in full is not essential to understanding the paper. In Appendix D we further prove that for every AEC game an equivalent POSG exists and that for every POSG an equivalent AEC game exists. This shows that the AEC games model is as powerful a model as the most common current model of multi-agent environments.

One additional conceptual feature of the AEC games model exists that we have not previously discussed, because it does not usually play a role in APIs (see Section 6.4). In the AEC games model, we deviate from the POSG model by introducing the "environment" agent, which is analogous to the Nature agent from EFGs. When this agent acts in the model it indicates the updating of the environment itself, realizing and reacting to submitted agent actions. This allows for a more comprehensive attribution of rewards, causes of agent death, and discussion of games with strange updating rules and race conditions. An example of the transitions for Chess is shown in Figure 5, which serves as the inspiration for the name "agent environment cycle".
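The cycle in Figure 5 can be sketched as a tiny loop in which an explicit environment actor alternates with the players, realizing each submitted action before the next player observes. This is an illustrative toy, not PettingZoo code; all names here are assumptions:

```python
def chess_like_cycle(num_half_moves):
    """Alternate player_1 / env / player_2 / env, as in Figure 5."""
    log = []
    turn = ["player_1", "player_2"]
    for move in range(num_half_moves):
        player = turn[move % 2]
        log.append((player, "observe board, submit a move"))
        # The environment actor realizes the move, emits rewards,
        # and chooses the next agent to act.
        log.append(("env", "apply move, emit rewards, pick next agent"))
    return log

log = chess_like_cycle(2)
assert [actor for actor, _ in log] == ["player_1", "env", "player_2", "env"]
```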
Figure 5: The AEC diagram of Chess.

6 API Design

Basic API

The PettingZoo API is shown in Figure 6, and the strong similarities to the Gym API (Figure 1) should be obvious: each agent provides an action to a step function and receives observation, reward, done, info as the return values. The observation and state spaces also use the exact same space objects as Gym. The render and close methods also function identically to Gym's, showing a current visual frame representing the environment to the screen whenever called. The reset method similarly has identical function to Gym's: it resets the environment to a starting configuration after being played through. PettingZoo really only has two deviations from the regular Gym API: the last and agent_iter methods and the corresponding iteration logic.

The agent_iter Method

The agent_iter method is a generator method of an environment that returns the next agent that the environment will be acting upon. Because the environment is providing the next agent to act, this cleanly abstracts away any issues surrounding changing agent orders, agent generation, and agent death. This generation also parallels the functionality of the next agent function from the AEC games model. This method, combined with one agent acting at a time, allows for the support of every conceivable variation of the set of agents changing.

The last Method

An odd aspect of multi-agent environments is that, from the perspective of one agent, the other agents are part of the environment. Whereas in the single agent case the observation and rewards can be given immediately, in the multi-agent case an agent has to wait for all other agents to act before its observation, reward, done and info can be fully determined. For this reason, these values are given by the last method, and they can then be passed into a policy to choose an action.
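The loop that results from agent_iter and last looks like the following sketch, where a toy two-agent counting game stands in for a real PettingZoo environment. The ToyAECEnv class and its internals are illustrative assumptions; only the shape of the loop mirrors the API described above:

```python
class ToyAECEnv:
    """Toy stand-in: two agents take turns incrementing their own score."""
    def __init__(self):
        self.agents = ["player_0", "player_1"]

    def reset(self):
        self.scores = {a: 0 for a in self.agents}
        self.dones = {a: False for a in self.agents}
        self.steps = 0

    def agent_iter(self, max_iter=6):
        # Yield the next agent to act, alternating between the two agents.
        for i in range(max_iter):
            yield self.agents[i % len(self.agents)]

    def last(self):
        # Observation, reward, done, info for the agent about to act.
        agent = self.agents[self.steps % len(self.agents)]
        return self.scores[agent], 0, self.dones[agent], {}

    def step(self, action):
        agent = self.agents[self.steps % len(self.agents)]
        self.scores[agent] += action
        self.steps += 1

env = ToyAECEnv()
env.reset()
for agent in env.agent_iter():
    observation, reward, done, info = env.last()
    action = 0 if done else 1   # stand-in for policy(observation, agent)
    env.step(action)

assert env.scores == {"player_0": 3, "player_1": 3}
```

The important part is the last three statements of the loop body: observe via last, choose an action, then step; agent_iter decides whose turn it is.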
Less robust implementations would not allow for features like changing agent orders (like the reverse card in Uno).

Additional API Features

The agents attribute is a list of all agents in the environment, as strings. The rewards, dones, infos attributes are agent-keyed dictionaries for each attribute (note that the rewards are the instantaneous ones resulting from the most recent action). These allow access to agent properties at all points on a trajectory, regardless of which agent is currently selected. The action_space(agent) and observation_space(agent) functions return the static action and observation spaces, respectively, for the agent given as an argument. The observe(agent) method provides the observation for a single agent by passing its name as an argument, which can be useful if you need to observe an agent in an unusual context. The state method is an optional method that returns the global state of an environment, as is required for centralized critic methods. The agent_selection method returns the agent that can currently be acted upon, per agent_iter.

The motivation for allowing access to all these lower level pieces of information is to let researchers attempt novel, unusual experiments. The space of multi-agent RL has not yet been comprehensively explored, and there are many perfectly plausible reasons you might want access to other agents' rewards, observations, and so on. For an API to be universal in an emerging field, it inherently has to allow access to all the information researchers could plausibly want. For this reason we allow access to a fairly straightforward set of lower level attributes and methods in addition to the standard higher level API. As we outline in Section 6.5, we've structured PettingZoo in a way such that including these low-level features doesn't introduce engineering overhead in creating environments, as discussed further on the documentation website.
To handle environments where different agents can be present on each reset of an environment, PettingZoo has an optional possible_agents attribute which lists all the agents that might exist in an environment at any point. Environments which generate arbitrary numbers or types of agents will not define a possible_agents list, requiring the user to check for new agents being instantiated as the environment runs. After resetting the environment, the agents attribute becomes accessible and lists all agents that are currently active. For similar reasons, num_agents, rewards, dones, infos, and agent_selection are not available until after a reset.

To handle cases where environments need to have environment agents as per the formal AEC games model, the standard is to put it into agents with the name env and have it take None as its action. We do not require this for all environments by default, as it is rarely used and makes the API more cumbersome, but this is an important feature for certain edge cases in research. This connects to the formal model in that, when this feature is not used, the environment actor from the formal model and the agent actor that acted before it are merged together.

Environment Creation and the Parallel API

PettingZoo environments actually only expose the reset, seed, step, observe, render, and close base methods and the agents, rewards, dones, infos, state and agent_iter base attributes. These are then wrapped to add the last method. Only having environments implement primitive methods makes creating new environments simpler, and reduces code duplication. This has the useful side effect of allowing all PettingZoo environments to be easily changed to an alternative API by simply writing a new wrapper. We've actually already done this for the default environments, adding an additional "parallel API" to them that's almost identical to the RLlib POSG-based API via a wrapper.
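The idea behind such a wrapper can be sketched as follows: a POSG-style step that takes a dictionary of all agents' actions, implemented on top of a sequentially stepping environment by feeding the joint action to it one agent at a time. The classes here are illustrative toys, not PettingZoo's actual implementation:

```python
class SequentialEnv:
    """Toy AEC-style env: agents add to a shared counter one at a time."""
    def __init__(self):
        self.agents = ["a", "b"]
        self.total = 0
        self.rewards = {a: 0 for a in self.agents}

    def step(self, agent, action):
        self.total += action
        self.rewards[agent] = action

class ParallelWrapper:
    """Expose a simultaneous step(actions_dict) over a sequential env."""
    def __init__(self, env):
        self.env = env

    def step(self, actions):
        # Queue the joint action into the underlying env, agent by agent.
        for agent in self.env.agents:
            self.env.step(agent, actions[agent])
        # Report per-agent rewards once the whole "joint step" resolved.
        return dict(self.env.rewards)

env = ParallelWrapper(SequentialEnv())
rewards = env.step({"a": 2, "b": 3})
assert rewards == {"a": 2, "b": 3}
assert env.env.total == 5
```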
We added this secondary API because in environments with very large numbers of agents, it can improve runtime by reducing the number of Python function calls.

Default Environments

Similar to Gym's default environments, PettingZoo includes 63 environments. Half of the included environment classes (MPE, MAgent, and SISL), despite their popularity, existed as unmaintained "research grade" code, have not been available for installation via pip, and have required large amounts of maintenance to run at all before our cleanup and maintainership. We additionally included multiplayer Atari games from Terry and Black [2020], Butterfly environments which are original and of our own creation, and popular classic board and card game environments. All default environments included are surveyed in depth in Appendix B.

Adoption

In its relatively short lifespan, PettingZoo has already achieved a meaningful amount of adoption. It is supported by the following learning libraries: The Autonomous Learning Library [Nota, 2020], AI-Traineree [Laszuk, 2020], and Tianshou [Weng et al., 2020]. Perhaps more significantly than any of this, PettingZoo is already being used to teach in both graduate and undergraduate reinforcement learning classes all over the world.

Conclusion

This paper introduces PettingZoo, a Python library of many diverse multi-agent reinforcement learning environments under one simple API, akin to a multi-agent version of OpenAI's Gym library, and introduces the agent environment cycle game model of multi-agent games. Given the importance of multi-agent reinforcement learning, we believe that PettingZoo is capable of democratizing the field similar to what Gym previously did for single agent reinforcement learning, making it accessible to university scale research and to non-experts. As evidenced by its early adoption into numerous MARL libraries and courses, PettingZoo is moving in the direction of accomplishing this goal.
We're aware of one notable limitation of the PettingZoo API. Games with significantly more than 10,000 agents (or potential agents) will have meaningful performance issues, because each agent has to be stepped one at a time. Efficiently updating environments like this, and inferencing with the associated policies, requires true parallel support, which almost certainly should be done in a language other than Python. Because of this, we view this as a practically acceptable limitation.

We see three directions for future work. The first is additions of more interesting environments under our API (possibly from the community, as has happened with Gym). The second direction we envision is a service to allow different researchers' agents to play against each other in competitive games, leveraging the standardized API and environment set. Finally, we envision the development of procedurally generated multi-agent environments to test how well methods generalize, akin to the Gym procgen environments.

Figure 8: An example of Agent 1 using the "clean" action while facing East. (a) If there are no dirty river tiles in the path of the cleaning beams, the beams will extend to the full length of five tiles. (b) If there is a dirty river tile in the path of a beam, the beam will stop at the tile, changing it to a "clean" river tile. The beams extend to a length of up to five tiles. The "main" beam extends directly in front of the agent, while two auxiliary beams start at the tiles directly next to the agent (one to the left and one to the right) and also extend up to five tiles. A beam stops when it hits a dirty river tile.

A Additional Case Study Information

A.1 Race Conditions in Sequential Social Dilemma Games

The Sequential Social Dilemma Games, introduced in Leibo et al. [2017], are a kind of MARL environment where good short-term strategies for single agents lead to bad long-term results for all of the agents.
New SSD environments, including the Cleanup environment, were introduced in Hughes et al. [2018]. All of these have open source implementations in Vinitsky et al. [2019]. The states of these games are represented by a grid of tiles, where each tile represents either an agent or a piece of the environment. In the Cleanup environment, the environment tiles can be empty tiles, river tiles, and apple tiles. Collecting apple tiles results in a reward for the agent, and the agents must clean the river tiles with a "cleaning beam" for apple tiles to spawn. The cleaning beam extends in front of agents, one tile at a time, until it hits a dirty river tile ("waste") or extends to its maximum length of 5 tiles. Additionally, two more beams extend in front of the agent (one starting in the tile directly to the agent's left, and one from the tile on the right) until each hits a "waste" tile or reaches a length of 5 tiles. The cleaning beam is shown in Figure 8a. Note that while beams stop at "waste" tiles, they will continue to extend past clean river tiles. The agents act sequentially in the same order every turn, including the firing of their beams. In the case of two agents trying to occupy the same space, one is chosen randomly; however, the tie-breaking with regard to the beams is biased, due to a bug.

Consider the setup in Figure 7, where each agent chooses the "clean" action for the next step. This results in Agent 1 firing their cleaning beam first, clearing the close river tile. Next, Agent 2 fires their cleaning beam, and they are able to clean the far river tile because the close tile has already been cleared by Agent 1.

Figure 9: The impact of switching the internal agent order on how the environment evolves. (a) The same setup as in Figure 7, but with the agent labels reversed. (b) The result of both agents performing the "clean" action, with this agent assignment. When both agents clean, Agent 1's action is resolved first, and the main beam stops when it hits the near dirty river tile, so the far river tile is not cleaned. In Figure 7, Agent 2's beam was able to reach the far tile because Agent 1's beam cleaned the near tile first.

However, if we keep the same placement and actions but switch the labels of the agents, we get a different result, seen in Figure 9. Now, Agent 1 fires first and hits the close river tile and can no longer reach the far river tile. In situations like these, the observation the second agent's policy is using to act on is going to be inherently wrong, and if it had the true environment state before acting it would very likely wish to make a different choice. This is a serious class of bug that's very easy to introduce when using parallel action-based APIs, while using AEC games-based APIs prevents the class entirely. In this specific instance, the bug had gone unnoticed for years.

A.2 Reward Defects in Pursuit

B Default Environments

This section surveys all the environments that are included in PettingZoo by default.

Atari

Atari games represent the single most popular and iconic class of benchmarks in reinforcement learning. Recently, a multi-agent fork of the Arcade Learning Environment was created that allows programmatic control and reward collection of Atari's iconic multi-player games [Terry and Black, 2020]. As in the single player Atari environments, the observation is the rendered frame of the game, which is shared between all agents, so there is no partial observability. Most of these games have competitive or mixed reward structures, making them suitable for general study of adversarial and mixed reinforcement learning. In particular, Terry and Black [2020] categorizes the games into 7 different types: 1v1 tournament games, mixed sum survival games (Space Invaders, shown in Figure 11a,
is an example of this), competitive racing games, long term strategy games, 2v2 tournament games, a four-player free-for-all game, and a cooperative game.

Butterfly

Of all the default environments included, the majority are competitive. We wanted to supplement this with a set of interesting graphical cooperative environments. Pistonball, depicted in Figure 11b, is an environment where pistons need to coordinate to move a ball to the left, while only being able to observe a local part of the screen. It requires learning nontrivial emergent behavior and indirect communication to perform well. Knights Archers Zombies is a game in which agents work together to defeat approaching zombies before they can reach the agents. It is designed to be a fast paced, graphically interesting combat game with partial observability and heterogeneous agents, where achieving good performance requires extraordinarily high levels of agent coordination. In Cooperative Pong, two dissimilar paddles work together to keep a ball in play as long as possible. It was intended to be a very simple cooperative continuous control-type task with heterogeneous agents. Prison was designed to be the simplest possible game in MARL, and to be used as a debugging tool. In this environment, no agent has any interaction with the others, and each agent simply receives a reward of 1 when it paces from one end of its prison cell to the other. Prospector was created to be a very challenging game for conventional methods: it has two classes of agents with different goals, action spaces, and observation spaces (something many current cooperative MARL algorithms struggle with), and has very sparse rewards (something all RL algorithms struggle with). It is intended to be a very difficult benchmark for MARL, in the same vein as Montezuma's Revenge.
Classic

Classical board and card games have long been some of the most popular environments in reinforcement learning [Tesauro, 1995, Silver et al., 2016, Bard et al., 2019]. We include all of the standard multiplayer games in RLCard [Zha et al., 2019]: Dou Dizhu, Gin Rummy, Leduc Hold'em, Limit Texas Hold'em, Mahjong, No-limit Texas Hold'em, and Uno. We additionally include all AlphaZero games, using the same observation and action spaces: Chess and Go. We finally included Backgammon, Connect Four, Checkers, Rock Paper Scissors, Rock Paper Scissors Lizard Spock, and Tic Tac Toe to add a diverse set of simple, popular games to allow for more robust benchmarking of RL methods.

MAgent

The MAgent library, from Zheng et al. [2017], was introduced as a configurable and scalable environment that could support thousands of interactive agents. These environments have mostly been studied as a setting for emergent behavior [Pokle, 2018], heterogeneous agents [Subramanian et al., 2020], and efficient learning methods with many agents [Chen et al., 2019]. We include a number of preset configurations, for example the Adversarial Pursuit environment shown in Figure 11d. We make a few changes to the preset configurations used in the original MAgent paper. The global "minimap" observations in the Battle environment are turned off by default, requiring implicit communication between the agents for complex emergent behavior to occur. The rewards in Gather and Tiger-Deer are also slightly changed to prevent emergent behavior from being a direct result of the reward structure.

MPE

The Multi-Agent Particle Environments (MPE) were introduced as part of Mordatch and Abbeel [2017] and first released as part of Lowe et al. [2017]. These are 9 communication oriented environments where particle agents can (sometimes) move, communicate, see each other, push each other around, and interact with fixed landmarks. Environments are cooperative, competitive, or require team play.
They have been popular in research for general MARL methods [Lowe et al., 2017], emergent communication [Mordatch and Abbeel, 2017], team play [Palmer, 2020], and much more. As part of their inclusion in PettingZoo, we converted the action spaces to a discrete space which is the Cartesian product of the movement and communication action possibilities. We also added comprehensive documentation, parameterized any local reward shaping (with the default setting being the same as in Lowe et al. [2017]), and made a single render window which captures all the activities of all agents (including communication), making it easier to visualize.

SISL

We finally included the three cooperative environments introduced in Gupta et al. [2017]: Pursuit, Waterworld, and Multiwalker. Pursuit is a standard pursuit-evasion game [Vidal et al., 2002] where pursuers are controlled in a randomly generated map. Pursuer agents are rewarded for capturing randomly generated evaders by surrounding them on all sides. Waterworld is a continuous control game where the pursuing agents cooperatively hunt down food targets while trying to avoid poison targets. Multiwalker (Figure 11f) is a more challenging continuous control task that is based on Gym's BipedalWalker environment. In Multiwalker, a package is placed on three independently controlled robot legs. Each robot is given a small positive reward for every unit of forward horizontal movement of the package, while they receive a large penalty for dropping the package.

B.1 Butterfly Baselines

When environments are introduced to the literature, it is customary for them to include baselines to provide a general sense of the difficulty of the environment and to provide something to compare against. We do this here for the Butterfly environments that this library introduces for the first time; similar baselines exist in the papers introducing all other environments.
For our baseline learning method we used fully parameter shared PPO [Schulman et al., 2017] from Stable-Baselines3 (SB3) [Raffin et al., 2019]. We use the SuperSuit wrapper library [Terry et al., 2020c] for preprocessing similar to that in Mnih et al. [2015]: we convert the observations to grayscale, resize them to 96x96 images, and use frame-stacking to combine the last four observations. Furthermore, for cooperative_pong_v3 and knights_archers_zombies_v7, we invert the color of alternating agents' observations by subtracting them from the maximum observable value, to improve learning and to differentiate which agent type an observation came from for the parameter shared neural network, per Terry et al. [2020a]. On the prospector_v4 environment, we add an extra channel to the observations which is set to the maximum possible value if the agent belongs to the opposite agent type, else zero. Both these modifications allow us to use parameter-shared PPO across non-homogeneous agents. On prospector_v4 we also pad observation and action spaces as described in Terry et al. [2020a] to allow for learning with a single fully parameter shared neural network. After tuning hyperparameters with RL Baselines3 Zoo [Raffin, 2020], our baseline learns an optimal policy in the Pistonball and Cooperative Pong environments and learns reasonably in the Knights Archers Zombies and Prospector environments, without achieving optimal policies. Plots showing results of 10 training runs of the best hyperparameters are shown in Figure 12. All code and hyperparameters for these runs are available at https://github.com/jkterry1/Butterfly-Baselines.

C Formal Definitions

C.1 Partially Observable Stochastic Games

The formal definition of a POSG is shown in Definition 1. This definition can be viewed as the typical Stochastic Games model [Shapley, 1953] with the addition of POMDP-style partial observability.

Definition 1.
A Partially-Observable Stochastic Game (POSG) is a tuple ⟨S, s_0, N, (A_i)_{i∈[N]}, P, (R_i)_{i∈[N]}, (Ω_i)_{i∈[N]}, (O_i)_{i∈[N]}⟩, where:

• S is the set of possible states.
• s_0 is the initial state.
• N is the number of agents. The set of agents is [N].
• A_i is the set of possible actions for agent i.
• P : S × ∏_{i∈[N]} A_i × S → [0, 1] is the transition function. It has the property that for all s ∈ S and all (a_1, a_2, ..., a_N) ∈ ∏_{i∈[N]} A_i, Σ_{s′∈S} P(s, a_1, a_2, ..., a_N, s′) = 1.
• R_i : S × ∏_{i∈[N]} A_i × S → ℝ is the reward function for agent i.
• Ω_i is the set of possible observations for agent i.
• O_i : A_i × S × Ω_i → [0, 1] is the observation function. It has the property that Σ_{ω∈Ω_i} O_i(a, s, ω) = 1 for all a ∈ A_i and s ∈ S.

C.2 Extensive Form Games

The definition given here follows closely that of Osborne and Rubinstein [1994], to which we refer the reader for a more in-depth discussion of Extensive Form Games and their formal definition.

Definition 2. An Extensive Form Game is defined by:

• A set of agents [N] = {1, 2, ..., N}.
• A "Nature" player denoted as "agent" 0. For convenience, we define N̄ := [N] ∪ {0}. The Nature agent is responsible for describing the random, stochastic, or luck-based elements of the game, as described below.
• A set Ã of action sequences. An action sequence is a tuple ā = (a_1, a_2, ..., a_k) where each element indicates an action taken by an agent. In infinite games, action sequences need not be finite. The set Ã indicates all possible sequences of actions that may be taken in the game (i.e., "histories" of players' moves or agents' actions). It satisfies the following properties:
  - The empty sequence is in the set: ∅ ∈ Ã.
  - If (a_1, ..., a_k) ∈ Ã, then for l < k we also have (a_1, ..., a_l) ∈ Ã.
  - In infinite games, if an infinite sequence (a_1, a_2, ...) satisfies the property that for all k, (a_1, a_2, ..., a_k) ∈ Ã, then (a_1, a_2, ...) ∈ Ã.
For a finite sequence ā = (a_1, ..., a_k), denote by (ā, a) the sequence (a_1, ..., a_k, a). Then the set of actions available in the next turn following a sequence ā is given by A(ā) := {a | (ā, a) ∈ Ã} (for convenience, we define A(ā) = ∅ if ā is infinite). We say a sequence of actions ā is terminal if it is either infinite or if it is a maximal finite sequence, i.e. ā is terminal if and only if A(ā) = ∅. We denote the set of terminal sequences by T := {ā | A(ā) = ∅}.
• A function τ : (Ã \ T) → N̄, which specifies the agent whose turn it is to act next after a given sequence of actions. Note that this is not stochastic, but random player order can be captured by inserting a Nature turn.
• A probability distribution P(ā, ·) for Nature's actions. It is defined only for action sequences for which Nature acts next, i.e. sequences ā ∈ Ã for which τ(ā) = 0. Specifically, P(ā, a) is the probability that Nature takes action a after the sequence of actions ā has occurred.
• For each agent i ∈ [N], a reward function R_i : T → ℝ.

C.3 Agent Environment Cycle Games

As mentioned in Section 5, the stochastic nature of the state transitions is modeled as an "environment" agent, which does not take an action but rather transitions randomly from the current state to a new state according to some given probability distribution. With the stochasticity of state transitions separated out as a distinct "environment" agent, we can then model the transitions of the actual agents deterministically. To this end, each (non-environment) agent i has a deterministic transition function T_i which depends only on the current state and the action taken, while the environment has a stochastic transition function P which transitions to a new state randomly depending on the current state (it may depend on the actions taken previously by the agents, since the current state is determined by these actions).

Definition 3.
An Agent-Environment Cycle Game (AEC Game) is a tuple ⟨S, s_0, N, (A_i)_{i∈[N]}, (T_i)_{i∈[N]}, P, (R_i)_{i∈[N]}, (ℛ_i)_{i∈[N]}, (Ω_i)_{i∈[N]}, (O_i)_{i∈[N]}, ν⟩, where:

• S is the set of possible states.
• s_0 is the initial state.
• N is the number of agents. The agents are numbered 1 through N. There is also an additional "environment" agent, denoted as agent 0. We denote the set of agents along with the environment by N̄ := [N] ∪ {0}.
• A_i is the set of possible actions for agent i. For convenience, we further define A_0 = {∅} (i.e., a single "null action" for environment steps) and A := ∪_{i∈N̄} A_i.
• T_i : S × A_i → S is the transition function for agents. State transitions for agent actions are deterministic.
• P : S × S → [0, 1] is the transition function for the environment. State transitions for environment steps are stochastic: P(s, s′) is the probability that the environment transitions into state s′ from state s.
• R_i ⊆ ℝ is the set of possible rewards for agent i. We assume this is finite.
• ℛ_i : S × N̄ × A × S × R_i → [0, 1] is the reward function for agent i. It is stochastic: ℛ_i(s, j, a, s′, r) is the probability of agent i receiving reward r when agent j takes action a while in state s, and the game transitions to state s′. We also define R := ∪_{i∈[N]} R_i.
• Ω_i is the set of possible observations for agent i.
• O_i : S × Ω_i → [0, 1] is the observation function for agent i. O_i(s, ω) is the probability of agent i observing ω while in state s.
• ν : S × N̄ × A × N̄ → [0, 1] is the next agent function. This means that ν(s, i, a, j) is the probability that agent j will be the next agent permitted to act given that agent i has just taken action a in state s.
ν should assign non-zero probability only when a ∈ A_i.

In this definition, the game starts in state s_0 and the environment agent acts first. Having the environment agent act first allows the first actual agent to act to be determined randomly if desired (choosing the first agent deterministically can be done easily by having the environment simply do nothing in this first step). The game then evolves in "turns" where in each turn an agent i receives an observation ω_i ∈ Ω_i (any given observation ω is seen with probability O_i(s, ω)) and, based on this observation, chooses an action a_i ∈ A_i. The game then transitions from the current state s to a new state s′ according to the transition function. If i ∈ [N], the state transition is deterministically T_i(s, a_i). If i = 0, the new state is stochastic, so state s′ occurs with probability P(s, s′). Then, a new agent i′ is determined according to the "next agent" function, so that i′ is next to act with probability ν(s, i, a_i, i′). The observation ω_{i′} that is received is random, occurring with probability O_{i′}(s′, ω_{i′}). Note that we can allow for the state to transition randomly in response to an agent's action by simply inserting an "environment step" immediately following an agent's action, by setting ν(s, i, a_i, 0) = 1 and allowing the following environment step to transition the state randomly. At every step, every agent j receives the partial reward r′ with probability ℛ_j(s, i, a_i, s′, r′).

D Omitted Proofs

D.1 POSGs are Equivalent to AEC Games

The inclusion of the stochastic ν (next-agent) function in the definition of AEC games allows for capturing many turn-based games with complex turn orders (consider Uno, for instance, where players may be skipped or the order reversed). It is not immediately obvious that this allows for representing games in which agents act simultaneously. However, we show here that in fact AEC games can be used to theoretically model games with simultaneous actions.
To see this, imagine simulating a POSG by way of a "black box" which takes the actions of all agents simultaneously, and then, one by one, feeds them to a purpose-built AEC game whose states are designed to "encode" each agent's action, "queueing" them up over the course of N steps (one for each agent). Once all of the actions have been fed to the AEC game, a single environment step resolves these "queued up" actions all at once. If we design the AEC game in the right way, this total of N + 1 steps (N for queueing the actions, and one for the environment to resolve the joint action) produces an outcome that is identical to the result of a single step in the original POSG. This is formalized below.

Theorem 1. For every POSG, there is an equivalent AEC Game.

Proof of Theorem 1. Let G = ⟨S, N, {A_i}, P, {R_i}, {Ω_i}, {O_i}⟩ be a POSG. To prove this, it will be necessary to show precisely what is meant by "equivalent." We will construct a new AEC Game G_AEC in such a way that for every N + 1 steps of G_AEC the probability distribution over possible states is identical to the state distribution for G after a single step, the distributions over observations received by each agent are identical in G and in G_AEC, and the reward obtained by each agent is the same. We define G_AEC as follows: G_AEC = ⟨S′, N, {A_i}, {T_i}, P′, {ℛ_i}, {Ω_i}, {O′_i}, ν⟩ where

• S′ = S × A_1 × A_2 × ... × A_N. That is, an element of S′ is a tuple (s, a_1, a_2, ..., a_N) where s ∈ S and, for each i ∈ [N], a_i ∈ A_i.
• T_i((s, a_1, a_2, ..., a_i, ..., a_N), a′_i) = (s, a_1, a_2, ..., a′_i, ..., a_N).
• For s̃ = (s, a_1, a_2, ..., a_N) and s̃′ = (s′, a_1, a_2, ..., a_N), we define P′(s̃, s̃′) = P(s, a_1, a_2, ..., a_N, s′). If s̃ and s̃′ are such that their action components differ for any i ∈ [N], then P′(s̃, s̃′) = 0.
• For s̃ = (s, a_1, a_2, ..., a_N), s̃′ = (s′, a_1, a_2, ..., a_N), and r = R_i(s, a_1, a_2, ..., a_N, s′), we let ℛ_i(s̃, 0, ∅, s̃′, r) = 1.
We define R'_i = 0 for all other cases.

• O'_i((s, a_1, a_2, ..., a_N)) = O_i(s).

• ν'((s, a_1, a_2, ..., a_N), i, a_i, j) = 1 if j ≡ i + 1 (mod N + 1), and 0 otherwise.

The AEC game G_AEC begins with agent 1. If the initial state of the POSG G was s_0, then the initial state of G_AEC is (s_0, ·, ·, ..., ·), where all but the first element of the tuple are chosen arbitrarily. Let P_{t,s} be the probability that the POSG G is in state s after t steps. For an action vector a = (a_1, ..., a_N) ∈ A_1 × · · · × A_N, let P'_{t,s,a} be the probability that G_AEC is in state (s, a_1, ..., a_N) after t steps. Finally, let P'_{t,s} = Σ_{a ∈ A_1×···×A_N} P'_{t,s,a}. Trivially, P_{0,s} = P'_{0,s} for all s ∈ S. Now, suppose that after t steps of G, P_{t,s} = P'_{t(N+1),s} for all s ∈ S (our inductive hypothesis). For any joint action a = (a_1, ..., a_N), the state distribution of G at step t + 1 if the joint action a is taken is given by P_{t+1,s'} = P_{t,s} · P(s, a_1, ..., a_N, s'). Further, the reward obtained by agent i for this joint action, if the new state is s', is R_i(s, a_1, ..., a_N, s'). Let s̄ = (s, a_1, ..., a_N) and s̄' = (s', a_1, ..., a_N). Then, in G_AEC, if the agents take actions a_1, a_2, ..., a_N respectively on their turns, the state distribution of G_AEC at step (t + 1)(N + 1) is given by P'_{(t+1)(N+1),s'} = P'_{(t+1)(N+1),s',a} = P'_{t(N+1),s} P'(s̄, s̄'). By the inductive hypothesis, P'_{t(N+1),s} = P_{t,s}, and by the definition of P' in G_AEC, it is clear that P'(s̄, s̄') = P(s, a_1, ..., a_N, s'). Thus, P'_{(t+1)(N+1),s'} = P_{t,s} P(s, a_1, ..., a_N, s') = P_{t+1,s'}. The above establishes a strict equivalence between the state distributions of G at step t and G_AEC at step t(N + 1) for any t. Between steps t(N + 1) + 1 and (t + 1)(N + 1) of G_AEC, each agent in turn receives an observation and then chooses its action.
Specifically, agent i acts at step t(N + 1) + i immediately after receiving an observation ω_i with probability O'_i((s, a_1, ..., a_N)) = O_i(s). Thus, the marginal probability distribution (when conditioned on transitioning into state s) of the observation received by agent i immediately after acting at time t in G is identical to the marginal distribution of the observation received by i immediately before acting at time t(N + 1) + i in G_AEC, i.e., Pr_{G,t}(ω_i = ω | s_t = s) = Pr_{G_AEC, t(N+1)+i}(ω_i = ω | s̄_{t(N+1)} = (s, ·, ..., ·)). The second part of the equivalence is observing that the reward received by an agent i in G after the joint action a is taken is equivalent to the total reward received by agent i in G_AEC across all steps from t(N + 1) + 1 through (t + 1)(N + 1) when the agents take actions a_1, ..., a_N respectively. We can see that this is indeed the case, since the reward received by agent i in G_AEC from step t(N + 1) + 1 through step (t + 1)(N + 1) is 0 at every step but the environment step (t + 1)(N + 1). By definition of R' in G_AEC, R'_i(s̄, 0, ∅, s̄', R_i(s, a_1, ..., a_N, s')) = 1, so the total reward received by any agent i in G_AEC is R_i(s, a_1, ..., a_N, s'). This establishes the second part of our equivalence (that the reward at step t(N + 1) in G_AEC is identical to the reward at step t of G, if the actions are the same). One way to think of this construction is that the actions are still resolved simultaneously via the environment step (which is responsible for the stochastic state transition and the production of rewards); we simply break down the production of the joint action into smaller units whereby each agent chooses and "locks in" its action one step at a time. A toy example to see this equivalence is to imagine a multiplayer card game in which each player has a hand of cards and each turn consists of all players choosing one card from their hand, which is revealed simultaneously with all other players' cards.
An equivalent game has each player in sequence choosing a card and placing it face down on their turn, followed by a final action (the "environment step") in which all players simultaneously reveal their selected cards. At first, it may appear as though the AEC game is in fact more powerful than the POSG, since in addition to being able to handle simultaneous-action games as shown above, it can represent sequential games, including sequential games with complex and dynamic turn orders such as Uno (another aspect of our AEC definition that seems more general than in POSGs is the fact that the reward function in an AEC game is stochastic, allowing rewards to be randomly determined). However, it turns out that a POSG can be used to model a sequential game as well. Handling the stochastic rewards and stochastic next-agent function is non-obvious and is omitted here due to space constraints; the construction and proof can be found in Appendix D.1. We next show how to convert an AEC game to a POSG for the case of deterministic rewards.

Definition 4. An AEC game G = ⟨S, N, {A_i}, {T_i}, P, {R_i}, {Ω_i}, {O_i}, ν⟩ is said to have deterministic rewards if for all i, j ∈ N, all a ∈ A_j, and all s, s' ∈ S, there exists an R*_i(s, j, a, s') such that R_i(s, j, a, s', r) = 1 for r = R*_i(s, j, a, s') (and 0 for all other r).

Notice that an AEC game with deterministic rewards may still depend on the new state s', which can itself be stochastic in the case of the environment (j = 0).

Theorem 2. Every AEC game with deterministic rewards has an equivalent POSG.

Proof. Suppose we have an AEC game with deterministic rewards. In this construction, the new state in the POSG encodes information about which agent is meant to act. State transitions in the POSG therefore encode both the state transition of the original AEC game and the transition for determining the next agent to act. In each step, the state transition depends only on the agent whose turn it is to act (which is included as part of the state).
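To make the action-queueing construction of Theorem 1 concrete, the sketch below (purely illustrative; the state and reward tables are made up for this example) steps a toy two-agent deterministic POSG once, then steps its AEC encoding N + 1 times by writing each action into the composite state tuple before a single resolving environment step. Both routes produce the same next state and rewards.

```python
def posg_step(s, actions, P_det, R):
    """One simultaneous step of a toy deterministic POSG."""
    s_next = P_det[(s,) + actions]
    rewards = tuple(R[i][(s,) + actions + (s_next,)] for i in range(len(actions)))
    return s_next, rewards

def aec_encoded_step(s, actions, P_det, R):
    """N agent turns 'queue' the actions into the composite state
    (s, a_1, ..., a_N); one environment step then resolves the joint
    action, mirroring the G_AEC construction of Theorem 1."""
    state = (s,) + (None,) * len(actions)      # empty action slots
    for i, a in enumerate(actions):            # turn of agent i: T'_i writes a_i
        state = state[:i + 1] + (a,) + state[i + 2:]
    base, queued = state[0], state[1:]         # environment step resolves the queue
    s_next = P_det[(base,) + queued]
    rewards = tuple(R[i][(base,) + queued + (s_next,)] for i in range(len(queued)))
    return s_next, rewards
```

The tuple-surgery line plays the role of T'_i: it changes only the i-th action slot, leaving the underlying state untouched until the final environment step.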
This construction adapts POSGs to be strictly turn-based so that it is able to represent AEC games. We now present the full proof. The state set is S' = S × N × R^N. An element of S' is a tuple (s, i, r), where r = (r_1, r_2, ..., r_N) is a vector of rewards for each agent. The transition function is given below. The reward function is given by R'_i((s, j, r), a, (s', j', r')) = r'_i.

Figure 1: An example of the basic usage of Gym:

    action = policy(observation)
    observation, reward, done, info = env.step(action)

Figure 2: An example of the basic usage of RLlib:

    from ray.rllib.examples.env.multi_agent import MultiAgentCartPole
    env = MultiAgentCartPole()
    observation = env.reset()
    for _ in range(1000):
        actions = policies(agents, observation)
        observation, rewards, dones, infos = env.step(actions)

Figure 3: An example of the basic usage of OpenSpiel.

Figure 4: The pursuit environment from Gupta et al. [2017].

Figure 6: An example of the basic usage of PettingZoo:

    from pettingzoo.butterfly import pistonball_v0
    env = pistonball_v0.env()
    env.reset()
    for agent in env.agent_iter(1000):
        env.render()
        observation, reward, done, info = env.last()
        action = policy(observation, agent)
        env.step(action)
    env.close()

Figure 7: Cleanup, a Sequential Social Dilemma Game from Vinitsky et al. [2019].

We validated the impact of reward pruning experimentally by training parameter-shared Ape-X DQN [Horgan et al., 2018] (the best performing model on pursuit [Terry et al., 2020d]) four times using RLlib [Liang et al., 2017] with and without reward pruning, achieving better results with reward pruning every time and 22.03% more total reward on average (Figure 10a), while PPO [Schulman et al., 2017] learned 16.12% more reward on average with this method (Figure 10b). Saved training logs and all code needed to reproduce the experiments and plots are available in the supplemental materials.

Figure 10a: Results on the pursuit environment with and without pruned rewards, using parameter sharing based on Ape-X DQN.
This shows an average 22.03% improvement from using this method.

Figure 10b: Results on the pursuit environment with and without reward pruning, using parameter sharing based on PPO. Reward pruning increased the total reward by 16.12% on average.

Figure 11: Example environments from each class.

Figure 12: Total reward when learning on each Butterfly environment via parameter-shared PPO.

• For each agent i ∈ [N], a partition H_i of the sequences of actions Ã_i := {ā | τ(ā) = i}. The partition H_i is called the information partition of agent i, and elements of H_i are called information sets. For convenience, define H := ∪_{i∈[N]} H_i. The information sets must obey the additional property that for any information set h ∈ H and any two action sequences ā, ā' ∈ h, we have τ(ā) = τ(ā') and A(ā) = A(ā').

Suppose we have an AEC game G = ⟨S, N, {A_i}, {T_i}, P, {R_i}, {Ω_i}, {O_i}, ν⟩ with deterministic rewards. We define G_POSG = ⟨S', N, {A_i}, P', {R'_i}, {Ω_i}, {O'_i}⟩ as follows.

• S' = S × N

• P'((s, i), a_1, ..., a_N, (s', i')) = ν(s, i, a_i, s', i') · Pr(s' | s, i, a_i), where Pr(s' | s, i, a_i) = 1 if i > 0 and T_i(s, a_i) = s'; P(s, s') if i = 0; and 0 otherwise.

• R'_i((s, j), a, (s', j')) = R*_i(s, j, a_j, s')

Theorem 3. Every AEC game has an equivalent POSG.

Proof. Suppose we have an AEC game G = ⟨S, N, {A_i}, {T_i}, P, {R_i}, {Ω_i}, {O_i}, ν⟩, and let R be the (finite) set of all possible rewards. We define G_POSG = ⟨S', N, {A_i}, P', {R'_i}, {Ω_i}, {O'_i}⟩ as follows.

P'((s, i, r), a_1, a_2, ..., a_N, (s', i', r')) = ν(s, i, a_i, s', i') · Pr(s' | s, i, a_i) · Π_{j∈[N]} R_j(s, i, a_i, s', r'_j)

(see Appendix C.2).

OpenSpiel [Lanctot et al., 2019], a major library with a large collection of classical board and card games for MARL, bases its API on the EFG paradigm; the API is shown in Figure 3:

    import pyspiel
    import numpy as np

    game = pyspiel.load_game("kuhn_poker")
    state = game.new_initial_state()
    while not state.is_terminal():
        if state.is_chance_node():
            # Step the stochastic environment.
            action_list, prob_list = zip(*state.chance_outcomes())
            state.apply_action(np.random.choice(action_list, p=prob_list))
        else:
            # sample an action for the agent
            legal_actions = state.legal_actions()
            observations = state.observation_tensor()
            action = policies(state.current_agent(), legal_actions, observations)
            state.apply_action(action)
    rewards = state.rewards()

…, PyMARL (ongoing) [Samvelyan et al., 2019], RLlib [Liang et al., 2018], Stable Baselines 2 [Hill et al., 2018] and Stable Baselines 3 [Raffin et al., 2019], similar libraries such as CleanRL [Huang et al., 2020] (through SuperSuit [Terry et al., 2020b]), and Tianshou (ongoing) [Weng et al., 2020].

Craig Boutilier. Planning, learning and coordination in multiagent decision processes. In Proceedings.

Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. Neural Information Processing Systems (NIPS), 2017.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Gerald Tesauro. Temporal difference learning and TD-Gammon. Commun. ACM, 38(3):58-68, March 1995. doi: 10.1145/203330.203343.

Rene Vidal, Omid Shakernia, H Jin Kim, David Hyunchul Shim, and Shankar Sastry. Probabilistic pursuit-evasion games: theory, implementation, and experimental evaluation. IEEE Transactions on Robotics and Automation, 18(5):662-669, 2002.

Eugene Vinitsky, Natasha Jaques, Joel Leibo, Antonio Castenada, and Edward Hughes. An open source implementation of sequential social dilemma games. https://github.com/eugenevinitsky/sequential_social_dilemma_games/, 2019. GitHub repository.
Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782):350-354, 2019.

Jiayi Weng, Minghao Zhang, Alexis Duburcq, Kaichao You, Dong Yan, Hang Su, and Jun Zhu. Tianshou. https://github.com/thu-ml/tianshou, 2020.

Daochen Zha, Kwei-Herng Lai, Yuanpu Cao, Songyi Huang, Ruzhe Wei, Junyu Guo, and Xia Hu. RLCard: A toolkit for reinforcement learning in card games. arXiv preprint arXiv:1910.04376, 2019.

Lianmin Zheng, Jiacheng Yang, Han Cai, Weinan Zhang, Jun Wang, and Yong Yu. MAgent: A many-agent reinforcement learning platform for artificial collective intelligence. arXiv preprint arXiv:1712.00600, 2017.

(a) The initial setup with two agents and two river tiles. When the river tiles become dirty, they are shown in a brownish color instead. (b) The result of both agents performing the "clean" action. Both river tiles are cleaned since Agent 1's action is resolved first.

Acknowledgements

J.K. Terry was supported during part of this work by the QinetiQ Fundamental Machine Learning Fellowship. Thank you to Kyle Sang for their contributions to the documentation website. Thank you to Rohan Potdar and Sang Hyun Son for their contributions to the Butterfly benchmarks. Thank you to Deepthi Raghunandan and Christian Clauss for their contributions to testing and continuous integration. Thank you to the PettingZoo community for the numerous bug reports and contributions to the package, especially Ross Allen and their group.
Nolan Bard, Jakob N. Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H. Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, Iain Dunning, Shibl Mourad, Hugo Larochelle, Marc G. Bellemare, and Michael Bowling. The Hanabi challenge: A new frontier for AI research. CoRR, abs/1902.00506, 2019. URL http://arxiv.org/abs/1902.00506.

Daniel S. Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research, 27(4):819-840, 2002. doi: 10.1287/moor.27.4.819.297.
[ "https://github.com/jkterry1/", "https://github.com/thu-ml/tianshou" ]
[ "FedHAP: Federated Hashing with Global Prototypes for Cross-silo Retrieval", "FedHAP: Federated Hashing with Global Prototypes for Cross-silo Retrieval" ]
[ "Meilin Yang \nInstitute for AI Industry Research\nTsinghua-Berkeley Shenzhen Institute (TBSI)\nTsinghua University Shenzhen\nChina\n", "Jian Xu \nInstitute for AI Industry Research\nTsinghua-Berkeley Shenzhen Institute (TBSI)\nTsinghua University Shenzhen\nChina\n", "Yang Liu \nInstitute for AI Industry Research\nTsinghua-Berkeley Shenzhen Institute (TBSI)\nTsinghua University Shenzhen\nChina\n", "Wenbo Ding \nInstitute for AI Industry Research\nTsinghua-Berkeley Shenzhen Institute (TBSI)\nTsinghua University Shenzhen\nChina\n", "Meilin Yang \nTsinghua University\nBeijingChina\n", "Jian Xu \nTsinghua University\nBeijingChina\n", "Yang Liu \nTsinghua University\nBeijingChina\n" ]
[ "Institute for AI Industry Research\nTsinghua-Berkeley Shenzhen Institute (TBSI)\nTsinghua University Shenzhen\nChina", "Institute for AI Industry Research\nTsinghua-Berkeley Shenzhen Institute (TBSI)\nTsinghua University Shenzhen\nChina", "Institute for AI Industry Research\nTsinghua-Berkeley Shenzhen Institute (TBSI)\nTsinghua University Shenzhen\nChina", "Institute for AI Industry Research\nTsinghua-Berkeley Shenzhen Institute (TBSI)\nTsinghua University Shenzhen\nChina", "Tsinghua University\nBeijingChina", "Tsinghua University\nBeijingChina", "Tsinghua University\nBeijingChina" ]
[]
Deep hashing has been widely applied in large-scale data retrieval due to its superior retrieval efficiency and low storage cost. However, data are often scattered in data silos with privacy concerns, so performing centralized data storage and retrieval is not always possible. Leveraging the concept of federated learning (FL) to perform deep hashing is a recent research trend. However, existing frameworks mostly rely on the aggregation of the local deep hashing models, which are trained by performing similarity learning with local skewed data only. Therefore, they cannot work well for non-IID clients in a real federated environment. To overcome these challenges, we propose a novel federated hashing framework that enables participating clients to jointly train the shared deep hashing model by leveraging the prototypical hash codes for each class. Globally, the transmission of global prototypes with only one prototypical hash code per class will minimize the impact of communication cost and privacy risk. Locally, the use of global prototypes is maximized by jointly training a discriminator network and the local hashing network. Extensive experiments on benchmark datasets are conducted to demonstrate that our method can significantly improve the performance of the deep hashing model in federated environments with non-IID data distributions.
10.48550/arxiv.2207.05525
[ "https://arxiv.org/pdf/2207.05525v1.pdf" ]
250,451,135
2207.05525
8bcfbc13f58e6d24d0e3927a47acc2f7b6535492
FedHAP: Federated Hashing with Global Prototypes for Cross-silo Retrieval

Meilin Yang, Institute for AI Industry Research, Tsinghua-Berkeley Shenzhen Institute (TBSI), Tsinghua University, Shenzhen, China
Jian Xu, Institute for AI Industry Research, Tsinghua-Berkeley Shenzhen Institute (TBSI), Tsinghua University, Shenzhen, China
Yang Liu, Institute for AI Industry Research, Tsinghua-Berkeley Shenzhen Institute (TBSI), Tsinghua University, Shenzhen, China
Wenbo Ding, Institute for AI Industry Research, Tsinghua-Berkeley Shenzhen Institute (TBSI), Tsinghua University, Shenzhen, China
Meilin Yang, Tsinghua University, Beijing, China
Jian Xu, Tsinghua University, Beijing, China
Yang Liu, Tsinghua University, Beijing, China

ABSTRACT

Deep hashing has been widely applied in large-scale data retrieval due to its superior retrieval efficiency and low storage cost. However, data are often scattered in data silos with privacy concerns, so performing centralized data storage and retrieval is not always possible. Leveraging the concept of federated learning (FL) to perform deep hashing is a recent research trend. However, existing frameworks mostly rely on the aggregation of the local deep hashing models, which are trained by performing similarity learning with local skewed data only. Therefore, they cannot work well for non-IID clients in a real federated environment. To overcome these challenges, we propose a novel federated hashing framework that enables participating clients to jointly train the shared deep hashing model by leveraging the prototypical hash codes for each class. Globally, the transmission of global prototypes with only one prototypical hash code per class will minimize the impact of communication cost and privacy risk. Locally, the use of global prototypes is maximized by jointly training a discriminator network and the local hashing network.
Extensive experiments on benchmark datasets are conducted to demonstrate that our method can significantly improve the performance of the deep hashing model in federated environments with non-IID data distributions.

1 INTRODUCTION

With the explosive increase of data generated from different institutions, achieving fast and storage-saving information retrieval across multiple institutions has attracted much attention in recent years. Deep hashing is a widely used method that aims to reduce storage cost and improve retrieval efficiency by encoding data points into non-invertible and compact binary hash codes with deep neural networks (DNNs) [8, 14]. Most existing deep hashing methods assume that data storage is centralized. For example, TDHPPIR [38] is an efficient privacy-preserving image retrieval method based on deep hashing, in which data owners upload the encrypted data set to the central cloud server, which provides data retrieval services via the indexes established from the database. However, centralized storage of private client data is not always feasible due to space limitations, increasing privacy concerns, and tough data protection regulations such as the GDPR. Therefore, it is increasingly desirable to learn to hash over distributed data with privacy protection.
In recent years, federated learning (FL) [12] has emerged as a promising paradigm for collaborative learning with privacy preservation. The original FL framework proposed the popular FedAvg algorithm. In FedAvg, the selected clients first locally perform multiple training epochs by stochastic gradient descent (SGD), and then transmit their model updates to a central server, where the model updates are aggregated to obtain a new global model. Previous works [36, 41] have explored the combination of deep hashing with FL and demonstrated their effectiveness, but these works simply rely on model aggregation to achieve global hash learning, which does not sufficiently address the non-IID nature of data in federated environments. For example, in a patient hashing problem across multiple hospitals, the distribution of patients from an oncology hospital and a psychiatric hospital can be quite different, as can the data quantity. Since the core of deep hashing models is data similarity learning, skewed and highly imbalanced local data distributions can result in biased local models that cannot be sufficiently corrected by the FedAvg algorithm. To tackle the above issues, in this paper we introduce a federated deep hashing method with global prototypes (FedHAP) for cross-silo retrieval. In the FedHAP framework, each client in the federation jointly trains the hashing model using its own local data, with the global prototypical hash codes of each class guiding the local training. Specifically, the global prototypes are generated at the server by aggregating the class-averaged hash codes from clients. Then the prototypes, together with the global model, are broadcast to clients for local training.
To better utilize the global prototypes locally, we not only design similarity learning algorithms with supervision from the global hash codes, but also design a discriminator network to ensure the distribution consistency between the locally generated binary hash codes and the global ones. In this way, we maximize the usage of global prototypes to enhance the local training of each client without exchanging sample-level information, thereby significantly improving the performance of the federated hashing model while preserving the privacy of local data. During the retrieval process, the hash code of a query generated by the trained hashing model is sent to each client, and the best matching data are retrieved by finding the data with the shortest similarity distance. As an example demonstrating the efficacy of our proposed FedHAP, we compare the hash codes generated by our approach and the naive FedAvg approach in Fig. 1, where each data point is visualized using t-SNE [29]. We observe that the hash codes learned by FedHAP exhibit favorable intra-class compactness and inter-class separability compared with the baseline that adopts deep supervised hashing using pairwise labels (DPSH) [18] with FedAvg [17] for hashing. It is worth noting that our approach works especially well for under-represented classes with fewer samples, as demonstrated by the clear discrimination of the two categories in blue and purple. The major contributions of this paper are summarized as follows:

• We present a novel federated supervised hashing method named FedHAP for efficient and effective cross-silo retrieval. This method integrates hashing learning with federated learning and takes advantage of global prototypes to enhance the performance of the hashing model with minimal impact on privacy.
• The global prototypes are leveraged in both hashing learning and our adversarial learning to enforce semantic consistency between local hash codes and global prototypes, which aligns the locally learned distributions of hash codes and thus facilitates the global model aggregation.

• Experimental results on three benchmark datasets demonstrate that our approach outperforms existing methods and achieves significantly improved mAPs in both IID and non-IID scenarios. Furthermore, we verify the efficacy of each component in our proposed method by ablation experiments.

2 RELATED WORK

2.1 Deep Hashing

Hashing functions have been attractive due to their irreversible nature, which can map sensitive data into compact binary codes. Analysis has shown that the learned binary hash codes lead to less memory consumption and short query times [3]. Existing hashing methods can be generally organized into two categories: unsupervised hashing and supervised hashing. Unsupervised hashing methods [6, 13, 21, 33] learn hashing functions that map input data points into binary codes by exploiting the similarity distances of samples. Supervised hashing methods [1, 5, 18, 22, 26, 28, 37] aim to further exploit available supervised information (such as labels or the semantic affinities of training data) to improve performance. In recent years, supervised hashing has attracted more attention as it can achieve better accuracy than unsupervised hashing. Deep convolutional neural network based hashing methods [5, 15, 35, 39, 40] have been proposed to learn data representations in binary codes that preserve the locality and similarity of data samples. By coupling data feature extraction and binary code learning, these methods have been shown to greatly improve retrieval accuracy.
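The short-query-time property of binary codes comes from comparing them with Hamming distance, which for ±1 codes reduces to a dot product. A minimal NumPy sketch (the function name and data layout are our own illustration, not from any cited method):

```python
import numpy as np

def hamming_retrieve(query_code, db_codes, top_k=1):
    """Return indices of the top_k database codes closest to the query
    in Hamming distance; codes are +/-1 vectors of equal length L.
    For +/-1 codes, Hamming distance = (L - <q, b>) / 2."""
    dists = (db_codes.shape[1] - db_codes @ query_code) / 2
    return np.argsort(dists, kind="stable")[:top_k]
```

Because the distance is a single matrix-vector product, a query against millions of stored codes is a cheap vectorized operation.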
It enables multiple clients to collectively learn a model under the coordination of a central server while keeping the data decentralized and protecting the data privacy of each client. FedAvg [17] is a popular optimization method in federated learning but relies on the premise that each local solver is a copy of the same stochastic process (due to the IID assumption). FedProx [24] is presented as a generalization and re-parametrization of Fe-dAvg, and it achieves stable and accurate convergence behavior compared to FedAvg in highly heterogeneous settings. To address the non-IID nature of FL, various algorithms have been proposed since then [9,16,19,25,30,31]. FedProc [25] designs a local network architecture and a contrastive loss to regulate the training of local models with the class-wise logits transmission. However, unprocessed raw intermediate logits may cause leakage of the original data and local data distribution. Knowledge Distillation based federated frameworks [19,23] have recently emerged to tackle the non-IID issue by refining the server model using aggregated knowledge from heterogeneous users, which may need a proxy dataset or require the client to provide the server with the label distribution. Federated Hashing Recently, the framework of federated learning has been applied to hashing for various tasks [36,41]. For example, Federated Patient Hashing (FPH) [36] has been proposed to collaboratively train a patient information retrieval model stored in a shared memory while keeping all the patient-level information in local clients. Federated Cross-Modal Retrieval (FedCMR) [41] is the first attempt to combine federated learning with cross-modal retrieval. 
While the abovementioned methods have certainly proved the feasibility of federated hashing to some extent, they did not exploit the relationship between the global hashing model and the local hashing model in the FL framework, nor did they sufficiently address the prominent non-IID problem in the federated environments, such as label distribution skewness. Different from these methods, our approach design a mechanism to extract the statistics of class-wise global prototypes, and symbolic operations are also performed on the class-wise global prototypes, which greatly avoids the leakage of data and label distribution. The class-wise global prototypes will participate the local training process and alleviate the model drift issue. Furthermore, illuminated by adversarial learning, we design a discriminator network to further bridge the global and local generation of hash codes. THE PROPOSED METHOD In this section, we first present the problem definitions. Then, we introduce the details of our FedHAP approach and provide analysis on each of the design components. Problem Formulation Without losing generality, we assume there are clients whose private data are denoted as D = {x } =1 , where x ∈ R 1× is the input features, is the number of data in client , and is the dimension of features. All clients collaboratively train a hashing model with parameters , which maps the input features x into output b ∈ R 1× , where is the number of bits in Hamming space. The hash code of x is denoted as h and can be gained by h = (b ), where denotes the element-wise sign function. Let H = {h } =1 , the problem to tackle here is for clients to collaboratively train a deep hashing model without exposing their data D , so that the similarities among all data sample pairs are preserved as follows: min ∑︁ =1 L ℎ ℎ (D , H ).(1) where L ℎ ℎ denotes the hashing loss which we will explain next. 
3.2 Learning the Hashing Model

The deep hashing model is comprised of a feature learning module and a hash learning module. The feature learning module is a deep convolutional neural network that extracts representations from the input data, which are then fed into the hash learning module. The hash learning module consists of multiple fully connected layers that produce b_i.

Similarity preserving loss (L_tri). To ensure that the hash codes of similar data pairs are pulled together and the codes of dissimilar data pairs are pushed away from each other, we choose a widely used triplet ranking loss [32] to preserve the similarity structure between data pairs. To explain the triplet loss briefly, suppose there is a set of triplets (b_i, b_i^+, b_i^-). Here b_i^+ is a positive pair of b_i, indicating that b_i and b_i^+ are of the same class, and b_i^- is a negative pair of b_i, indicating that b_i and b_i^- are of different classes. The similarity between the codes can be evaluated using a general distance metric δ(·,·), e.g., cosine distance; for example, δ(b_i, b_i^+) computes the dissimilarity of the sample pair. The triplet loss mentioned above can be formulated as follows:

L_tri^local = Σ_{i=1}^{N_k} max( δ(b_i, b_i^+) − δ(b_i, b_i^-) + α, 0 ),    (2)

where α is the margin parameter. Note that this loss can only be computed locally for each client, as denoted by the superscript on L_tri^local, since clients are not supposed to exchange raw data. To enhance hash learning by leveraging other clients' prototypes, we further consider a novel global triplet loss between local data and the global prototypical hash codes of each class, denoted L_tri^global; see Eq. (3). Here, the global prototypes are collectively denoted as Ĥ = {ĥ_c}_{c=1}^{C}, where C is the number of classes and ĥ_c is the prototypical hash code for class c. In Eq. (3), ĥ^+ denotes the global hash code that is of the same class as b_i, and ĥ^- denotes a global hash code that is of a different class from b_i. We will discuss how to generate Ĥ in the following.
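The local triplet loss of Eq. (2) can be sketched as follows (an illustrative NumPy version; the cosine distance and the default margin value are assumptions, and the anchor/positive/negative triplets are taken as given). The same function gives the global variant when the positives and negatives are drawn from the global prototypes instead of local samples.

```python
import numpy as np

def cosine_dist(u, v):
    """Cosine distance, one possible choice for the metric delta."""
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def triplet_loss(anchors, positives, negatives, margin=0.5):
    """Sum over triplets of max(d(b, b+) - d(b, b-) + margin, 0),
    pulling same-class codes together and pushing different-class
    codes apart by at least the margin."""
    total = 0.0
    for b, bp, bn in zip(anchors, positives, negatives):
        total += max(cosine_dist(b, bp) - cosine_dist(b, bn) + margin, 0.0)
    return total
```

A triplet contributes zero loss once the negative is at least `margin` farther from the anchor than the positive is.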
$$\mathcal{L}^i_g = \sum_{j=1}^{n_i} \max\big( d(b^i_j, \hat{h}^+) - d(b^i_j, \hat{h}^-) + \mu,\; 0 \big). \qquad (3)$$

$$\mathcal{L}^i_s = \mathcal{L}^i_t + \mathcal{L}^i_g. \qquad (4)$$

At each round, the global hash codes $\hat{H}$ are computed by $\hat{H} = \mathrm{sign}(\bar{B})$, where $\bar{B} = \{\bar{b}_c\}_{c=1}^{C}$ is obtained by aggregating the class-level vectors from all clients, as illustrated in Fig. 3. First, each client aggregates its class prototypical codes by Eq. (5):

$$\bar{b}^i_c = \frac{1}{n_{c,i}} \sum_{j=1}^{n_{c,i}} b^i_{c,j}, \qquad (5)$$

where $n_{c,i}$ indicates the number of data samples of class $c$ on client $i$ and $b^i_{c,j}$ represents the output feature vector of the $j$-th data point of class $c$. Next, the clients send their class-level codes to the server, which performs the aggregation:

$$\hat{h}_c = \mathrm{sign}(\bar{b}_c) = \mathrm{sign}\Big( \frac{1}{m} \sum_{i=1}^{m} \bar{b}^i_c \Big). \qquad (6)$$

Quantization loss ($\mathcal{L}^i_q$). The binary constraints on $b^i_j$ require thresholding the network outputs (e.g., with a sign function), which makes it difficult to train the network with backpropagation. To simplify the optimization during hash learning, the common way is to solve a relaxed problem by dropping the sign function, which introduces a non-negligible quantization error. To overcome this problem, we introduce an approximation loss for the learned hash codes in Eq. (7):

$$\mathcal{L}^i_q = \sum_{j=1}^{n_i} \big\| h^i_j - b^i_j \big\|^2. \qquad (7)$$

Adversarial loss ($\mathcal{L}^i_a$). In the federated scenario with non-IID distributions, the consistency of the hash codes generated by different clients cannot be guaranteed. In order to preserve the consistency of the local and global distributions of hash codes, we further introduce a local discriminator network for each client with trainable parameters $\phi$. The discriminator network was originally used in adversarial learning to identify whether the data come from a real dataset or a neural network [7], and its output is the probability that the input data come from the real dataset. In this paper, we treat the global hash codes $\hat{H} = \{\hat{h}_c\}_{c=1}^{C}$ as the real dataset and the hash codes $H^i = \{h^i_j\}_{j=1}^{n_i}$ generated by the local hashing model as the generated data. We utilize the local labels $Y^i = \{y^i_j\}_{j=1}^{n_i}$ and global labels $\hat{Y} = \{\hat{y}_c\}_{c=1}^{C}$ as constraints on $H^i$ and $\hat{H}$ to realize the discrimination of hash codes of a specific class.
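The two-stage prototype aggregation of Eqs. (5)-(6), client-side class averaging followed by server-side averaging and binarization, can be sketched as follows (function names are ours; tiny 2-bit vectors stand in for the real $K$-bit outputs):

```python
def local_class_prototype(vectors):
    # Eq. (5)-style step: average the real-valued outputs of one class on one client.
    n = len(vectors)
    return [sum(v[k] for v in vectors) / n for k in range(len(vectors[0]))]

def global_prototype(client_prototypes):
    # Eq. (6)-style step: the server averages the per-client prototypes, then binarizes.
    m = len(client_prototypes)
    avg = [sum(p[k] for p in client_prototypes) / m
           for k in range(len(client_prototypes[0]))]
    return [1 if x >= 0 else -1 for x in avg]

# Two clients, each holding a few real-valued outputs for the same class:
client1 = [[0.9, -0.2], [0.7, -0.6]]
client2 = [[0.4, 0.1], [0.8, -0.5]]
protos = [local_class_prototype(client1), local_class_prototype(client2)]
print(global_prototype(protos))  # [1, -1]
```

Only the per-class averages and the final ±1 codes ever leave a client, never individual sample vectors, which is the basis of the privacy argument made later in the paper.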
Specifically, we use the one-hot vector of the class label as extra information and concatenate it with $H^i$/$\hat{H}$ as the input vector of $D_\phi$; the output is the probability score (between 0 and 1) that the input data come from the global prototypes, as shown in Fig. 4. Specifically, the score approaches "1" when the input vector is classified as a global prototype, and vice versa. We define the adversarial loss $\mathcal{L}^i_a$ as a cross-entropy loss. The adversarial loss $\mathcal{L}^i_a$ of client $i$ can be written as follows:

$$\mathcal{L}^i_a = -\Big[ \frac{1}{n_i} \sum_{j=1}^{n_i} \log\big(1 - D_\phi(h^i_j \mid y^i_j)\big) + \frac{1}{C} \sum_{c=1}^{C} \log D_\phi(\hat{h}_c \mid \hat{y}_c) \Big], \qquad (8)$$

where the first term is the cross-entropy loss for the local dataset, followed by the cross-entropy loss for the global prototypes.

Figure 2: The framework of our proposed FedHAP.

Overall local objective. The overall local loss function $\mathcal{L}^i_{hash}$ is obtained by combining Eq. (4), Eq. (7) and Eq. (8), formulated as follows:

$$\mathcal{L}^i_{hash} = \mathcal{L}^i_s + \alpha \cdot \mathcal{L}^i_q + \beta \cdot \mathcal{L}^i_a, \qquad (9)$$

where $\alpha$ and $\beta$ are two penalty parameters that balance the different loss components.

FedHAP Framework and Algorithm

For all clients to learn the above hashing model collaboratively, we propose FedHAP, which is shown in Fig. 2 and Algorithm 1. The general framework mainly consists of a central server and $m$ clients. First, each client trains the deep hashing model and the discriminator network with its local data and the global prototypes, uploading the updated local model parameters $\theta^i$, $\phi^i$ and the locally generated prototypical codes $\bar{b}^i = \{\bar{b}^i_c\}_{c=1}^{C}$ to the central server.
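Two pieces of the adversarial component above lend themselves to a small sketch: building the label-conditioned discriminator input (one-hot label concatenated with the code) and the per-sample cross-entropy terms of Eq. (8). The functions and numbers below are our illustrative stand-ins, not the paper's implementation:

```python
import math

def concat_label(code, label, num_classes):
    # Condition the discriminator on the class: one-hot label + hash code.
    one_hot = [1.0 if c == label else 0.0 for c in range(num_classes)]
    return one_hot + list(code)

def bce_term(prob_global, is_global):
    # Cross-entropy term given the discriminator's score prob_global:
    # -log D(.) for global prototypes, -log(1 - D(.)) for local codes.
    return -math.log(prob_global) if is_global else -math.log(1.0 - prob_global)

x = concat_label([1, -1, 1], label=2, num_classes=3)
print(x)  # [0.0, 0.0, 1.0, 1, -1, 1]
# A score of 0.9 is cheap if the input really is a global prototype,
# and expensive if it is a local code that fooled the discriminator:
print(bce_term(0.9, True), bce_term(0.9, False))
```

Training the discriminator against these terms pushes the per-class distributions of local codes toward the global prototypes, which is how the method counteracts client drift under non-IID data.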
The central server is responsible for coordinating the clients in the model training process by aggregating the $\theta^i$, $\phi^i$ and $\bar{b}^i$ received from the clients and then delivering the aggregated models $\theta$, $\phi$ and prototypes $\hat{H}$ back to them for the next training round.

Local training procedure. During each local training step, the original input data are converted into low-dimensional features by the convolutional neural network and the hash learning module, which are then used to compute the similarity preserving loss $\mathcal{L}^i_s$ with the guidance of the global prototypes. Next, the feature embedding is converted into binary hash codes using the sign function, with quantization loss $\mathcal{L}^i_q$. Furthermore, the local and global hash codes, together with their semantic labels, are simultaneously fed into the discriminator network to generate the corresponding adversarial loss $\mathcal{L}^i_a$. It is worth noting that the parameters of the hashing network and the discriminator network rely on each other in the training process, and both training phases update all model parameters.

Privacy and communication concerns. Our proposed FedHAP requires the transmission of per-class prototypes between the clients and the server. However, this raises neither higher communication costs nor privacy risks, since the prototypes represent only an averaged statistic over all local data in a low-dimensional representation, such as a 12-bit vector containing only -1 or 1. Similar methods have also been investigated in the literature [25, 34], where some statistics or prior knowledge are transmitted, under the premise of privacy protection, to facilitate the learning of federated models.

EXPERIMENTS

In this section, we conduct extensive experiments to verify the effectiveness of our proposed approach and compare it with other state-of-the-art methods in federated environments, considering both IID and non-IID scenarios.
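Before turning to the experiments, the server-side parameter aggregation used in each communication round (plain FedAvg-style averaging of the uploaded $\theta^i$ and $\phi^i$) can be sketched as follows; the flat parameter lists are a simplification of real tensor-valued model weights:

```python
def fedavg(client_params):
    # Average parameters element-wise across clients, one flat list per client.
    m = len(client_params)
    return [sum(p[k] for p in client_params) / m
            for k in range(len(client_params[0]))]

# One communication round: two clients upload parameters; the server averages them.
theta_clients = [[0.2, -0.4, 1.0], [0.4, -0.2, 0.6]]
theta_global = fedavg(theta_clients)
print(theta_global)  # approximately [0.3, -0.3, 0.8]
```

The prototype aggregation of Eq. (6) follows the same averaging pattern, except that the server additionally binarizes the averaged vectors with the sign function before broadcasting them.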
We evaluate all methods on three benchmark datasets, NUS-WIDE [4], MIRFlickr25K [10] and MS-COCO [20], which are widely used in the data retrieval area. In addition, we design a set of ablation experiments to further verify the individual efficacy of the different loss components.

Datasets

1) NUS-WIDE contains 269,648 web images. We use the images associated with the 21 most frequent concepts, each of which is associated with at least 5,000 images, resulting in a total of 195,834 images. A total of 2,100 data pairs are selected randomly as the query set, and the remainder of the dataset is used as the retrieval database.

2) MIRFlickr25K is a commonly used dataset consisting of 25,000 images downloaded from the social photography site Flickr.com. In our experiment, we select 20,015 data points in total, among which 10,000 pictures are randomly selected for training. From the remaining data, 2,000 data pairs are selected randomly as the query set and the rest is used as the retrieval database.

3) MS-COCO: We randomly select 4,992 pairs for the query set and leave the remaining pairs as the retrieval database. In addition, 10,000 pairs are randomly selected from the retrieval database for training.
The feature extraction networks of the baselines are derived from CNN-F [2], which has been pre-trained on the ImageNet dataset [27], in order to extract a 4,096-dimensional representation vector for each data point; the discriminator network is a two-layer feed-forward neural network. The experiments are conducted in both IID and non-IID scenarios, where the training settings of each scenario are identical for all baselines and our method. We consider a federated learning setup with $m = 20$ participating clients. For the IID scenarios, we simulate IID data distributions by randomly and evenly partitioning the shuffled training sets across the 20 clients, so that each client is assigned data from a uniform distribution. For the non-IID scenarios, as in previous works [17, 24], the data are sorted by class and each client receives a data shard that contains samples belonging to a randomly selected set of classes. It is worth noting that this partition method results in a deeper heterogeneity of data samples across clients than the Dirichlet-distribution-based partition of [16]. In our experiments, the numbers of global training rounds and local training epochs are set to 100 and 5, respectively. In the non-IID scenarios, the number of data categories owned by each client is set to 3. Adam [11] is employed as the local optimizer, and the initial learning rate is set to 0.005. The detailed settings for each dataset are summarized in Table 1. To find a good combination of hyper-parameters for our method, we conducted a sensitivity analysis and achieved the best results with $\alpha = 0.05$ and $\beta = 0.1$.

Evaluation Metric

Hamming ranking is a classical retrieval protocol used to evaluate the performance of image retrieval tasks. In our experiments, we evaluate the retrieval quality based on mean average precision (mAP).
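A minimal, self-contained sketch of the mAP computation on ranked retrieval lists (function names and the toy relevance lists are ours):

```python
def average_precision(ranked_relevance):
    # AP over one ranked list: mean of precision@k taken at each relevant position.
    hits, precisions = 0, []
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(all_rankings):
    # mAP: average AP over all queries.
    return sum(average_precision(r) for r in all_rankings) / len(all_rankings)

# Two queries; 1 marks a retrieved item sharing a label with the query.
rankings = [[1, 0, 1, 0], [0, 1, 1, 0]]
print(round(mean_average_precision(rankings), 4))  # 0.7083
```

In the Hamming ranking protocol, each query's list is obtained by sorting the database by Hamming distance to the query code, so mAP rewards methods that place same-label items at small code distances.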
As an intuitive illustration, the standard evaluation metrics, including precision-recall curves (PR), recall curves with different numbers of top retrieved samples, and precision curves with different numbers of top retrieved samples on the MIRFlickr25K dataset, are also provided. For a fair comparison, all methods use identical training and testing sets.

Performance Comparison

We validate the effectiveness of data retrieval in federated environments as well as the generality of our approach across different databases. Our approach is compared with the baselines mentioned above using mAP results, precision-recall curves (PR), precision curves and recall curves, which are shown in Table 2, Table 3, Fig. 5 and Fig. 6. In terms of the mAP results, it can be seen that, regardless of the IID or non-IID scenarios, our approach achieves the best performance on all three databases.

Results in the IID scenarios. Compared with the existing methods, our method improves the mAP results by approximately 1-2%, 4-6% and 5-7% under the constraints of hash codes with different numbers of bits on NUS-WIDE, MIRFlickr25K and MS-COCO, respectively. Moreover, it is noticeable that the improvement in mAP on MS-COCO is much larger than that on the other two datasets. Considering that MS-COCO has the largest number of data categories, so that the average amount of per-class samples at each client is much smaller, it can be concluded that our proposed method can overcome the scarcity of local data by leveraging global prototypes and reduce the risk of local over-fitting, thus achieving improved model performance.

Results in the non-IID scenarios. In the non-IID scenarios, we also achieve significant improvements of 8-9%, 2-4% and 1-2% in average mAP for different bits on the above three datasets, respectively. An interesting phenomenon is that the performance boost on MS-COCO in the non-IID scenario is slightly reduced.
This may be caused by the fact that the total number of data categories in MS-COCO exceeds that of the other two datasets, which leads to an increased degree of non-IID-ness across clients. The extensive retrieval performance results on MIRFlickr25K with regard to precision-recall curves (PR), precision curves and recall curves with respect to different numbers of top returned samples in Fig. 5 and Fig. 6 show that FedHAP outperforms the baseline methods impressively, which is desirable for practical precision-first retrieval. Specifically, FedHAP achieves higher precision when the recall levels are low or the number of retrieved samples is small. In conclusion, these results demonstrate that learning the hash function using our proposed method can boost the retrieval performance remarkably in federated environments.

Analysis of Ablation Experiments

To better demonstrate our contributions, we design a set of ablation experiments to verify the utility of the different components in our FedHAP framework. The ablation variants are defined as follows:

FedHAP-1: FedHAP-1 is designed based on FedHAP without the participation of the global prototypes, which means that the training of the discriminator network and the global prototypes' participation in the similarity preserving loss are both removed. The remainder of the method is the same as FedHAP.

FedHAP-2: FedHAP-2 is built based on the design of FedHAP, in which the global prototypes do not participate in the calculation of the similarity preserving loss but still participate in the training of the discriminator network; that is, we only apply the global prototypes to the adversarial module.

FedHAP-3: FedHAP-3 is designed based on FedHAP. Contrary to FedHAP-2, the global prototypes only participate in the calculation of the similarity preserving loss, and the discriminator network module is removed from the framework.
The results of the ablation experiments in the IID and non-IID scenarios are reported in Table 4 and Table 5. Two points can be concluded from the results. First, comparing the results of FedHAP-1 and FedHAP, it can be seen that the model performance degrades significantly in the absence of the global prototypes, which demonstrates the efficacy and importance of the global prototypes in promoting retrieval performance. Second, each component in the framework plays a significant role in improving the model performance independently. The optimal results are achieved through the mutual promotion of the different components in the FedHAP framework.

Effect of the Number of Clients

To analyze the performance of our proposed method as the number of clients varies, we further test the above-mentioned baselines and our method with different numbers of clients from 20 to 100, where the data samples are randomly distributed and the length of the hash code is 48 bits. The mAP results are reported in Table 6, from which we can see that our method still consistently outperforms all baselines under different system sizes. We also notice that as the number of clients increases, the performance of the model decreases slightly. This is not surprising, since the amount of data per client decreases when the number of clients increases while the total amount of data remains the same, resulting in an enlarged distribution discrepancy of local data and a higher probability of local model over-fitting.

Effect of Distance Metrics

Here, we compare the results of our FedHAP using two different distance metrics for computing the triplet loss, namely the Euclidean distance and the cosine distance, and report the results in Table 7. We observe that both the Euclidean distance and the cosine distance significantly improve the performance compared to the baselines in Table 3, and that the cosine distance outperforms the Euclidean distance.
We believe the reason is that the cosine distance eliminates the influence of the different norms of the output feature vectors.

CONCLUSION

In this paper, we propose a novel federated hashing approach, FedHAP, for efficient cross-silo retrieval, which aims to collectively train hashing models from decentralized data. Beyond the general federated manner, we innovatively introduce global prototypes to maintain the distribution alignment of the locally and globally generated hash codes, achieving a significant improvement in model effectiveness. Since the global prototypes are composed of fixed-length (12-48 bit) binary hash codes and their number does not exceed the number of data categories, they incur almost negligible communication cost and raise no data privacy issues. Comprehensive experimental results on three widely used databases demonstrate the superiority of FedHAP over the other baselines in both IID and non-IID scenarios.

Figure 1: Visualization of hash codes generated by the two deep hash learning methods, trained on 10,000 training data points of NUS-WIDE for different classes. (For ease of visualization, we sample six categories.)

Figure 3: The generation of the global prototypes.

Figure 4: The workflow of the proposed discriminator $D_\phi$.

Algorithm 1 FedHAP
Input: Image set $X$, number of clients $m$, hashing model $F_\theta$, discriminator network $D_\phi$, communication rounds $T$, local training epochs $E$.
Initialize: $\theta_0$, $\phi_0$, $\hat{H}_0$.
for $t = 0$ to $T - 1$ do
    Server broadcasts the global model parameters of $F_\theta$, $D_\phi$ and the global prototypes $\hat{H}_t$ to each client $i$
    for each client $i$ in parallel do
        for $e = 1$ to $E$ do
            Calculate the adversarial loss $\mathcal{L}^i_a$ with Eq. (8), the cosine triplet loss $\mathcal{L}^i_s$, and the quantization loss $\mathcal{L}^i_q$
            $\mathcal{L}^i_{hash} = \mathcal{L}^i_s + \alpha \cdot \mathcal{L}^i_q + \beta \cdot \mathcal{L}^i_a$
            Update $(\theta^i, \phi^i)$ using back-propagation
        end for
        Send the local model parameters $\theta^i$, $\phi^i$ and the local prototypes $\bar{b}^i$ to the central server
    end for
    Server executes:
        Update the global hashing model $\theta_{t+1} \leftarrow \frac{1}{m} \sum_{i=1}^{m} \theta^i$
        Update the global discriminator $\phi_{t+1} \leftarrow \frac{1}{m} \sum_{i=1}^{m} \phi^i$
        Update the global prototypes $\hat{H}_{t+1}$
end for

3) MS-COCO originates from the Microsoft COCO dataset; the 2014 release of MS-COCO contains 82,783 training, 40,504 validation, and 40,775 testing images (approximately 1/2 train, 1/4 val, and 1/4 test).

Figure 5: The precision and recall results of DCH (FedAvg), DPSH (FedAvg), GreedyHash (FedAvg), CSQ (FedAvg) and our method FedHAP on MIRFlickr25K in IID scenarios: (a) Precision-recall curves (PR) @ 48 bits. (b) Recall curves with respect to different numbers of top retrieved samples. (c) Precision curves with respect to different numbers of top retrieved samples.

Figure 6: The precision and recall results of DCH (FedAvg), DPSH (FedAvg), GreedyHash (FedAvg), CSQ (FedAvg) and our method FedHAP on MIRFlickr25K in non-IID scenarios: (a) Precision-recall curves (PR) @ 48 bits. (b) Recall curves with respect to different numbers of top retrieved samples. (c) Precision curves with respect to different numbers of top retrieved samples.
Table 1: Experiment settings of the different databases.
Database   Scenario  Database Size  Training Size  Category Quantity
NUS-WIDE   IID       193734         10000          24
NUS-WIDE   non-IID   193734         10000          24
MIRFlickr  IID       18015          10000          21
MIRFlickr  non-IID   18015          10000          21
MS-COCO    IID       112226         10000          80
MS-COCO    non-IID   112226         10000          80

Table 2: mAP results for the retrieval task in the IID scenarios. Columns: NUS-WIDE, MIRFlickr and MS-COCO, each at 12, 24 and 48 bits (%).
DPSH (FedAvg)         73.78 75.14 76.29   76.92 78.71 79.32   56.48 57.86 59.54
DPSH (FedProx)        76.71 78.25 78.32   79.94 82.35 82.97   59.65 62.18 63.94
DPSH (FedCMR)         76.87 78.21 78.64   79.45 81.36 83.41   61.41 64.13 65.75
DPSH (MOON)           77.52 78.32 78.89   78.74 79.54 80.32   56.78 59.42 61.73
DCH (FedAvg)          70.94 73.97 75.39   76.93 79.62 80.09   55.46 57.89 58.96
DCH (FedProx)         74.28 75.98 76.53   78.97 80.90 81.42   55.54 57.31 58.99
DCH (FedCMR)          74.61 76.17 76.57   79.91 81.10 81.84   56.63 58.65 59.65
DCH (MOON)            72.58 75.76 77.19   77.67 79.92 81.35   55.14 57.80 60.21
GreedyHash (FedAvg)   73.56 75.83 77.38   69.37 72.64 75.82   52.65 58.79 62.64
GreedyHash (FedProx)  72.45 76.17 78.02   68.72 74.86 75.05   50.92 55.98 61.06
GreedyHash (FedCMR)   73.28 76.02 77.81   69.40 74.48 75.96   53.17 59.82 62.94
GreedyHash (MOON)     75.87 77.82 79.43   72.56 76.89 79.24   50.83 55.79 60.31
CSQ (FedAvg)          75.61 78.02 78.94   77.56 78.53 80.85   56.86 61.13 67.31
CSQ (FedProx)         76.43 78.42 78.96   73.17 74.16 74.54   57.46 60.33 67.64
CSQ (FedCMR)          76.71 78.74 79.05   70.56 72.29 73.91   57.05 60.57 67.26
CSQ (MOON)            73.87 75.69 76.91   71.45 73.76 75.43   57.42 60.75 68.54
FedHAP (ours)         78.59 80.31 81.55   86.07 86.17 87.78   66.70 70.14 72.18

Table 3: mAP results for the retrieval task in the non-IID scenarios. Columns: NUS-WIDE, MIRFlickr and MS-COCO, each at 12, 24 and 48 bits (%).
DPSH (FedAvg)         42.57 43.72 44.16   74.52 75.01 78.30   54.76 57.39 59.65
DPSH (FedProx)        44.86 45.81 45.86   74.85 75.07 78.64   56.77 59.63 60.37
DPSH (FedCMR)         48.41 53.52 50.21   75.01 76.21 77.74   55.73 60.57 62.31
DPSH (MOON)           42.67 47.74 49.75   74.87 76.13 78.69   54.65 57.53 62.11
DCH (FedAvg)          38.67 39.45 40.23   70.11 70.74 77.65   51.94 56.35 58.64
DCH (FedProx)         38.42 39.25 40.98   70.81 70.81 77.82   53.18 53.28 59.25
DCH (FedCMR)          40.69 43.47 44.09   69.65 69.92 73.86   53.42 57.61 58.94
DCH (MOON)            39.50 43.21 44.11   74.64 75.08 77.85   53.89 57.80 58.92
GreedyHash (FedAvg)   53.68 55.82 55.94   69.93 71.21 76.60   48.57 52.16 57.25
GreedyHash (FedProx)  49.29 48.41 48.90   67.59 69.62 73.56   46.57 56.80 60.41
GreedyHash (FedCMR)   51.06 63.42 63.59   65.57 69.76 70.23   48.15 55.80 55.86
GreedyHash (MOON)     50.92 56.79 61.28   72.15 73.46 76.23   48.74 56.51 57.98
CSQ (FedAvg)          51.87 52.18 53.09   66.37 68.45 73.54   56.37 58.76 61.42
CSQ (FedProx)         54.08 55.29 55.73   69.64 72.11 73.23   50.63 58.84 62.14
CSQ (FedCMR)          59.45 60.72 60.88   66.78 68.14 72.09   50.42 58.17 62.07
CSQ (MOON)            48.76 53.84 55.72   65.52 68.92 72.31   56.71 58.95 61.79
FedHAP (ours)         67.74 69.09 70.28   77.34 78.67 80.49   57.65 61.89 63.37

Table 4: mAP results of the ablation experiments (IID). Columns: NUS-WIDE and MS-COCO at 12, 24 and 48 bits (%).
FedHAP-1  74.67 75.83 76.44   62.23 63.77 68.37
FedHAP-2  75.43 77.93 78.78   65.42 68.09 71.71
FedHAP-3  76.74 78.09 80.02   65.63 68.29 71.19
FedHAP    78.59 80.31 81.55   66.70 70.14 72.18

Table 5: mAP results of the ablation experiments (non-IID). Columns: NUS-WIDE and MS-COCO at 12, 24 and 48 bits (%).
FedHAP-1  57.16 59.66 60.56   55.60 58.59 60.85
FedHAP-2  64.27 65.60 66.51   56.31 60.22 61.89
FedHAP-3  65.61 67.99 68.18   56.80 60.63 61.86
FedHAP    67.74 69.99 70.28   57.65 61.89 63.37

Table 6: mAP results under different numbers of clients.
Method          20 clients  40 clients  60 clients  100 clients
DPSH (FedAvg)   79.32       79.18       78.93       78.42
DPSH (FedProx)  82.97       82.80       82.54       82.21
CSQ (FedAvg)    80.85       79.81       79.46       79.10
CSQ (FedProx)   74.54       73.28       72.91       72.65
Our method      87.78       87.25       87.16       86.83

Table 7: mAP results under different distance metrics in non-IID scenarios over 48 bits.
Distance Metric     NUS-WIDE  MIRFlickr  MS-COCO
Euclidean Distance  69.41     79.23      62.78
Cosine Distance     70.28     80.49      63.37

REFERENCES

Yue Cao, Mingsheng Long, Bin Liu, and Jianmin Wang. 2018. Deep cauchy hashing for hamming space retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1229-1237.
Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Return of the devil in the details: Delving deep into convolutional nets. arXiv preprint arXiv:1405.3531.
Lianhua Chi and Xingquan Zhu. 2017. Hashing techniques: A survey and taxonomy. ACM Computing Surveys (CSUR) 50(1), 1-36.
Tat-Seng Chua, Jinhui Tang, Richang Hong, Haojie Li, Zhiping Luo, and Yantao Zheng. 2009. NUS-WIDE: A real-world web image database from National University of Singapore. In Proceedings of the ACM International Conference on Image and Video Retrieval, 1-9.
Venice Erin Liong, Jiwen Lu, Gang Wang, Pierre Moulin, and Jie Zhou. 2015. Deep hashing for compact binary codes learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2475-2483.
Yunchao Gong, Svetlana Lazebnik, Albert Gordo, and Florent Perronnin. 2012. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(12), 2916-2929.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. Advances in Neural Information Processing Systems 27.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770-778.
Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. 2019. Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335.
Mark J. Huiskes and Michael S. Lew. 2008. The MIR Flickr retrieval evaluation. In Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval, 39-43.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Jakub Konečnỳ, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. 2016. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492.
Weihao Kong and Wu-Jun Li. 2012. Isotropic hashing. Advances in Neural Information Processing Systems 25, 1646-1654.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25, 1097-1105.
Hanjiang Lai, Yan Pan, Ye Liu, and Shuicheng Yan. 2015. Simultaneous feature learning and hash coding with deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3270-3278.
Qinbin Li, Bingsheng He, and Dawn Song. 2021. Model-contrastive federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10713-10722.
Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. 2020. Federated optimization in heterogeneous networks. Proceedings of Machine Learning and Systems 2, 429-450.
Wu-Jun Li, Sheng Wang, and Wang-Cheng Kang. 2015. Feature learning based deep supervised hashing with pairwise labels. arXiv preprint arXiv:1511.03855.
Tao Lin, Lingjing Kong, Sebastian U. Stich, and Martin Jaggi. 2020. Ensemble distillation for robust model fusion in federated learning. Advances in Neural Information Processing Systems 33, 2351-2363.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, Springer, 740-755.
Wei Liu, Cun Mu, Sanjiv Kumar, and Shih-Fu Chang. 2014. Discrete graph hashing. Advances in Neural Information Processing Systems.
Wei Liu, Jun Wang, Rongrong Ji, Yu-Gang Jiang, and Shih-Fu Chang. 2012. Supervised hashing with kernels. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2074-2081.
Raphael Gontijo Lopes, Stefano Fenu, and Thad Starner. 2017. Data-free knowledge distillation for deep neural networks. arXiv preprint arXiv:1710.07535.
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, PMLR, 1273-1282.
Xutong Mu, Yulong Shen, Ke Cheng, Xueli Geng, Jiaxuan Fu, Tao Zhang, and Zhiwei Zhang. 2021. FedProc: Prototypical contrastive federated learning on non-IID data. arXiv preprint arXiv:2109.12273.
Mohammad Rastegari, Ali Farhadi, and David Forsyth. 2012. Attribute discovery via predictable discriminative binary codes. In European Conference on Computer Vision, Springer, 876-889.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115(3), 211-252.
Shupeng Su, Chao Zhang, Kai Han, and Yonghong Tian. 2018. Greedy hash: Towards fast optimization for accurate hash coding in CNN. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 806-815.
Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9(11).
Hongyi Wang, Mikhail Yurochkin, Yuekai Sun, Dimitris Papailiopoulos, and Yasaman Khazaeni. 2020. Federated learning with matched averaging. arXiv preprint arXiv:2002.06440.
Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H. Vincent Poor. 2020. Tackling the objective inconsistency problem in heterogeneous federated optimization. arXiv preprint arXiv:2007.07481.
Kilian Q. Weinberger, John Blitzer, and Lawrence K. Saul. 2006. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems, 1473-1480.
Yair Weiss, Antonio Torralba, Robert Fergus, et al. 2008. Spectral hashing. In NIPS, Vol. 1, Citeseer, 4.
Jinze Wu, Qi Liu, Zhenya Huang, Yuting Ning, Hao Wang, Enhong Chen, Jinfeng Yi, and Bowen Zhou. 2021. Hierarchical personalized federated learning for user modeling. In Proceedings of the Web Conference 2021, 957-968.
Rongkai Xia, Yan Pan, Hanjiang Lai, Cong Liu, and Shuicheng Yan. 2014. Supervised hashing for image retrieval via image representation learning. In Twenty-Eighth AAAI Conference on Artificial Intelligence.
Jie Xu, Zhenxing Xu, Peter Walker, and Fei Wang. 2020. Federated patient hashing. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, 6486-6493.
Li Yuan, Tao Wang, Xiaopeng Zhang, Francis E. H. Tay, Zequn Jie, Wei Liu, and Jiashi Feng. 2020. Central similarity quantization for efficient image and video retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3083-3092.
Chengyuan Zhang, Lei Zhu, Shichao Zhang, and Weiren Yu. 2020. TDHPPIR: An efficient deep hashing based privacy-preserving image retrieval method. Neurocomputing 406, 386-398.
Ruimao Zhang, Liang Lin, Rui Zhang, Wangmeng Zuo, and Lei Zhang. 2015. Bit-scalable deep hashing with regularized similarity learning for image retrieval and person re-identification. IEEE Transactions on Image Processing 24(12), 4766-4779.
Deep semantic ranking based hashing for multi-label image retrieval. Fang Zhao, Yongzhen Huang, Liang Wang, Tieniu Tan, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionFang Zhao, Yongzhen Huang, Liang Wang, and Tieniu Tan. 2015. Deep semantic ranking based hashing for multi-label image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1556-1564. FedCMR: Federated Cross-Modal Retrieval. Linlin Zong, Qiujie Xie, Jiahui Zhou, Peiran Wu, Xianchao Zhang, Bo Xu, Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. the 44th International ACM SIGIR Conference on Research and Development in Information RetrievalLinlin Zong, Qiujie Xie, Jiahui Zhou, Peiran Wu, Xianchao Zhang, and Bo Xu. 2021. FedCMR: Federated Cross-Modal Retrieval. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 1672-1676.
[]
[ "QUASI-LOCAL MASS AT NULL INFINITY IN BONDI-SACHS COORDINATES" ]
[ "Po-Ning Chen ", "Mu-Tao Wang ", "Ye-Kai Wang ", "Shing-Tung Yau " ]
[]
[]
There are two important statements regarding the Trautman-Bondi mass[3,28,36,32,33]at null infinity: one is the positivity[29,19], and the other is the Bondi mass loss formula[3], which are both global in nature. In this note, we compute the limit of the Wang-Yau quasi-local mass on unit spheres at null infinity of an asymptotically flat spacetime in the Bondi-Sachs coordinates. The quasi-local mass leads to a local description of the radiation that is purely gravitational at null infinity. In particular, the quasi-local mass is evaluated in terms of the news function of the Bondi-Sachs coordinates.
10.4310/pamq.2019.v15.n3.a5
[ "https://arxiv.org/pdf/1901.06952v1.pdf" ]
119,152,584
1901.06952
371c1e243ce6f7bbb5f062fd710ea192dbf093e0
QUASI-LOCAL MASS AT NULL INFINITY IN BONDI-SACHS COORDINATES 21 Jan 2019 Po-Ning Chen, Mu-Tao Wang, Ye-Kai Wang, and Shing-Tung Yau There are two important statements regarding the Trautman-Bondi mass [3,28,36,32,33] at null infinity: one is the positivity [29,19], and the other is the Bondi mass loss formula [3], which are both global in nature. In this note, we compute the limit of the Wang-Yau quasi-local mass on unit spheres at null infinity of an asymptotically flat spacetime in the Bondi-Sachs coordinates. The quasi-local mass leads to a local description of the radiation that is purely gravitational at null infinity. In particular, the quasi-local mass is evaluated in terms of the news function of the Bondi-Sachs coordinates. INTRODUCTION An observer of the gravitational radiation created by an astronomical event is situated at future null infinity, where light rays emitted from the source approach. The study of the theory of gravitational radiation at null infinity in the last century culminated in a series of papers by Bondi and his collaborators [3,28,36,32,33], in which the Bondi-Trautman mass and the mass loss formula at null infinity are well understood. In particular, the Bondi-Trautman mass was proved to be positive in the work of Schoen-Yau [29] and Horowitz-Perry [19]. Both the positivity of mass and the mass loss formula are global statements on null infinity: knowledge of the mass aspect is required in every direction. For reasons that are both theoretical and experimental, it is highly desirable to have a quasi-local statement of mass/radiation at null infinity. In [11,12], we embarked on the evaluation of the Wang-Yau quasi-local mass on surfaces of fixed size near null infinity of a linear gravitational perturbation of the Schwarzschild spacetime. The ideas and techniques in [11,12] were further developed to address the case of the Vaidya spacetime in [15].
The construction of these spheres of unit size at null infinity will be reviewed in the next section. In the Vaidya case, we proved in [15] that the quasi-local mass of a unit size sphere at null infinity is directly related to the derivative of the mass aspect function with respect to the retarded time u. In particular, the positivity of the quasi-local mass is implied by the decrease of the mass aspect function in u. In this article, we take on the general case of an asymptotically flat spacetime described in the Bondi-Sachs coordinates. The Vaidya spacetime contains matter which contributes to the radiation. A general vacuum spacetime in the Bondi-Sachs coordinates allows us to investigate radiation that is purely gravitational. A new ingredient in this article is a variational formula (see Theorem 4.1) which facilitates a much more straightforward computation of the O(d −2 ) term than the one in [15]. Similar to [15], it is still crucial to compute the O(d −1 ) term of the optimal embedding. This is done in Lemma 5.1 and Lemma 5.2 of the current article. As in Lemma 3.3 of [15], the optimal embedding equation is reduced to two ordinary differential equations. However, it does not seem possible to obtain explicit solutions to the ODEs as in the Vaidya case. The quasi-local mass is then evaluated by combining Theorem 4.1 and the optimal embedding. The structure of the paper is as follows: in Section 2, we review the general framework of the quasi-local mass at null infinity. In Section 3, we compute the geometric quantities on the spheres at null infinity that are necessary to evaluate the quasi-local mass. In Section 4, we derive the formula for the leading order term of the quasi-local mass. In Section 5, we evaluate the quasi-local mass based on the formula derived in Section 4. See Theorem 5.3. The authors would like to thank the National Center for Theoretical Sciences at National Taiwan University where part of this research was carried out.
In the last section, Section 6, we look at several special examples. GENERAL FRAMEWORK OF QUASI-LOCAL MASS AT NULL INFINITY We consider a null geodesic γ parametrized by an affine parameter d with d 0 ≤ d < ∞ and a family of surfaces Σ d (s) for s > 0 centered at γ(d) in the following sense. For each fixed d and s, Σ d (s) is a surface that bounds a ball B d (s) with ∂B d (s) = Σ d (s), such that as s → 0, we have lim s→0 B d (s) = lim s→0 Σ d (s) = γ(d). We evaluate the quasi-local mass of Σ d (s) as d → ∞. In particular, when s = 1, lim d→∞ Σ d (1) is the unit sphere limit referred to in our previous work. In practice, such an evaluation is conducted by choosing a family of parametrizations F d from the unit ball B 3 , F d : B 3 → B d (1), and considering the pull-backs of geometric quantities on B d (1) as geometric quantities on B 3 that depend on the parameter d. In particular, Σ d (s) is the image of the sphere of radius s in B 3 under F d . The unit sphere limit is obtained by setting s = 1 and taking the limit as d → ∞. When the spacetime is equipped with a global structure at null infinity that corresponds to limits of null geodesics, these unit sphere limits provide information on gravitational radiation observed at null infinity. We illustrate the construction in the Vaidya case, where the spacetime metric takes the simple form: − (1 − M (u)/r) du 2 − 2dudr + r 2 dθ 2 + r 2 sin 2 θ dφ 2 . We first consider a global coordinate change from (u, r, θ, φ) to (t, y 1 , y 2 , y 3 ) with t = u + r, y 1 = r sin θ sin φ, y 2 = r sin θ cos φ, and y 3 = r cos θ. In terms of the new coordinate system (t, y 1 , y 2 , y 3 ), the parametrization is then given by F d = (s,θ,φ) → (t, y 1 , y 2 , y 3 ) = (d, dd 1 + s sinθ sinφ, dd 2 + s sinθ cosφ, dd 3 + s cosθ), where (s,θ,φ) is a coordinate system on B 3 and the constants (d 1 ,d 2 ,d 3 ), which satisfy d 1 2 + d 2 2 + d 3 2 = 1, indicate the direction of the null geodesic which is parametrized by d → (d, dd 1 , dd 2 , dd 3 ).
Along the ball centered at a point on the null geodesic in the direction of (d 1 ,d 2 ,d 3 ), we have r = √(d 2 + 2sdZ + s 2 ), u = d − √(d 2 + 2sdZ + s 2 ), y 1 /r = (dd 1 + s sinθ sinφ)/√(d 2 + 2sdZ + s 2 ), etc., where Z = d 1 sinθ sinφ + d 2 sinθ cosφ + d 3 cosθ. The pull-back of the global coordinates (u, r, θ, φ) under F d defines functions on B 3 depending on d. As d → ∞ we have (2.1) lim d→∞ F * d u = −sZ, lim d→∞ F * d θ = θ̄, lim d→∞ F * d φ = φ̄, where θ̄, φ̄ are defined such that d 1 = sin θ̄ sin φ̄, d 2 = sin θ̄ cos φ̄, and d 3 = cos θ̄. UNIT SPHERE AT NULL INFINITY IN BONDI-SACHS COORDINATES The spacetime metric in Bondi-Sachs coordinates is given by − (1 − M/r + O(r −2 )) du 2 − 2 (1 + O(r −2 )) dudr − 2 (U (−2) A + O(r −1 )) dudv A + (r 2 σ̃ AB + rC AB + O(1)) dv A dv B . Substituting u = t − r, the metric becomes, up to lower order terms, − (1 − M/r) dt 2 + (1 + M/r) dr 2 − (2M/r) dtdr − 2U (−2) A (dt − dr)dv A + (r 2 σ̃ AB + rC AB ) dv A dv B . The unit timelike normal of the t = d slice is given by n = (1 + M/r) ∂ t + (M/r) ∂ r + U (−2) A r ∂ A r + O(r −2 ). We compute ⟨∇ ∂ r ∂ r , ∂ t ⟩ = −(1/2)(M u /r) + O(r −2 ), ⟨∇ ∂ A ∂ B , ∂ t ⟩ = −(r/2)(C AB ) u + O(1), to get the second fundamental form of the t = d slice: k rr = (1/2)(M u /r) + O(r −2 ), k AB = (r/2)(C AB ) u + O(1). (3.2) A null geodesic with u = 0, θ = θ̄, φ = φ̄ corresponds to points with the new coordinates (t, y 1 , y 2 , y 3 ) = (d, dd 1 , dd 2 , dd 3 ). Let d i = dd i . We consider the sphere Σ d of (Euclidean) radius 1 centered at a point (d, d 1 , d 2 , d 3 ) on the null geodesic and the ball B d bounded by Σ d in the t-slice. Namely, Σ d = {(t, y 1 , y 2 , y 3 ) | t = d, ∑ i (y i − d i ) 2 = 1}, (3.3) Σ d (s) = {(t, y 1 , y 2 , y 3 ) | t = d, ∑ i (y i − d i ) 2 = s 2 }, (3.4) B d = {(t, y 1 , y 2 , y 3 ) | t = d, ∑ i (y i − d i ) 2 ≤ 1}. (3.5) In this article, we study the Wang-Yau quasi-local mass of the family of surfaces Σ d defined in (3.3) as d → ∞ using the framework outlined in Section 2.
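As an illustrative sanity check of the limit (2.1), not part of the paper, one can evaluate the pull-back of u along F d for increasing values of the affine parameter d and watch it approach −sZ. The sample values of s and Z below are arbitrary choices.

```python
import math

# Numerical check (illustrative): along F_d, the pull-back of u equals
# d - sqrt(d^2 + 2*s*d*Z + s^2), which by (2.1) should tend to -s*Z as d -> infinity.

def pullback_u(d, s, Z):
    # algebraically identical to d - sqrt(d^2 + 2*s*d*Z + s^2),
    # rationalized to avoid catastrophic cancellation at large d
    return -(2 * s * d * Z + s * s) / (d + math.sqrt(d * d + 2 * s * d * Z + s * s))

s, Z = 0.7, 0.3  # sample values with |Z| <= 1
errors = [abs(pullback_u(d, s, Z) + s * Z) for d in (1e2, 1e4, 1e6)]
# the deviation from -s*Z shrinks roughly like 1/d
```

The rationalized form is only a numerically stable rewriting of the same expression; the decreasing errors illustrate the O(1/d) rate of convergence visible from expanding the square root.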
Namely, we consider a family of embeddings F d = (s,θ,φ) → (t, y 1 , y 2 , y 3 ) = (d, dd 1 + s sinθ sinφ, dd 2 + s sinθ cosφ, dd 3 + s cosθ). In particular, F d maps the sphere of radius s, Σ(s) in B 3 , onto Σ d (s). The pull-backs of M , U (−2) A and C AB under F d define tensors on B 3 depending on d. By (2.1), their limits as d → ∞ depend only on sZ. We define the following: Definition 3.1. We define F (x), P AB (x) and Q A (x) to be functions of a single variable x such that F (sZ) = lim d→∞ M , P AB (sZ) = lim d→∞ C AB , Q A (sZ) = lim d→∞ U (−2) A . We use F ′ , P ′ AB and Q ′ A to denote the derivatives of these functions with respect to x. We consider the following two functions: (cosθ cosφ) sinθ cosφ + (cosθ sinφ) sinθ sinφ − sinθ cosθ and − sinφ sinθ cosφ + cosφ sinθ sinφ. Together with Z = sinθ sinφ sinθ sinφ + sinθ cosφ sinθ cosφ + cosθ cosθ, they form an orthogonal basis of first eigenfunctions on S 2 . We refer to these two functions as Z A . In terms of Z and Z A , the transformation formula [15, page 3] gives dr = Zds + sZ b du b + O(d −1 ), dv A = (1/r) Z A ds + (s/r) Z A b du b + O(d −2 ). (3.6) Let ḡ be the pull-back of the metric on the hypersurface t = d by F d . In terms of the coordinate system {s, u a } on B 3 , we have ḡ ss = 1 + (1/d) (F (sZ)Z 2 + 2Q A (sZ)ZZ A + P AB (sZ)Z A Z B ) + O(1/d 2 ), ḡ sa = (s/d) (F (sZ)ZZ a + Q A (sZ)(ZZ A a + Z a Z A ) + P AB (sZ)Z A a Z B ) + O(1/d 2 ), ḡ ab = s 2 σ̃ ab + (s 2 /d) (F (sZ)Z a Z b + Q A (sZ)(Z a Z A b + Z b Z A a ) + P AB (sZ)Z A a Z B b ) + O(1/d 2 ). (3.7) We first compute geometric data on Σ d (s). Lemma 3.2. On Σ d (s), σ (−1) ab = s 2 (F (sZ)Z a Z b + Q A (sZ)(Z a Z A b + Z b Z A a ) + P AB (sZ)Z A a Z B b ), (1/2) ∇̃ a ∇̃ b σ (−1) ab − tr σ (−1) − ∆̃(tr σ (−1) ) = s 2 (−(1/2) sF ′ (sZ)Z(1 − Z 2 ) − F (sZ)(1 − 2Z 2 ) + (sQ ′ A (sZ)Z 2 − sQ ′ A (sZ) + 4Q A (sZ)Z) Z A + (s 2 P ′′ AB (sZ) + sP ′ AB (sZ)Z + 4P AB (sZ)) Z A Z B ). Remark. In the proof, we denote functions such as F (sZ), F ′ (sZ) and Q A (sZ) by F , F ′ and Q A . Proof.
On Σ d , we have ∇̃ a ∇̃ b σ (−1) ab = F ′′ (1 − Z 2 ) 2 − 7F ′ Z(1 − Z 2 ) 2 − 3F (1 − 3Z 2 ) + (−2Q ′′ A Z(1 − Z 2 ) − 6Q ′ A (1 − Z 2 ) + 8Q ′ A Z 2 + 18Q A Z) Z A + (P ′′ AB Z 2 + 7P ′ AB Z + 9P AB ) Z A Z B , ∆̃(tr σ (−1) ) = F ′′ (1 − Z 2 ) 2 − 6F ′ Z(1 − Z 2 ) 2 + F (6Z 2 − 2) − 2 (Q ′′ A Z(1 − Z 2 ) − 6Q ′ A Z 2 + 2Q ′ A − 6Q A Z) Z A − (P ′′ AB (1 − Z 2 ) − 6P ′ AB Z − 6P AB ) Z A Z B . The computation on Σ d (s) is similar. We get a factor of s after each derivative. Lemma 3.3. On Σ d , (α (−1) H ) a = −F ′ ZZ a + (1/4) F ′′ (1 − Z 2 )Z a + (1/4) P ′′ AB Z a Z A Z B + (1/2) P ′ AB Z A a Z B , 2∇̃ a (α (−1) H ) a = (1/2) F ′′′ (1 − Z 2 ) 2 − 4F ′′ Z(1 − Z 2 ) − 2F ′ (1 − 3Z 2 ) + ((1/2) P ′′′ AB (1 − Z 2 ) − 4P ′′ AB Z − 6P ′ AB ) Z A Z B . Proof. The unit normal of Σ d is ν = ∂ s + O(d −1 ). By (3.6), we have ∂ s = Z∂ r + (1/d) Z A ∂ A + O(d −2 ), ∂ a = Z a ∂ r + (1/d) Z A a ∂ A + O(d −2 ). By (3.2), we get −k(ν, ∂ a ) = (1/2)(M u /d) Z a Z − (1/(2d)) (C AB ) u Z A a Z B + O(d −2 ), tr Σ k = −(1/2)(M u /d)(1 − Z 2 ) − (1/(2d)) (C AB ) u Z A Z B + O(d −2 ). The assertion follows from α H = (−k(ν, ∂ a ) + ∂ a tr Σ k)/|H| + O(d −2 ). THE EXPANSION OF THE WANG-YAU QUASI-LOCAL MASS We consider the Wang-Yau quasi-local mass on the unit sphere constructed in the previous section. Theorem 4.1. E(Σ d , X, T 0 ) = (1/(8πd 2 )) [ ∫ B 3 ((1/8) σ̃ AD σ̃ BE (C AB ) u (C DE ) u − det(h (−1) 0 − h (−1) )) + (1/4) ∫ S 2 ((tr Σ k (−1) ) 2 − τ (−1) ∆̃(∆ + 2)τ (−1) ) ] + O(d −3 ) (4.8) where τ (−1) is the solution to the optimal embedding equation ∆̃(∆ + 2)τ (−1) = (1/2) F ′′′ (1 − Z 2 ) 2 − 4F ′′ Z(1 − Z 2 ) − 2F ′ (1 − 3Z 2 ) + ((1/2) P ′′′ AB (1 − Z 2 ) − 4P ′′ AB Z − 6P ′ AB ) Z A Z B . Proof. We write E(Σ d , X, T 0 ) = E BY (Σ d ) + (E LY (Σ d ) − E BY (Σ d )) + (E(Σ d , X, T 0 ) − E LY ), where E BY and E LY denote the Brown-York mass and the Liu-Yau mass, respectively. From Lemma 3.1 of [7], we conclude E BY = (1/(8πd 2 )) ∫ B 3 ((|k (−1) | 2 − (trk (−1) ) 2 )/2 − det(h (−1) 0 − h (−1) )) + O(d −3 ), where we also use the vacuum constraint equation R = |k| 2 − (trk) 2 .
It is easy to see that E LY − E BY = (1/(32πd 2 )) ∫ S 2 (tr Σ k (−1) ) 2 + O(d −3 ). From the second variation of the Wang-Yau mass in [8,9], we have E(Σ d , X, T 0 ) − E LY = (1/(32πd 2 )) ∫ S 2 τ (−1) ∆̃(∆ + 2)τ (−1) + O(d −3 ). Finally, we apply (3.2) to evaluate |k (−1) | and trk (−1) . EVALUATING THE QUASI-LOCAL MASS Recall the O(1/d) terms of the metric coefficients on B d : ḡ (−1) ss = F (sZ)Z 2 + 2Q A (sZ)ZZ A + P AB (sZ)Z A Z B , ḡ (−1) as = s (F (sZ)ZZ a + Q A (sZ)(ZZ A a + Z a Z A ) + P AB (sZ)Z A a Z B ), ḡ (−1) ab = s 2 (F (sZ)Z a Z b + Q A (sZ)(Z a Z A b + Z b Z A a ) + P AB (sZ)Z A a Z B b ). To apply Theorem 4.1, we need to compute h (−1) 0 − h (−1) and τ (−1) . We first derive a formula for h (−1) 0 − h (−1) . Lemma 5.1. Let A AB (Z, s) be a trace-free, symmetric 2-tensor that solves the ODE A ′′ AB (Z, s)(1 − Z 2 ) − 6A ′ AB (Z, s)Z − 4A AB (Z, s) = −(s 3 /2) P ′′ AB (sZ) − (s 2 /2) P ′ AB (sZ)Z − 2sP AB (sZ), (5.9) for each 0 < s ≤ 1. Here A ′ AB means ∂A AB /∂Z. Then the difference of second fundamental forms on the sphere of radius s is given by h (−1) 0 − h (−1) = −A ′′ AB Z a Z b Z A Z B + ((s 2 /2) P ′ AB (sZ) − 2A ′ AB )(Z a Z A b + Z b Z A a )Z B + (A ′ AB Z + A AB − (s/2) P AB (sZ)) Z A Z B σ̃ ab + (sP AB (sZ) − (s 2 /2) P ′ AB (sZ)Z − 2A AB ) Z A a Z B b . Proof. We start with h (−1) . The unit normal is given by ν̄ = (1 − ḡ (−1) ss /(2d)) ∂ s − (s −2 σ̃ ab ḡ (−1) as /d) ∂ b + O(d −2 ). We compute h ab = (1/2)(⟨D ∂ a ν̄, ∂ b ⟩ + ⟨D ∂ b ν̄, ∂ a ⟩) = sσ̃ ab + (1/d)((1/2) ∂ s ḡ (−1) ab − (∇̃ a ḡ (−1) bs + ∇̃ b ḡ (−1) as )/2 − (ḡ (−1) ss /2) sσ̃ ab ) + O(d −2 ). For h (−1) 0 , we expand the isometric embedding X as X = sX̂ + (1/d) X (−1) + O(d −2 ), where X̂ denotes the unit sphere in R 3 . We decompose X (−1) into X (−1) = α a ∂ a + βν. The linearized isometric embedding equation reads (5.10) σ (−1) ab = s 2 (σ̃ ac ∇̃ b α c + σ̃ bc ∇̃ a α c ) + 2βsσ̃ ab . From the computation in [35, pages 938-939], (5.10) implies that (5.11) h (−1) 0 = −∇̃ a ∇̃ b β − βσ̃ ab + (1/s) σ (−1) ab .
Putting these together, we obtain (5.12) h (−1) 0 − h (−1) = −∇̃ a ∇̃ b β − βσ̃ ab + (1/s) σ (−1) ab − (1/2)(∂ s ḡ ab ) (−1) + (1/2)(∇̃ a ḡ (−1) bs + ∇̃ b ḡ (−1) as ) + (1/2) ḡ (−1) ss sσ̃ ab . To solve β, we consider the expansion of the Gauss curvature K(d, s) of Σ d (s). Let K(d, s) = 1/s 2 + (1/d) K (−1) + O(d −2 ). On the one hand, from the metric expansion, we get K (−1) = (1/s 2 )(−∇̃ a ∇̃ b σ (−1) ab + tr S 2 σ (−1) + ∆̃ tr S 2 σ (−1) ). On the other hand, combining (5.11) and the Gauss equation, we conclude that K (−1) = (2/s)(∆ + 2)β. As a result, β is the solution of 2s(∆ + 2)β = −∇̃ a ∇̃ b σ (−1) ab + tr S 2 σ (−1) + ∆̃ tr S 2 σ (−1) . (5.13) For the right hand side, we compute −∇̃ a ∇̃ b σ (−1) ab + tr S 2 σ (−1) + ∆̃ tr S 2 σ (−1) = s 3 F ′ (sZ)Z(1 − Z 2 ) + s 2 F (2 − 4Z 2 ) + s 3 Q ′ A (sZ)(2 − 2Z 2 )Z A − 8s 2 Q A (sZ)ZZ A + (−s 4 P ′′ AB (sZ) − s 3 P ′ AB (sZ) − 4s 2 P AB (sZ)) Z A Z B . On the other hand, let F and Q A be antiderivatives of F and Q A respectively, and let A AB satisfy (5.9). One verifies that (5.14) β = (F(sZ)/2) Z + Q A (sZ)Z A + A AB (Z, s)Z A Z B solves the linearized isometric embedding equation (5.13) since, for a trace-free, symmetric 2-tensor A AB (Z, s), (∆ + 2)(A AB (Z, s)Z A Z B ) = (A ′′ AB (Z, s)(1 − Z 2 ) − 6A ′ AB (Z, s)Z − 4A AB (Z, s)) Z A Z B . We are ready to compute (5.12), where β is given in (5.14). We have −∇̃ a ∇̃ b β − βσ̃ ab = −(s 2 /2) F ′ ZZ a Z b + s 2 F Z 2 σ̃ ab − s 2 Q ′ A Z a Z b Z A + sQ A ZZ A σ̃ ab − A ′′ AB Z a Z b Z A Z B − 2A ′ AB (Z a Z A b + Z b Z A a )Z B + (sP AB − 2A AB ) Z A a Z B b + (A ′ AB Z + A AB )Z A Z B σ̃ ab − (1/s) σ (−1) ab , (1/s) σ (−1) ab − (1/2) ∂ s ḡ (−1) ab = −(s 2 /2) F ′ ZZ a Z b + Q ′ A Z(Z a Z A b + Z b Z A a ) + P ′ AB ZZ A a Z B b , (1/2)(∇̃ a ḡ (−1) bs + ∇̃ b ḡ (−1) as ) = s 2 F ′ ZZ a Z b − sF Z 2 σ̃ ab + (s 2 /2) Q ′ A Z(Z a Z A b + Z b Z A a ) + s 2 Q ′ A Z a Z b Z A − 2sQ A ZZ A σ̃ ab + (s 2 /2) P ′ AB (Z a Z A b + Z b Z A a )Z B − sP AB Z A Z B σ̃ ab + (1/s) σ (−1) ab , (1/2) ḡ (−1) ss sσ̃ ab = s ((1/2) F Z 2 + Q A ZZ A + (1/2) P AB Z A Z B ) σ̃ ab . We see that terms involving F, Q A cancel and the result has the asserted form.
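The operator ∆ + 2 appearing throughout these computations annihilates Z and the two functions Z A of Section 3, because all three are restrictions of linear functions to S 2 and hence first eigenfunctions of the round-sphere Laplacian. The following symbolic check is illustrative and not part of the paper; it assumes SymPy is available and uses arbitrary sample values for the fixed direction angles.

```python
import sympy as sp

# Symbolic check: Z, Z^1, Z^2 satisfy Delta f = -2 f on the round unit sphere,
# so (Delta + 2) annihilates them.  theta, phi play the role of the B^3 angles;
# th0, ph0 are hypothetical sample values for the direction angles.

th, ph = sp.symbols("theta phi")
th0, ph0 = sp.pi / 3, sp.pi / 4

def sphere_laplacian(f):
    # Laplace-Beltrami operator of the metric d(theta)^2 + sin^2(theta) d(phi)^2
    return (sp.diff(sp.sin(th) * sp.diff(f, th), th) / sp.sin(th)
            + sp.diff(f, ph, 2) / sp.sin(th) ** 2)

Z = (sp.sin(th0) * sp.sin(ph0) * sp.sin(th) * sp.sin(ph)
     + sp.sin(th0) * sp.cos(ph0) * sp.sin(th) * sp.cos(ph)
     + sp.cos(th0) * sp.cos(th))
Z1 = (sp.cos(th0) * sp.cos(ph0) * sp.sin(th) * sp.cos(ph)
      + sp.cos(th0) * sp.sin(ph0) * sp.sin(th) * sp.sin(ph)
      - sp.sin(th0) * sp.cos(th))
Z2 = (-sp.sin(ph0) * sp.sin(th) * sp.cos(ph)
      + sp.cos(ph0) * sp.sin(th) * sp.sin(ph))

residuals = [sp.simplify(sphere_laplacian(f) + 2 * f) for f in (Z, Z1, Z2)]
```

All three residuals simplify to zero, which is the eigenfunction property used in the optimal embedding equation.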
Next we compute τ (−1) . Lemma 5.2. Define the second order differential operator LG(Z) = ((1 − Z 2 )G ′ (Z)) ′ − 4G ′ (Z)Z − 6G(Z). Let B AB (Z) be a traceless, symmetric 2-tensor that solves the ODE (5.15) L(L + 2)B AB = (1/2) P ′′′ AB (Z)(1 − Z 2 ) − 4P ′′ AB (Z)Z − 6P ′ AB (Z). ∆̃(∆ + 2)τ (−1) = (1/2) F ′′′ (1 − Z 2 ) 2 − 4F ′′ Z(1 − Z 2 ) − 2F ′ (1 − 3Z 2 ) + ((1/2) P ′′′ AB (Z)(1 − Z 2 ) − 4P ′′ AB (Z)Z − 6P ′ AB (Z)) Z A Z B . Proof. The equation is linear. We look for τ (−1) 1 and τ (−1) 2 such that ∆̃(∆ + 2)τ (−1) 1 = (1/2) F ′′′ (1 − Z 2 ) 2 − 4F ′′ Z(1 − Z 2 ) − 2F ′ (1 − 3Z 2 ), ∆̃(∆ + 2)τ (−1) 2 = ((1/2) P ′′′ AB (Z)(1 − Z 2 ) − 4P ′′ AB (Z)Z − 6P ′ AB (Z)) Z A Z B . From Lemma 3.3 of [15], τ (−1) 1 = ZF(Z) solves the first equation: ∆̃(∆ + 2)(ZF(Z)) = (1/2) F ′′′ (1 − Z 2 ) 2 − 4F ′′ Z(1 − Z 2 ) − 2F ′ (1 − 3Z 2 ). It is straightforward to verify that τ We are ready to state the main theorem for the quasi-local mass, Theorem 5.3. For T 0 = (1, 0, 0, 0) and X solving the leading order term of the optimal embedding equation, the Wang-Yau quasi-local energy is E(Σ d , T 0 , X) = (1/d 2 ) [ ∫ B 3 ((1/8) ∑ A,B P ′ AB (sZ)P ′ AB (sZ) − det(h (−1) 0 − h (−1) )) + (1/4) ∫ S 2 ((1/4) (P ′ AB Z A Z B ) 2 − B DE Z D Z E ((1/2) P ′′′ AB (Z)(1 − Z 2 ) − 4P ′′ AB (Z)Z − 6P ′ AB (Z)) Z A Z B ) ] + O(d −3 ), where h (−1) 0 − h (−1) is as determined in Lemma 5.1. ∫ S 2 ((tr Σ k (−1) ) 2 − τ (−1) ∆̃(∆ + 2)τ (−1) ) = ∫ S 2 ((1/4) F 2 (1 − Z 2 ) 2 − τ (−1) 1 ∆̃(∆ + 2)τ (−1) 1 ) + ∫ S 2 ((1/4) (P ′ AB (Z)Z A Z B ) 2 − τ (−1) 2 ∆̃(∆ + 2)τ (−1) 2 ). We have ∫ S 2 ((1/4) F 2 (1 − Z 2 ) 2 − τ (−1) 1 ∆̃(∆ + 2)τ (−1) 1 ) = 0 by [15, (3.6)]. This finishes the proof of the theorem. In particular, we observe that the answer depends on the leading order term of the news function on B 3 since both ODEs in Lemma 5.1 and Lemma 5.2 are linear ODEs where the right-hand side depends on P AB and their derivatives. In general, we do not have explicit solutions to these ODEs. In the following section, we compute the quasi-local mass explicitly for a few special examples.
SPECIAL CASES Write E(Σ d , T 0 , X) = d −2 E (−2) + O(d −3 ). We evaluate E (−2) for a few special cases of P AB . Let p AB , q AB be two constant symmetric traceless 2-tensors. Proposition 6.1. If P AB (x) = p AB + q AB x, E (−2) = 0. Proof. One verifies that E (−2) = (1/(8π)) [ (1/8) ∑ A,B q AB q AB · (4π/3) + (1/4) ∫ S 2 ( (1/4) (q AB Z A Z B ) 2 + (1/4) q DE Z D Z E · (−6q AB Z A Z B ) ) ] = 0, where we used the identity ∫ S 2 Z A Z B Z D Z E = (4π/15)(δ AB δ DE + δ AD δ BE + δ AE δ BD ). h (−1) 0 − h (−1) = (s 3 /3) ( Z A Z B σ̃ ab − Z a Z b Z A Z B + Z (Z a Z A b + Z b Z A a ) Z B − ((Z) 2 + 2) Z A a Z B b ) p AB . We compute as follows. Denote |p| 2 = ∑ A,B p AB p AB . The volume integral contributes (1/3) ∫ S 2 ( ((Z) 2 /2) |p| 2 + (1/18) (2(Z) 2 − 8) δ AD Z B Z E p AB p DE + ((Z) 2 + 2) 2 |p| 2 ) = (4π/9) |p| 2 and the surface integral contributes |h (−1) 0 − h (−1) | 2 σ̃ = (s 2 /9) ( 9 (p AB Z A Z B ) 2 + (2(Z) 2 − 8) δ AD Z B Z E p AB p DE + ((Z) 2 + 2) 2 δ AD δ BE p AB p DE ), tr σ̃ (h (−1) 0 − h (−1) ) = sZ A Z B p AB . (1/4) ∫ S 2 ( (Z) 2 (p AB Z A Z B ) 2 − (10/3) (Z) 2 Z D Z E p DE Z A Z B p AB ) = −(2π/45) |p| 2 , where we used the identity ∫ S 2 (Z) 2 Z A Z B Z D Z E = (4π/105)(δ AB δ DE + δ AD δ BE + δ AE δ BD ). P.-N. Chen is supported by NSF grant DMS-1308164 and Simons Foundation collaboration grant #584785, M.-T. Wang is supported by NSF grants DMS-1405152 and DMS-1810856, Y.-K. Wang is supported by MOST Taiwan grants 105-2115-M-006-016-MY2 and 107-2115-M-006-001-MY2, and S.-T. Yau is supported by NSF grants PHY-0714648 and DMS-1308244. Then τ (−1) = ZF(Z) + B AB (Z)Z A Z B solves the leading order of the optimal embedding equation. τ (−1) 2 = B AB (Z)Z A Z B solves the second equation if the traceless, symmetric 2-tensor B AB (Z) solves (5.15). Proposition 6.2. If P AB (x) = p AB x 2 , then E (−2) = (1/20) ∑ A,B p AB p AB . Proof. One verifies that A AB (Z, s) = s 3 (Z) Z) 2 − 8 δ AD Z B Z E + (Z) 2 + 2 2 δ AD δ BE p AB p DE . as determined in Lemma 5.1 and B AB is as determined in Lemma 5.2. Proof.
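The two sphere-integral identities quoted in the proofs above can be spot-checked numerically. This is an illustrative check, not the paper's code, and it assumes NumPy is available; by rotational invariance we may identify Z, Z 1 , Z 2 with the coordinate functions x 3 , x 1 , x 2 restricted to S 2 .

```python
import numpy as np

# Verify numerically:
#   int_{S^2} x_i x_j x_k x_l         = (4*pi/15)(d_ij d_kl + d_ik d_jl + d_il d_jk)
#   int_{S^2} x3^2 x_A x_B x_D x_E    = (4*pi/105)(d_AB d_DE + d_AD d_BE + d_AE d_BD)
# with A, B, D, E in {1, 2}.  Gauss-Legendre in u = cos(theta) times a uniform
# phi grid integrates these low-degree polynomials essentially exactly.

n_u, n_phi = 20, 40
u, w = np.polynomial.legendre.leggauss(n_u)          # nodes/weights on [-1, 1]
W = w[:, None] * (2 * np.pi / n_phi) * np.ones((1, n_phi))   # area weights (dA = du dphi)

phi = 2 * np.pi * np.arange(n_phi) / n_phi
U, PHI = np.meshgrid(u, phi, indexing="ij")
sin_t = np.sqrt(1.0 - U**2)
x = np.stack([sin_t * np.cos(PHI), sin_t * np.sin(PHI), U])  # (x1, x2, x3) on the grid

sym4 = lambda d: (np.einsum("ij,kl->ijkl", d, d) + np.einsum("ik,jl->ijkl", d, d)
                  + np.einsum("il,jk->ijkl", d, d))

fourth = np.einsum("iab,jab,kab,lab,ab->ijkl", x, x, x, x, W)
sixth = np.einsum("ab,iab,jab,kab,lab,ab->ijkl", x[2]**2, x[:2], x[:2], x[:2], x[:2], W)

err4 = np.abs(fourth - (4 * np.pi / 15) * sym4(np.eye(3))).max()
err6 = np.abs(sixth - (4 * np.pi / 105) * sym4(np.eye(2))).max()
max_err = max(err4, err6)
```

Both discrepancies come out at machine precision, supporting the moment identities that drive the cancellation in Proposition 6.1.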
We start with Theorem 4.1, in which h (−1) 0 − h (−1) is as determined in Lemma 5.1 and τ (−1) is as determined in Lemma 5.2. We simplify the expression.
[1] R. Bartnik, Quasi-spherical metrics and prescribed scalar curvature, J. Differential Geom. 37 (1993), no. 1, 31-71.
[2] R. Bartnik, New definition of quasi-local mass, Phys. Rev. Lett. 62 (1989), no. 20, 2346-2348.
[3] H. Bondi, M. G. J. van der Burg, and A. W. K. Metzner, Gravitational waves in general relativity. VII. Waves from axi-symmetric isolated systems, Proc. Roy. Soc. Ser. A 269 (1962), 21-52.
[4] I. S. Booth and R. B. Mann, Phys. Rev. D 59, 064021 (1999).
[5] J. D. Brown and J. W. York, Quasi-local energy and conserved charges derived from the gravitational action, Phys. Rev. D (3) 47 (1993), no. 4, 1407-1419.
[6] S. Chandrasekhar, The mathematical theory of black holes, reprint of the 1992 edition, Oxford Classic Texts in the Physical Sciences, Oxford Univ. Press, New York.
[7] P.-N. Chen, M.-T. Wang, Y.-K. Wang, and S.-T. Yau, Quasi-local mass on unit spheres at spatial infinity, in preparation.
[8] P.-N. Chen, M.-T. Wang, and S.-T. Yau, Evaluating quasi-local energy and solving optimal embedding equation at null infinity, Comm. Math. Phys. 308 (2011), no. 3, 845-863.
[9] P.-N. Chen, M.-T. Wang, and S.-T. Yau, Minimizing properties of critical points of quasi-local energy, Comm. Math. Phys. 329 (2014), no. 3, 919-935.
[10] P.-N. Chen, M.-T. Wang, and S.-T. Yau, Conserved quantities in general relativity: from the quasi-local level to spatial infinity, Comm. Math. Phys. 338 (2015), no. 1, 31-80.
[11] P.-N. Chen, M.-T. Wang, and S.-T. Yau, Quasi-local energy in presence of gravitational radiation, Int. J. Mod. Phys. D 25, 164501 (2016).
[12] P.-N. Chen, M.-T. Wang, and S.-T. Yau, Quasi-local mass in the gravitational perturbations of black holes, in preparation.
[13] P.-N. Chen, M.-T. Wang, and S.-T. Yau, Evaluating small sphere limit of the Wang-Yau quasi-local energy, Comm. Math. Phys. 357 (2018), no. 2, 731-774.
[14] A. J. Dougan and L. J. Mason, Quasilocal mass constructions with positive energy, Phys. Rev. Lett. 67, 2119-2122 (1991).
[15] P.-N. Chen, M.-T. Wang, and S.-T. Yau, Quasi-local mass at the null infinity of the Vaidya spacetime, Nonlinear analysis in geometry and applied mathematics, 33-48, Harv. Univ. Cent. Math. Sci. Appl. Ser. Math., 1, Int. Press, Somerville, MA, 2017.
[16] D. Christodoulou, Nonlinear nature of gravitation and gravitational-wave experiments, Phys. Rev. Lett. 67 (1991), no. 12, 1486-1489.
[17] S. W. Hawking, Gravitational radiation in an expanding universe, J. Math. Phys. 9, 598 (1968).
[18] S. W. Hawking and G. T. Horowitz, The gravitational Hamiltonian, action, entropy and surface terms, Classical Quantum Gravity 13 (1996), no. 6, 1487-1498.
[19] G. T. Horowitz and M. J. Perry, Gravitational energy cannot become negative, Phys. Rev. Lett. 48 (1982), no. 6, 371-374.
[20] J. Kijowski, A simple derivation of canonical structure and quasi-local Hamiltonians in general relativity, Gen. Relativity Gravitation 29 (1997), no. 3, 307-343.
[21] C.-C. M. Liu and S.-T. Yau, Positivity of quasilocal mass, Phys. Rev. Lett. 90, 231102 (2003).
[22] C.-C. M. Liu and S.-T. Yau, Positivity of quasi-local mass II, J. Amer. Math. Soc. 19 (2006), no. 1, 181-204.
[23] N. Ó Murchadha, L. B. Szabados, and K. P. Tod, Comment on: "Positivity of quasi-local mass", Phys. Rev. Lett. 92 (2004), no. 25, 259001, 1 p.
[24] L. Nirenberg, The Weyl and Minkowski problems in differential geometry in the large, Comm. Pure Appl. Math. 6 (1953), 337-394.
[25] R. Penrose, Some unsolved problems in classical general relativity, Seminar on Differential Geometry, pp. 631-668, Ann. of Math. Stud., 102, Princeton Univ. Press, Princeton, N.J., 1982.
[26] R. Penrose, Quasi-local mass and angular momentum in general relativity, Proc. Roy. Soc. London Ser. A 381 (1982), no. 1780.
[27] A. V. Pogorelov, Regularity of a convex surface with given Gaussian curvature, (Russian) Mat. Sbornik N.S. 31(73) (1952), 88-103.
[28] R. K. Sachs, Gravitational waves in general relativity, VIII. Waves in asymptotically flat space-time, Proc. Roy. Soc. Ser. A 270 (1962), 103-126.
[29] R. Schoen and S.-T. Yau, Proof that the Bondi mass is positive, Phys. Rev. Lett. 48 (1982), no. 6, 369-371.
[30] Y. Shi and L.-F. Tam, Positive mass theorem and the boundary behaviors of compact manifolds with nonnegative scalar curvature, J. Differential Geom. 62 (2002), no. 1, 79-125.
[31] K. P. Tod, Penrose's quasi-local mass, in Twistors in mathematics and physics, 164-188, London Math. Soc. Lecture Note Ser., 156, Cambridge Univ. Press, Cambridge.
[32] A. Trautman, Boundary conditions at infinity for physical theories, Bull. Acad. Polon. Sci. 6 (1958), 403-406; reprinted as arXiv:1604.03144.
[33] A. Trautman, Radiation and boundary conditions in the theory of gravitation, Bull. Acad. Polon. Sci. 6 (1958), 407-412; reprinted as arXiv:1604.03145.
[34] M.-T. Wang and S.-T. Yau, Quasi-local mass in general relativity, Phys. Rev. Lett. 102 (2009), no. 2, no. 021101.
[35] M.-T. Wang and S.-T. Yau, Isometric embeddings into the Minkowski space and new quasi-local mass, Comm. Math. Phys. 288 (2009), no. 3, 919-942.
[36] M. G. J. van der Burg, Gravitational waves in general relativity, IX. Conserved quantities, Proc. Roy. Soc. Ser. A 294 (1966), 112-122.
[]
[ "MuscleMap: Towards Video-based Activated Muscle Group Estimation", "MuscleMap: Towards Video-based Activated Muscle Group Estimation" ]
[ "Kunyu Peng \nKarlsruhe Institute of Technology\n\n", "David Schneider \nKarlsruhe Institute of Technology\n\n", "Alina Roitberg \nKarlsruhe Institute of Technology\n\n", "Kailun Yang \nHunan University\n\n", "Jiaming Zhang \nKarlsruhe Institute of Technology\n\n", "M Saquib Sarfraz \nKarlsruhe Institute of Technology\n\n\nMercedes-Benz Tech Innovation\n\n", "Rainer Stiefelhagen \nKarlsruhe Institute of Technology\n\n" ]
[ "Karlsruhe Institute of Technology\n", "Karlsruhe Institute of Technology\n", "Karlsruhe Institute of Technology\n", "Hunan University\n", "Karlsruhe Institute of Technology\n", "Karlsruhe Institute of Technology\n", "Mercedes-Benz Tech Innovation\n", "Karlsruhe Institute of Technology\n" ]
[]
In this paper, we tackle the new task of video-based Activated Muscle Group Estimation (AMGE) aiming at identifying active muscle regions during physical activity. To this intent, we provide the MuscleMap136 dataset featuring >15K video clips with 136 different activities and 20 labeled muscle groups. This dataset opens the vistas to multiple video-based applications in sports and rehabilitation medicine. We further complement the main MuscleMap136 dataset, which specifically targets physical exercise, with Muscle-UCF90 and Muscle-HMDB41, which are new variants of the well-known activity recognition benchmarks extended with AMGE annotations. To make the AMGE model applicable in real-life situations, it is crucial to ensure that the model can generalize well to types of physical activities not present during training and involving new combinations of activated muscles. To achieve this, our benchmark also covers an evaluation setting where the model is exposed to activity types excluded from the training set. Our experiments reveal that generalizability of existing architectures adapted for the AMGE task remains a challenge. Therefore, we also propose a new approach, TRANSM 3 E, which employs a transformer-based model with cross-modal multilabel knowledge distillation and surpasses all popular video classification models when dealing with both, previously seen and new types of physical activities. 1
10.48550/arxiv.2303.00952
[ "https://export.arxiv.org/pdf/2303.00952v2.pdf" ]
257,279,945
2303.00952
89ba72ad1e60fe68792d80804119a00a0d57324f
MuscleMap: Towards Video-based Activated Muscle Group Estimation Kunyu Peng Karlsruhe Institute of Technology David Schneider Karlsruhe Institute of Technology Alina Roitberg Karlsruhe Institute of Technology Kailun Yang Hunan University Jiaming Zhang Karlsruhe Institute of Technology M Saquib Sarfraz Karlsruhe Institute of Technology Mercedes-Benz Tech Innovation Rainer Stiefelhagen Karlsruhe Institute of Technology
1 Introduction

Knowing which skeletal muscles of the human body are activated benefits sports and rehabilitation medicine from multiple perspectives and prevents inappropriate muscle usage which may cause physical injuries [103]. In health care, patients need to know how to conduct an exercise correctly to recover from surgery [106,22,4,85,90] or specific diseases [133,27], e.g., COVID-19 [93,15]. Knowledge about muscle activations allows for user-centric fitness applications providing insights for everyday users or professional athletes who need specially adapted training.

[Figure 1: Overview of the proposed MuscleMap136 dataset (Top) and the TRANSM 3 E model (Bottom). We make use of four data modalities, i.e., RGB, RGB difference (RGB Diff), optical flow, and 2D skeleton. PE and TF denote the patch embedding layer and the transformer block.]

The majority of existing work on Activated Muscle Group Estimation (AMGE) is based on wearable devices with electrode sensors [59,31,32,28,70,12]. Yet, many wearable devices are inconvenient and heavy [112,105], oftentimes used incorrectly [97], and might even cause unnecessary anxiety to the users [25]. A big strength of wearable devices is the high accuracy achieved through direct signal measurement from skin or muscle tissue. However, such exact bio-electrical changes are not required in a large number of medical recovery programs, and knowing the binary activation status of the muscle as shown in Figure 1 is sufficient in many situations [141,109,83]. Applying video-based AMGE on smartphones or other widely available smart devices would allow for the application of such programs even without access to specialized hardware. Can modern deep learning algorithms relate fine-grained physical movements to individual muscles?
To answer this question, we tackle the new task of video-based AMGE, which estimates muscle contraction during physical activities from video recordings.

Table 1: A comparison among the statistics of video-based datasets, where AR, AQA, and CE indicate activity recognition, activity quality assessment, and calorie consumption estimation.

Dataset             | NumClips | Task | MultiLabel | NumActions
KTH [63]            | 599      | AR   | False      | 6
UCF101 [120]        | 13,320   | AR   | False      | 101
HMDB51 [69]         | 6,849    | AR   | False      | 51
ActivityNet [17]    | 28,108   | AR   | False      | 200
Kinetics400 [17]    | 429,256  | AR   | False      | 400
Video2Burn [99]     | 9,789    | CE   | False      | 72
MTL-AQA [95]        | 1,412    | AQA  | True       | /
FineDive [146]      | 3,000    | AQA  | True       | 29
FineGym [114]       | 32,697   | AQA  | True       | 530
MuscleMap136 (Ours) | 15,007   | AMGE | True       | 136

As there is no previous work for video-based AMGE, we created MuscleMap136 -- a video-based dataset with 136 different exercises collected from YouTube, each exercise annotated with one or multiple out of 20 different muscle group activations, as described in Table 1. To investigate AMGE beyond intensive physical exercises, we additionally extend eligible subsets of the commonly used human activity recognition (HAR) benchmarks HMDB51 [69] and UCF101 [120] with AMGE annotations; we refer to these annotated subset datasets as Muscle-HMDB41 and Muscle-UCF90. Since there is no comparable work for video-based AMGE, we select various off-the-shelf Convolutional Neural Networks (CNNs) [19,44], Graph Convolutional Networks (GCNs) [149,21,71], and transformer-based architectures [41,74,80] designed for HAR, together with statistical methods, as baselines. However, we find that all of these models struggle to generalize when they face new activity types containing new activated muscle combinations at test time. To tackle this issue, we propose TRANSM 3 E, a cross-modality knowledge distillation (KD) architecture which combines RGB and RGB difference (RGB Diff) via a new transformer-specific KD mechanism during training.
We leverage the best-performing architecture, MViTv2 [74], as backbone and equip it with three essential novel components designed to achieve a better extraction of the underlying cues for AMGE, i.e., Multi-Classification Tokens (MCTs) for AMGE, Multi-Classification Tokens Knowledge Distillation (MCTKD), and Multi-Classification Tokens Fusion (MCTF). Our MCTs are structurally similar to the MCT of [147] but work differently, harvesting informative AMGE cues without class-specific alignment. Our proposed MuscleMap benchmark describes a multi-label classification problem where each sample might be annotated with one or up to twenty labels. We use MCTs for AMGE by enriching the single classification (cls) token into twenty tokens to build the base for cross-modality MCT-level KD. KD [56] is leveraged as a mechanism for cross-modality knowledge transfer after each transformer block, which enables us to train with multi-modal data but test with RGB data only. This separates us from preceding work which uses KD as a single-modality knowledge transfer mechanism on the final/intermediate network layers [86,43,126,77,56], and from other multi-modal architectures which make use of redundant networks to extract per-modality features [47,62,129,94]. While the MCTKD mechanism integrates cross-modal knowledge into our main network with additional MCTs as KD receiver, another contribution, MCTF, merges the receiver MCTs of KD and the MCTs of the major modality at the final layer. By combining these three components, TRANSM 3 E achieves state-of-the-art performance with superior generalizability compared to the tested baselines.

In summary, our contributions are as follows:
• We open the vistas of the video-based Activated Muscle Group Estimation (AMGE) task with the aim of lowering the threshold of entry to muscle-activation-based health care and sports applications.
• We provide a new benchmark, MuscleMap, to propel research on the aforementioned task, which includes the MuscleMap136 dataset as well as Muscle-HMDB41 and Muscle-UCF90. We also present a large number of baseline experiments for this benchmark, including CNN-, transformer-, and GCN-based approaches.
• We especially take the evaluation of generalizability into consideration by constructing test and validation sets using new activities containing new activated muscle combinations excluded during training.

Related Work

Video-based activity recognition has been driven by 3D-CNNs, especially with advanced pre-training methods and large datasets [9,89,13,74,159,41,124,79,80,137]. Action quality assessment (AQA) [95,123] and visual calorie estimation (VCE) [99] relate to our work since these methods likewise shift the question of research from what? to how? with the aim of detailed analysis of human motion. Multimodal data is a common strategy, e.g., by combining RGB video with audio [67,7,102,96,5], poses [30,29,104,110], optical flow [54,102], or temporal difference images [91]. Body poses are commonly used as a modality for activity recognition on their own. Yan et al. [149] and follow-up research [115,81,21,116,152,155,21,71] make use of GCNs, while competitive approaches leverage CNNs with special pre-processing methods [26,37].

Knowledge distillation (KD) [56] became a common technique to reduce the size of a neural network while maintaining performance. In the review [52], methods are categorized by whether knowledge distillation is based on final network outputs (response-based) [117, 53, …]. Multi-label classification methods allow for assigning more than a single class to a data sample. Common strategies include per-class binary classifiers with adapted loss functions to counter the imbalance problem [11], methods which make use of spatial knowledge [154,153,48,24], methods which make use of knowledge about label relations [39,139,160,23,121], or methods which are based on word embeddings [107,78,148].
Datasets which combine visual data of the human body with muscle activation information are sparse and mainly limited to specific sub-regions of the human body, e.g., hand gesture recognition [49]. In contrast, a large variety of full-body HAR datasets were collected in recent years, which are labelled with high-level human activities [69,120,65,113,75,98], fine-grained human action segments [68,122,158,73], or action quality annotations [114,146,95,123]. We leverage such datasets by extending them with muscle group activation labels.

Benchmark

HMDB51 [69] contains 51 activities with 6,849 video clips. It is based on commercial movies and contains everyday living scenarios but also depictions of very uncommon situations and environments.

Muscle group annotations. We cluster skeletal muscles into 20 major muscle groups with binary activation, as shown in the checkboxes in Figure 1. As annotations from sport experts have been successfully used in similar problems [95], the pairing of muscle group activations to physical activities is obtained from >400 health care and fitness resources by well-established fitness and sport experts and from comprehensive scientific research on physical activity, e.g., [34,40,55,131,88,16,18,130]. These resources label a set of primary activated muscles. More details are provided in the supplementary.

Label noise suppression. After constructing a mapping between physical activities and the activated muscle groups using the aforementioned sources, we employ a label noise suppression mechanism to ensure high-quality annotations. First, to maintain AMGE diversity, we take activity granularity and variants into consideration. Note that different variants always have different annotations. To avoid noisy labels, we use statistical majority voting following the ImageNet1K [33] protocol.
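Such a majority-voting rule for the muscle annotations can be sketched as follows (illustrative only: the muscle names, the three toy sources, and the two-thirds agreement threshold are hypothetical, as the paper only states that multiple sources must agree):

```python
from collections import Counter

def majority_vote(votes_per_source, agreement=2 / 3):
    """Keep a muscle-group label only if at least `agreement` of the
    consulted sources mark it as a primary activated muscle."""
    n_sources = len(votes_per_source)
    counts = Counter(m for source in votes_per_source for m in source)
    return sorted(m for m, c in counts.items() if c / n_sources >= agreement)

# Three hypothetical expert sources annotating one exercise:
sources = [{"biceps", "deltoids"},
           {"biceps", "deltoids", "forearms"},
           {"biceps"}]
print(majority_vote(sources))  # → ['biceps', 'deltoids']  (forearms: only 1/3 votes)
```

Under this rule, a muscle group survives only when the sources show no considerable inconsistency about its activation.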
This protocol ensures agreement among different sources: a muscle group label is considered reliable for a certain physical activity only if multiple scientific and expert sources agree on its activation and there are no considerable inconsistencies among the sources. A sample statistical analysis of the label noise suppression is provided in the supplementary.

Architecture

Preliminaries of MViT. TRANSM 3 E is based on the improved multi-scale visual transformer (MViTv2) [74], which builds on MViTv1 [41]. Compared with ViT [36], MViTv1 progressively increases the channel resolution while simultaneously reducing the resolution on the spatiotemporal plane, realizing pooling operations on both Keys (K) and Queries (Q). The basic idea of MViTv1 is the construction of different low- and high-level visual modeling stages [41]. Multi-scale pooling attention is one of the major components of MViTv2 compared with ViT. MViTv2 uses decomposed relative position embeddings and residual pooling connections to integrate the principle of shift-invariance into the model and reduce computational complexity, while the downscaling in MViTv1 is achieved by large strides on the Keys (K) and Values (V).

MCTs. Our MCTs have a structure similar to the MCT designed for semantic segmentation [147], but are used differently: they harvest more informative components to achieve good generalizability for AMGE and construct the sender and receiver for cross-modality KD in our work. The major difference between our AMGE-specific MCTs and the semantic segmentation-specific MCT in [147] is that Xu et al. [147] used the MCT outputs from different-stage attention maps and leveraged pooling along the channel dimension of each token to calculate the semantic segmentation map, while we directly use the final-layer output of the MCTs and aggregate them along the token dimension together with a Softmax to achieve multi-label classification.
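The token-dimension aggregation with Softmax described above can be sketched as follows (a numpy stand-in; the embedding size, the random weights standing in for the FC layer, and the assumption that the averaging divisor equals the number of tokens are illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
C, D, n_regions = 20, 96, 20                # cls tokens, embedding dim, muscle groups

cls_tokens = rng.normal(size=(C, D))        # final-layer output of the MCTs
W = rng.normal(size=(D, n_regions)) * 0.1   # stand-in for the FC layer P_alpha
b = np.zeros(n_regions)

merged = cls_tokens.sum(axis=0) / C         # aggregate MCTs along the token dimension
logits = merged @ W + b                     # one score per muscle region
probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # Softmax over the last dimension
```

Binarizing `probs` into activated/non-activated muscle groups (e.g., by thresholding) is a separate step not specified here.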
Let the classification (cls) tokens of the MCTs be referred to by $\mathrm{cls}_j$, $j \in \{1, \ldots, C\}$, and the flattened patch embeddings by $\{p_i\}$, $i \in \{1, \ldots, N_{Patches}\}$, for the given input video, where $N_{Patches}$ is the length of the patch sequence. The input of the first MViTv2 block is $\{\mathrm{cls}_1, \mathrm{cls}_2, \ldots, \mathrm{cls}_C, p_1, \ldots, p_{N_{Patches}}\}$. The final classification is computed through

$$\mathrm{Output} = \mathrm{Softmax}\Big(P_\alpha\Big(\sum_{i=1}^{C} \mathrm{cls}_i / N\Big),\ \mathrm{dim}=-1\Big),$$

where $P_\alpha$ indicates a fully connected (FC) layer projecting the merged MCTs to a single vector whose dimensionality is the number of muscle regions. This final calculation also shows the difference between our MCTs and the MCT from [147].

MCTKD. Multi-Classification Tokens Knowledge Distillation (MCTKD) is one of our main contributions and, to the best of our knowledge, we are the first to introduce this technique. Most existing multimodal fusion (MMF) approaches tend to repeat the feature extraction backbone several times [47,62,129], resulting in large memory consumption and limited inference speed. This problem increases if a complementary modality has to be computed at inference time, as can be the case with optical flow or body pose estimation. In the past, transformer-based KD mainly focused on using intermediate full patch embeddings [86,77,43] or the final cls token [126], while we propose KD on MCTs for both intermediate and final layers with an additional MCT-based knowledge receiver. The underlying benefit of MCTKD is that the token number of the MCTs is fixed, while KD on patch embeddings [77,43] may encounter alignment issues when the modalities differ in the token size of their patch embeddings. Instead of directly distilling knowledge from the MCTs of an auxiliary modality towards the MCTs of the major modality, another N cls tokens (Receiver MCTs) are introduced, as aforementioned. This approach reduces disturbance of the MCTs of the major modality.
Assuming the receiver MCTs of the major modality branch are denoted as $\mathrm{cls}_r = \{\mathrm{cls}_{r,1}, \mathrm{cls}_{r,2}, \ldots, \mathrm{cls}_{r,C}\}$ and the sending MCTs from the auxiliary modality branch are indicated by $\mathrm{cls}_s = \{\mathrm{cls}_{s,1}, \mathrm{cls}_{s,2}, \ldots, \mathrm{cls}_{s,C}\}$, MCTKD is achieved by applying a KL-Divergence (KL-Div) loss after each MViTv2 block on $\mathrm{cls}_r$ and $\mathrm{cls}_s$:

$$L_{MCTKD,all} = \Big(\sum_{i=1}^{N_B} \mathrm{KL\text{-}Div}(\mathrm{cls}_r^i, \mathrm{cls}_s^i)\Big)/N_B, \qquad (1)$$

where $N_B$ and $L_{MCTKD,all}$ refer to the block number and the sum of the MCTKD losses. $L_{MCTKD,all}$ is combined equally with the binary cross-entropy loss ($L_{BCE}$).

MCTF. Another major component of TRANSM 3 E is Multi-Classification Tokens Fusion (MCTF), designed to fuse the knowledge receiver MCTs and the original MCTs of the major modality branch; the process is shown in Figure 3. Assuming $\mathrm{cls}_r$ denotes the receiver MCTs and $\mathrm{cls}_m$ the MCTs from the major modality branch, K, Q, and V for each set of MCTs are obtained through linear projections:

$$K_m, Q_m, V_m = P_K^m(\mathrm{cls}_m), P_Q^m(\mathrm{cls}_m), P_V^m(\mathrm{cls}_m),\qquad K_r, Q_r, V_r = P_K^r(\mathrm{cls}_r), P_Q^r(\mathrm{cls}_r), P_V^r(\mathrm{cls}_r). \qquad (2)$$

After obtaining the Q, K, and V from the original and receiver MCTs of the major modality branch, a mixed attention mechanism is calculated as follows,

$$A_{mm}^m = P_{mm}(\mathrm{DP}(\mathrm{Att}(Q_m, K_m, V_m))),\quad A_{mr}^m = P_{mr}(\mathrm{DP}(\mathrm{Att}(Q_m, K_r, V_m))),\quad A_{rm}^m = P_{rm}(\mathrm{DP}(\mathrm{Att}(Q_r, K_m, V_m))), \qquad (3)$$

where Att denotes the attention operation computed as $\mathrm{Att}(Q, K, V) = \mathrm{Softmax}(Q K^\top)V$ and DP indicates Dropout. The above equations provide attentions considering different perspectives, including the self-attention $A_{mm}^m$ and two types of cross attention, i.e., $A_{rm}^m$ and $A_{mr}^m$, which use the Queries from the original MCTs and the Keys from the receiver MCTs and vice versa.
The same procedure is conducted for the receiver MCTs to generate $A_{rr}^r$, $A_{rm}^r$, and $A_{mr}^r$, with Dropout denoted by DP, through

$$A_{rr}^r = P_{rr}(\mathrm{DP}(\mathrm{Att}(Q_r, K_r, V_r))),\quad A_{rm}^r = P_{rm}(\mathrm{DP}(\mathrm{Att}(Q_r, K_m, V_r))),\quad A_{mr}^r = P_{mr}(\mathrm{DP}(\mathrm{Att}(Q_m, K_r, V_r))). \qquad (4)$$

Then the attention is finalized as

$$A^m = \mathrm{Sum}(A_{mm}^m, A_{mr}^m, A_{rm}^m),\qquad A^r = \mathrm{Sum}(A_{rr}^r, A_{mr}^r, A_{rm}^r). \qquad (5)$$

The fused attention is thereby calculated through $A_f = P_f(\mathrm{Concat}(A^m, A^r))$, where $P_f$ denotes an FC layer. The whole procedure can be indicated by

$$A_f = \mathrm{CLS}_f(\mathrm{LN}(\mathrm{cls}_m), \mathrm{LN}(\mathrm{cls}_r)), \qquad (6)$$

where LN denotes layer normalization and $\mathrm{CLS}_f$ is the CLS-Fusion. Using $\mathrm{cls}_a$ to denote the average of the major and receiver MCTs, $\mathrm{cls}_a = (\mathrm{cls}_m + \mathrm{cls}_r)/2$, the final fused output is obtained through

$$\mathrm{cls}_f = \mathrm{cls}_a + \mathrm{CLS}_f(\mathrm{LN}(\mathrm{cls}_r), \mathrm{LN}(\mathrm{cls}_m)),\qquad \mathrm{cls}_f := \mathrm{cls}_a + \mathrm{DP}(M_\theta(\mathrm{LN}(\mathrm{cls}_f))), \qquad (7)$$

where $M_\theta$ denotes a multi-layer perceptron (MLP) based projection and DP denotes the dropout operation. MCTKD and MCTF are added after $N_{MCTs}$ epochs of training TRANSM 3 E with only MCTs for both modalities.

Experiments

Implementation details and evaluation. All the video models are pretrained on ImageNet1K [33] using PyTorch 1.8.0 with 4 V100 GPUs. To reproduce TRANSM 3 E, we first train

MuscleMap benchmark. The results of different architectures on our benchmark are provided in Table 2. The statistic baselines, e.g., Random, All Ones/Ones, show overall low performances with <30% mAP on all datasets.

[Figure 5: A use case for abnormal activity detection — predictions of our approach for normal sit-ups (matching the ground truth) and for abnormal sit-ups (alert for injury).]

Skeleton-based approaches obviously outperform the statistic baselines.
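Read back against the architecture section above, the MCTKD objective of Eq. (1) and the MCTF mixed attention of Eqs. (2)-(5) can be sketched jointly as follows (numpy; a single attention head, no Dropout, and identity projections in place of the learned layers P are simplifications, and the tensor sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
C, D, N_B = 20, 32, 4               # tokens per MCT set, embedding dim, blocks

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_div(p_logits, q_logits):     # KL(p || q), distributions over the channel dim
    p, q = softmax(p_logits), softmax(q_logits)
    return float((p * (np.log(p) - np.log(q))).sum())

# Eq. (1): average the per-block KL divergences between receiver and sender MCTs.
cls_r_blocks = [rng.normal(size=(C, D)) for _ in range(N_B)]   # receiver MCTs
cls_s_blocks = [rng.normal(size=(C, D)) for _ in range(N_B)]   # sender MCTs (aux modality)
loss_mctkd = sum(kl_div(r, s) for r, s in zip(cls_r_blocks, cls_s_blocks)) / N_B

def att(Q, K, V):                   # Att(Q, K, V) = Softmax(Q K^T) V
    return softmax(Q @ K.T) @ V

# Eqs. (2)-(5) with identity projections: mixed self- and cross-attention, then sum.
cls_m, cls_r = rng.normal(size=(C, D)), rng.normal(size=(C, D))
Qm = Km = Vm = cls_m
Qr = Kr = Vr = cls_r
A_m = att(Qm, Km, Vm) + att(Qm, Kr, Vm) + att(Qr, Km, Vm)
A_r = att(Qr, Kr, Vr) + att(Qm, Kr, Vr) + att(Qr, Km, Vr)
A_f = np.concatenate([A_m, A_r], axis=-1)   # fused attention before the final FC P_f

cls_a = (cls_m + cls_r) / 2                 # average of major and receiver MCTs
```

Because the number of cls tokens C is fixed for both modalities, the KL term in Eq. (1) never faces the token-alignment issue that patch-embedding KD has.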
Video-based approaches surpass the statistic and skeleton baselines; here transformer-based approaches, e.g., MViTv2 S/B [74] and VideoSwin S/B [80], and CNN-based approaches, e.g., C2D [45], I3D [19], Slow [44], and SlowFast [44], are leveraged. TRANSM 3 E surpasses all the others by large margins. TRANSM 3 E-SMALL achieves 79.8%, 81.2%, 76.5%, 76.9%, 65.4%, and 65.5% mAP for mean val and mean test on the MuscleMap136, Muscle-UCF90, and Muscle-HMDB41 datasets, while the generalizability to new activities is most notable. We also find that the skeleton-based methods do not show results comparable to the video-based methods, which can be caused by the loss of important visual cues. On MuscleMap136, TRANSM 3 E-SMALL outperforms MViTv2-S by 2.8% and 4.0% on mean val and mean test, and works especially well for new val and new test, where TRANSM 3 E-SMALL surpasses MViTv2-S by 5.6% and 7.9%.

Generalization to new activity types. Next, we examine if the model indeed captures the essence of muscle activation instead of memorizing constant muscle combinations corresponding to the known physical activity categories. Ideally, the network should not learn to look up activity-specific values but be able to infer the set of active muscles even for new types of movements. In Table 2 we demonstrate that while existing activity recognition approaches deliver high (mostly >90%) mAP for known activity types, the performance drops significantly (oftentimes by ∼40% on MuscleMap136) when generalizing to new activity types. Still, all implemented models greatly outperform the statistic baselines, with the best published approach (MViTv2-B) delivering 56.3% mAP on new test, showing the feasibility of our task. While our model is only slightly better than MViT on known val (+0.5%), the difference grows significantly when it comes to generalization to new activity types (+7.9% on new test).
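For reference, the mAP metric reported above can be computed per muscle group as in this sketch (pure Python; the toy labels and scores are made up):

```python
def average_precision(labels, scores):
    """AP for one muscle group: precision averaged at each true positive,
    with predictions ranked by descending score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def mean_average_precision(label_matrix, score_matrix):
    """mAP: average the per-column AP over all muscle groups."""
    n_groups = len(label_matrix[0])
    aps = [average_precision([row[g] for row in label_matrix],
                             [row[g] for row in score_matrix])
           for g in range(n_groups)]
    return sum(aps) / n_groups

labels = [[1, 0], [0, 1], [1, 0]]              # 3 clips, 2 muscle groups
scores = [[0.9, 0.2], [0.6, 0.8], [0.4, 0.3]]  # predicted activation scores
print(round(mean_average_precision(labels, scores), 3))  # → 0.917
```

This mirrors the usual multi-label macro-averaged AP; the paper's exact evaluation script may differ in implementation details.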
We attribute this to two designs of our model: 1) the MCT tokens, specifically designed for multi-label assignments, allow high diversity in terms of muscle group combinations, and 2) MCTKD leads to more informative features with knowledge distilled from the complementary modality, less prone to overfitting. Still, the performance gap of ∼35% between known and new activity types indicates an important future research direction. A sample indicating that the AMGE output space is not limited to the known activities is given in the supplementary.

Ablation experiments

This section presents ablation experiments; all ablation experiments were performed with TRANSM 3 E-SMALL.

Ablation of TRANSM 3 E. The ablation study of MCTs, MCTKD, and MCTF is shown in Table 3. The performance of TRANSM 3 E-SMALL increases step by step. The performance of our MCTs is better than that of the MCT of [147].

Selection of modalities. We systematically search for the best performing primary and secondary modalities and present the results in Table 7. RGB and RGB Diff present the best performances.

Multi-Classification Tokens Knowledge Distillation. In this experiment we ablate KD with or without receiver MCTs and list the results in Table 4. We differentiate between channel-wise (CW), token-wise (TW), and full patch-wise (FP) knowledge distillation, where CW means executing Softmax along the channel instead of the token dimension (TW). Additionally, we evaluate the effect of varying the location of the KD application, i.e., at the patch embedding layer (PatchEmbKD/MCTKD), at the final layer (FinalLayerKD/MCTKD), after token size reduction (SparseKD/MCTKD), or after each MViTv2 block (DenseKD/MCTKD). DenseMCTKD (TW) achieves the best performance and is thereby used to build our MCTKD. MCTKD (TW) outperforms the second best approaches, i.e., DenseKD (CW) and SparseMCTKD (FP), for mean val and mean test by 0.3% and 0.4% in mAP.

Multi-Classification Tokens Fusion.
The ablations on MCTF for TRANSM 3 E-SMALL are presented in Table 5, where our approach is compared with existing fusion approaches. The superiority of MCTF compared to other attention-based approaches, especially regarding generalizability, comes from using attention from more diverse perspectives, which benefits the integration of the different focus formats of the attention.

Multimodal fusion and Knowledge Distillation. Table 6 presents the comparison between TRANSM 3 E-SMALL and existing MMF and KD approaches, i.e., late fusion (LF) sum, LF multiplication, LF concatenation, patch embedding (PE) sum, PE multiplication, PE concatenation, cross attention, mixed attention, and the KD pipeline of DeiT (MViT-S backbone) [126]. The MMF approaches are mostly two-branch-based, which duplicates the backbone network. Our approach has diverse benefits: the KD-based fusion barely increases the model complexity during inference, which ensures fast inference, and the auxiliary modality is only used during training. The number of parameters (#PM) of LF Sum is 68.5M, while the #PM of TRANSM 3 E-SMALL is 44.4M. Adding complexity for MMF decreases the generalizability. TRANSM 3 E surpasses the best MMF (LF Sum) and DeiT [126], especially considering generalizability.

Analysis of qualitative results and use case

Qualitative results are shown in Figure 4; the labels and GradCam [111] visualizations of MViTv2-S and TRANSM 3 E-SMALL are given from left to right. True/missed/false predictions are marked by a green checkmark/purple crossmark/red crossmark. Overall, our approach has more accurate predictions and fewer false and missed predictions for all the samples, considering both new activities, i.e., 1, 2, and 3 in Figure 4, and known activities, e.g., 4 and 5, where 3, 4, and 5 are correctly predicted.
TRANSM 3 E-SMALL concentrates mostly on the correct body regions; e.g., in sample 5 TRANSM 3 E-SMALL focuses on the leg-related region, which is the dominant region, while MViTv2-S additionally focuses on the chest and feet regions, which results in more false predictions. In sample 4, the abdominal region of the human body is correctly focused by TRANSM 3 E-SMALL. A use case is shown in Figure 5, predicting abnormal activity. This sample shows that TRANSM 3 E-SMALL can detect abnormal exercise behavior and dangerous muscle usage, since the ankle and calf regions are the most likely to be hurt in this sample.

Conclusion

In this paper we open the vistas of video-based activated muscle group estimation. We establish the MuscleMap benchmark, with annotation sets for UCF101 and HMDB51 as well as the new MuscleMap136, to facilitate learning-based muscle group estimation. We take additional consideration regarding AMGE generalizability. We propose TRANSM 3 E with multi-classification token distillation and fusion in a cross-modality manner to enhance generalization to new activity types. TRANSM 3 E sets the state of the art on the proposed MuscleMap benchmark.

A. Discussion of Societal Impacts and Limitations

Societal Impacts. In our work, a new dataset targeting AMGE is collected from YouTube videos, termed MuscleMap136. We further extend the existing well-known activity recognition datasets UCF101 [120] and HMDB51 [69] with AMGE annotations, termed Muscle-UCF90 and Muscle-HMDB41. We build up the MuscleMap benchmark for AMGE by using statistic baselines and existing approaches, including both video-based and skeleton-based methods, while all three aforementioned datasets are considered. Through the experiments we find that the generalizability of existing activity recognition approaches to AMGE on new activities is not satisfactory.
In order to tackle this issue, we propose a new cross-modality knowledge distillation approach named TRANSM 3 E, using MViTv2-S [74] as its basic backbone. The proposed approach alleviates the generalization problem to a certain degree; however, there is still large space for further improvement and future research. The AMGE performance gap between the known and new activities illustrates that our model has the potential to give offensive predictions, misclassifications, and biased content, which may cause false predictions resulting in a negative social impact. The dataset and code will be released publicly.

Limitations. The annotations of MuscleMap136 are created for each video clip instead of for each frame, and the labels are binary without giving different levels of muscle activation. In addition, there is still a clear gap between the performance on known and new categories. While our method has enhanced the generalization capacity, there remains room for future improvement.

B. Baseline Introduction

Video-based approaches (i.e., I3D [19], SlowFast [44], and MViTv2 [74,41]), skeleton-based approaches (i.e., ST-GCN [149], CTR-GCN [21], and HD-GCN [71]), and statistic calculations (i.e., random guessing) are selected to serve as baselines to formulate our AMGE benchmark on the mentioned three datasets. Statistic baselines provide the borderline to verify whether the performance of a deep learning model is better than that of a random guess or a fixed prediction. Skeleton-based approaches are selected since they directly take the geometric relationships of the human body into consideration without disruption from the background. Considering video-based approaches, transformer-based models, i.e., MViTv2 [74] and VideoSwin [80], and convolutional neural network (CNN)-based models, i.e., C2D [45], I3D [19], Slow [44], and SlowFast [44], are leveraged due to their good performance on the activity recognition task.
Transformers are expected to outperform CNNs due to their excellent long-term reasoning ability [132], and they indeed show superior AMGE performance on the three datasets.

C. More Ablation Studies of TransM3E

In this section, we provide more ablation studies of our TransM3E approach. In Section C.1 we provide further experiments and analysis to show that the AMGE output space of our trained models is not limited to the known activity types. In Section C.2 we give more details and clarifications regarding the ablation study of the MCTKD component. In Section C.3 we conduct experiments to illustrate the superiority of our proposed MCTs compared with the MCT proposed in [147]. In Section C.4 we ablate the size of the MCTs. In Section C.5 we examine the different attention formats used to formulate the MCTF. In Section C.6 we ablate the number of heads used for the attention calculation when constructing the MCTF. In Section C.7 we compare different modality combinations for the TransM3E approach. In Section C.8 we ablate the percentage of KD for MCTKD.

C.1. Analysis regarding the output space of the AMGE models

To investigate whether the output space of our end-to-end AMGE models is limited to the known activity types, we compare end-to-end AMGE with a human activity recognition (HAR) baseline whose output space is limited to the known activity types by design, i.e., HAR + lookup table (HARL). HARL is trained with MViTv2-S (ImageNet1K pretrained) supervised by activity labels and uses a lookup table to obtain the corresponding AMGE. New activities are classified as one of the known activity classes, and the muscle activations of that known class are assigned to the new activity sample. As shown in Table 8, we find that AMGE cannot be well addressed by HARL.
The HARL approach achieves comparable performance on known activities; however, its performance on new activities is low, reaching only 38.8% and 38.0% mAP on new val and new test, since its output space is limited to the known activities. End-to-end AMGE approaches clearly surpass HARL on new activities, indicating that the output space of end-to-end AMGE approaches is not limited to known activities. Table 8 shows that AMGE models learn AMGE from generalizable body movements rather than merely memorizing a lookup relationship over known activities. TransM3E shows significantly better generalizability than HARL, which also indicates that the output space of AMGE models is not limited to the known activities. The improvement of our model over the baselines on new val and new test shows that AMGE generalizability can be improved, verifying that generalizable AMGE is achievable even with a limited set of training activities.

C.2. More details of the ablation of MCTKD

Since the ablation regarding MCTKD was already introduced in the main paper with experimental results, this section only adds details regarding the KD format and position. For clarity, we illustrate the KD/MCTKD positions in Figure 6. For the MCTKD-related approaches, we use the MCTKD as depicted in (d), where the KD is executed between the knowledge-receiver MCTs of the main modality and the sender MCTs of the auxiliary modality. For all the other basic KD-based approaches, we use the format depicted in (c), where the KD is executed between the MCTs of the main modality and the MCTs of the auxiliary modality, regarded as conventional KD. All the experiments are executed with MCTs but without MCTF aggregation; we simply average the MCTs in this ablation.
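As a toy illustration of why HARL's output space is restricted, the lookup mechanism described above can be sketched as follows; the activity names, region vectors, and the `classify_activity` stub are hypothetical placeholders, not the actual model or label set:

```python
# Sketch of the HAR + lookup table (HARL) baseline: a recognition model
# first predicts one of the known activity classes, then a fixed lookup
# table maps that class to its multi-hot muscle-activation vector.
# Activity names and activation vectors below are illustrative only.

MUSCLE_LOOKUP = {
    # activity -> multi-hot activation over 3 example muscle regions
    "push_up": [1, 1, 0],   # chest and triceps active
    "squat":   [0, 0, 1],   # quadriceps active
}

def harl_predict(video, classify_activity):
    """classify_activity is any HAR model restricted to known classes."""
    activity = classify_activity(video)   # e.g., an MViTv2-S classifier
    return MUSCLE_LOOKUP[activity]        # AMGE limited to known classes

# A new activity is forced onto a known class, so its AMGE is wrong
# whenever its true muscle activations differ from every known class.
pred = harl_predict("some_new_activity_clip", lambda v: "push_up")
```

In contrast, an end-to-end AMGE model predicts the activation vector directly, so it can output muscle combinations that never occur among the known classes.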
In the Sparse format depicted in (a), the knowledge of the auxiliary modality is transferred only after the size reduction of the pooling layers, denoted as Downsampling (DS) in Figure 6, and after the final layer. Only SparseMCTKD and DenseMCTKD are depicted, since SparseKD and DenseKD use the same position settings. SparseKD/MCTKD aims at reducing the KD/MCTKD computation by selecting the most important intermediate layers for knowledge transfer. After each pooling layer with size reduction, the informative cues are highlighted, which makes it necessary to integrate the corresponding changes of the tokens from the auxiliary modality through KD/MCTKD. We therefore choose the positions after the pooling layers with size reduction for KD/MCTKD on the intermediate layers. DenseKD/MCTKD is designed to transfer the knowledge directly after each transformer block, in order to leverage the knowledge from the other modality thoroughly. We compare both KD positions and select the most appropriate one to build the MCTKD in our final model.

C.3. More analysis between MCTs and MCT

Comparison experiments among MViTv2-S [74], MViTv2-S + MCT [147], and MViTv2-S + MCTs (our approach with only MCTs) on the MuscleMap136 dataset are given in Table 9, which is also reported in Tab. 3 of the main paper. The MCT from Xu et al. [147] has a clear class assignment at the final score prediction, achieved by averaging the N_C classification (cls) tokens along the channel dimension, resulting in a 1-channel output for each category. For our proposed MCTs, instead of using a clear class assignment of the cls tokens for supervision and prediction, we leverage an unclear class assignment: the N_C cls tokens are averaged along the token dimension, the averaged tokens are projected into N_C channels, and a Softmax layer then yields the final prediction.
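The two prediction heads can be contrasted in a small numpy sketch; the token shapes and the single linear projection are simplifying assumptions for illustration, not the exact TransM3E implementation:

```python
import numpy as np

def mct_head_clear(tokens):
    """MCT [147]-style clear assignment (sketch): each of the N_C cls
    tokens is averaged along the channel dimension, giving one scalar
    score per class."""
    return tokens.mean(axis=1)            # (N_C, D) -> (N_C,)

def mcts_head_unclear(tokens, proj):
    """MCTs-style unclear assignment (sketch): average the N_C cls
    tokens along the token dimension, project the pooled feature to
    N_C channels, then apply Softmax for the final prediction."""
    pooled = tokens.mean(axis=0)          # (N_C, D) -> (D,)
    logits = pooled @ proj                # (D,) @ (D, N_C) -> (N_C,)
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

n_classes, dim = 20, 96                   # assumed sizes for illustration
tokens = np.random.randn(n_classes, dim)
proj = np.random.randn(dim, n_classes)
scores = mcts_head_unclear(tokens, proj)  # shape (20,), sums to 1
```

The key design difference is that the unclear head lets every cls token contribute to every class score, so class-specific cues can mix at the feature level.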
The unclear class assignment allows a mixture of the underlying cues for AMGE at the intermediate feature level, which is verified to strongly improve generalizability, as shown in Table 9. The MCT proposed by Xu et al. [147] cannot address AMGE well, since it was originally designed for semantic segmentation, a task far removed from AMGE. Compared with the MViTv2-S baseline, MViTv2-S + MCT [147] shows a large performance decrease, while our proposed MCTs improve the performance on both known and new activities during test and evaluation, illustrating the superiority of the unclear class assignment mechanism in our MCTs designed specifically for the AMGE task.

C.4. Ablation of the token size of the MCTs

To find a proper size of the MCTs for the AMGE task, we conduct the corresponding ablation study in Table 10. During the ablation of the number of cls tokens, we find that TransM3E achieves the best performance, especially on new activities, when the token number of the MCTs matches the number of prediction classes. The cls token size in TransM3E is therefore set to 20, matching the class size of the AMGE task, and all the experiments in the main text use 20 as the MCTs size. The experiments are done for TransM3E without MCTF.

C.5. Ablation of the diverse attention components of MCTF

As introduced in the main text, we have three different attention types per modality to integrate the features from the KD-receiver MCTs into the original MCTs used for classification, for both the major modality and the auxiliary modality, i.e., A^m_mm, A^m_mr, A^m_rm, A^r_rr, A^r_mr, and A^r_rm.
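A simplified, single-head, single-modality numpy sketch of how such self- and cross-attention terms between the original MCTs and the receiver MCTs could be combined; the summation aggregation and the token sizes are assumptions for illustration, not the exact MCTF implementation:

```python
import numpy as np

def attention(q_tokens, kv_tokens):
    """Single-head scaled dot-product attention (sketch)."""
    d = q_tokens.shape[-1]
    scores = q_tokens @ kv_tokens.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)     # row-wise softmax
    return w @ kv_tokens

def mctf_all_attentions(m, r):
    """All-attention fusion (case 7, sketch): combine self-attention
    and both cross-attention directions between the original MCTs `m`
    and the KD-receiver MCTs `r`. Summation is an assumed aggregation."""
    a_mm = attention(m, m)   # self-attention on original MCTs
    a_rr = attention(r, r)   # self-attention on receiver MCTs
    a_mr = attention(m, r)   # original MCTs query receiver MCTs
    a_rm = attention(r, m)   # receiver MCTs query original MCTs
    return a_mm + a_rr + a_mr + a_rm

m = np.random.randn(20, 96)  # 20 cls tokens, 96-dim (assumed sizes)
r = np.random.randn(20, 96)
fused = mctf_all_attentions(m, r)          # shape (20, 96)
```

Dropping terms from the sum corresponds to the reduced cases (1-6) compared in the ablation below.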
We compare using only self-attention, i.e., only A^m_mm and A^r_rr (case 1); using only A^m_mr and A^r_mr, taking the major modality as Query (case 2); using only A^m_rm and A^r_rm, taking the auxiliary modality as Query (case 3); using A^m_mm, A^r_rr, A^m_mr, and A^r_mr (case 4); using A^m_mm, A^r_rr, A^m_rm, and A^r_rm (case 5); using A^m_mr, A^m_rm, A^r_mr, and A^r_rm (case 6); and using all attentions (case 7) in Table 12. The comparison shows that using all attentions (case 7) delivers the best performance, with 99.0% and 60.6% mAP on known val and new val, and 99.0% and 63.4% mAP on known test and new test. Case 7 is therefore used to construct our TransM3E approach. All the experiments use TransM3E-Small.

C.6. Ablation of the head number of MCTF

Since MCTF is a transformer-based fusion architecture between the receiver MCTs and the original MCTs of the main modality, the number of heads used for the attention computation can influence the AMGE performance of TransM3E. We therefore ablate the number of heads for the proposed MCTF in Table 11 to find the best setting. MCTF with head number 1 achieves the best performance among all settings, so we use head number 1 in the MCTF for the best TransM3E model. All the experiments use TransM3E-Small.

The major and auxiliary modalities are chosen as RGB and RGB Diff according to the per-modality performance reported in the main paper. However, it is still interesting to see whether TransM3E also helps when other modalities are leveraged. To investigate this, we conduct ablation experiments in Table 13 on RGB + RGB Diff, RGB + OPT (optical flow), and RGB Diff + OPT, respectively.
TransM3E using RGB + OPT also shows competitive results, achieving the second-best performance on MuscleMap136 and illustrating that TransM3E generalizes to other modality settings while delivering competitive results. The performance of RGB Diff + OPT is unsatisfactory compared with the other two settings, which shows that the proposed approach needs an informative modality as the major modality.

C.8. Ablation of the percentage of the auxiliary modality for MCTKD

We build our architecture based on [44, 74]. During the data augmentation procedure, the input video is sampled into two clips using a random start index with a fixed stride. Let v_1,m and v_2,m denote the two clips of the major modality; the corresponding RGB Diff clips, computed with a fixed temporal stride for each frame, are denoted v_1,r and v_2,r. In our experiments we find that executing MCTKD between v_1,m and v_2,r while setting v_1,r to a zero tensor and applying MCTKD between v_2,m and the zeroed v_1,r (MCTKD percentage: 50%, case 3) achieves the best performance, as shown in Table 14. This setting is therefore used to formulate our best model. We also conduct ablation experiments comparing case 1 (MCTKD between v_1,m and v_1,r and between v_2,m and v_2,r), case 2 (MCTKD between v_1,r and v_2,m and between v_1,m and v_2,r), case 3 as described above, and case 4 (MCTKD between v_1,m and a zero tensor and between v_2,m and a zero tensor).

C.9. Analysis regarding the performance of GCN approaches

We observe that CTR-GCN does not show better performance than ST-GCN; the corresponding analysis is given in this section. As AMGE is a different task from HAR, aiming at a better understanding of muscle contributions, the performance ranking on HAR is not expected to carry over.
The experiments show that the cross-task generalization ability of CTR-GCN is insufficient. Its channel-specific correlation separates joints into groups, while AMGE requires knowledge of the muscle contributions of the whole body. The joint-clustering approach of CTR-GCN narrows the analysis scope to a local view, while AMGE requires a global analysis. Since AMGE is different from and more fine-grained than HAR, the performance ranking on HAR is not expected to hold for AMGE.

D. Details of the Benchmark and Implementation

In this section we introduce more details regarding the proposed MuscleMap136 dataset and the other two annotation sets, together with more implementation details. We first describe the AMGE annotation procedure in Section D.1. The evaluation protocol and the metric are introduced in Section D.2. More implementation details are given in Section D.3.

D.1. Details of the AMGE annotation

The AMGE annotation for each activity type is obtained through the agreement of diverse available sources, including fitness reports written by well-established coaches and experts as well as scientific research. Assume that for activity X, N_X annotation sources are available, describing the major activation regions of the human muscles. Considering the agreement of the annotation among different sources, we set the acceptance threshold for each occurring primary muscle region to N_T = int(N_X / 2). If a certain muscle region occurs in at least N_T sources, this region is annotated as True in our AMGE annotation. In this manner, we annotate each activity-specific video clip in a multi-label manner to formulate our MuscleMap benchmark. An example of the acquisition procedure of the AMGE annotation for the PushUps activity is shown in Figure 8, where 8 reports from fitness and sport experts and 2 scientific studies [6, 130] are considered.
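The annotation rule above can be sketched in a few lines of Python; the region names and source counts are illustrative placeholders, not the real annotation data:

```python
# Sketch of the AMGE annotation rule: a muscle region is labeled True
# for an activity when at least N_T = int(N_X / 2) of the N_X available
# annotation sources list it as a primary activation region.

def amge_annotation(region_counts, n_sources):
    """region_counts: how many sources mention each region for activity X."""
    n_t = int(n_sources / 2)                     # acceptance threshold N_T
    return {region: count >= n_t
            for region, count in region_counts.items()}

# e.g., a PushUps-like activity with 10 sources
# (8 expert reports + 2 scientific studies); counts are made up here:
counts = {"chest": 9, "triceps": 8, "calves": 2}
labels = amge_annotation(counts, n_sources=10)   # chest/triceps -> True
```

The resulting dictionary of booleans is exactly the multi-hot, multi-label annotation used per video clip.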
The red horizontal line denotes the threshold.

Figure 2: An overview of the number of activities and the number of samples per muscle region (@R), depicted at the top. At the bottom, some activity-specific samples from MuscleMap136 are shown according to the corresponding muscle group.

Figure 4: Qualitative results for MViTv2-S [74] and TransM3E-Small. GradCam [111] visualizations are given. The label is on the left; the predictions and gradients of MViTv2 and of our approach are in the middle and on the right.

Figure 6: An overview of the ablation study for the MCTKD position and format: (a) MCTKD is executed after the downsampling of the pooling layers and after the final transformer block, forming SparseMCTKD; (b) MCTKD is applied after each transformer block (TR Block), forming DenseMCTKD; (c) conventional KD, applied directly on the MCTs between the main and auxiliary modalities; (d) the MCTKD we leverage.

Figure 7: An overview of the MuscleMap136 dataset. In the center, samples from different activities are shown. The legends denote the 136 activities included in MuscleMap136. The bottom-right corner illustrates, for each color, how many muscles are activated per activity.

Figure 8: An overview of the label denoising.

The MuscleMap benchmark consists of three different datasets, each highlighting different strengths of an AMGE approach. While MuscleMap136 provides a high-quality dataset with well-defined muscle activations, Muscle-HMDB41 and Muscle-UCF90 present a much harder setting with activities that are more common in everyday living situations. The activities in the MuscleMap benchmark are specifically chosen based on fitness- and health-care-related resources, which are leveraged to derive their annotations. For each dataset, we provide an evaluation protocol which differentiates between known and new activities.
MuscleMap136. With the new AMGE task in mind, we collect MuscleMap136 by querying YouTube for physical exercise video series. The collected dataset contains 136 activity types as well as 15,007 video clips and is competitive with other video-based datasets targeting fine-grained tasks, as shown in Table 1. Twenty activities are reserved for the validation and test splits of new activities. MuscleMap136 targets physical exercise videos from fitness enthusiasts. Exercises are well suited for AMGE, since they display a large range of motions designed to activate specific muscle groups, and instructional videos provide high-quality examples of the displayed motion. These videos are mostly near-person, which results in a better AMGE understanding. In contrast, Muscle-HMDB41 and Muscle-UCF90 provide a much harder but more diverse setting with many activities related to everyday living tasks. Compared with Muscle-UCF90 and Muscle-HMDB41, the MuscleMap136 dataset has more samples and activities per muscle region, especially for the head and neck region. Statistics on the number of samples and the number of activities per muscle region are shown in Figure 2. Our dataset also contains samples with rotated/unrotated camera, varying background, varying gender, body occlusion, and varying age, to introduce various domain shifts. While RGB video can be retrieved either by downloading the respective YouTube videos for MuscleMap136 or the original HMDB51 and UCF101 datasets, we also provide optical flow, computed using the work of [138], as well as 2D skeleton data extracted based on the work of [42]; RGB difference can be efficiently calculated during training. A small set of activities from MuscleMap136 is shown in the bottom part of Figure 2. In Table 1, MuscleMap136 is compared with existing HAR, action quality assessment, and calorie consumption datasets. Further details as well as the definition of the known and new activity splits are listed in the supplementary.

Muscle-UCF90 is created by providing annotations for an eligible subset of activity classes of UCF101 [120]. UCF101 consists of 13,320 video clips originally collected from YouTube with large diversity in object appearances, background, view points, and participants. Muscle-UCF90 provides annotations for 10,553 video clips of 90 activities. Nine of these activities are reserved for the generalization test splits. Unlike MuscleMap136, which is designed for AMGE, some actions of UCF101 and HMDB51 do not have well-documented muscle activations, e.g., ApplyLipstick.

Muscle-HMDB41 contains 41 annotated activities with 5,259 samples. Six of these 41 activities are reserved for the splits of new activities. Muscle-HMDB41 is a subset of HMDB51.

Table 2: Experimental results on MuscleMap136, Muscle-HMDB41, and Muscle-UCF90. known val, new val, known test, and new test denote evaluation and test sets for normal and generalizable validation and test, respectively.
mean val and mean test denote the averaged mean average precision (mAP) of the normal and generalizable settings. Per dataset, the columns are known val / new val / mean val / known test / new test / mean test.

Model (#PM) | MuscleMap136 | Muscle-UCF90 | Muscle-HMDB41
Random (0.0M) | 29.6 28.9 29.3 29.8 28.3 29.1 | 26.7 22.9 24.8 27.3 22.3 24.8 | 28.8 16.4 22.6 28.2 17.7 22.9
All Ones/Ones (0.0M) | 29.7 28.3 29.0 29.6 28.0 28.8 | 26.6 22.2 24.4 26.6 22.0 24.3 | 27.9 16.1 22.0 27.3 17.1 22.2
ST-GCN [149] (2.6M) | 75.3 56.1 65.7 76.2 53.5 64.9 | 40.6 52.5 46.6 42.0 51.0 46.5 | 41.6 34.0 37.8 41.2 37.0 39.1
CTR-GCN [21] (1.4M) | 60.3 47.3 53.8 57.7 47.3 52.5 | 40.8 48.9 44.9 41.8 47.7 44.8 | 30.3 32.9 31.6 29.6 34.1 31.9
HD-GCN [71] (0.8M) | 71.1 54.3 62.7 70.7 54.2 62.5 | 35.7 46.3 41.0 35.6 45.2 40.4 | 40.1 34.0 37.1 40.5 35.7 38.1
C2D (R50) [45] (23.5M) | 85.5 45.1 65.3 86.2 43.8 65.0 | 95.8 52.5 74.2 97.5 53.1 75.3 | 84.8 38.4 61.6 88.8 38.5 63.7
I3D (R50) [19] (20.4M) | 88.0 47.0 67.5 87.4 46.2 66.8 | 98.3 51.7 75.0 98.8 53.2 76.0 | 79.9 37.3 58.6 82.3 37.6 59.9
Slow (R50) [44] (24.3M) | 93.4 43.6 68.5 94.3 42.4 68.4 | 96.7 53.7 75.2 96.9 51.2 74.1 | 82.7 34.7 58.7 85.4 35.2 60.3
SlowFast (R50) [44] (25.3M) | 94.8 48.1 71.5 96.7 46.9 71.8 | 91.1 49.8 70.5 92.7 51.6 72.2 | 76.9 33.3 55.1 78.0 35.0 56.5
MViTv2-S [74] (34.2M) | 98.5 55.0 77.0 98.8 55.5 77.2 | 98.6 52.4 75.5 98.5 52.9 75.7 | 84.9 39.2 62.1 87.7 38.6 63.2
MViTv2-B [74] (51.2M) | 98.2 57.3 77.8 98.7 56.3 77.5 | 98.4 52.3 75.4 99.1 50.7 74.9 | 86.6 38.8 62.7 88.8 41.4 65.1
VideoSwin-S [80] (50.0M) | 88.0 48.9 68.5 89.1 47.9 68.5 | 97.9 54.1 76.0 97.9 52.7 75.3 | 59.9 39.1 49.5 61.4 37.4 49.4
VideoSwin-B [80] (88.0M) | 90.2 50.3 70.3 90.8 49.7 70.2 | 98.0 52.8 75.4 98.1 51.2 74.7 | 65.0 37.1 51.1 67.1 38.4 52.8
TransM3E-Small (Ours) (44.4M) | 99.0 60.6 79.8 99.0 63.4 81.2 | 98.6 54.3 76.5 99.2 54.6 76.9 | 88.7 42.0 65.4 89.5 41.4 65.5
TransM3E-Base (Ours) (60.7M) | 99.3 58.4 78.9 99.3 59.7 79.5 | 98.1 55.0 76.6 98.4 55.1 76.8 | 88.1 40.7 64.4 90.5 43.5 67.0

Table 3: Ablation for TransM3E on MuscleMap136, adding the components MCT [147], MCTs, MCTKD, and MCTF. Columns: known val, new val, mean val, known test, new test, mean test.
98.5 55.0 77.0 98.8 55.5 77.2
93.8 45.7 69.8 95.5 44.4 70.0
98.7 58.3 78.5 98.8 61.3 80.1
98.7 60.5 79.6 99.0 62.6 80.8
99.0 60.6 79.8 99.0 63.4 81.2

Table 4: Ablation for KDs for MCTKD on MuscleMap136, with three settings (CW, TW, FP) per method. Columns: known val, new val, mean val, known test, new test, mean test.
PatchEmbKD: 98.5 59.7 79.1 98.9 61.3 80.1 / 98.5 59.5 79.0 98.7 61.4 80.1 / 98.7 58.6 78.7 98.9 61.1 80.0
SparseKD: 98.3 60.1 79.2 98.5 61.8 80.2 / 97.9 59.7 78.8 98.3 62.0 80.2 / 97.9 58.9 78.4 98.1 61.5 79.8
DenseKD: 98.9 59.2 79.1 98.9 61.1 80.0 / 98.8 59.7 79.3 98.9 61.2 80.1 / 98.5 58.3 78.4 98.7 60.3 79.5
FinalLayerKD: 98.5 59.7 79.1 98.9 61.3 80.1 / 98.7 59.3 79.0 98.4 61.2 79.8 / 98.7 59.0 78.9 98.9 61.0 80.0
PatchEmbMCTKD (Ours): 98.7 59.5 79.2 98.9 61.4 80.2 / 98.8 59.1 79.0 98.9 61.5 80.2 / 98.6 58.8 78.7 98.9 61.3 80.1
SparseMCTKD (Ours): 98.7 59.4 79.1 98.9 61.2 80.1 / 98.8 58.9 78.9 99.0 61.7 80.4 / 98.6 58.4 78.5 98.9 61.0 80.0
DenseMCTKD (Ours): 98.6 58.3 78.5 98.9 61.1 80.0 / 98.7 60.5 79.6 99.0 62.6 80.8 / 98.4 59.0 78.7 98.7 60.6 79.7
FinalLayerMCTKD (Ours): 98.8 59.0 78.9 98.9 61.2 80.1 / 98.7 58.6 78.7 98.8 61.0 79.9 / 98.9 59.1 79.0 99.0 61.4 80.2

Table 5: Ablation for the MCTF on MuscleMap136. Columns: known val, new val, mean val, known test, new test, mean test.
Sum [108]: 98.8 60.4 79.6 99.1 61.9 80.5
Concatenation [108]: 98.7 60.5 79.6 99.0 62.6 80.8
Multiplication [108]: 98.4 60.1 79.3 98.5 62.3 80.4
SelfAttention [87]: 98.5 60.5 79.5 98.9 62.7 80.8
CrossAttention [87]: 98.8 59.6 79.2 98.8 62.4 80.6
AttentionBottleNeck [87]: 98.7 60.4 79.6 99.0 62.2 80.6
TransM3E-Small: 99.0 60.6 79.8 99.0 63.4 81.2

We train TransM3E with only
MCTs for both modalities for 40 epochs, and then train TransM3E with all components for 10 epochs. We use AdamW [82] with a learning rate of 1e-4.

Evaluation on known activity types (known). Our first evaluation setting validates how well AMGE works for known activities which were also present in the training set.

Generalization to new activity types (new). A key AMGE challenge is to develop models which generalize well towards new activity types at test time. In this setting, the evaluation consists of new activities containing new activated-muscle combinations which are not known during training. The model therefore needs to generalize well to muscle combinations not present during training.

Final metric (mean). Our final metric is the average of the mean average precisions for identifying muscle activations for known activities and in the generalization setting. A detailed evaluation protocol is provided in the supplementary.

Table 6: Comparison for MMF/KD on MuscleMap136. Columns: known val, new val, mean val, known test, new test, mean test.
LateFusionSum [108]: 98.6 54.2 76.4 98.9 54.8 76.9
LateFusionConcat [140]: 70.1 52.0 61.1 71.3 52.1 61.7
LateFusionMul [108]: 53.8 35.8 44.8 54.0 35.9 45.0
CrossAttention [87]: 83.6 40.7 62.2 87.7 40.6 64.2
PEFusionSum [98]: 97.7 56.3 77.0 98.6 56.7 77.7
PEFusionConcat [98]: 97.1 51.9 74.5 97.9 49.2 73.6
PEFusionMul [98]: 97.7 51.4 74.6 98.1 51.1 74.6
MixedFusion [98]: 98.8 52.8 75.8 98.8 52.9 75.9
DeiT [126]: 98.2 51.9 75.1 98.8 53.4 76.1
ConventionalKD [56]: 68.8 49.7 59.3 67.0 48.2 57.6
Ours: 99.0 60.6 79.8 99.0 63.4 81.2

Table 7: Results for different modalities on MuscleMap136. Columns: known val, new val, mean val, known test, new test, mean test.
Optical Flow: 69.8 48.8 59.3 63.5 44.0 53.8
RGB Difference: 96.1 58.0 77.1 96.5 56.5 76.5
RGB: 98.7 60.5 79.6 98.8 61.3 80.1

The 20 annotated muscle regions are: neck and head, chest, shoulder, biceps, triceps, forearms, upper back, latissimus, obliques, upper abdominis, lower abdominis, lower back, hamstring, quadriceps, calves, inner thigh, outer thigh, gluteus, feet ankles, and wrists.

References

[1] Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Chenlei Guo. Knowledge distillation from internal representations. In AAAI, 2020.
[2] Zeeshan Ahmad and Naimul Khan. Human action recognition using deep multilevel multimodal (M2) fusion of depth and inertial sensors. IEEE Sensors Journal, 2019.
[3] Zeeshan Ahmad and Naimul Khan. CNN-based multistage gated average fusion (MGAF) for human action recognition using depth and inertial sensors. IEEE Sensors Journal, 2020.
[4] Neelam Ahuja, Kamal Awad, Sara Peper, Marco Brotto, and Venu Varanasi. Mini review: Biomaterials in repair and regeneration of nerve in a volumetric muscle loss. Neuroscience Letters, 2021.
[5] Jean-Baptiste Alayrac, Adria Recasens, Rosalia Schneider, Relja Arandjelović, Jason Ramapuram, Jeffrey De Fauw, Lucas Smaira, Sander Dieleman, and Andrew Zisserman. Self-supervised multimodal versatile networks. In NeurIPS, 2020.
[6] Shahab Alizadeh, Machel Rayner, M. Mamdouh Ibrahim Mahmoud, and David G. Behm. Push-ups vs. bench press differences in repetitions and muscle activation between sexes. Journal of Sports Science & Medicine, 2020.
[7] Humam Alwassel, Dhruv Mahajan, Bruno Korbar, Lorenzo Torresani, Bernard Ghanem, and Du Tran. Self-supervised learning by cross-modal audio-video clustering. In NeurIPS, 2020.
[8] Shumin An, Qingmin Liao, Zongqing Lu, and Jing-Hao Xue. Efficient semantic segmentation via self-attention and self-distillation. T-ITS, 2022.
[9] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, and Cordelia Schmid. ViViT: A video vision transformer. In ICCV, 2021.
[10] Haoli Bai, Hongda Mao, and Dinesh Nair. Dynamically pruning segformer for efficient semantic segmentation. In ICASSP, 2022.
[11] Emanuel Ben-Baruch, Tal Ridnik, Nadav Zamir, Asaf Noy, Itamar Friedman, Matan Protter, and Lihi Zelnik-Manor. Asymmetric loss for multi-label classification. arXiv preprint arXiv:2009.14119, 2020.
[12] Kelly R. Berckmans, Birgit Castelein, Dorien Borms, Thierry Parlevliet, and Ann Cools. Rehabilitation exercises for dysfunction of the scapula: Exploration of muscle activity using fine-wire EMG. The American Journal of Sports Medicine, 2021.
[13] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In ICML, 2021.
[14] Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake VanderPlas, Arnaud Joly, Brian Holt, and Gaël Varoquaux. API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, 2013.
[15] Louise C. Burgess, Lalitha Venugopalan, James Badger, Tamsyn Street, Alon Gad, Jonathan C. Jarvis, Thomas W. Wainwright, Tamara Everington, Paul Taylor, and Ian D. Swain. Effect of neuromuscular electrical stimulation on the recovery of people with COVID-19 admitted to the intensive care unit: A narrative review. Journal of Rehabilitation Medicine, 2021.
[16] Jeannette M. Byrne, Nicole S. Bishop, Andrew M. Caines, Kalynn A. Crane, Ashley M. Feaver, and Gregory E. P. Pearcey. Effect of using a suspension training system on muscle activation during the performance of a front plank exercise. The Journal of Strength & Conditioning Research, 2014.
[17] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. ActivityNet: A large-scale video benchmark for human activity understanding. In CVPR, 2015.
[18] Felipe P. Carpes, Fernando Diefenthaeler, Rodrigo R. Bini, Darren J. Stefanyshyn, Irvin E. Faria, and Carlos B. Mota. Influence of leg preference on bilateral muscle activation during cycling. Journal of Sports Sciences, 2011.
[19] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? A new model and the kinetics dataset. In CVPR, 2017.
[20] Pengguang Chen, Shu Liu, Hengshuang Zhao, and Jiaya Jia. Distilling knowledge via knowledge review. In CVPR, 2021.
[21] Yuxin Chen, Ziqi Zhang, Chunfeng Yuan, Bing Li, Ying Deng, and Weiming Hu. Channel-wise topology refinement graph convolution for skeleton-based action recognition. In ICCV, 2021.
[22] Yu-Pin Chen, Yi-Jie Kuo, Shen-Wu Hung, Tsai-wei Wen, Pei-Chun Chien, Ming-Hsiu Chiang, Nicola Maffulli, and Chung-Ying Lin. Loss of skeletal muscle mass can be predicted by sarcopenia and reflects poor functional recovery at one year after surgery for geriatric hip fractures. Injury, 2021.
[23] Zhao-Min Chen, Xiu-Shen Wei, Peng Wang, and Yanwen Guo. Multi-label image recognition with graph convolutional networks. In CVPR, 2019.
[24] Xing Cheng, Hezheng Lin, Xiangyu Wu, Dong Shen, Fan Yang, Honglin Liu, and Nian Shi. MlTr: Multi-label classification with transformer. In ICME, 2022.
[25] Avishek Choudhury and Onur Asan. Impact of using wearable devices on psychological distress: Analysis of the health information national trends survey. International Journal of Medical Informatics, 2021.
[26] Vasileios Choutas, Philippe Weinzaepfel, Jérôme Revaud, and Cordelia Schmid. PoTion: Pose motion representation for action recognition. In CVPR, 2018.
[27] Michael C. Cox, Matthew Booth, Gabriela Ghita, Zhongkai Wang, Anna Gardner, Russell B. Hawkins, Dijoia B. Darden, Christiaan Leeuwenburgh, Lyle L. Moldawer, Frederick A. Moore, Philip A. Efron, Steven Anton, and Scott C. Brakenridge. The impact of sarcopenia and acute muscle mass loss on long-term outcomes in critically ill patients with intra-abdominal sepsis. Journal of Cachexia, Sarcopenia and Muscle, 2021.

Table 8: Comparison between end-to-end AMGE approaches and the lookup table with HAR. Columns: known val, new val, mean val, known test, new test, mean test.
HAR Lookup: 95.4 38.8 67.1 95.7 38.0 66.9
MViTv2: 98.5 55.0 77.0 98.8 55.5 77.2
Ours: 99.0 60.6 79.8 99.0 63.4 81.2

Table 9: Comparison between the MCT from [147] and our MCTs. Columns: known val, new val, mean val, known test, new test, mean test.
MViTv2-S: 98.5 55.0 77.0 98.5 55.5 77.2
MViTv2-S + MCT [147]: 93.8 45.7 69.8 95.5 44.4 70.0
Ours (with only MCTs): 98.7 58.3 78.5 98.8 61.3 80.1

Table 10: Ablation for the token size of MCTs on the MuscleMap136 dataset. Columns: known val, new val, mean val, known test, new test, mean test.
MCT size 1: 98.5 55.0 77.0 98.8 55.5 77.2
MCT size 5: 98.7 54.0 76.4 98.9 53.6 76.3
MCT size 10: 98.4 54.4 76.4 98.9 54.4 76.7
MCT size 15: 98.6 53.4 76.0 99.0 53.5 76.3
MCT size 20: 98.7 60.5 79.6 98.8 61.3 80.1
MCT size 25: 98.5 55.3 76.9 99.3 55.5 77.4
MCT size 30: 98.4 54.2 76.3 98.8 55.1 77.0

Table 11: Ablation regarding the number of heads for TransM3E on the MuscleMap136 dataset. Columns: known val, new val, mean val, known test, new test, mean test.
1 head: 99.0 60.6 79.8 99.0 63.4 81.2
2 heads: 98.8 60.2 79.5 98.9 62.9 80.9
3 heads: 99.0 60.1 79.6 98.9 63.2 81.1
4 heads: 98.8 60.3 79.6 99.0 62.8 80.9
5 heads: 99.0 60.0 79.5 99.1 62.9 81.0

Table 12: Ablation regarding the attention combination for TransM3E on the MuscleMap136 dataset. Columns: known val, new val, mean val, known test, new test, mean test.
Case 1: 98.9 59.9 79.4 98.9 62.2 80.6
Case 2: 99.0 59.4 79.2 98.9 62.2 80.6
Case 3: 98.7 59.8 79.3 99.0 62.9 81.0
Case 4: 98.8 59.6 79.2 99.0 62.5 80.8
Case 5: 98.8 60.2 79.5 98.9 62.4 80.7
Case 6: 98.9 60.2 79.6 99.0 63.2 81.1
Case 7: 99.0 60.6 79.8 99.0 63.4 81.2

C.7.
Ablation about using other modalities in TRANSM3E

Table 13: Ablation regarding the fusion of different modalities.

Modalities    known val  new val  mean val  known test  new test  mean test
RGB+RGB DIFF  99.0       60.6     79.8      99.0        63.4      81.2
RGB+OPT       98.6       60.4     62.4      98.7        63.2      81.0
RGB DIFF+OPT  96.2       58.7     60.7      96.9        56.6      76.8

Table 14: Ablation regarding the percentage of distilled knowledge for the MCTKD approach.

Method  known val  new val  mean val  known test  new test  mean test
case 1  99.0       60.5     79.8      99.0        62.8      80.9
case 2  99.0       60.4     79.7      99.0        63.0      81.0
case 3  99.0       60.6     79.8      99.0        63.4      81.2
case 4  99.0       60.1     79.6      99.0        62.6      80.8

(Figure: qualitative comparison of ground truth, predictions of MViTv2, and predictions of our approach.)

The dataset, protocols, and evaluation instructions will be publicly released.

Evaluation metric. Mean averaged precision (mAP) is used as the evaluation metric, as mentioned in our main paper. Let l_i, i ∈ {1, ..., N_newtest}, denote the multi-hot annotation for sample i in the new test set and Preds_i the prediction of the model for sample i. We first concatenate the labels to obtain the label vector L. We then select the subset of the concatenated Preds and L by computing the mask m = (L == 1); the corresponding subsets are denoted Preds[m] and L[m]. The mean averaged precision score is then calculated using the existing function from sklearn [14].

We further give more qualitative experimental results in Figure 9 for the Grad-CAM visualization of the gradients of the first normalization layer of the last transformer block. Through the comparison with the MViT-S baseline, we find that our approach focuses more precisely on the correct activated regions, demonstrating the benefit of the proposed mechanisms for video-based AMGE.
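The multi-label mAP computation can be sketched in plain Python as follows. This is an illustrative sketch only: the helper names `average_precision` and `multilabel_map` are ours, not from the released evaluation code, which relies on the corresponding sklearn function instead.

```python
def average_precision(y_true, y_score):
    """AP for one class: mean precision at the rank of each positive sample."""
    # Sort sample indices by descending prediction score.
    order = sorted(range(len(y_score)), key=lambda i: -y_score[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            hits += 1
            precisions.append(hits / rank)  # precision@rank at each hit
    return sum(precisions) / max(hits, 1)

def multilabel_map(labels, scores):
    """Mean AP over classes; labels are multi-hot rows, scores are per-class predictions."""
    num_classes = len(labels[0])
    aps = [
        average_precision([row[c] for row in labels], [row[c] for row in scores])
        for c in range(num_classes)
        if any(row[c] == 1 for row in labels)  # skip classes with no positives
    ]
    return sum(aps) / len(aps)

# Perfect ranking on a toy multi-hot example gives mAP = 1.0
print(multilabel_map([[1, 0], [0, 1]], [[0.9, 0.1], [0.2, 0.8]]))  # → 1.0
```

For a single class this matches the step-wise average precision that sklearn's `average_precision_score` computes (up to tie handling), so the sketch and the sklearn-based evaluation agree on untied scores.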
This evaluation protocol follows the multi-label classification protocol according to the repository of PySlowFast [44].

D.3. Further implementation details

We provide further implementation details beyond those mentioned in our main paper.

Martin Eriksson Crommert, Anna Bjerkefors, Olga Tarassova, and Maria M. Ekblom. Abdominal muscle activation during common modifications of the trunk curl-up exercise. The Journal of Strength & Conditioning Research, 2021. 1, 2
Srijan Das, Rui Dai, Di Yang, and Francois Bremond. VPN++: Rethinking video-pose embeddings for understanding activities of daily living. arXiv preprint arXiv:2105.08141, 2021. 3
Srijan Das, Saurav Sharma, Rui Dai, Francois Bremond, and Monique Thonnat. VPN: Learning video-pose embedding for activities of daily living. In ECCV, 2020. 3
Roberto M. de Freitas, Atsushi Sasaki, Dimitry G. Sayenko, Yohei Masugi, Taishin Nomura, Kimitaka Nakazawa, and Matija Milosevic. Selectivity and excitability of upper-limb muscle activation during cervical transcutaneous spinal cord stimulation in humans. Journal of Applied Physiology, 2021.
1, 2
Aijse Willem de Vries, Frank Krause, and Michiel Pieter de Looze. The effectivity of a passive arm support exoskeleton in reducing muscle activation and perceived exertion during plastering activities. Ergonomics, 2021. 1, 2
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009. 4, 6
Lindsay J. Distefano, J. Troy Blackburn, Stephen W. Marshall, and Darin A. Padua. Gluteal muscle activation during common therapeutic exercises. Journal of Orthopaedic & Sports Physical Therapy, 2009. 4
Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015. 2
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. 4
Haodong Duan, Yue Zhao, Kai Chen, Dahua Lin, and Bo Dai. Revisiting skeleton-based action recognition. In CVPR, 2022. 3
Haodong Duan, Yue Zhao, Kai Chen, Dian Shao, Dahua Lin, and Bo Dai. Revisiting skeleton-based action recognition. arXiv preprint arXiv:2104.13586, 2021. 2
Thibaut Durand, Nazanin Mehrasa, and Greg Mori. Learning a deep ConvNet for multi-label classification with partial labels. In CVPR, 2019. 3
Rafael F. Escamilla, Clare Lewis, Duncan Bell, Gwen Bramblet, Jason Daffron, Steve Lambert, Amanda Pecson, Rodney Imamura, Lonnie Paulos, and James R. Andrews. Core muscle activation during Swiss ball and traditional abdominal exercises. Journal of Orthopaedic & Sports Physical Therapy, 2010. 4
Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers.
In ICCV, 2021. 2, 3, 4, 14
Hao-Shu Fang, Jiefeng Li, Hongyang Tang, Chao Xu, Haoyi Zhu, Yuliang Xiu, Yong-Lu Li, and Cewu Lu. AlphaPose: Whole-body regional multi-person pose estimation and tracking in real-time. TPAMI, 2022. 4
Zhiyuan Fang, Jianfeng Wang, Xiaowei Hu, Lijuan Wang, Yezhou Yang, and Zicheng Liu. Compressing visual-linguistic model via knowledge distillation. In ICCV, 2021. 2, 5
Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. SlowFast networks for video recognition. In ICCV, 2019. 2, 6, 7, 14, 18, 20
Christoph Feichtenhofer, Axel Pinz, and Richard P. Wildes. Spatiotemporal multiplier networks for video action recognition. In CVPR, 2017. 6, 7, 14
Christoph Feichtenhofer, Axel Pinz, and Andrew Zisserman. Convolutional two-stream network fusion for video action recognition. In CVPR, 2016. 2
Konrad Gadzicki, Razieh Khamsehashari, and Christoph Zetzsche. Early vs late fusion in multimodal convolutional neural networks. In FUSION, 2020. 2, 5
Bin-Bin Gao and Hong-Yu Zhou. Learning to discover multi-class attentional regions for multi-label image recognition. TIP, 2021. 3
Qing Gao, Jinguo Liu, and Zhaojie Ju. Hand gesture recognition using multimodal data fusion and multiscale parallel convolutional neural network for human-robot interaction. Expert Systems, 2021. 3
Ruohan Gao, Tae-Hyun Oh, Kristen Grauman, and Lorenzo Torresani. Listen to look: Action recognition by previewing audio. In CVPR, 2020. 3
Rohit Girdhar, Deva Ramanan, Abhinav Gupta, Josef Sivic, and Bryan Russell. ActionVLAD: Learning spatio-temporal aggregation for action classification. In CVPR, 2017. 2
Jianping Gou, Baosheng Yu, Stephen J. Maybank, and Dacheng Tao. Knowledge distillation: A survey. IJCV, 2021. 3
Qiushan Guo, Xinjiang Wang, Yichao Wu, Zhipeng Yu, Ding Liang, Xiaolin Hu, and Ping Luo. Online knowledge distillation via collaborative learning. In CVPR, 2020. 3
Tengda Han, Weidi Xie, and Andrew Zisserman. Self-supervised co-training for video representation learning. In NeurIPS, 2020. 3
Dustin H. Hardwick, Justin A. Beebe, Mary Kate McDonnell, and Catherine E. Lang. A comparison of serratus anterior muscle activation during a wall slide exercise and other traditional exercises. Journal of Orthopaedic & Sports Physical Therapy, 2006. 4
Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 2, 3, 7
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. 2
Javed Imran and Balasubramanian Raman. Evaluating fusion of RGB-D and inertial sensors for multimodal human action recognition. JAIHC, 2020. 3
Woohyoung Jeon, Jill Whitall, Lisa Griffin, and Kelly P. Westlake. Trunk kinematics and muscle activation patterns during stand-to-sit movement and the relationship with postural stability in aging. Gait & Posture, 2021. 1, 2
Deyi Ji, Haoran Wang, Mingyuan Tao, Jianqiang Huang, Xian-Sheng Hua, and Hongtao Lu. Structural and statistical texture knowledge distillation for semantic segmentation. In CVPR, 2022. 3
Xiao Jin, Baoyun Peng, Yichao Wu, Yu Liu, Jiaheng Liu, Ding Liang, Junjie Yan, and Xiaolin Hu. Knowledge distillation via route constrained optimization. In ICCV, 2019. 3
Hamid Reza Vaezi Joze, Amirreza Shaban, Michael L. Iuzzolino, and Kazuhito Koishida. MMTM: Multimodal transfer module for CNN fusion. In CVPR, 2020. 2, 5
Soo Min Kang and Richard P. Wildes. Review of action recognition and detection methods. arXiv preprint arXiv:1610.06906, 2016. 2
Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014. 2
Will Kay, João Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. The kinetics human action video dataset.
arXiv preprint arXiv:1705.06950, 2017. 3
Evangelos Kazakos, Arsha Nagrani, Andrew Zisserman, and Dima Damen. EPIC-fusion: Audio-visual temporal binding for egocentric action recognition. In ICCV, 2019. 3
Bruno Korbar, Du Tran, and Lorenzo Torresani. Cooperative learning of audio and video models from self-supervised synchronization. In NeurIPS, 2018. 3
Hilde Kuehne, Ali Bilgin Arslan, and Thomas Serre. The language of actions: Recovering the syntax and semantics of goal-directed human activities. In CVPR, 2014. 3
Hildegard Kuehne, Hueihan Jhuang, Estíbaliz Garrote, Tomaso Poggio, and Thomas Serre. HMDB: A large video database for human motion recognition. In ICCV, 2011. 2, 3, 4, 14
Luciana Labanca, Massimiliano Mosca, Marco Ghislieri, Valentina Agostini, Marco Knaflitz, and Maria Grazia Benedetti. Muscle activations during functional tasks in individuals with chronic ankle instability: a systematic review of electromyographical studies. Gait & Posture, 2021. 1, 2
Jungho Lee, Minhyeok Lee, Dogyoon Lee, and Sangyoon Lee. Hierarchically decomposed graph convolutional networks for skeleton-based action recognition. arXiv preprint arXiv:2208.10741, 2022. 2, 3, 6, 14
Yeonghyeon Lee, Kangwook Jang, Jahyun Goo, Youngmoon Jung, and Hoirin Kim. FitHuBERT: Going thinner and deeper for knowledge distillation of speech self-supervised learning. arXiv preprint arXiv:2207.00555, 2022. 3
Ang Li, Meghana Thotakuri, David A. Ross, João Carreira, Alexander Vostrikov, and Andrew Zisserman. The AVA-kinetics localized human actions video dataset. arXiv preprint arXiv:2005.00214, 2020. 3
Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, and Christoph Feichtenhofer. MViTv2: Improved multiscale vision transformers for classification and detection. In CVPR, 2022. 2, 3, 4, 6, 7, 14, 16, 18, 19
Jun Liu, Amir Shahroudy, Mauricio Perez, Gang Wang, Ling-Yu Duan, and Alex C. Kot. NTU RGB+D 120: A large-scale benchmark for 3D human activity understanding. TPAMI, 2020. 3
Li Liu, Qingle Huang, Sihao Lin, Hongwei Xie, Bing Wang, Xiaojun Chang, and Xiaodan Liang. Exploring inter-channel correlation for diversity-preserved knowledge distillation. In ICCV, 2021. 3
Ruiping Liu, Kailun Yang, Alina Roitberg, Jiaming Zhang, Kunyu Peng, Huayao Liu, and Rainer Stiefelhagen. TransKD: Transformer knowledge distillation for efficient semantic segmentation. arXiv preprint arXiv:2202.13393, 2022. 2, 3, 5
Shilong Liu, Lei Zhang, Xiao Yang, Hang Su, and Jun Zhu. Query2label: A simple transformer way to multi-label classification. arXiv preprint arXiv:2107.10834, 2021. 3
Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, et al. Swin transformer V2: Scaling up capacity and resolution. In CVPR, 2022. 3
Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, and Han Hu. Video swin transformer. In CVPR, 2022. 2, 3, 6, 7, 14
Ziyu Liu, Hongwen Zhang, Zhenghao Chen, Zhiyong Wang, and Wanli Ouyang. Disentangling and unifying graph convolutions for skeleton-based action recognition. In CVPR, 2020.
3
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 7
Yue Lu, Yujie Wang, and Qian Lu. Effects of exercise on muscle fitness in dialysis patients: A systematic review and meta-analysis. American Journal of Nephrology, 2019. 1
Raphael Memmesheimer, Nick Theisen, and Dietrich Paulus. Gimme signals: Discriminative signal encoding for multimodal activity recognition. In IROS, 2020. 3
Enrico Maria Minnella, Kenneth Drummond, and Francesco Carli. The impact of prehabilitation on surgical outcomes. Ann. Esophagus, 2021. 1
Alessio Monti, Angelo Porrello, Simone Calderara, Pasquale Coscia, Lamberto Ballan, and Rita Cucchiara. How many observations are enough? Knowledge distillation for trajectory forecasting. In CVPR, 2022. 2, 5
Arsha Nagrani, Shan Yang, Anurag Arnab, Aren Jansen, Cordelia Schmid, and Chen Sun. Attention bottlenecks for multimodal fusion. In NeurIPS, 2021. 7
Theresa H. Nakagawa, Érika T. U. Moriya, Carlos D. Maciel, and Fábio V. Serrão. Trunk, pelvis, hip, and knee kinematics, hip strength, and gluteal muscle activation during a single-leg squat in males and females with and without patellofemoral pain syndrome. Journal of Orthopaedic & Sports Physical Therapy, 2012. 4
Daniel Neimark, Omri Bar, Maya Zohar, and Dotan Asselmann. Video transformer network. In ICCV, 2021.
Linda O'Neill, Annemarie E. Bennett, Emer Guinan, John V. Reynolds, and Juliette Hussey. Physical recovery in the first six months following oesophago-gastric cancer surgery. Identifying rehabilitative needs: a qualitative interview study. Disability and Rehabilitation, 2021. 1
Rameswar Panda, Chun-Fu Richard Chen, Quanfu Fan, Ximeng Sun, Kate Saenko, Aude Oliva, and Rogerio Feris. AdaMML: Adaptive multi-modal learning for efficient video recognition. In ICCV, 2021.
3
Mara Paneroni, Carla Simonelli, Manuela Saleri, Laura Bertacchini, Massimo Venturelli, Thierry Troosters, Nicolino Ambrosino, and Michele Vitacca. Muscle strength and physical performance in patients without previous disabilities recovering from COVID-19 pneumonia. American Journal of Physical Medicine & Rehabilitation, 2021. 1
Dae Young Park, Moon-Hyun Cha, Daesin Kim, Bohyung Han, et al. Learning student-friendly teacher networks for knowledge distillation. In NeurIPS, 2021. 2
Paritosh Parmar and Brendan Tran Morris. What and how well you performed? A multitask learning approach to action quality assessment. In CVPR, 2019. 2, 3, 4
Mandela Patrick, Yuki M. Asano, Polina Kuznetsova, Ruth Fong, Joao F. Henriques, Geoffrey Zweig, and Andrea Vedaldi. Multi-modal self-supervision from generalized data transformations. arXiv preprint arXiv:2003.04298, 2020. 3
Chenming Peng, Nannan Xi, Zhao Hong, and Juho Hamari. Acceptance of wearable technology: A meta-analysis. In HICSS, 2022.
1
Kunyu Peng, Alina Roitberg, Kailun Yang, Jiaming Zhang, and Rainer Stiefelhagen. Delving deep into one-shot skeleton-based action recognition with diverse occlusions. arXiv preprint arXiv:2202.11423, 2022. 3, 7
Kunyu Peng, Alina Roitberg, Kailun Yang, Jiaming Zhang, and Rainer Stiefelhagen. Should I take a walk? Estimating energy expenditure from video data. In CVPRW, 2022. 2, 3
Kunyu Peng, Alina Roitberg, Kailun Yang, Jiaming Zhang, and Rainer Stiefelhagen. TransDARC: Transformer-based driver activity recognition with latent space feature calibration. In IROS, 2022. 2
Cuong Pham, Linh Nguyen, Anh Nguyen, Ngon Nguyen, and Van-Toi Nguyen. Combining skeleton and accelerometer data for human fine-grained activity recognition and abnormal behaviour detection with deep temporal convolutional networks. MTAP, 2021. 3
A. J. Piergiovanni, Anelia Angelova, and Michael S. Ryoo. Evolving losses for unsupervised video representation learning. In CVPR, 2020. 3
Borislav Radić, Petra Radić, and Din Duraković. Peripheral nerve injury in sports. Acta Clinica Croatica, 2018. 1
Nishant Rai, Ehsan Adeli, Kuan-Hui Lee, Adrien Gaidon, and Juan Carlos Niebles. CoCon: Cooperative-contrastive learning. In CVPR, 2021. 3
Heilym Ramirez, Sergio A. Velastin, Ignacio Meza, Ernesto Fabregas, Dimitrios Makris, and Gonzalo Farias. Fall detection and activity recognition using human skeleton features. IEEE Access, 2021. 1
A. Rasch, A. H. Byström, N. Dalen, N. Martinez-Carranza, and H. E. Berg. Persisting muscle atrophy two years after replacement of the hip. The Journal of Bone and Joint Surgery. British Volume, 2009. 1
Tal Ridnik, Gilad Sharir, Avi Ben-Cohen, Emanuel Ben-Baruch, and Asaf Noy. ML-decoder: Scalable and versatile classification head. arXiv preprint arXiv:2111.12933, 2021. 3
Alina Roitberg, Kunyu Peng, Zdravko Marinov, Constantin Seibold, David Schneider, and Rainer Stiefelhagen. A comparative analysis of decision-level fusion for multimodal driver behaviour understanding. In IV, 2022. 7
David Sanchez-Lorente, Ricard Navarro-Ripoll, Rudith Guzman, Jorge Moises, Elena Gimeno, Marc Boada, and Laureano Molins. Prehabilitation in thoracic surgery. Journal of Thoracic Disease, 2018. 1
David Schneider, Saquib Sarfraz, Alina Roitberg, and Rainer Stiefelhagen. Pose-based contrastive learning for domain agnostic activity representations. In CVPRW, 2022. 3
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In ICCV, 2017. 6, 8, 19
Ayoung Seok and Yongsoon Choi. A study on user experience evaluation of glasses-type wearable device with built-in bone conduction speaker: Focus on the zungle panther. In TVX, 2018. 1
Amir Shahroudy, Jun Liu, Tian-Tsong Ng, and Gang Wang. NTU RGB+D: A large scale dataset for 3D human activity analysis. In CVPR, 2016. 3
Dian Shao, Yue Zhao, Bo Dai, and Dahua Lin. FineGym: A hierarchical video dataset for fine-grained action understanding. In CVPR, 2020.
2, 3 Twostream adaptive graph convolutional networks for skeletonbased action recognition. Lei Shi, Yifan Zhang, Jian Cheng, Hanqing Lu, CVPR. Lei Shi, Yifan Zhang, Jian Cheng, and Hanqing Lu. Two- stream adaptive graph convolutional networks for skeleton- based action recognition. In CVPR, 2019. 3 Skeleton-based action recognition with multi-stream adaptive graph convolutional networks. Lei Shi, Yifan Zhang, Jian Cheng, Hanqing Lu, TIP. 3Lei Shi, Yifan Zhang, Jian Cheng, and Hanqing Lu. Skeleton-based action recognition with multi-stream adap- tive graph convolutional networks. TIP, 2020. 3 Channel-wise knowledge distillation for dense prediction. Changyong Shu, Yifan Liu, Jianfei Gao, Zheng Yan, Chunhua Shen, ICCV. Changyong Shu, Yifan Liu, Jianfei Gao, Zheng Yan, and Chunhua Shen. Channel-wise knowledge distillation for dense prediction. In ICCV, 2021. 3 Two-stream convolutional networks for action recognition in videos. Karen Simonyan, Andrew Zisserman, NeurIPS. Karen Simonyan and Andrew Zisserman. Two-stream con- volutional networks for action recognition in videos. In NeurIPS, 2014. 2 Spotadaptive knowledge distillation. Jie Song, Ying Chen, Jingwen Ye, Mingli Song, TIP. 20223Jie Song, Ying Chen, Jingwen Ye, and Mingli Song. Spot- adaptive knowledge distillation. TIP, 2022. 3 UCF101: A dataset of 101 human actions classes from videos in the wild. Khurram Soomro, Mubarak Amir Roshan Zamir, Shah, arXiv:1212.040214arXiv preprintKhurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 2, 3, 4, 14 Combining metric learning and attention heads for accurate and efficient multilabel image classification. Vladislav Sovrasov, arXiv:2209.065852022arXiv preprintVladislav Sovrasov. Combining metric learning and atten- tion heads for accurate and efficient multilabel image clas- sification. arXiv preprint arXiv:2209.06585, 2022. 
3 Combining embedded accelerometers with computer vision for recognizing food preparation activities. Sebastian Stein, Stephen J Mckenna, UbiComp. Sebastian Stein and Stephen J. McKenna. Combining em- bedded accelerometers with computer vision for recogniz- ing food preparation activities. In UbiComp, 2013. 3 Uncertainty-aware score distribution learning for action quality assessment. Yansong Tang, Zanlin Ni, Jiahuan Zhou, Danyang Zhang, Jiwen Lu, Ying Wu, Jie Zhou, CVPR. 2020Yansong Tang, Zanlin Ni, Jiahuan Zhou, Danyang Zhang, Jiwen Lu, Ying Wu, and Jie Zhou. Uncertainty-aware score distribution learning for action quality assessment. In CVPR, 2020. 3 Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training. Zhan Tong, Yibing Song, Jue Wang, Limin Wang, arXiv:2203.126022022arXiv preprintZhan Tong, Yibing Song, Jue Wang, and Limin Wang. Videomae: Masked autoencoders are data-efficient learn- ers for self-supervised video pre-training. arXiv preprint arXiv:2203.12602, 2022. 3 sEMG-based upper limb movement classifier: Current scenario and upcoming challenges. Juliano Costa Maurício Cagliari Tosin, Alexandre Machado, Balbinot, JAIR. 20222Maurício Cagliari Tosin, Juliano Costa Machado, and Alexandre Balbinot. sEMG-based upper limb movement classifier: Current scenario and upcoming challenges. JAIR, 2022. 2 Training data-efficient image transformers & distillation through attention. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. 7ICMLHugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Train- ing data-efficient image transformers & distillation through attention. In ICML, 2021. 2, 5, 7, 8 Learning spatiotemporal features with 3D convolutional networks. Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, Manohar Paluri, ICCV. Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torre- sani, and Manohar Paluri. 
Learning spatiotemporal features with 3D convolutional networks. In ICCV, 2015. 2 A closer look at spatiotemporal convolutions for action recognition. Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann Lecun, Manohar Paluri, CVPR. Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar Paluri. A closer look at spatiotem- poral convolutions for action recognition. In CVPR, 2018. 2 Late fusion of multimodal deep neural networks for weeds classification. Computers and Electronics in Agriculture. Yu Vo Hoang Trong, Gwang-Hyun, Thanh Dang, Kim Jin-Young Vu, 25Vo Hoang Trong, Yu Gwang-hyun, Dang Thanh Vu, and Kim Jin-young. Late fusion of multimodal deep neural net- works for weeds classification. Computers and Electronics in Agriculture, 2020. 2, 5 Comparison of kinematics and muscle activation between push-up and bench press. Roland Van Den Tillaar, Sports Medicine International Open. 418Roland van den Tillaar. Comparison of kinematics and muscle activation between push-up and bench press. Sports Medicine International Open, 2019. 4, 18 Comparison of hamstring muscle activation during high-speed running and various hamstring strengthening exercises. Roland Van Den Tillaar, Jens Asmund Brevik Solheim, Jesper Bencke, International Journal of Sports Physical Therapy. 4Roland van den Tillaar, Jens Asmund Brevik Solheim, and Jesper Bencke. Comparison of hamstring muscle activation during high-speed running and various hamstring strength- ening exercises. International Journal of Sports Physical Therapy, 2017. 4 Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, NeurIPS. 15Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. 15 Impact of βhydroxy-β-methylbutyrate (HMB) on muscle loss and protein metabolism in critically ill patients: A RCT. 
V Marina, Fabio Viana, Olivier Becce, Sabine Pantet, Géraldine Schmidt, John J Bagnoud, Gabriella A M Thaden, Ten Have, P K J Mariëlle, Aline Engelen, Voidey, E P Nicolaas, Mette M Deutz, Bergera, Clinical Nutrition. 1Marina V Viana, Fabio Becce, Olivier Pantet, Sabine Schmidt, Géraldine Bagnoud, John J. Thaden, Gabriella A. M. Ten Have, Mariëlle P. K. J. Engelen, Aline Voidey, Nicolaas E. P. Deutz, and Mette M. Bergera. Impact of β- hydroxy-β-methylbutyrate (HMB) on muscle loss and pro- tein metabolism in critically ill patients: A RCT. Clinical Nutrition, 2021. 1 Exploring multimodal video representation for action recognition. Cheng Wang, Haojin Yang, Christoph Meinel, IJCNN. Cheng Wang, Haojin Yang, and Christoph Meinel. Explor- ing multimodal video representation for action recognition. In IJCNN, 2016. 3 Generative multi-view human action recognition. Lichen Wang, Zhengming Ding, Zhiqiang Tao, Yunyu Liu, Yun Fu, ICCV. Lichen Wang, Zhengming Ding, Zhiqiang Tao, Yunyu Liu, and Yun Fu. Generative multi-view human action recogni- tion. In ICCV, 2019. 3 Temporal segment networks: Towards good practices for deep action recognition. Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, Luc Van Gool, ECCV. Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recogni- tion. In ECCV, 2016. 2 BEVT: BERT pretraining of video transformers. Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Yu-Gang Jiang, Luowei Zhou, Lu Yuan, CVPR. 2022Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Yu-Gang Jiang, Luowei Zhou, and Lu Yuan. BEVT: BERT pretraining of video transform- ers. In CVPR, 2022. 3 . Shiguang Wang, Zhizhong Li, Yue Zhao, Yuanjun Xiong, Limin Wang, Dahua Lin, Denseflow, 2020. 4Shiguang Wang, Zhizhong Li, Yue Zhao, Yuanjun Xiong, Limin Wang, and Dahua Lin. Denseflow. 
https:// github.com/open-mmlab/denseflow, 2020. 4 Multi-label classification with label graph superimposing. Ya Wang, Dongliang He, Fu Li, Xiang Long, Zhichao Zhou, Jinwen Ma, Shilei Wen, AAAI. 2020Ya Wang, Dongliang He, Fu Li, Xiang Long, Zhichao Zhou, Jinwen Ma, and Shilei Wen. Multi-label classifica- tion with label graph superimposing. In AAAI, 2020. 3 Multimodal depression estimation based on sub-attentional fusion. Ping-Cheng Wei, Kunyu Peng, Alina Roitberg, Kailun Yang, Jiaming Zhang, Rainer Stiefelhagen, arXiv:2207.06180arXiv preprintPing-Cheng Wei, Kunyu Peng, Alina Roitberg, Kailun Yang, Jiaming Zhang, and Rainer Stiefelhagen. Multi- modal depression estimation based on sub-attentional fu- sion. arXiv preprint arXiv:2207.06180, 2022. 7 impact of exercise on physical frailty in patients with chronic liver disease. Felicity R Williams, Annalisa Berzigotti, Janet M Lord, Jennifer C Lai, Matthew J Armstrong, Alimentary Pharmacology & Therapeutics. 1Felicity R. Williams, Annalisa Berzigotti, Janet M. Lord, Jennifer C. Lai, and Matthew J. Armstrong. impact of ex- ercise on physical frailty in patients with chronic liver dis- ease. Alimentary Pharmacology & Therapeutics, 2019. 1 Why skip if you can combine: A simple knowledge distillation technique for intermediate layers. Yimeng Wu, Peyman Passban, Mehdi Rezagholizade, Qun Liu, arXiv:2010.03034arXiv preprintYimeng Wu, Peyman Passban, Mehdi Rezagholizade, and Qun Liu. Why skip if you can combine: A simple knowl- edge distillation technique for intermediate layers. arXiv preprint arXiv:2010.03034, 2020. 3 Audiovisual slowfast networks for video recognition. Fanyi Xiao, Yong Jae Lee, Kristen Grauman, Jitendra Malik, Christoph Feichtenhofer, arXiv:2001.08740arXiv preprintFanyi Xiao, Yong Jae Lee, Kristen Grauman, Jitendra Malik, and Christoph Feichtenhofer. Audiovisual slow- fast networks for video recognition. arXiv preprint arXiv:2001.08740, 2020. 
3 Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, Kevin Murphy, In ECCV. 2Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In ECCV, 2018. 2 Deep learning for EMG-based human-machine interaction: A review. Dezhen Xiong, Daohui Zhang, Xingang Zhao, Yiwen Zhao, IEEE/CAA Journal of Automatica Sinica. 2Dezhen Xiong, Daohui Zhang, Xingang Zhao, and Yiwen Zhao. Deep learning for EMG-based human-machine inter- action: A review. IEEE/CAA Journal of Automatica Sinica, 2021. 2 FineDiving: A fine-grained dataset for procedure-aware action quality assessment. Jinglin Xu, Yongming Rao, Xumin Yu, Guangyi Chen, Jie Zhou, Jiwen Lu, CVPR, 2022. 23Jinglin Xu, Yongming Rao, Xumin Yu, Guangyi Chen, Jie Zhou, and Jiwen Lu. FineDiving: A fine-grained dataset for procedure-aware action quality assessment. In CVPR, 2022. 2, 3 Multi-class token transformer for weakly supervised semantic segmentation. Lian Xu, Wanli Ouyang, Mohammed Bennamoun, Farid Boussaid, Dan Xu, CVPR. 16Lian Xu, Wanli Ouyang, Mohammed Bennamoun, Farid Boussaid, and Dan Xu. Multi-class token transformer for weakly supervised semantic segmentation. In CVPR, 2022. 2, 5, 7, 8, 15, 16 A dual modality approach for (zero-shot) multilabel classification. Shichao Xu, Yikang Li, Jenhao Hsiao, Chiuman Ho, Zhu Qi, arXiv:2208.095622022arXiv preprintShichao Xu, Yikang Li, Jenhao Hsiao, Chiuman Ho, and Zhu Qi. A dual modality approach for (zero-shot) multi- label classification. arXiv preprint arXiv:2208.09562, 2022. 3 Spatial temporal graph convolutional networks for skeleton-based action recognition. Sijie Yan, Yuanjun Xiong, Dahua Lin, AAAI. 614Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial tempo- ral graph convolutional networks for skeleton-based action recognition. In AAAI, 2018. 
2, 3, 6, 14 Masked generative distillation. Zhendong Yang, Zhe Li, Mingqi Shao, Dachuan Shi, Zehuan Yuan, Chun Yuan, arXiv:2205.015292022arXiv preprintZhendong Yang, Zhe Li, Mingqi Shao, Dachuan Shi, Ze- huan Yuan, and Chun Yuan. Masked generative distillation. arXiv preprint arXiv:2205.01529, 2022. 3 Describing videos by exploiting temporal structure. Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville, ICCV. Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. Describing videos by exploiting temporal structure. In ICCV, 2015. 2 Dynamic GCN: Context-enriched topology learning for skeleton-based action recognition. Fanfan Ye, Shiliang Pu, Qiaoyong Zhong, Chao Li, Di Xie, Huiming Tang, MM. 2020Fanfan Ye, Shiliang Pu, Qiaoyong Zhong, Chao Li, Di Xie, and Huiming Tang. Dynamic GCN: Context-enriched topology learning for skeleton-based action recognition. In MM, 2020. 3 Attention-driven dynamic graph convolutional network for multi-label image recognition. Jin Ye, Junjun He, Xiaojiang Peng, Wenhao Wu, Yu Qiao, ECCV. 2020Jin Ye, Junjun He, Xiaojiang Peng, Wenhao Wu, and Yu Qiao. Attention-driven dynamic graph convolutional net- work for multi-label image recognition. In ECCV, 2020. 3 Cross-modality attention with semantic graph embedding for multi-label classification. Renchun You, Zhiyao Guo, Lei Cui, Xiang Long, Yingze Bao, Shilei Wen, AAAI. 2020Renchun You, Zhiyao Guo, Lei Cui, Xiang Long, Yingze Bao, and Shilei Wen. Cross-modality attention with seman- tic graph embedding for multi-label classification. In AAAI, 2020. 3 Semantics-guided neural networks for efficient skeleton-based human action recognition. Pengfei Zhang, Cuiling Lan, Wenjun Zeng, Junliang Xing, Jianru Xue, Nanning Zheng, CVPR. 2020Pengfei Zhang, Cuiling Lan, Wenjun Zeng, Junliang Xing, Jianru Xue, and Nanning Zheng. 
Semantics-guided neural networks for efficient skeleton-based human action recog- nition. In CVPR, 2020. 3 Distilling inter-class distance for semantic segmentation. Zhengbo Zhang, Chunluan Zhou, Zhigang Tu, IJCAI. 2022Zhengbo Zhang, Chunluan Zhou, and Zhigang Tu. Dis- tilling inter-class distance for semantic segmentation. In IJCAI, 2022. 3 Decoupled knowledge distillation. Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, Jiajun Liang, CVPR. 2022Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, and Jiajun Liang. Decoupled knowledge distillation. In CVPR, 2022. 3 HACS: Human action clips and segments dataset for recognition and temporal localization. Hang Zhao, Zhicheng Yan, Lorenzo Torresani, Antonio Torralba, ICCV. Hang Zhao, Zhicheng Yan, Lorenzo Torresani, and Antonio Torralba. HACS: Human action clips and segments dataset for recognition and temporal localization. In ICCV, 2019. 3 Tuber: Tubetransformer for action detection. Jiaojiao Zhao, Xinyu Li, Chunhui Liu, Shuai Bing, Hao Chen, G M Cees, Joseph Snoek, Tighe, CVPR. 2022Jiaojiao Zhao, Xinyu Li, Chunhui Liu, Shuai Bing, Hao Chen, Cees G. M. Snoek, and Joseph Tighe. Tuber: Tube- transformer for action detection. In CVPR, 2022. 3 Transformer-based dual relation graph for multi-label image recognition. Jiawei Zhao, Ke Yan, Yifan Zhao, Xiaowei Guo, Feiyue Huang, Jia Li, ICCV. Jiawei Zhao, Ke Yan, Yifan Zhao, Xiaowei Guo, Feiyue Huang, and Jia Li. Transformer-based dual relation graph for multi-label image recognition. In ICCV, 2021. 3 Aude Oliva, and Antonio Torralba. Temporal relational reasoning in videos. Bolei Zhou, Alex Andonian, ECCV. Bolei Zhou, Alex Andonian, Aude Oliva, and Antonio Tor- ralba. Temporal relational reasoning in videos. In ECCV, 2018. 2 Hidden two-stream convolutional networks for action recognition. Yi Zhu, Zhenzhong Lan, Shawn Newsam, Alexander Hauptmann, ACCV. Yi Zhu, Zhenzhong Lan, Shawn Newsam, and Alexander Hauptmann. Hidden two-stream convolutional networks for action recognition. 
In ACCV, 2018. 2 Student customized knowledge distillation: Bridging the gap between student and teacher. Yichen Zhu, Yi Wang, ICCV. Yichen Zhu and Yi Wang. Student customized knowledge distillation: Bridging the gap between student and teacher. In ICCV, 2021. 3 WiFi and vision multimodal learning for accurate and robust device-free human activity recognition. Han Zou, Jianfei Yang, Hari Prasanna Das, Huihan Liu, Yuxun Zhou, Costas J Spanos, CVPRW. Han Zou, Jianfei Yang, Hari Prasanna Das, Huihan Liu, Yuxun Zhou, and Costas J. Spanos. WiFi and vision mul- timodal learning for accurate and robust device-free human activity recognition. In CVPRW, 2019. 3 Deep learning-based gait recognition using smartphones in the wild. Qin Zou, Yanling Wang, Qian Wang, Yi Zhao, Qingquan Li, TIFS. 3Qin Zou, Yanling Wang, Qian Wang, Yi Zhao, and Qingquan Li. Deep learning-based gait recognition using smartphones in the wild. TIFS, 2020. 3
[]
[ "Complex nature of magnetic field-induced ferroelectricity in GdCrTiO 5", "Complex nature of magnetic field-induced ferroelectricity in GdCrTiO 5" ]
[ "T Basu \nExperimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany\n\nTata Institute of Fundamental Research\nHomi Bhabha Road, Colaba, Mumbai-400005India\n\nPresent address: Laboratoire CRISMAT, UMR 6508 du CNRS et de l'Ensicaen\n6 Bd Marechal Juin14050CaenFrance\n", "D T Adroja \nISIS Facility\nOX11 0QXRutherford Appleton Laboratory, Chilton, Didcot OxonUnited Kingdom\n\nPhysics Department\nHighly Correlated Matter Research Group\nUniversity of Johannesburg\nPO Box 5242006Auckland ParkSouth Africa\n", "F Kolb \nExperimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany\n", "H.-A Krug Von Nidda \nExperimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany\n", "A Ruff \nExperimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany\n", "M Hemmida \nExperimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany\n", "A D Hillier \nISIS Facility\nOX11 0QXRutherford Appleton Laboratory, Chilton, Didcot OxonUnited Kingdom\n", "M Telling \nISIS Facility\nOX11 0QXRutherford Appleton Laboratory, Chilton, Didcot OxonUnited Kingdom\n", "E V Sampathkumaran \nTata Institute of Fundamental Research\nHomi Bhabha Road, Colaba, Mumbai-400005India\n", "A Loidl \nExperimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany\n", "S Krohns \nExperimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany\n" ]
[ "Experimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany", "Tata Institute of Fundamental Research\nHomi Bhabha Road, Colaba, Mumbai-400005India", "Present address: Laboratoire CRISMAT, UMR 6508 du CNRS et de l'Ensicaen\n6 Bd Marechal Juin14050CaenFrance", "ISIS Facility\nOX11 0QXRutherford Appleton Laboratory, Chilton, Didcot OxonUnited Kingdom", "Physics Department\nHighly Correlated Matter Research Group\nUniversity of Johannesburg\nPO Box 5242006Auckland ParkSouth Africa", "Experimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany", "Experimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany", "Experimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany", "Experimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany", "ISIS Facility\nOX11 0QXRutherford Appleton Laboratory, Chilton, Didcot OxonUnited Kingdom", "ISIS Facility\nOX11 0QXRutherford Appleton Laboratory, Chilton, Didcot OxonUnited Kingdom", "Tata Institute of Fundamental Research\nHomi Bhabha Road, Colaba, Mumbai-400005India", "Experimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany", "Experimental Physics V\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\nUniversitätsstrasse 2D-86135AugsburgGermany" ]
[]
This work shows an unconventional route for spin-driven ferroelectricity originating from a metastable magnetic field-induced canting of the chromium sublattice in the presence of gadolinium moments in GdCrTiO5 at low temperatures. Compared to the isostructural neodymium compound, significant differences of magnetism and magnetoelectric effects are seen. We present the results of thorough investigations of temperature and magnetic field dependent magnetization as well as ac and dc magnetic susceptibility. These bulk measurements are complemented by local-probe spectroscopy utilizing electron-spin resonance and muon-spin rotation/relaxation for probing the chromium moments. Ferroelectric order is inferred from pyro- and magnetocurrent measurements. GdCrTiO5 shows a pyrocurrent signal around 10 K, only if the system is cooled in an applied magnetic field exceeding 10 kOe. A distinct spin-driven ferroelectric order is revealed in this state for temperatures below 10 K, which can be switched by changing the magnetic-field direction and the polarity of the electric field. The magnetic measurements reveal no clear signature of long-range magnetic ordering. The presence of such 'meta-magnetoelectric-type' behaviour in the absence of any 'meta-magnetic' behaviour is rare in the literature. Our microscopic spectroscopy results indicate significant changes of the magnetic properties around ~10 K. Probably there exists exchange frustration between Gd and Cr moments, which prevents long-range magnetic ordering at higher temperatures. Below 10 K, weak ferromagnetic order occurs by minimizing frustration due to lattice distortion, which supports magnetodielectric coupling. However, non-polar distortions attain appreciable values after application of magnetic fields above 10 kOe, obviously breaking spatial inversion symmetry and creating ferroelectricity.
10.1103/physrevb.96.184431
[ "https://export.arxiv.org/pdf/1707.02756v4.pdf" ]
119,450,645
1707.02756
281a1733dba88a776c9c78b86708a0cf69d95ceb
Complex nature of magnetic field-induced ferroelectricity in GdCrTiO5

T. Basu, D. T. Adroja, F. Kolb, H.-A. Krug von Nidda, A. Ruff, M. Hemmida, A. D. Hillier, M. Telling, E. V. Sampathkumaran, A. Loidl, and S. Krohns

Experimental Physics V, Center for Electronic Correlations and Magnetism, University of Augsburg, Universitätsstrasse 2, D-86135 Augsburg, Germany; Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai-400005, India; present address: Laboratoire CRISMAT, UMR 6508 du CNRS et de l'Ensicaen, 6 Bd Marechal Juin, 14050 Caen, France; ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX, United Kingdom; Highly Correlated Matter Research Group, Physics Department, University of Johannesburg, PO Box 524, Auckland Park 2006, South Africa

This work shows an unconventional route for spin-driven ferroelectricity originating from a
metastable magnetic field-induced canting of the chromium sublattice in the presence of gadolinium moments in GdCrTiO5 at low temperatures. Compared to the isostructural neodymium compound, significant differences of magnetism and magnetoelectric effects are seen. We present the results of thorough investigations of temperature and magnetic field dependent magnetization as well as ac and dc magnetic susceptibility. These bulk measurements are complemented by local-probe spectroscopy utilizing electron-spin resonance and muon-spin rotation/relaxation for probing the chromium moments. Ferroelectric order is inferred from pyro- and magnetocurrent measurements. GdCrTiO5 shows a pyrocurrent signal around 10 K, only if the system is cooled in an applied magnetic field exceeding 10 kOe. A distinct spin-driven ferroelectric order is revealed in this state for temperatures below 10 K, which can be switched by changing the magnetic-field direction and the polarity of the electric field. The magnetic measurements reveal no clear signature of long-range magnetic ordering. The presence of such 'meta-magnetoelectric-type' behaviour in the absence of any 'meta-magnetic' behaviour is rare in the literature. Our microscopic spectroscopy results indicate significant changes of the magnetic properties around ~10 K. Probably there exists exchange frustration between Gd and Cr moments, which prevents long-range magnetic ordering at higher temperatures. Below 10 K, weak ferromagnetic order occurs by minimizing frustration due to lattice distortion, which supports magnetodielectric coupling. However, non-polar distortions attain appreciable values after application of magnetic fields above 10 kOe, obviously breaking spatial inversion symmetry and creating ferroelectricity.
Introduction

Multiferroic compounds with magnetoelectric coupling (cross-coupling between spin and dipolar degrees of freedom) have generated considerable interest in recent years due to a variety of magnetoelectric (ME) phenomena, interesting for basic research as well as for potential applications in device technology. 1,2 In non-collinear magnets, spin-driven ferroelectricity is induced via the inverse Dzyaloshinskii-Moriya (DM) interaction, and in collinear magnets either via exchange-striction or, in some cases, via local distortions due to a spin-dependent p-d hybridization. 2,3,4 Nevertheless, there are many systems (such as Haldane spin-chain systems) where the cross-coupling mechanism is not well understood. 5,6,7 Even in well-known multiferroic compounds like RMn2O5 (R = rare earth), 8 crystallizing in an orthorhombic structure with Pbnm space group, the mechanism is still under debate. 9,10 Initially, there were contradictory reports on the mechanism of magnetoelectricity. Later the existence of both DM interaction and exchange-striction in these compounds has been demonstrated. 9 However, recently it has been shown by Balédent et al. 10 that, for this class of multiferroics, ferroelectricity is already present at room temperature, deep in the paramagnetic regime, and is further enhanced at low temperatures via spin-driven mechanisms.

Another family of compounds of type RCrTiO5, 11,12 crystallizing in the same structure as RMn2O5, has received considerable attention. 13 Early neutron-diffraction experiments revealed long-range antiferromagnetic order in NdCrTiO5 below 13 K. 11 However, magnetoelectric coupling and the onset of spin-driven ferroelectricity have been observed already at 21 K and were attributed to possible antiferromagnetic ordering of the Cr moments. 12,13 Hwang et al. 13 predicted that the Cr and Nd moments order around 21 K and 13 K, respectively. They proposed the possibility of a magnetostriction mechanism for the ME coupling.
Later on, Kori et al. 14 argued that both Nd and Cr moments start to order around the same temperature, which then allows for a DM interaction in a possible non-collinear spin structure. Further investigations of GdCrTiO5, a heavy rare-earth member of this series, by Basu et al. 15 evidenced a magnetic-field (H) induced dielectric anomaly around 10 K, similar to the observations in NdCrTiO5. However, there was no clear-cut evidence of long-range magnetic order - neither in the temperature dependence of the magnetic susceptibility (χ) nor in the heat capacity (C) - for GdCrTiO5. 15 Unlike in NdCrTiO5, the derivative of χ indicates possible magnetic ordering of Cr around 10 K, warranting confirmation by microscopic techniques. According to the de Gennes 16 scaling - i.e., the expected linear relation between magnetic-ordering temperature and de Gennes factor dG = (g_J - 1)^2 J(J+1), where J denotes the total angular momentum and g_J the Landé factor of the rare-earth ion - a reduction of the magnetic ordering temperature is not typical for heavy rare-earth members. Therefore, finding out the possible mechanism and the role of the different magnetic ions in the magnetic and magnetoelectric properties of this class of compounds is a challenging task. Until now, to the best of our knowledge, there exists no report using local spectroscopic probes on RCrTiO5 compounds focusing on these fundamental aspects of multiferroicity. Local probes are expected to shed some light on the existing magnetic interactions in this class of materials and to unravel the role of rare-earth and chromium moments in the ME coupling.

Here, we report the results of our investigations on GdCrTiO5 by means of electron-spin resonance (ESR) and muon-spin rotation/relaxation (μSR) spectroscopy. Neutron-scattering experiments are not feasible due to the high absorbance of Gd, whereas both ESR and μSR spectroscopy are ideal tools to explore the microscopic spin dynamics and magnetic correlations on a local scale.
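The de Gennes argument invoked above can be checked with a short numerical sketch. This is an editorial illustration, not part of the paper; the (g_J, J) pairs are the standard Hund's-rules ground-state values for the free trivalent ions.

```python
# de Gennes factor dG = (g_J - 1)^2 * J * (J + 1) for free R^3+ ions.
# Under de Gennes scaling, the magnetic ordering temperature across an
# isostructural rare-earth series is expected to grow linearly with dG.
from fractions import Fraction as F

# Hund's-rules ground-state (g_J, J) of the free trivalent ions.
ions = {
    "Nd3+": (F(8, 11), F(9, 2)),
    "Gd3+": (F(2), F(7, 2)),
}

def de_gennes(g_J, J):
    """Return (g_J - 1)^2 J (J + 1) as an exact fraction."""
    return (g_J - 1) ** 2 * J * (J + 1)

for ion, (g_J, J) in ions.items():
    print(f"{ion}: dG = {float(de_gennes(g_J, J)):.3f}")
```

With dG(Gd3+) = 15.75 against dG(Nd3+) ≈ 1.84, de Gennes scaling would predict a much higher ordering temperature for the Gd compound than for NdCrTiO5, which is exactly why the apparent suppression of magnetic order in GdCrTiO5 is surprising.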
We also have investigated in detail the electric polarization in the presence of magnetic fields, exploring the role of the rare-earth ions in the ME properties. Our results reveal spin-driven ferroelectricity for temperatures below 10 K under the application of magnetic fields above 10 kOe. Although features due to the onset of long-range magnetic order are not transparent in the magnetic susceptibility and heat capacity down to 2 K, ESR and μSR evidence sharp magnetic anomalies around 10 K, which support the existence of a magnetic phase transition at this temperature. We also discuss possible scenarios for magnetic interactions and ME properties in RCrTiO5 compounds.

Experimental Details

Preparation and structural characterization of polycrystalline GdCrTiO5 is described in an earlier work. 15 The dc magnetization (M) as a function of temperature and magnetic field was measured using a Superconducting Quantum Interference Device (SQUID), procured from Quantum Design. The same instrument was used to measure the ac susceptibility χ.

Electron-spin-resonance measurements were performed in a Bruker ELEXSYS 500 spectrometer with a standard rectangular microwave cavity ER 4102 ST working at X-band frequency (ν = 9.36 GHz). An Oxford ESR 900 helium gas-flow cryostat was used for the ESR measurements in the temperature range 4.2 K < T < 300 K. By sweeping the external static magnetic field in the regime 0 < H < 10 kOe at constant microwave frequency, ESR measures the microwave power absorbed from the transverse magnetic component of the microwave field in the center of the cavity. Due to the lock-in amplification with field modulation, the field derivative dP/dH of the absorption signal is detected as a function of the external magnetic field. For the ESR measurements, the powder form of the sample was immersed in paraffin within a suprasil-quartz tube.
The μSR experiments were carried out under zero-field (ZF) conditions down to 2 K on the HIFI spectrometer at the ISIS pulsed muon source of the Rutherford Appleton Laboratory (UK). 17 The powder sample was mounted on a silver plate with GE-varnish and cooled to a base temperature of 2 K in a He-exchange gas cryostat. The 'MANTID' 18 software was used to analyse the data. Pyroelectric current measurements were carried out as a function of magnetic field and temperature using an electrometer (6517A, Keithley) employed in a Physical Property Measurement System (PPMS, Quantum Design). The temperature-dependent remnant electric polarization (P r ) has been determined by cooling the sample in the presence of an electric field from 50 to 4 K. Subsequently, the electric field was set to zero, the capacitor was shorted for a sufficiently long time to remove any stray charges, and then the pyroelectric current (I pyro ) was measured as a function of temperature with a constant heating rate. For pyroelectric current measurements in magnetic fields, two conditions, field-cooled (FC) and zero-field-cooled (ZFC), were employed. For FC conditions, the sample is cooled in the presence of an electric and a magnetic field. For ZFC conditions, no magnetic field is applied while cooling the sample. Subsequently, the pyroelectric current (FC and ZFC conditions) is measured while heating the sample in the presence of applied magnetic fields. Magnetocurrent (I magneto ) measurements were performed at a fixed temperature as a function of magnetic field.

Results

A. Magnetic properties

The temperature dependences of the dc susceptibility χ (= M/H) and C of GdCrTiO 5 have been reported in Ref. 15. It may be recalled that no clear-cut signature due to magnetic ordering was observed, though the Curie-Weiss temperature (Θ) was found to be approximately Θ = -25 K.
The deviation from the high-temperature linear inverse magnetic susceptibility appears already below 150 K, indicating the presence of significant antiferromagnetic exchange (short-range interactions) in this compound. 15 In addition, the derivatives of χ(Τ) and C(T) revealed a change of slope around 10 K, providing possible evidence for the onset of weak magnetic order of the Cr moments. 15

B. ESR Spectroscopy

Fig. 3 shows typical ESR spectra of GdCrTiO 5 at selected temperatures. One observes a single broad resonance line, which is well described by a Lorentzian curve with resonance field H res and a half-width at half-maximum (HWHM) linewidth ΔH. Due to the large linewidth, which is of the same order of magnitude as the resonance field, it was necessary to take into account the mirrored resonance at negative resonance field -H res for fitting the line shape as described, e.g., by Joshi and Bhat. 19

Figure 3: ESR spectra of GdCrTiO 5 at selected temperatures (100 K, 50 K and 10 K) as described in the text. Shown is the field derivative of the absorption dP/dH vs. the external magnetic field H. The solid lines are fits as described in the text.

The resulting fit parameters are depicted in Fig. 4. The double-integrated intensity I ESR (Fig. 4a), which is shown in the top frame, follows a Curie-Weiss law 1/I ESR ~ T - Θ at high temperatures, with an antiferromagnetic Curie-Weiss temperature of Θ ~ -25 K, but deviates at lower temperatures. This is in reasonable agreement with the static susceptibility, indicating that both the Gd 3+ and the Cr 3+ spins contribute to the ESR signal. For temperatures above 10 K, the resonance field resides at about H res = 3.38 kOe, but strongly shifts to lower fields at lower temperatures (Fig. 4b). The g-value is obtained by inserting the resonance field measured above 10 K into the Larmor condition for magnetic resonance, h ν = g μ B H res , where h denotes the Planck constant and μ B the Bohr magneton.
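As a quick numerical check (not part of the paper's analysis; the constants are CODATA values), the g-value follows directly from the Larmor condition using the quoted microwave frequency and resonance field:

```python
# Sketch: g-factor from the Larmor condition h*nu = g * mu_B * H_res,
# with nu = 9.36 GHz (X band) and H_res ≈ 3.38 kOe = 0.338 T from the text.
H_PLANCK = 6.62607015e-34   # Planck constant, J s
MU_B = 9.2740100783e-24     # Bohr magneton, J/T

def g_value(nu_hz: float, h_res_tesla: float) -> float:
    """g-factor extracted from the magnetic-resonance condition."""
    return H_PLANCK * nu_hz / (MU_B * h_res_tesla)

print(f"g = {g_value(9.36e9, 0.338):.3f}")
```

This reproduces the quoted g = 1.976 to within the rounding of H res .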
We obtain a g-value of 1.976, close to g = 2, as expected for the Gd 3+ ion with its half-filled 4f shell and for spin-only Cr 3+ (3d 3 ) with quenched orbital momentum. The resonance linewidth ΔH amounts to about 3 kOe at elevated temperatures, reveals a flat, broad maximum around 40 K and strongly broadens below approximately 10 K (Fig. 4c). The linewidth at elevated temperatures is comparable to that observed in GdMnO 3 20 and is a consequence of anisotropic exchange and dipolar interactions within and between the Gd and Cr spin systems. Both the strong and abrupt shift of the resonance field and the significant line broadening below 10 K (see Figs. 4b and c) clearly indicate the onset of some kind of magnetic order close to 10 K, most probably of the Cr moments only, while the Gd moments remain paramagnetic down to the lowest temperatures (Fig. 4a). The slight kink in the inverse ESR intensity close to 10 K (Fig. 4a) can also be interpreted along these lines. This type of behavior of the ESR linewidth and the resonance field has been observed in a variety of compounds revealing strong electronic correlations, which are close to magnetic order. 21,22 The broad hump or saturation effects, which are visible in the temperature dependence of the linewidth below 50 K (Fig. 4c), obviously correspond to the deviations from the ideal Curie-Weiss behavior of the static susceptibility (Fig. 4a) due to short-range antiferromagnetic correlations.

C. Zero-field µSR spectroscopy

To further explore the nature of the magnetic ground state in GdCrTiO 5 , we performed zero-field µSR measurements, which again are very sensitive to changes of the local magnetic fields. The asymmetry G z (t) of the ZF μSR spectra is shown as a function of time t at different temperatures below 10 K in Fig. 5.
For a detailed analysis, the asymmetry of the muon decay is modeled with an exponential decay function, G(t) = A bg + A 0 e^(-λt), where the constants A bg , A 0 and λ correspond to the background, the pre-exponential factor (i.e. the initial asymmetry) and the relaxation rate, respectively. Solid lines in Fig. 5 indicate the result of these fits. In the inset, the asymmetry as measured at 20 K is compared with that measured at 100 K, indicating that only minor changes of the asymmetry of the µSR signal can be observed at elevated temperatures (Fig. 5). Furthermore, the relaxation rates are very similar at 20 K and 100 K. In Fig. 6 the resulting relaxation parameters, namely the pre-exponential factor A 0 and the relaxation rate λ, are plotted as a function of temperature on semi-logarithmic scales. The initial amplitude of the asymmetry spectrum (G(t) at t = 0) is nearly constant above approximately 11 K, exhibits a sharp jump around 10 K and becomes nearly constant below 8 K, as shown in Fig. 6a. Interestingly, the initial low-temperature asymmetry, after subtracting the background component, is nearly 1/3 of that at elevated temperatures (Fig. 6a). The 1/3 drop in the initial asymmetry is generally observed below the long-range magnetic-ordering temperature in systems with a larger magnetic moment or a larger internal field at the muon stopping sites. However, we did not observe any oscillations of the asymmetry spectra (Fig. 5), a typical signature of long-range ordered magnetic moments, as observed in many other magnetic (and multiferroic) compounds. 23 When the internal fields and the corresponding relaxation rates are large, the signal damps fast, and it can be difficult to observe any frequency oscillations with the µSR spectrometer at ISIS because of the muon pulse width. Fig. 6b shows that the relaxation rate is constant at elevated temperatures, exhibits a jump-like reduction at 10 K and remains constant again towards low temperatures.
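The single-exponential relaxation model used above can be illustrated with a least-squares fit to synthetic asymmetry data. This is only a sketch (the authors used the MANTID package), and all parameter values are invented for illustration:

```python
# Sketch: fit G(t) = A_bg + A0 * exp(-lambda * t) to synthetic muSR
# asymmetry data; illustrative parameter values only, not the measured ones.
import numpy as np
from scipy.optimize import curve_fit

def relax(t, a_bg, a0, lam):
    """Exponential muon-spin relaxation model."""
    return a_bg + a0 * np.exp(-lam * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 15.0, 200)                      # time (microseconds)
y = relax(t, 0.05, 0.20, 0.40) + rng.normal(0.0, 0.003, t.size)

popt, pcov = curve_fit(relax, t, y, p0=(0.0, 0.1, 1.0))
perr = np.sqrt(np.diag(pcov))                        # 1-sigma uncertainties
print("A_bg, A0, lambda:", np.round(popt, 3))
```

The diagonal of the returned covariance matrix gives the parameter uncertainties that would be plotted as error bars in a figure like Fig. 6.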
Clearly, the striking anomalies in both parameters, asymmetry and relaxation rate, indicate significant changes of local magnetic fields. The results as documented in Fig. 6 strongly point towards a long-range magnetic order or spin-glass behavior in GdCrTiO 5 for temperatures below 10 K. In the present compound, only the chromium moments (S = 3/2) will order, while the gadolinium moments (S = 7/2) remain paramagnetic, as indicated 15 by the bulk susceptibility. Hence, below 10 K in GdCrTiO 5 two spin species exist, one, which is ordered, and one that still fluctuates. The ordered one could be screened by the fluctuating component. It is also possible that below 10 K due to the pulse width of muons produced at ISIS, we are unable to estimate both components from the present data. For example, the multiferroic compound HoMnO 3 , 24 where Mn and Ho moments order at significantly different temperatures, did not reveal clear oscillations in the asymmetry spectra at the onset of magnetic ordering, when measured at the pulsed muon source at ISIS. However, heavily damped oscillations resulting from antiferromagnetic long-range order were captured from measurements performed with the continuous muon source at PSI (Switzerland). 24 We believe that the abrupt, jump-like decrease of λ(Τ) in GdCrTiO 5 at 10 K, probably signals a magnetic transition. However, the observed temperature dependence of λ(Τ) is not characteristic of that usually observed in magnetically long-range ordered systems. Here, one expects a rather continuous increase of λ(Τ) with lowering temperature above the long-range magnetic ordering and a rapid decrease below the magnetic ordering temperature (e.g., Ref. 24). 
Therefore, our µSR results, with respect to the temperature dependence of the relaxation rate and the absence of oscillations in the ZF spectra, strictly do not prove long-range magnetic order in GdCrTiO 5 , even though the drop in the asymmetry by nearly a factor of 2/3 clearly points towards a magnetic ground state.

D. Spin-driven ferroelectricity and magnetoelectric effects

The pyroelectric current as a function of temperature, with positive and negative electric poling fields, for various external magnetic fields from 0 to 80 kOe is shown in Fig. 7a. The data were recorded with a heating rate of 2 K/min. The resulting remnant polarization for positive poling fields is depicted in Fig. 7b for the same set of external fields. No pyrocurrent and hence no polarization is observed for external magnetic fields up to 10 kOe. A finite ferroelectric polarization appears below ~ 10 K only in the presence of a magnetic field ≥ 20 kOe and only if the sample was cooled in an external magnetic field. The ferroelectric ordering temperature slightly increases by about 1 K for higher magnetic fields (see Figs. 7a and 7b). Higher magnetic fields also enhance the saturation polarization almost continuously up to 0.4 µC/m 2 for 80 kOe (Fig. 7b). Thus, a magnetic-field-induced ferroelectric phase can be inferred, which is depicted in Fig. 7c. The polarization can be switched for reversed electrical poling fields (Fig. 7a), confirming the ferroelectric behaviour. The material is insulating (tanδ ≤ 0.0007 for T < 25 K) and we did not observe any dielectric relaxation in this system at the characteristic temperature (see Ref. 15). In addition, we reproduced the pyrocurrent behavior with a heating rate of 5 K/min (not shown here); no shift of the peak temperature is detected for different heating rates.
These results altogether clearly exclude extrinsic artefacts as being responsible for the observation of a pyrocurrent signal, such as a thermally stimulated depolarization current or a pyrocurrent due to hopping conductivity, 25 and confirm the intrinsic ferroelectricity. Interestingly, a finite value of the pyroelectric current is observed for FC conditions only. If the sample is cooled in zero external magnetic field, followed by measurements with applied magnetic fields, no pyroelectric signal can be observed within the detection limit. The blue cross symbols in Fig. 7a document this effect, showing the result of a ZFC measurement of the pyrocurrent with subsequent application of 50 kOe. Magnetocurrent measurements were performed at 4 K (Fig. 8a), which provide further experimental evidence for magnetoelectric effects in this compound. No change in the magnetocurrent is detected as a function of magnetic field if the sample is cooled in zero magnetic field, which is consistent with the T-dependent polarization. Therefore, prior to the magnetocurrent measurement, the sample was cooled to 4 K in FC condition with +80 kOe. Thus, the measurement was started in the induced ferroelectric phase (i.e., +80 kOe at 4 K) and the current was detected as a function of the external magnetic field H with continuous field sweeping. The change in remnant polarization P r (H) is calculated by integrating the current as a function of time. This is a special case where the initial state is FC +80 kOe at t = 0 s, unlike that reported for any other magnetoelectric compound until now. Therefore, the change in magnetocurrent and thus the change in polarization with H are reliable, though absolute values may differ depending on measurement conditions. The relative change in polarization ΔP r [= P r (H) - P r (H = 0)] as a function of magnetic field is plotted in Fig. 8b. The arrows in Fig. 8a denote the sequence of the magnetocurrent measurement. Reducing the magnetic field below about +40 kOe results in a decreasing magnetocurrent (Fig.
8a), which originates from a reduced polarisation based on a less strong canting of the Cr moments. With further decreasing magnetic field, at about +10 kOe, negligible change of magnetocurrent is observed (see sequence '1' in Fig. 8a). Increasing now the magnetic field value in the opposite direction up to -80 kOe gives rise to a symmetric magnetocurrent behaviour (sequence '2' in Fig. 8a). The electric polarisation switches into the opposite direction by reversing the magnetic field (c.f. Fig. 8b). Switching back into the initial state (sequence '3' and '4' in Fig. 8a) gives rise to a magnetocurrent with the opposite sign. Again, negligible change in magnetocurrent is detected between around -10 kOe and +10 kOe denoting the dielectric regime. The electric polarization curve (Fig. 8b) for the cycle -80 to +80 kOe (cycle '3' and '4') derived from the above mentioned magnetocurrent data almost overlaps with that for +80 to -80 kOe (cycle '1' and '2'), without showing any clear hysteresis (a small change in the polarization value is an artefact of the measurement). The sequence '5' to '7' (Fig. 8a) superimposes with sequence '1' to '3' respectively, which proves the intrinsic nature of the observed closed loop of the magnetocurrent signal while changing the magnetic field from +80 kOe to -80 kOe. It seems plausible that, for FC conditions (with Gd moments resulting in canting of Cr moments), a critical magnetic field (H > |10 kOe|) exists, which breaks the inversion symmetry allowing spin-driven ferroelectricity. The fluctuations of Gd moments may hamper a stable arrangement of Cr spins, however, the FC condition could help to stabilize the spin texture of Cr moments by reducing fluctuation of Gd moments. 
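The conversion from a measured current trace to a polarization change described above is a simple time integral, ΔP = (1/A) ∫ I dt, with A the electrode area. A minimal sketch with invented numbers (the paper does not state the electrode area, and the toy trace is not the measured data):

```python
# Sketch: polarization change from a current-vs-time trace by trapezoidal
# integration; current level, duration and electrode area are invented.
import numpy as np

def delta_polarization(t_s: np.ndarray, i_amps: np.ndarray, area_m2: float) -> float:
    """Total polarization change (C/m^2) from a current trace."""
    dt = np.diff(t_s)
    charge = np.sum(0.5 * (i_amps[1:] + i_amps[:-1]) * dt)   # coulombs
    return float(charge / area_m2)

t = np.linspace(0.0, 40.0, 401)         # a constant 0.1 pA for 40 s ...
i = np.full(t.size, 1e-13)
dp = delta_polarization(t, i, 10e-6)    # ... on a 10 mm^2 electrode
print(f"dP = {dp * 1e6:.2f} uC/m^2")    # 0.40 uC/m^2, the order of Fig. 7b
```

The same integral, taken cumulatively while sweeping H, yields curves of the type shown in Fig. 8b.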
Discussion and Summary

Unlike for the isostructural neodymium compound, the dc and ac magnetic susceptibilities as well as the heat capacity do not yield any signature of long-range magnetic order in GdCrTiO 5 .

The spin-driven ferroelectric behaviour of GdCrTiO 5 is closely related to the observations in the Nd member of this series, as reported by Hwang et al. 13 However, there are clear differences in the field-dependent polarization. In the case of NdCrTiO 5 , there is a linear increase of the polarization starting from zero magnetic field, 13 and ferroelectricity in the absence of a magnetic field has even been reported by Saha et al. 26 An interesting observation of the present work is the drop in the ferroelectric polarization below 10 kOe in the magnetocurrent measurements for the material cooled in FC condition below the ferroelectric ordering temperature. Surprisingly, no metamagnetic-like transition is observed in magnetization experiments. However, the weak metamagnetic transition of the Cr system is probably masked by the strong paramagnetic contributions of the Gd moments, whereas the paramagnetism of Gd may not affect the electric features. It appears that ME measurements are sometimes more sensitive in detecting such field-induced transitions than magnetic measurements, at least in this case. Here, GdCrTiO 5 exhibits strong ME coupling and significant effects of external magnetic fields on the ferroelectric behaviour; however, there is no direct one-to-one correspondence of the observed anomalies in the H-dependent isothermal magnetic and electric behaviour. This kind of unusual ME behaviour due to the complex dynamics of spins and dipoles is rare in the literature; for example, a magnetodielectric phase coexistence is observed in the spin-chain compound Ca 3 Co 2 O 6 without any fingerprints of such a phase coexistence in isothermal magnetization measurements, despite strong intrinsic magnetodielectric coupling in this complex compound. 27
The results suggest that the spin-induced ferroelectricity in RCrTiO 5 may be governed by DM interactions due to the Cr moments, significantly influenced by the R moments. It has to be noted that a finite magnetodielectric coupling (dielectric constant as a function of H, which contains higher-order coupling terms as well) exists for GdCrTiO 5 even for temperatures up to 20 K. 15 This fact could arise from magnetic exchange frustration resulting from competing exchange interactions between the Gd and Cr moments, active even above the magnetic ordering (> 10 K). For GdCrTiO 5 we speculate about a magnetic-field-induced quasi-ferromagnetic alignment of the Gd moments for T < 10 K enforcing a canting of the Cr moments, again in the presence of a magnetic field H > 10 kOe. This canting of the Cr moments allows an inverse DM interaction that breaks the inversion symmetry, leading to spin-driven ferroelectric ordering. Note that only if the system is "frozen" in a frustrated state via cooling below 10 K in an applied external magnetic field > 10 kOe do the Cr moments form a stable canted spin structure in an applied magnetic field and break the inversion symmetry that governs the ferroelectricity. While our manuscript was under review, a paper was published by Guedes et al. 28 reporting on the magnetism of GdCrTiO 5 . They demonstrated the onset of long-range magnetic order of the Gd spins at 0.9 K and suggested non-collinear antiferromagnetic ordering in this system. This recent report 28 supports our interpretation of the magnetic properties of GdCrTiO 5 . Hence, our study confirms that dielectric and pyroelectric measurements are very sensitive in capturing such weak or metastable H-induced transitions, especially in the case of strong ME coupling. In summary, the magnetodielectric compound GdCrTiO 5 has been investigated by electric polarization measurements, ESR and ZF-μSR spectroscopy.
Our results confirm significant changes of magnetic interaction below ~10 K, attributed to the onset of a possible canted magnetic order of the chromium moments. The role of the rare-earth ions on the magnetic and magnetoelectric features is also inferred. The ferroelectric polarization is observed only for external magnetic fields > 10 kOe and in field-cooled conditions, which is rare in literature. Figure 1 : 1Temperature dependent magnetic dc susceptibility of GdCrTiO 5 for various applied magnetic fields between 100 Oe and 50 kOe. The zero-field cooled and field cooled data is shown only for 100 Oe. The inset shows the magnetization at 4 K for ZFC conditions. Though the dc susceptibility χ(Τ) (= Μ/Η) of the titled compound was already reported,15 here, we will show, for the sake of completeness, the data measured in the course of the present experiments at low temperatures. Field-cooled measurements of the magnetic susceptibility in external magnetic fields between 100 Oe and 50 kOe are shown inFig. 1 and compared with the zero-field cooled susceptibility measured with 100 Oe. FC and ZFC measurements are identical within experimental uncertainty and show no indications of a bifurcation, an observation that excludes any spin-glass type of freezing of magnetic moments. For all fields, the magnetic susceptibility shows a continuous increase with decreasing temperature, characteristic of purely paramagnetic behaviour. There is no indication of any anomaly pointing towards the existence of a magnetic phase transition. The inset inFig. 1shows the magnetization in fields from -50 to + 50 kOe at 4 K. The field dependence of the magnetization is very close to an ideal Brillouin function, expected for purely paramagnetic behaviour.The real part (χ') of the ac susceptibility as a function of temperature below 20 K, measured in the absence of any dc magnetic field, using a 2 Oe amplitude for frequencies between 1 and 1139 Hz, is presented inFig. 2. 
Like the dc susceptibility, the ac susceptibility also exhibits a continuous increase on decreasing temperature, suggestive of purely paramagnetic behaviour without any indication for an onset of long-range magnetic order or spin-glass freezing. The latter facts are further exemplified by the temperature dependence of the imaginary part of the ac susceptibility (χ''), shown for 1 Hz in the inset of Fig. 2. The ac loss is always close to zero with no indication for spin order.

Figure 2: Real part of the ac susceptibility of GdCrTiO 5 as a function of temperature for various frequencies in the absence of a dc magnetic field. The inset shows the imaginary part of the ac susceptibility for 1 Hz.

Figs. 1 and 2 are prime examples of the paramagnetic behaviour of a local-moment system. However, one has to be aware that the Gd moments with spin S = 7/2 are much larger than the Cr moments with S = 3/2. Thus, Gd dominates the paramagnetic susceptibility and probably could hide effects due to (weak) magnetic ordering of Cr. To evidence possible ordering of the chromium moments, we therefore conducted experiments utilizing local probes, such as µSR and ESR.

Figure 4: (a) The double-integrated intensity I ESR (left axis) and inverse intensity 1/I ESR (right axis) are plotted as a function of temperature T for the compound GdCrTiO 5 . The red solid line shows the Curie-Weiss fit as described in the text. (b) Resonance field H res (left axis) and Landé g factor (right axis) as a function of T. The inset highlights the H res data at low temperatures. (c) Linewidth ΔH (HWHM) as a function of temperature. The inset shows the temperature derivative of ΔH, to highlight the strong anomalies at low temperatures.

Figure 5: Asymmetry of the ZF μSR spectra observed in GdCrTiO 5 at different temperatures vs. time below 20 K. The solid lines represent fits as explained in the text. In the inset, spectra taken at 20 K are compared with those taken at 100 K.
Figure 6: Temperature dependences of the fitting parameters of the exponential decay of the ZF μSR spectra in GdCrTiO 5 as discussed in the text. The data are presented on a semilogarithmic plot: (a) initial asymmetry A 0 and (b) relaxation rate λ.

It seems that an alignment of the paramagnetic Gd moments for T < 10 K in an applied external magnetic field of H > 10 kOe enforces a canting of the Cr moments, which persists only in the presence of an external magnetic field H > 10 kOe. The presence of canted Cr moments allows breaking the inversion symmetry and gives rise to the observed improper spin-driven ferroelectric effect.

Figure 7: (a) Pyroelectric current of GdCrTiO 5 as a function of temperature for positive (open symbols) and negative (solid symbols) electric poling fields for different magnetic fields from 0-80 kOe in FC conditions as described in the text. For the 0 Oe and 10 kOe FC measurements the pyroelectric current is zero, therefore only data for positive poling fields are shown for clarity reasons. A measurement at 50 kOe taken under ZFC conditions (× symbol) is shown for comparison. (b) Remnant polarization for positive electric poling fields (E = 181 kV/m) for different external magnetic fields from 0-80 kOe as defined in (a). (c) (T,H) polar phase diagram of GdCrTiO 5 . The shaded area indicates the ferroelectric phase.

Figure 8: (a) Magneto-electric current for GdCrTiO 5 as a function of magnetic field from +80 kOe to -80 kOe in FC condition at T = 4 K. Black, red, green and blue curves (numbers 1-7) represent the magnetocurrent for different cycling directions of the magnetic field, as discussed in the text. I magneto is set to zero at the beginning of the measurement, i.e. at +80 kOe, and therefore the values on the y-axis are relative changes with varying H.
(b) Relative change in the remnant polarization as a function of magnetic field at 4 K. The remnant polarization ΔP r is calculated as discussed in the text. Black and red symbols denote the two different magnetic-field directions as depicted in (a).

However, ESR and μSR investigations document significant anomalies indicating clear changes of the local magnetic properties in this system. These anomalies appear close to 10 K, the temperature where the onset of a finite ferroelectric polarization can be detected in external magnetic fields > 10 kOe. One explanation for this experimental observation via local spectroscopic probes could be the onset of magnetic order of the chromium moments at 10 K. This magnetic order of the chromium spins with S = 3/2 is detected via microscopic probes, but remains hidden in the bulk susceptibility as well as in the thermodynamic response, due to the large fluctuating paramagnetic moment of the Gd ions with S = 7/2.

Compared to NdCrTiO 5 , the magnetism of the Gd compound investigated in the course of this work seems to be strongly affected by the heavy rare-earth moment. In the neodymium compound, the chromium moments exhibit antiferromagnetic (AFM) order below 21 K, while the neodymium moments gradually order below approximately 13 K. 13 The chromium moments are aligned parallel to the crystallographic c axis, with AFM order within the ab plane and AFM stacking along the c axis. The neodymium moments exhibit AFM order with the magnetic moments aligned within the ab plane. In contrast, in the Gd compound, the magnetic order of the chromium moments is shifted to 10 K, while the Gd spins seem to fluctuate down to the lowest temperatures. If an external magnetic field is applied, a field-induced quasi-ferromagnetic alignment of the paramagnetic Gd moments influences the Cr sublattice.
It is clear that all exchange interactions of the 4f moments in this class of compounds are very weak and that the 4f moments have to align in the internal magnetic field of the 3d spins, which exhibit antiferromagnetic order. In the Gd compound the 4f exchange is further weakened due to the spin-only character of the half-filled 4f shell of the Gd ions, with negligible spin-lattice interaction. Hence, the Gd spins remain almost paramagnetic down to the lowest temperatures, even in the internal field of the ordered chromium moments. It may also be possible that stronger frustration of the interaction between the competing Gd and Cr spins plays a role in lowering the ordering temperature in GdCrTiO 5 with respect to NdCrTiO 5 : Comparing the susceptibilities of the Nd and Gd compounds, the value for the former is about a factor of 10 smaller at 20 K than in the latter. 13,15 This difference results from the large contribution of the Gd spins. In NdCrTiO 5 the Nd moments are forced to align antiferromagnetically within the ab plane between the antiferromagnetic Cr layers with their magnetic moments along the c axis. Obviously, thermal fluctuations of the larger Gd moments partially suppress the magnetic order of the Cr moments in this compound and, therefore, significantly decrease the ordering temperature compared to that of the Nd member. In NdCrTiO 5 , a finite ferroelectric polarization appears already in zero magnetic field, 13,26 whereas in GdCrTiO 5 , finite values of the pyroelectric current are observed only in the presence of external magnetic fields > 10 kOe. This is documented by the magnetic-field-dependent pyrocurrent measurements (i.e. magnetocurrent) performed in the course of this work.
The ferroelectric ordering temperature slightly increases with increasing magnetic fields for GdCrTiO 5 , whereas, magnetic field effects seem less important for the ferroelectric ordering temperature of NdCrTiO 5 .13,26 AcknowledgmentThis work was supported by the BMBF via project ENREKON 03EK3015 and partly by the DFG via the Transregional Collaborative Research Center TRR 80 (Augsburg, Munich, Stuttgart). . J T Heron, D G Schlom, R Ramesh, Appl. Phys. Rev. 121203J. T. Heron, D. G. Schlom, and R. Ramesh, Appl. Phys. Rev. 1, 021203 (2014); . M Bibes, A Barthélémy, Nat. Mater. 7425M. Bibes and A. Barthélémy, Nat. Mater. 7, 425 (2008) . Y Tokura, S Seki, N Nagaosa, Rep. Prog. Phys. 7776501Y. Tokura, S. Seki, and N. Nagaosa, Rep. Prog. Phys. 77, 076501 (2014) . H Katsura, N Nagaosa, A V Balatsky, Phys. Rev. Lett. 9557205H. Katsura, N. Nagaosa, and A. V. Balatsky, Phys. Rev. Lett. 95, 057205 (2005) . T Arima, J. Phys. Soc. Jpn. 7673702T. Arima, J. Phys. Soc. Jpn. 76, 073702 (2007) . K Singh, T Basu, S Chowki, N Mahapotra, K K Iyer, P L Paulose, E , K. Singh, T. Basu, S. Chowki, N. Mahapotra, K. K. Iyer, P. L. Paulose, and E. V. . Sampathkumaran, Phys. Rev. B. 8894438Sampathkumaran, Phys. Rev. B 88, 094438 (2013); . T Basu, P L Paulose, K K Iyer, K , T. Basu, P. L. Paulose, K. K. Iyer, K. . N Singh, S Mohapatra, B Chowki, E V Gonde, Sampathkumaran, J. Phys.: Condens. Matter. 26172202Singh, N. Mohapatra, S. Chowki, B. Gonde, and E. V. Sampathkumaran, J. Phys.: Condens. Matter 26, 172202 (2014) . G Nenert, T T M Palstra, Phys.Rev.B. 7624415G. Nenert and T. T. M. Palstra, Phys.Rev.B 76, 024415 (2007) . T Basu, V V R Kishore, S Gohil, K Singh, N Mohapatra, S Bhattacharjee, B Gonde, N P Lalla, P Mahadevan, S Ghosh, E V Sampathkumaran, Sci. Rep. 45636T. Basu, V. V. R. Kishore, S. Gohil, K. Singh, N. Mohapatra, S. Bhattacharjee, B. Gonde, N. P. Lalla, P. Mahadevan, S. Ghosh, and E. V. Sampathkumaran , Sci. Rep. 4, 5636 (2014); . 
[ "Theory of laser-induced demagnetization at high temperatures", "Theory of laser-induced demagnetization at high temperatures" ]
[ "A Manchon \nDepartment of Physics\nUniversity of Arizona\n85721TucsonAZUSA\n\nMaterials Science and Engineering\nPhysical Science and Engineering Division\nKAUST\nSaudi Arabia\n", "Q Li \nDepartment of Physics\nUniversity of Arizona\n85721TucsonAZUSA\n", "L Xu \nDepartment of Physics\nUniversity of Arizona\n85721TucsonAZUSA\n", "S Zhang \nDepartment of Physics\nUniversity of Arizona\n85721TucsonAZUSA\n" ]
[ "Department of Physics\nUniversity of Arizona\n85721TucsonAZUSA", "Materials Science and Engineering\nPhysical Science and Engineering Division\nKAUST\nSaudi Arabia", "Department of Physics\nUniversity of Arizona\n85721TucsonAZUSA", "Department of Physics\nUniversity of Arizona\n85721TucsonAZUSA", "Department of Physics\nUniversity of Arizona\n85721TucsonAZUSA" ]
[]
Laser-induced demagnetization is theoretically studied by explicitly taking into account interactions among electrons, spins and lattice. Assuming that the demagnetization processes take place during the thermalization of the sub-systems, the temperature dynamics is given by the energy transfer between the thermalized interacting baths. These energy transfers are accounted for explicitly through electron-magnons and electron-phonons interaction, which govern the demagnetization time scale. By properly treating the spin system in a self-consistent random phase approximation, we derive magnetization dynamic equations for a broad range of temperature. The dependence of demagnetization on the temperature and pumping laser intensity is calculated in detail. In particular, we show several salient features for understanding magnetization dynamics near the Curie temperature. While the critical slowdown in dynamics occurs, we find that an external magnetic field can restore the fast dynamics. We discuss the implication of the fast dynamics in the application of heat assisted magnetic recording. PACS numbers: 75.78.Jp,75.40Gb,75.70.-i Keywords: Time (ps) Normalized Magnetization (m(T)/S) J ex = 0.05eV J ex = 0.1eV J ex = 0.2eV J ex = 0.3eV J ex = 0.5eV
10.1103/physrevb.85.064408
[ "https://arxiv.org/pdf/1112.2428v1.pdf" ]
20,159,657
1112.2428
f726a206ce88ec0c7256ebb5491d7492746da8c3
(Dated: December 13, 2011)

PACS numbers: 75.78.Jp, 75.40.Gb, 75.70.-i

I.
INTRODUCTION

Laser-induced demagnetization [1,2] (LID) and heat-assisted magnetization reversal [3] (HAMR) constitute promising ways to manipulate the magnetization direction by optical means. While both LID and HAMR involve laser-induced magnetization dynamics of magnetic materials, there are several important differences. LID is usually considered an ultrafast process in which the hot electrons excited by the laser field transfer their energy to the spin system, causing demagnetization. The demagnetization time scale ranges from 100 femtoseconds to a few picoseconds. For HAMR, the laser field heats the magnetic material up to the Curie temperature so that the large room-temperature magnetic anisotropy is reduced to a much smaller value and, consequently, a moderate magnetic field is able to reverse the magnetization. The time scale for the HAMR process is about sub-nanosecond, three orders of magnitude longer than for LID.

LID observations have been carried out in a number of magnetic materials including transition metals [1,4-6], insulators [7], half-metals [8-10] and dilute magnetic semiconductors [11]. A general consensus on the laser-induced demagnetization process is that the high-energy non-thermal electrons generated by a laser field relax their energy to various low excitation states of the electron, spin and lattice systems [12]. The phenomenological model for this physical picture is referred to as the three-temperature model [1,5,9], where the three interacting sub-systems (electrons, spins, lattice) are assumed to be thermalized individually at different temperatures which are equilibrated according to a set of energy rate equations. By fitting experimental data to the model, reasonable relaxation times of the order of several hundred femtoseconds to a few picoseconds have been determined. Various microscopic theories [4,13-16] have been proposed to interpret these ultrafast time scales of electron-spin and electron-lattice relaxation.
Zhang and Hübner [13] proposed that the laser field can directly excite the spin-polarized ground states to spin-unpolarized excited states in the presence of spin-orbit coupling, i.e., the spin-flip transition leads to demagnetization during the laser pulse. In this picture, the demagnetization is instantaneous (≈ 50-150 fs). Recent numerical simulations [17] show that, owing to a few active "hot spots", instantaneous demagnetization is expected for at most a few percent of the magnetization, consistently with experimental arguments [18]. Koopmans et al. [4,5] suggested that the excited electrons lose their spins in the presence of spin-orbit coupling and impurities or phonons, through an "Elliott-Yafet"-type (EY) spin-flip scattering. Recent numerical evaluations of the EY mechanism in transition metals [14] tend to support this point of view. Alternatively, Battiato et al. [19] recently modelled such ultrafast demagnetization in terms of superdiffusive currents. Finally, numerical simulations of the ultrafast demagnetization based on the phenomenological Landau-Lifshitz-Bloch equation have been carried out successfully [20].

While these demagnetization mechanisms provide reasonable estimates of the demagnetization time scales, the theories are usually limited to temperatures much lower than the Curie temperature and/or make no direct connection to the highly successful phenomenological three-temperature model [1,5,9]. As has recently been shown experimentally [7,8], the most interesting magnetization dynamics occur near the Curie temperature. In this paper, we propose a microscopic theory of the laser-induced magnetization dynamics within the three-temperature framework and derive the equations that govern the demagnetization at arbitrary temperatures. More specifically, we predict magnetization dynamics in the critical region.

The paper is organized as follows. In Sec. II, we propose a model for LID processes. In Sec.
III, we describe the spin system by the Heisenberg model, which is solved using a self-consistent random phase approximation. In Sec. IV, the central dynamic equations for the magnetization are derived. In Sec. V, the numerical solutions of the equations are carried out, and the connection of our results with the experimental data on LID and HAMR is made in Sec. VI. We conclude the paper in Sec. VII.

II. MODEL OF LID

A. Spin loss mechanisms

One of the keys to understanding ultrafast demagnetization is to identify the mechanisms responsible for the spin memory loss. In the case of transition-metal ferromagnets, for example, the spin relaxation processes lead to complex spin dynamics due to the itinerant character of the magnetization. Elliott [21] first proposed that delocalized electrons in spin-orbit coupled bands may lose their spin under spin-independent momentum scattering events (such as electron-electron or electron-impurity interaction). This mechanism was later extended to electron-phonon scattering by Yafet and Overhauser [22]. Consequently, the spin relaxation time τ_s is directly proportional to the momentum relaxation time τ_p. Whereas the electron-electron relaxation time is of the order of a few femtoseconds [23] (fs), the electron-impurity and electron-phonon relaxation times are on the picosecond (ps) scale. In semiconductors, bulk and structural inversion symmetry breaking as well as electron-hole interactions lead to supplementary spin relaxation mechanisms, such as D'yakonov-Perel [24] and Bir-Aronov-Pikus [25], that are beyond the scope of the present study.

Relaxation processes also apply to collective spin excitations such as magnons. Whereas the electron-magnon interaction conserves the angular momentum, magnon-magnon interactions and magnon-lattice interactions in the presence of spin-orbit coupling contribute to the total spin relaxation.
While the former occurs on the magnon thermalization time scale [26] (≈ 100 fs), the latter is second order in the spin-orbit coupling and is considered to occur on the 100 ps time scale. Therefore, in a laser-induced demagnetization experiment, it is most probable that all the processes mentioned above take place during the thermalization time scale of the excited electrons and excited magnons.

B. Demagnetization scenario

To establish our model, we first separate the LID process into four steps: (i) generation of non-thermal hot electrons by laser pumping; (ii) relaxation of these hot electrons into thermalized electrons characterized by an electron temperature T_e; (iii) energy transfer from the thermalized hot electrons to the spin and lattice subsystems; (iv) heat diffusion to the environment. In the model given below, steps (i) and (ii) are taken to be infinitely fast.

In step (i), a laser pump excites a fraction of the electrons below the Fermi sea to ≈ 1.5 eV above the Fermi level. This excitation process is of the order of a few fs. The photo-induced electron transition is considered spin conserving and thus does not significantly contribute to the demagnetization, although a spin-flip electron transition could occur in the presence of spin-orbit coupling [13]. In step (ii), the strong Coulomb interaction among electrons relaxes these non-thermal high-energy electrons to form a hot electron bath which may be described by a thermalized hot-electron temperature T_e. During this electron thermalization process, strong electron-electron interaction-induced momentum scattering in the presence of spin-orbit coupling leads to the ultrafast transfer of the spin degree of freedom to the orbital one [27]. In our model, the electron thermalization is considered instantaneous, and any possible femtosecond coherent processes are disregarded [28]. Therefore, owing to ultrafast (fs) momentum scattering, the thermalized hot electrons act as a spin sink.
Under this approximation, the demagnetization itself, defined as the loss of spin angular momentum, takes place during the thermalization of the electron bath in the presence of (either intrinsic or extrinsic) spin-orbit coupling.

Following the definition of the three-temperature model, we assume that the system can be described in terms of three interacting baths composed of laser-induced hot electrons, spin excitations of the ground state (magnons) and lattice excitations (phonons). The applicability of this assumption is discussed in Sec. II D. The magnetic signal essentially comes from the collective spin excitations, and it is assumed that the laser-induced hot electrons contribute only weakly to the magnetization. Consequently, under the assumption that the spin loss occurs during the thermalization time of the electron and spin systems, the demagnetization problem reduces to tracking the energy transfer between the spin bath and the electron and phonon baths.

Our main objective is then to understand step (iii), in which the electrons at a higher temperature transfer their energy to the spin and lattice sub-systems. Under the electron-magnon interaction, the magnon spin is transferred to the electron system and is eventually lost through thermalization of the electron bath. Through interactions among electrons, spins and lattice, the entire system ultimately reaches a common temperature. Finally, heat diffusion, step (iv), expels the heat to the environment; this last step will be treated via a simple phenomenological heat diffusion equation.

To quantitatively determine the energy transfer among electrons, spins and lattice in step (iii), one needs to know not only the explicit interactions, but also the distributions of the densities of excitations (electrons, magnons and phonons). Within the spirit of the three-temperature model, we consider each sub-system (electron, spin and lattice) to be thermalized, i.e., one can define three temperatures: T_e for the electrons, T_s for the spins and T_l for the lattice. The justification of this important assumption was given in the previous section and can be summarized qualitatively as follows. 1) For hot electrons of the order of 1 eV, the electron-electron relaxation time is τ_ee ≈ 10 fs, which is about 100 times faster than the electron-spin and electron-phonon interactions [23]. 2) The lattice-lattice interaction is about one order of magnitude weaker, with a relaxation time τ_ll ≈ 100 fs [29]. 3) Multiple spin-wave processes are known to take place in ferromagnetic relaxation, leading to the so-called Suhl instabilities [26]. The corresponding relaxation time is of the order of τ_ss ∝ ℏ/(k_B T_c) ≈ 100 fs, at least for high-energy magnons [26] (for long-wavelength magnons, the lifetime could be significantly longer). Thus, it is reasonable to assume that the concept of three temperatures is approximately valid as long as the time scale is longer than sub-picoseconds.

C. Model Hamiltonian

We now propose the following Hamiltonian for LID:

$$\hat H = \sum_\mu \hat H_\mu + \hat H_{es} + \hat H_{el} + \hat H_{sl}, \qquad (1)$$

where the H_µ (µ = e, s, l) are the electron, spin and lattice Hamiltonians, and the H_µν (µ ≠ ν) are the interactions among the sub-systems. In the remainder of the article, a hat denotes an operator. Each term is described explicitly below.

The electron system is described by a free-electron model, $\hat H_e = \sum_k \epsilon_k \hat c^\dagger_k \hat c_k$, where $\hat c^\dagger_k$ ($\hat c_k$) is the electron creation (annihilation) operator. The equilibrium distribution is simply the Fermi distribution at T_e. The lattice Hamiltonian, $\hat H_l = \sum_{q\lambda} \hbar\omega^p_{q\lambda}\, \hat b^\dagger_{q\lambda} \hat b_{q\lambda}$, is modeled by simple harmonic oscillators, where $\hat b^\dagger_{q\lambda}$ ($\hat b_{q\lambda}$) is the phonon creation (annihilation) operator and λ is the polarization of the phonon. The phonon distribution at T_l is $n_{q\lambda} = [\exp(\hbar\omega^p_{q\lambda}/k_B T_l) - 1]^{-1}$.
The spin Hamiltonian is modeled by the Heisenberg exchange interaction,

$$\hat H_s = -\sum_{\langle ij\rangle} J_{ij}\,\hat{\mathbf S}_i\cdot\hat{\mathbf S}_j - g\mu_B H_{ex}\sum_i \hat S^z_i, \qquad (2)$$

where J_ij is the symmetric exchange integral, $\hat{\mathbf S}_i$ is the spin operator at site i, and H_ex is the external magnetic field applied in the z-direction. Unlike the electron and lattice Hamiltonians, the spin Hamiltonian is not a single-particle Hamiltonian, and the distribution of the spin density is neither fermionic nor bosonic. To describe the equilibrium distribution of the spin system at arbitrary temperatures, we model the equilibrium properties of the spin system in the next section.

The electron-lattice interaction H_el is taken in the standard form [29],

$$\hat H_{el} = \sum_{k,q\lambda} B_{q\lambda}\,\big(\hat c^\dagger_{k+q}\hat c_k \hat b_{q\lambda} + \hat c^\dagger_{k-q}\hat c_k \hat b^\dagger_{q\lambda}\big), \qquad (3)$$

where B_qλ is the electron-phonon coupling constant. For acoustic phonons, the coupling constant takes a particularly simple form [29],

$$B_{q\lambda} = \frac{2\epsilon_F q}{3}\sqrt{\frac{\hbar}{2MN\omega^p_{q\lambda}}}. \qquad (4)$$

Here ε_F is the electron Fermi energy and M is the mass of the ion.

The electron-spin interaction H_es is modeled by the conventional exchange interaction (the s-d Hamiltonian):

$$\hat H_{es} = -J_{ex}\sum_{j,k,k'} \hat c^\dagger_k\, e^{i k\cdot r_j}\,(\hat{\boldsymbol\sigma}\cdot\hat{\mathbf S}_j)\,\hat c_{k'}\, e^{-i k'\cdot r_j}, \qquad (5)$$

where we have assumed a constant coupling constant J_ex, and $\hat{\boldsymbol\sigma}$ is the electron spin. When one replaces $\hat{\boldsymbol\sigma}\cdot\hat{\mathbf S}_j$ by $\hat\sigma^z \hat S^z_j + \tfrac12(\hat\sigma^-\hat S^+_j + \hat\sigma^+\hat S^-_j)$, the above H_es contains two effects: the first term is responsible for the spin splitting of the conduction bands, and the second term leads to a transfer of angular momentum between the spins of the hot electrons and the spins of the ground state, i.e., spin-wave generation and annihilation. While the interaction conserves the total spin angular momentum, the thermalization process of each bath is not spin conserving, as mentioned above. Therefore, this interaction transfers energy between the electron and spin baths, which results in an effective demagnetization of the magnon bath.
Consequently, the generation of magnons by hot electrons is a key mechanism in our model (see also Ref. [6]).

Finally, the spin-lattice interaction H_sl has been attributed to spin-orbit coupling [30]. Energy and angular-momentum conservation require H_sl to contain two-magnon ($\hat a^\dagger_q \hat a_{q'}$) and two-phonon ($\hat b^\dagger_k \hat b_{k'}$) operators. Since the spin-orbit coupling is already treated as a perturbation, this process is second order in the spin-orbit coupling parameter and is expected to be rather small [30]. Thus, H_sl is much smaller than H_es and H_el, and we set H_sl = 0 throughout the rest of the paper.

To summarize our model, we consider three subsystems (electrons, spins and lattice) described by H_e, H_s and H_l, respectively. These subsystems have individual equilibrium temperatures T_e, T_s and T_l. The heat or energy transfer among them is given by the interactions H_es and H_el. To determine the kinetic equations for the three subsystems, we first establish the low-excitation properties of the spin system from H_s and relate T_s to the magnetization m(T_s).

D. Materials considerations

As stated in the introduction, laser-induced demagnetization has been observed in a wide variety of materials presenting very diverse band structures and magnetism. From the materials viewpoint, the present model makes three important assumptions: (i) laser-induced hot electrons, ground-state spin excitations and phonons can be treated as separate interacting sub-systems; (ii) there exists a direct interaction between hot electrons and collective spin excitations; (iii) the excited spin sub-system can be described in terms of spin waves.

Whereas the consideration of a separate phonon bath is common, the separation between the electron and spin populations may seem questionable. In systems where itinerant and localized electrons can be identified (such as 4f rare earths or carrier-mediated dilute magnetic semiconductors), it seems quite reasonable.
However, in typical itinerant ferromagnets such as transition metals, the magnetism arises from a significant portion of itinerant electrons. We stress that in our model the separation between electron and spin baths arises from the fact that the electrons we consider are laser-induced hot electrons near the Fermi level (in the range [ε_F − k_B T_e, ε_F + k_B T_e]), whereas the spin bath describes the magnetic behavior of electrons lying well below the Fermi level.

The concept of spin waves used in the present article is rather general and applies to a wide range of ferromagnetic materials. Although the energy dispersion may vary from one material to another, this is unlikely to have a strong influence on the main conclusions of this work. The interaction between hot electrons and magnons is actually more restrictive, since it assumes overlap between electrons near and far below the Fermi level. For example, this approach does not apply to half-metals (the electron-magnon interaction is quenched by the 100% spin polarization) or to magnetic insulators. Nevertheless, in metallic materials such as transition metals and rare earths, this interaction does not vanish and can lead to strong spin-wave generation, as demonstrated by Schmidt et al. [31] in Fe.

III. EQUILIBRIUM PROPERTIES OF THE SPIN SYSTEM

The Heisenberg model for the spin system, Eq. (2), has no exact solution even in equilibrium. At low temperature, the simplest approach is based on the spin-wave approximation, which predicts Bloch's law for the magnetization, m(T) = m_0 − B(T/T_c)^{3/2}, where T_c is the Curie temperature and B is a numerical constant [32]. As the temperature approaches the Curie temperature, Bloch's law fails. Instead, one uses a molecular mean field to model the magnetization. The resulting magnetization displays a critical relation near T_c, i.e., m(T) ∝ (1 − T/T_c)^{1/2}.
Since we are interested in modeling the magnetization over the entire range of temperature, we describe below a self-consistent random phase approximation which reproduces Bloch's law at low temperatures and the mean-field result at high temperatures.

We first recall some elementary relations for the spin operators,

$$\hat S^+_i = \hat S^x_i + i\hat S^y_i, \qquad \hat S^-_i = \hat S^x_i - i\hat S^y_i, \qquad (6)$$

$$\big[\hat S^+_i, \hat S^-_i\big] = 2\hat S^z_i, \qquad \big[\hat S^\pm_i, \hat S^z_i\big] = \mp \hat S^\pm_i, \qquad (7)$$

$$\hat S^+_i \hat S^-_i = S(S+1) + \hat S^z_i - (\hat S^z_i)^2, \qquad (8)$$

and the spin Hamiltonian, Eq. (2), can be rewritten as

$$\hat H_s = -\sum_{ij} J_{ij}\,\big(\hat S^-_i \hat S^+_j + \hat S^z_i \hat S^z_j\big) - g\mu_B H_{ex}\sum_i \hat S^z_i. \qquad (9)$$

Our self-consistent random phase approximation treats the commutator $[\hat S^+_i, \hat S^-_i] = 2\hat S^z_i \approx 2m(T)$ as a c-number, where m(T) is the thermal average of $\hat S^z_i$, to be determined self-consistently. If we introduce the Fourier transformation $\hat S^\pm_k = (1/\sqrt N)\sum_i \hat S^\pm_i e^{-i k\cdot R_i}$, the above commutator reads $[\hat S^+_k, \hat S^-_q] = 2m(T)\,\delta_{kq}$, and thus, by introducing $\hat a^\pm_k \equiv \hat S^\mp_k/\sqrt{2m(T)}$, one obtains the standard boson commutation relation $[\hat a_q, \hat a^+_{q'}] = \delta_{q,q'}$. Similarly, $[\hat H_s, \hat a_q] = \hbar\omega_q \hat a_q$, where

$$\hbar\omega_q = g\mu_B H_{ex} + 2m(T)\,\big[J(0) - J(q)\big], \qquad (10)$$

with $J(q) = (1/N)\sum_{\langle ij\rangle} J_{ij}\exp[i q\cdot(R_i - R_j)]$.

With the above bosonic approximation, one can self-consistently determine the magnetization m(T) and other macroscopic variables such as the spin energy and specific heat. A particularly simple case is spin one-half, S = 1/2, where the identity

$$\hat S^z = S - \hat S^-\hat S^+ = \frac12 - \sum_q 2m(T)\,\hat a^+_q \hat a_q \qquad (11)$$

immediately leads to the self-consistent determination of m(T),

$$m(T) = \frac12 - \frac{1}{N}\sum_q \frac{2m(T)}{e^{\beta\hbar\omega_q(T)} - 1}. \qquad (12)$$

At low temperature, one can approximately replace m(T) by 1/2 on the right-hand side of the equation, and one immediately sees that the above solution produces the well-known Bloch relation, i.e., $1/2 - m(T) \propto T^{3/2}$. Near the Curie temperature, one expands $e^{\beta\hbar\omega_q} = 1 + \beta\hbar\omega_q + \tfrac12(\beta\hbar\omega_q)^2$ and notes that ω_q is proportional to m(T) at zero magnetic field, see Eq. (10).
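The low-temperature statement can be made explicit in one line. Replacing m(T) by 1/2 in Eq. (12), using the zero-field long-wavelength dispersion implied by Eq. (10), written here as ℏω_q ≈ D_s q² with D_s a shorthand for the spin-wave stiffness (a notation introduced only for this sketch), and converting the sum over q into an integral:

```latex
\frac{1}{2} - m(T) \simeq \frac{V}{(2\pi)^3 N}\int \frac{d^3q}{e^{\beta D_s q^2}-1}
 = \frac{V}{4\pi^2 N}\left(\frac{k_B T}{D_s}\right)^{3/2}
   \int_0^\infty \frac{\sqrt{x}\,dx}{e^{x}-1}
 \;\propto\; T^{3/2},
```

where the substitution $x = \beta D_s q^2$ was used; the last integral is the constant $\Gamma(3/2)\,\zeta(3/2)$, so the deviation from saturation follows the $T^{3/2}$ Bloch law.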
By placing this expansion into Eq. (12), the zero-order term in m(T) determines the Curie temperature, and the second-order term gives the scaling $m^2(T) \propto (T_c - T)$, i.e., the mean-field result is recovered, $m(T) \propto (1 - T/T_c)^{1/2}$. Thus the self-consistent approach captures both the low- and high-temperature limiting cases. In fact, the Green's function technique [26] has been developed to justify this approximation.

For cases other than S = 1/2, the relation between $\hat S^z_i$ and the number of magnons is more complicated, because $(\hat S^z_i)^2$ is no longer constant, and thus Eq. (8) cannot immediately lead to a self-consistent equation for m(T). Instead, one needs to relate $(\hat S^z_i)^2$ to m(T) and the magnon density. Tyablikov [33] introduced a decoupling method to approximate $(\hat S^z_i)^2$ in terms of m(T) and the normalized number of magnons

$$n_0 \equiv \frac{1}{N}\sum_q \langle \hat a^+_q \hat a_q\rangle = \frac{1}{N}\sum_q \frac{1}{\exp(\beta\hbar\omega_q) - 1}. \qquad (13)$$

One then finds that, for arbitrary S, the self-consistent equation determining m(T) is

$$m(T) = \frac{(S - n_0)(1 + n_0)^{2S+1} + (1 + S + n_0)\,n_0^{2S+1}}{(1 + n_0)^{2S+1} - n_0^{2S+1}}. \qquad (14)$$

For S = 1/2, the above equation reduces to Eq. (12). The magnetic energy can be obtained similarly [34],

$$E = E_0 + \frac{S - m(T)}{2 n_0}\sum_q \frac{\hbar\omega_q(0) + \hbar\omega_q}{\exp(\beta\hbar\omega_q) - 1}, \qquad (15)$$

where E_0 is the ground-state energy and ℏω_q(0) is the spin-wave energy at T = 0. Once the internal energy is obtained, the specific heat, C_p = ∂E/∂T, may be computed numerically.

m(T_s) is uniquely determined from Eq. (14), or from Eq. (12) for S = 1/2, once the spin temperature is known. Thus, the laser-induced demagnetization depends solely on the time-dependent spin temperature T_s. Before we proceed to calculate T_s(t) or m(t), we show the solutions of Eq. (14) or Eq. (12). In Figure 1 we show the resulting magnetization and spin specific heat, with and without an external field. First, the shapes of the magnetization curves for different spins are very similar. Second, the magnetic field removes the divergence of the specific heat at the Curie temperature.
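The self-consistency condition Eq. (12) is straightforward to solve numerically. A minimal sketch for S = 1/2 on a nearest-neighbour simple-cubic lattice, with ℏ = k_B = 1; the exchange J, the small Zeeman gap h (playing the role of gμ_B H_ex and regularizing the q = 0 mode) and the grid size are illustrative choices, not parameters from the paper:

```python
import math

def m_selfconsistent(T, J=1.0, h=0.1, n=12):
    """Solve Eq. (12), m = 1/2 - (1/N) sum_q 2m/(exp(w_q/T)-1), by bisection.

    The magnon energy follows Eq. (10) with a nearest-neighbour
    J(q) = 2J(cos qx + cos qy + cos qz), so that
    w_q = h + 4 m J (3 - cos qx - cos qy - cos qz).
    """
    cs = [math.cos(2.0 * math.pi * k / n) for k in range(n)]  # BZ grid cosines

    def rhs(m):
        # right-hand side of Eq. (12) for a given trial m
        s = 0.0
        for cx in cs:
            for cy in cs:
                for cz in cs:
                    w = h + 4.0 * m * J * (3.0 - cx - cy - cz)
                    s += 2.0 * m / math.expm1(w / T)
        return 0.5 - s / n**3

    # m - rhs(m) is negative at m -> 0 and positive at m = 1/2,
    # so a root always exists in between; bisect on it.
    lo, hi = 1e-6, 0.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid - rhs(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection is used because plain fixed-point iteration of Eq. (12) becomes unstable near the Curie temperature; the returned m(T) stays pinned near 1/2 at low T and decays to a small field-induced value at high T.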
As expected, the magnetization reduces to the mean-field result near the Curie temperature and to the spin-wave approximation at low temperatures.

IV. DYNAMIC EQUATIONS

The energy or heat transfer among electrons, spins and lattice may be captured by the general rate equations

$$\frac{dE_e}{dt} = -\Gamma_{es} - \Gamma_{el}, \qquad (16)$$

$$\frac{dE_l}{dt} = \Gamma_{sl} + \Gamma_{el}, \qquad (17)$$

$$\frac{dE_s}{dt} = \Gamma_{es} - \Gamma_{sl}, \qquad (18)$$

where the E_i are the energy densities (i = e, s, l) and the energy-transfer rates Γ_ij are determined by Eq. (1). Since we have neglected the weaker interaction between spins and lattice, we set Γ_sl = 0 in the above equations. In the following, we explicitly derive the relaxation rates Γ_el and Γ_es from Eqs. (3) and (5).

A. Electron-lattice relaxation Γ_el

The energy transfer rate between electrons and phonons does not involve the spin. The Fermi golden rule applied to Eq. (3) immediately leads to

$$\Gamma_{el} = \frac{4\pi}{\hbar}\sum_{k,q}\hbar\omega^p_q\,|B_q|^2\,\delta(\epsilon_k - \epsilon_{k+q} + \hbar\omega^p_q)\,\big(n_{k+q}(1 - n_k)(1 + n^p_q) - n_k(1 - n_{k+q})\,n^p_q\big), \qquad (19)$$

where the first (second) term represents the energy transferred from (to) the electrons to (from) the lattice by emitting (absorbing) a phonon. Note that the electrons and phonons have different temperatures; otherwise, detailed balance would make the net energy transfer zero. The electron and phonon densities are given by their respective equilibrium temperatures T_e and T_l, i.e., $n_k = [\exp((\epsilon_k - \epsilon_F)/k_B T_e) + 1]^{-1}$ and $n^p_q = [\exp(\hbar\omega^p_q/k_B T_l) - 1]^{-1}$. We consider polarization-independent acoustic phonons, i.e., $\omega^p_q = v_s q$, where v_s is the phonon velocity. Replacing B_q by Eq. (4) and converting the sums into integrals gives Eq. (20), where we have defined the cut-off energy $\epsilon_q \equiv \big(q - \frac{2m}{\hbar^2}\hbar v_s\big)^2 \frac{\hbar^2}{2m}$, which comes from the δ-function, and we have introduced the maximum phonon wave number in the first Brillouin zone (the same definition holds for the magnon and Fermi wave vectors), $q_m = k_F = (6\pi^2)^{1/3}/a_0$.
By integrating over the electron energy ε, we obtain

$$\Gamma_{el} = \frac{4\pi}{\hbar}\left(\frac{2}{3}\epsilon_F\right)^2 \frac{V}{(2\pi)^4}\,\frac{m_e}{\hbar}\,\frac{m_e}{M} \int_0^{q_m} q^3\,dq\,\big(n^p_q(T_e) - n^p_q(T_l)\big)\,k_B T_e \left[\frac{\hbar v_s q}{k_B T_e} - \ln\!\left(\frac{e^{\frac{\hbar v_s q}{k_B T_e}}\, e^{\frac{\epsilon_q - \epsilon_F}{k_B T_e}} + 1}{e^{\frac{\epsilon_q - \epsilon_F}{k_B T_e}} + 1}\right)\right], \qquad (21)$$

where we have defined the Debye temperature θ = ℏv_s q_m/k_B. The last factor of Eq. (21) can be approximated by

$$\frac{\hbar v_s q}{k_B T_e} - \ln\!\left(\frac{e^{\frac{\hbar v_s q}{k_B T_e}}\, e^{\frac{\epsilon_q - \epsilon_F}{k_B T_e}} + 1}{e^{\frac{\epsilon_q - \epsilon_F}{k_B T_e}} + 1}\right) \approx \frac{\hbar v_s q}{k_B T_e}\,\Theta(2k_F - q) \qquad (22)$$

for ℏω_q ≪ k_B T_e, where Θ(x) is the step function. Therefore, the relaxation rate becomes

$$\Gamma_{el} = \frac{1}{\hbar}\left(\frac{2}{3}\epsilon_F\right)^2 \frac{9\pi}{2V}\,\frac{m_e}{M}\,\frac{\theta}{T_F}\left[G_4\!\left(\frac{T_e}{\theta}\right) - G_4\!\left(\frac{T_l}{\theta}\right)\right], \qquad (23)$$

with $G_n(x) = x^{n+1}\int_0^{1/x} t^n\,dt/(e^t - 1)$. Interestingly, for $T_e, T_l \gtrsim \theta$, the relaxation rate reduces to

$$\Gamma_{el} = \frac{1}{\hbar}\left(\frac{2}{3}\epsilon_F\right)^2 \frac{9\pi}{8V}\,\frac{m_e}{M}\,\frac{T_e - T_l}{T_F}. \qquad (24)$$

Thus, the relaxation rate is simply proportional to the difference between the electron and lattice temperatures (Γ_el ∝ T_e − T_l); this is the assumption made in the earlier three-temperature model [1].

B. Electron-spin relaxation Γ_es

The interaction between the electrons and the spins, given by Eq. (5), may be simplified by using the self-consistent random phase approximation, i.e., we replace $\hat S^z_i$ by its thermal average m(T_s) and set $\hat S^\pm_q = \sqrt{2m(T_s)}\,\hat a^\mp_q$. The electron-spin interaction can then be rewritten as

$$\hat H_{es} = -\frac{J_{ex}}{\sqrt N}\sqrt{2Sm(T_s)}\sum_{kq}\big(\hat c^\dagger_{k-q\uparrow}\hat c_{k\downarrow}\hat a_q + \hat c^\dagger_{k+q\downarrow}\hat c_{k\uparrow}\hat a^\dagger_q\big), \qquad (25)$$

where we have dropped the $m(T)\hat\sigma^z$ term, since it does not involve energy transfer between the electrons and the spins. Second-order perturbation theory immediately leads to the electron-spin relaxation rate

$$\Gamma_{es} = \frac{2\pi}{\hbar}\,\frac{2Sm(T_s)}{N}\,J_{ex}^2 \sum_{k,q}\hbar\omega_q\,\delta(\epsilon_k - \epsilon_{k-q} - \hbar\omega_q)\,\big(n_{k\downarrow}(1 - n_{k-q\uparrow})(1 + n^s_q) - n_{k-q\uparrow}(1 - n_{k\downarrow})\,n^s_q\big), \qquad (26)$$

where ω_q is the magnon frequency given by Eq. (10), the electron distribution is $n_{k\sigma} = [\exp((\epsilon_k - \epsilon_F)/k_B T_e) + 1]^{-1}$, and the magnon distribution is $n^s_q = [\exp(\hbar\omega_q/k_B T_s) - 1]^{-1}$.
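The function G_n(x) defined below Eq. (23) is elementary to evaluate numerically, and doing so makes the high-temperature limit explicit: for x ≳ 1 one has G_n(x) ≈ x/n, so G_4(T_e/θ) − G_4(T_l/θ) ≈ (T_e − T_l)/(4θ), which is exactly how Eq. (23) collapses to Eq. (24). A minimal sketch (the midpoint-rule step count is an illustrative choice):

```python
import math

def G(n, x, steps=4000):
    """G_n(x) = x**(n+1) * ∫_0^{1/x} t**n / (e**t - 1) dt  (midpoint rule).

    The integrand behaves like t**(n-1) near t = 0, so it is finite
    there for n >= 1 and the open midpoint rule needs no special care.
    """
    a = 1.0 / x
    h = a / steps
    s = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        s += t**n / math.expm1(t)
    return x**(n + 1) * s * h
```

For x not too small, expanding the integrand gives G_4(x) ≈ x/4 − 1/10 + 1/(72x), so the difference entering Eq. (23) is linear in T_e − T_l up to small corrections, reproducing the Γ_el ∝ T_e − T_l assumption of the earlier three-temperature model.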
Note that the electron sub-system is considered unpolarized, owing to the strong spin relaxation occurring during thermalization. In Eq. (26), the first (second) term represents the electron emitting (absorbing) a magnon. For long wavelengths the magnon dispersion, Eq. (10), is simply $\hbar\omega_q = \mu_B H_{ext} + \alpha k_B T_c\, q^2 a_0^2$, with α ≈ 1. Following the same procedure as in the previous section, we find

$$\Gamma_{es} = \frac{4\pi}{\hbar}\,2Sm(T_s)\,J_{ex}^2\,\frac{V}{(2\pi)^4}\,\frac{m^2}{\hbar^4} \int_0^{q_m} q\,dq\,(\hbar\omega_q)^2\,\big(n^s_q(T_e) - n^s_q(T_s)\big). \qquad (27)$$

If the magnetic field is zero, the integration over q can be carried out immediately, and we have, approximately (discarding the numerical constant α),

$$\Gamma_{es} = \frac{(6\pi^2)^{10/3} J_{ex}^2\, m^3(T_s)}{2\hbar V}\left(\frac{T_c}{T_F}\right)^2 \left[G_2\!\left(\frac{T_e}{D T_c}\right) - G_2\!\left(\frac{T_s}{D T_c}\right)\right], \qquad (28)$$

where G_n(x) has been defined below Eq. (23) and the temperature-dependent spin stiffness is $D = m(T_s)\,q_m^2 a_0^2$. An important result of this paper is that Γ_es is proportional to m³(T_s), and is thus vanishingly small near the Curie temperature. Furthermore, since T_e/T_c and T_s/T_c are always comparable to 1, the electron-spin relaxation rate given by Eq. (28) is not proportional to T_e − T_s, which is quite different from the previous three-temperature model [1].

C. Specific heats of the subsystems

As the right-hand sides of the rate equations, Eqs. (16)-(18), are expressed in terms of the three temperatures T_e, T_s and T_l, we need to relate the energy change of each system to its temperature, i.e., we define the heat capacities C_i of each subsystem through dE_i = C_i dT_i. The specific heat depends on the material details. To be specific, we consider Ni, the metal that has been investigated experimentally most extensively.

a. Specific heat of the electrons. In a free-electron picture, the specific heat of an electron gas is $C_e = \frac{\pi^2}{2} n_e k_B (T_e/T_F)\big(1 - \frac{3\pi^2}{10}(T_e/T_F)^2 - \dots\big)$, where n_e is the electron density and T_F is the Fermi temperature [35]. However, this approximation is usually poor in the case of transition metals.
In our model, we assume that the electron specific heat remains proportional to the temperature, $C_e = \gamma_e T_e$, where $\gamma_e \approx 1.5 \times 10^3\ \mathrm{J\,m^{-3}\,K^{-2}}$ is taken from experiments 36, which is smaller than the value assumed earlier 1.

b. Specific heat of the lattice. The phonon energy is derived from the Debye model, $E_p = \int d^3q\, \hbar\omega^p_q\, n^p_q(T)$. This yields $C_l = 3 N_A k_B F_D(T_D/T)$, where $N_A$ is the Avogadro number and $F_D(x) = \frac{3}{x^3}\int_0^{x} t^4 e^t/(e^t - 1)^2\, dt$ is the Debye function. This form of the lattice specific heat is consistent with Pawel et al. 36.

c. Specific heat of the spins. We determine the spin specific heat from the numerical derivative of the spin energy, Eq. (15), as explicitly calculated in Sec. III. In Fig. 1(b) and Fig. 1(d), we have already shown the temperature dependence of the spin specific heat with and without the external field.

D. Summary

We summarize below the dynamic equations that govern the time dependence of the three subsystem temperatures after the laser pumping:

$C_e(T_e)\, \frac{dT_e}{dt} = -\Gamma_{el}(T_e, T_l) - \Gamma_{es}(T_e, T_s) + P(t), \quad (29)$

$C_l(T_l)\, \frac{dT_l}{dt} = \Gamma_{el}(T_e, T_l) - \frac{T_l - T_{rm}}{\tau_l}, \quad (30)$

$C_s(T_s)\, \frac{dT_s}{dt} = \Gamma_{es}(T_e, T_s), \quad (31)$

where we have used $dE_\mu/dt = C_\mu(T_\mu)\, dT_\mu/dt$ and we have discarded the spin-phonon interaction. In Eq. (29) we have inserted $P(t)$, representing the initial laser energy transfer to the electrons, and in Eq. (30) we have included a phenomenological heat diffusion of phonons to the environment, which is set at the room temperature $T_{rm}$; this term becomes significant only on long time scales (subnanoseconds). The functions $\Gamma_{ij}$ are

$\Gamma_{el} = W_{el} \left[ G_4\!\left( \frac{T_e}{\theta_D} \right) - G_4\!\left( \frac{T_l}{\theta_D} \right) \right], \quad (32)$

$\Gamma_{es} = W_{es}\, m^3(T_s) \left[ G_2\!\left( \frac{T_e}{D T_c} \right) - G_2\!\left( \frac{T_s}{D T_c} \right) \right], \quad (33)$

where the constants $W_{el}$ and $W_{es}$ are given in Eqs. (24) and (28), and

$D = (6\pi^2)^{1/3}\, m(T_s). \quad (34)$

V. NUMERICAL RESULTS

In this Section, we numerically solve our central Eqs. (29)-(34) for a number of plausible material parameters.
Our particular focus will be on the difference between our model and the previous three-temperature model. Since the demagnetization is mainly controlled by the interaction between electrons and spins, we choose a set of different $J_{ex}$: a large $J_{ex}$ representing transition metals (e.g., Ni, Fe and Co) and a weak $J_{ex}$ for some ferromagnetic oxides and dilute magnetic semiconductors. Equations (29)-(31) are solved by using the following procedure. First, we assume that the laser instantaneously heats the electron bath to $T_e(0)$ while the spin and lattice temperatures remain at the room temperature, $T_s(0) = T_l(0) = T_{rm}$. With these initial conditions, we compute the temperatures for $t > 0$, where the laser source has been turned off, $P(t > 0) = 0$. If we only consider time scales smaller than 100 ps, we may drop the heat diffusion term in Eq. (30). In Fig. 2, we show the typical temperature profiles after low-intensity laser pumping. In general, the electron-spin interaction is stronger than the electron-phonon interaction at low temperature, and the spin and electron temperatures equilibrate within subpicoseconds. It takes an order of magnitude longer to reach equilibrium between the lattice and the electrons. Also shown in the inset is the time-dependent magnetization, which illustrates the fast demagnetization and slow remagnetization. In Fig. 3, we show the time dependence of the magnetization for different $J_{ex}$. As expected, the demagnetization time scales with the inverse of $J_{ex}$, while the remagnetization is independent of $J_{ex}$, since the latter is controlled by the electron-lattice interaction. A much more interesting case is high-intensity laser pumping. In this case, the spin temperature rises to the Curie temperature in 0.1-0.2 ps, as shown in Fig. 4.
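The integration procedure just described can be mimicked with a toy version of Eqs. (29)-(31) (an illustrative sketch of ours: constant heat capacities and couplings linearized as in Eq. (24); all numbers are arbitrary units rather than the Ni parameters of the paper):

```python
import numpy as np

# toy three-temperature model with linearized couplings and constant heat capacities
Ce, Cl, Cs = 1.0, 3.0, 0.5   # illustrative heat capacities (arbitrary units)
g_el, g_es = 1.0, 5.0        # electron-lattice / electron-spin couplings, g_es > g_el

def rhs(T):
    Te, Tl, Ts = T
    return np.array([(-g_el * (Te - Tl) - g_es * (Te - Ts)) / Ce,
                     ( g_el * (Te - Tl)) / Cl,
                     ( g_es * (Te - Ts)) / Cs])

# the laser instantaneously heats the electrons; spins and lattice start at "room temperature"
T = np.array([700.0, 300.0, 300.0])
E0 = np.dot([Ce, Cl, Cs], T)          # total energy, conserved without P(t) and heat diffusion
dt = 1e-4
for _ in range(100_000):              # forward-Euler integration
    T = T + dt * rhs(T)

print(T)                              # all three baths relax to a common temperature
print(np.dot([Ce, Cl, Cs], T), E0)    # energy bookkeeping check
```

Because the stronger electron-spin coupling equilibrates the spins with the electrons first, the slow approach to full equilibrium is set by the electron-lattice channel, mirroring the fast demagnetization / slow remagnetization described above.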
Due to the vanishingly small magnetization at the Curie temperature, the energy transfer between electrons and spins becomes negligible and thus the spin temperature stays constant for an extended period of time (a few ps). The electron temperature, however, continues to decrease due to the electron-lattice interaction, which is not affected by the dynamic slowdown of the spins. Interestingly, after the electron temperature drops below the spin temperature, the spin system begins to heat the electron system and thus the electron temperature behaves non-monotonically, as seen in Fig. 4. The dynamic slowdown of the spin temperature shown in Fig. 4 is a general property of critical phenomena. Due to the disappearance of the order parameter (the magnetization m(T) in the present case), the effective interaction reduces to zero at the critical point. In Fig. 5, we show the time interval (labeled in Fig. 5) of the critical slowdown as a function of the maximum spin temperature $T_m$ for a given laser pumping power. As expected, the critical slowdown shows a power law, $\tau_s \propto [1 - T_m/T_c]^{-\delta}$, with the exponent $\delta$ depending on $J_{ex}$. For a very high intensity of the laser pumping, $T_m$ can be very close to $T_c$ and the magnetization dynamics can be extremely slow. In the presence of an external field, however, the spin system no longer has a sharp phase transition and the critical slowdown is removed, i.e., one recovers fast magnetization dynamics. In Fig. 6, we compare the magnetization dynamics with and without the magnetic field. The magnetic field suppresses the dynamic slowdown. Finally, Fig. 7 shows the exponential dependence of $\tau_d$ on the initial electron temperature, which is directly related to the pumping laser fluence.

VI. DISCUSSION

A. Connection with experiments on LID

We now comment on the connection of our theory to the existing experimental results.
The LID experiments performed on transition-metal ferromagnets 1,4,6 are usually at low laser pumping power. In these experiments, the previous phenomenological three-temperature model provides an essential interpretation of demagnetization: the laser-induced hot electrons transfer their energy to spins and lattice. As discussed in Sec. II, in our model the demagnetization (i.e., loss of spin memory) occurs during the instantaneous thermalization of the interacting baths. Therefore, the demagnetization/remagnetization time scale is governed by energy transfer between the baths: the demagnetization time is set by the electron-spin interaction while the remagnetization time is determined by the electron-phonon interaction. For transition metals, the electron-spin interaction is at least several times larger than the electron-phonon interaction; thus, the demagnetization is faster than the remagnetization. For half-metals and ferromagnetic oxides, the demagnetization is usually longer due to a reduced electron-spin interaction. When the temperature increases, the demagnetization time can be significantly increased 8; this is due to the weakening of the effective electron-spin interaction with a reduced magnetization. As the temperature approaches the Curie temperature, Ogasawara et al. 7 observed that in all their samples the demagnetization time could be enhanced by one order of magnitude. The influence of the pump intensity on the demagnetization time can be similarly understood. As we have shown, a large pumping intensity creates high-temperature electrons which heat the spin temperature up to the Curie temperature. Thus the temperature and pumping-intensity dependence of the demagnetization involves the physics of critical slowdown.

B. Connection with HAMR

HAMR involves heating ferromagnets to an elevated temperature so that a moderate magnetic field is able to overcome the magnetic anisotropy for magnetization reversal.
Since the time scale in HAMR processes is of the order of nanoseconds, the dynamics studied here can be viewed as ultrafast, i.e., all three temperatures have already reached equilibrium for HAMR dynamics. Even for temperatures close to the Curie temperature, the dynamic slowdown remains "ultrafast" for HAMR as long as a moderate magnetic field is present. Thus, HAMR dynamics can be described on two distinct time scales: a fast dynamics within 10 picoseconds, which determines the longitudinal magnetization m(T), and a slow dynamics from subnanoseconds to a few nanoseconds, which determines the direction of the magnetization via the conventional Landau-Lifshitz-Gilbert equation. The detailed calculations for HAMR dynamics will be published elsewhere.

VII. CONCLUSION

We have proposed a microscopic approach to the three-temperature model applied to laser-induced ultrafast demagnetization. The microscopic model consists of interactions among laser-excited electrons, collective spin excitations and the lattice. Under the assumption of instantaneous spin-memory loss during the thermalization of the baths, the demagnetization problem reduces to energy transfer between the thermalized baths. A self-consistent random phase approximation is developed to model the low excitation of the spin system over a wide range of temperatures. A set of dynamic equations for the time-dependent temperatures of electrons, spins and lattice is explicitly expressed in terms of the microscopic parameters. While the resulting equations are similar to the phenomenological three-temperature model, there are important distinctions in the temperature-dependent properties. In particular, the magnon softening plays a key role in demagnetization near the Curie temperature, where a significant slowdown of the spin dynamics occurs.
We have also shown that for sufficiently high temperatures (above the Debye temperature), the dynamic properties are governed by only a few parameters: the Curie and Fermi temperatures, the electron-spin exchange integral $J_{ex}$, and the electron-phonon coupling constant $B_q$. The magnetization dynamics near the Curie temperature is rather universal. Our numerical study of these equations illustrates that, due to the reduction of the average magnetization as a function of the spin temperature, both pump intensity and sample temperature are responsible for a relatively long demagnetization (several picoseconds). An external magnetic field can suppress the critical dynamic slowdown.

In Fig. 1, the reduced magnetization m(T)/S and the specific heat as a function of the normalized temperature $T/T_c$ with [Figs. 1(a) and (b)] and without [Figs. 1(c) and (d)] the magnetic field are shown. A few general features can be readily identified.

FIG. 1: (Color online) Temperature dependence of (a) magnetization and (b) specific heat (arbitrary units) for spin = 1/2, 1, 2, 8 in the absence of the external field; temperature dependence of (c) magnetization and (d) specific heat for spin = 1/2, 1, 2, 8 in an external field $H/T_c = 0.001$.

FIG. 2: (Color online) Time dependence of the temperatures of the electrons, spins and lattice after irradiation by a low-intensity laser with $T_e(0) = 0.7T_c$, $T_s(0) = T_p(0) = T_{rm} = 0.47T_c$, and $T_c = 620$ K. The inset shows that the minimum magnetization (or maximum spin temperature) occurs at about 260 femtoseconds. The other parameters are: $J_{ex} = 0.15$ eV, $\epsilon_F = 8$ eV, $M/m = 10^5$, and $a_0 = 0.25$ nm.

FIG. 3: (Color online) Time-dependent magnetization as a function of time on a logarithmic scale for various exchange parameters at a fixed laser fluence. The parameters are the same as those of Fig. 2.

FIG. 4: (Color online) Time evolution of the three temperatures for a large laser-fluence case, $T_e(0) = 1.6$. The critical slowing down of the spin system is identified as the plateau in the figure. The inset defines a slowdown time $\tau_d$. The smaller inset shows the magnified region in the vicinity of the maximum temperature. The other parameters are the same as those in Fig. 2.

FIG. 5: (Color online) Log-log plot of $\tau_d$ versus the reduced temperature for $J_{ex} = 0.1$ and 0.2. The exponents are $\delta = 0.43$ and $\delta = 0.34$, respectively. The parameters are the same as those in Fig. 4. The dashed line is for eye guidance.

FIG. 6: (Color online) Magnetization as a function of time with and without the magnetic field. The parameters are the same as those in Fig. 4.

FIG. 7: (Color online) Log($\tau_d$) versus the initial electron temperature for two values of $J_{ex}$ with (open symbols) and without (filled symbols) the magnetic field. $T_e(0)$ is the normalized initial electron temperature.

Acknowledgments

The authors acknowledge support from DOE (DE-FG02-06ER46307) and NSF (ECCS-1127751).

References

[1] E. Beaurepaire, J.-C. Merle, A. Daunois, and J.-Y. Bigot, Phys. Rev. Lett. 76, 4250 (1996).
[2] C. D. Stanciu, F. Hansteen, A. V. Kimel, A. Kirilyuk, A. Tsukamoto, A. Itoh, and Th. Rasing, Phys. Rev. Lett. 99, 047601 (2007).
[3] W. A. Challener, C. Peng, A. V. Itagi, D. Karns, W. Peng, Y. Peng, X. M. Yang, X. Zhu, N. J. Gokemeijer, Y.-T. Hsia, G. Ju, R. E. Rottmayer, M. A. Seigler, and E. C. Gage, Nature Photonics 3, 220-224 (2009).
[4] B. Koopmans, J. J. M. Ruigrok, F. Dalla Longa, and W. J. M. de Jonge, Phys. Rev. Lett. 95, 267207 (2005); B. Koopmans, H. H. J. E. Kicken, M. van Kampen, and W. J. M. de Jonge, J. Magn. Magn. Mater. 286, 271-275 (2005).
[5] B. Koopmans, G. Malinowski, F. Dalla Longa, D. Steiauf, M. Fähnle, T. Roth, M. Cinchetti, and M. Aeschlimann, Nature Materials (2009).
[6] E. Carpene, E. Mancini, C. Dallera, M. Brenna, E. Puppin, and S. De Silvestri, Phys. Rev. B 78, 174422 (2008).
[7] T. Ogasawara, K. Ohgushi, Y. Tomioka, K. S. Takahashi, H. Okamoto, M. Kawasaki, and Y. Tokura, Phys. Rev. Lett. 94, 087202 (2005).
[8] T. Kise, T. Ogasawara, M. Ashida, Y. Tomioka, Y. Tokura, and M. Kuwata-Gonokami, Phys. Rev. Lett. 85, 1986 (2000).
[9] G. M. Muller, J. Walowski, M. Djordjevic, G.-X. Miao, A. Gupta, A. V. Ramos, K. Gehrke, V. Moshnyaga, K. Samwer, J. Schmalhorst, A. Thomas, A. Hutten, G. Reiss, J. S. Moodera, and M. Münzenberg, Nature Materials 8, 56 (2009).
[10] Q. Zhang, A. V. Nurmikko, G. X. Miao, G. Xiao, and A. Gupta, Phys. Rev. B 74, 064414 (2006).
[11] J. Wang, C. Sun, J. Kono, A. Oiwa, H. Munekata, L. Cywinski, and L. J. Sham, Phys. Rev. Lett. 95, 167401 (2005); J. Wang, I. Cotoros, K. M. Dani, X. Liu, J. K. Furdyna, and D. S. Chemla, Phys. Rev. Lett. 98, 217401 (2007).
[12] A. Kirilyuk, A. V. Kimel, and T. Rasing, Rev. Mod. Phys. 82, 2731 (2010).
[13] G. P. Zhang and W. Hubner, Phys. Rev. Lett. 85, 3025 (2000).
[14] D. Steiauf and M. Fahnle, Phys. Rev. B 79, 140401(R) (2009).
[15] W. Hübner and K. H. Bennemann, Phys. Rev. B 53, 3422 (1996).
[16] L. Cywinski and L. J. Sham, Phys. Rev. B 76, 045205 (2007); J. Wang, L. Cywinski, C. Sun, J. Kono, H. Munekata, and L. J. Sham, Phys. Rev. B 77, 235308 (2008).
[17] T. Hartenstein, G. Lefkidis, W. Hübner, G. P. Zhang, and Y. Bai, J. Appl. Phys. 105, 07D305 (2009).
[18] B. Koopmans, M. van Kampen, J. T. Kohlhepp, and W. J. M. de Jonge, Phys. Rev. Lett. 85, 844 (2000).
[19] M. Battiato, K. Carva, and P. M. Oppeneer, Phys. Rev. Lett. 105, 027203 (2010).
[20] U. Atxitia, O. Chubykalo-Fesenko, J. Walowski, A. Mann, and M. Münzenberg, Phys. Rev. B 81, 174401 (2010).
[21] R. J. Elliott, Phys. Rev. 96, 266 (1954).
[22] Y. Yafet, in Solid State Physics, edited by F. Seitz and D. Turnbull (Academic Press, New York, 1963), Vol. 14; A. W. Overhauser, Phys. Rev. 89, 689 (1953).
[23] P. M. Echenique, J. M. Pitarke, E. V. Chulkov, and A. Rubio, Chem. Phys. 251, 1 (2000); R. Knorren, G. Bouzerar, and K. H. Bennemann, J. Phys.: Cond. Matter 14, R739 (2002).
[24] M. I. D'yakonov and V. I. Perel', Zh. Eksp. Teor. Fiz. 60, 1954 (1971); Fiz. Tverd. Tela (Leningrad) 13, 3581 (1971).
[25] G. L. Bir, A. G. Aronov, and G. E. Pikus, Zh. Eksp. Teor. Fiz. 69, 1382 (1975).
[26] C. W. Hass and H. B. Callen, "Ferromagnetic relaxation and resonance line widths", in Magnetism, edited by G. T. Rado (Academic Press, New York and London, 1963).
[27] C. Boeglin, E. Beaurepaire, V. Halte, V. Lopez-Flores, C. Stamm, N. Pontius, H. A. Durr, and J.-Y. Bigot, Nature 465, 458 (2010).
[28] J.-Y. Bigot, M. Vomir, and E. Beaurepaire, Nature Physics 5, 515 (2009).
[29] D. Pines, Elementary Excitations in Solids (Westview Press, 1999); see also J. M. Ziman, Electrons and Phonons (Oxford Classic Texts, 2001).
[30] J. H. Van Vleck, Phys. Rev. 57, 426 (1940); R. D. Mattuck and M. W. P. Strandberg, Phys. Rev. 119, 1204 (1960).
[31] A. B. Schmidt, M. Pickel, M. Donath, P. Buczek, A. Ernst, V. P. Zhukov, P. M. Echenique, L. M. Sandratskii, E. V. Chulkov, and M. Weinelt, Phys. Rev. Lett. 105, 197401 (2010).
[32] N. W. Ashcroft and N. D. Mermin, Solid State Physics (Holt, Rinehart and Winston, New York, 1976).
[33] S. V. Tyablikov, Methods in the Quantum Theory of Magnetism (Plenum Press, 1967).
[34] G. V. Vasyutinskii and A. A. Kazakov, Theor. Math. Phys. 95, 450 (1993).
[35] C. Kittel, Introduction to Solid State Physics, 4th Ed. (John Wiley & Sons, New York, 1971).
[36] D. L. Connelly, J. S. Loomis, and D. E. Mapother, Phys. Rev. B 3, 924 (1971); R. E. Pawel and E. E. Stansbury, J. Phys. Chem. Solids 26, 757 (1965).
Ultrafast dynamics of entanglement in Heisenberg antiferromagnets

G. Fabiani and J. H. Mentink

Institute for Molecules and Materials (IMM), Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands

Abstract: We investigate entanglement dynamics in the antiferromagnetic Heisenberg model in two dimensions following a spatially anisotropic quench of the exchange interactions. Opposed to established results in one dimension, the magnon quasiparticles show an initial growth of entanglement dynamics that does not depend on the system size and is governed by the oscillation period of the exchange interaction. We ascribe this to the dominance of the intrinsic entanglement of short-wavelength non-propagating magnon pairs, which also leads to a competition between area-law and volume-law contributions in the entanglement dynamics. Furthermore, by adopting the neural-network quantum states, we provide numerical evidence that this behavior survives even in the presence of strong magnon-magnon interactions, suggesting new avenues for manipulating entanglement dynamics in quantum materials.

DOI: 10.1103/PhysRevB.105.094438; arXiv:1912.10845 [cond-mat.str-el]
(Dated: March 16, 2022)

I. INTRODUCTION

The study of entanglement in non-equilibrium quantum systems has gained considerable attention in recent years, fueled by the possibility to probe the dynamics of quantum correlations in systems such as cold atoms [1,2] and trapped ions [3,4], which provide experimental access to fundamental questions concerning the onset of thermalization in isolated quantum many-body systems [5-7]. The dynamics of entanglement has been studied in a wide variety of one-dimensional systems following quantum quenches [8-19].
According to a widely accepted semiclassical approach pioneered by Calabrese and Cardy [20], the post-quench dynamics can be understood as ballistic propagation of pairs of entangled quasiparticles that, as they move, correlate spatially separated regions of the system. As a consequence, entanglement grows linearly with time, with a speed determined by the highest quasiparticle velocity, setting a fundamental bound on the time scale at which thermalization can set in. In principle, recently developed ultrafast pump-probe techniques make it possible to investigate entanglement dynamics in extended quantum materials as well. However, in these systems the light-matter interaction in general breaks rotation invariance, even in the simplest electric dipole approximation, yielding homogeneous (i.e. translationally invariant) yet spatially anisotropic perturbations. For this reason, it is unclear how to link the dynamics in these systems to the isotropic quenches considered so far. Studying the entanglement dynamics in quantum materials therefore requires uncovering the non-equilibrium dynamics following spatially anisotropic excitation protocols, and this may reveal how the quasiparticle picture generalizes to higher-dimensional and spatially anisotropic cases. A prime example of spatially anisotropic control of quantum interactions is the ultrafast control of exchange interactions, which has recently emerged as a central tool for the manipulation of quantum materials [21-35]. In this case, the light-matter interaction depends on the relative orientation of the laser field polarization and the exchange bonds, resulting in homogeneous but spatially anisotropic perturbations. The class of antiferromagnetic quantum materials is particularly interesting, since antiferromagnets feature magnon entanglement even in the ground state, which manifests as magnon squeezing between the two opposite sublattices [36,37].
Intriguingly, similar two-magnon squeezing has been observed also in the time domain following ultrashort anisotropic perturbations of the exchange interaction [38], opening up new avenues to control and harness entanglement at the time scale of the exchange interaction. By focusing on the two-dimensional Heisenberg antiferromagnet, the effect of anisotropic quenches of the exchange interaction on the propagation of the excited magnons was recently analyzed in [39]. However, the dynamics of entanglement has not been investigated so far and is the main objective of this work. Interestingly, despite this being a locally interacting system with well-defined quasiparticles, we find that the entanglement of these quasiparticles themselves yields entanglement dynamics that greatly deviates from the quasiparticle picture. We provide both analytical and numerical evidence that the entanglement entropy increases and oscillates on a time scale determined by the exchange interaction, reminiscent of the dynamics of the nearest-neighbour spin correlations. For typical values of the exchange interaction, such dynamics is ultrafast, i.e. in the subpicosecond regime. We explain this by showing that the entanglement dynamics is dominated by the intrinsic entanglement of short-wavelength magnon pairs, which are favored by the anisotropic quench. Furthermore, the presence of intrinsic entanglement leads to contributions of both area-law and volume-law scaling of the entanglement dynamics with system size. By employing the recently proposed neural-network quantum states, we show that the rapid entanglement growth persists even in the presence of strong magnon-magnon interactions.

II.
MAGNON-PAIR DYNAMICS

We consider the spin-1/2 antiferromagnetic Heisenberg model on a square lattice with $N = L \times L$ spins $\hat{\mathbf S}_i = \hat{\mathbf S}(\mathbf r_i)$, with $\mathbf r_i = (x_i, y_i)$, described by the Hamiltonian

$\hat H = J_{ex} \sum_{\langle ij \rangle} \hat{\mathbf S}_i \cdot \hat{\mathbf S}_j, \quad (1)$

where $J_{ex}$ is the exchange interaction ($J_{ex} > 0$) and $\langle \cdot \rangle$ restricts the sum to nearest neighbours. To excite nonequilibrium spin dynamics we consider the following perturbation of the exchange interaction:

$\delta\hat H(t) = \Delta J_{ex}(t) \sum_{i,\boldsymbol\delta} (\mathbf e \cdot \boldsymbol\delta)^2\, \hat{\mathbf S}(\mathbf r_i) \cdot \hat{\mathbf S}(\mathbf r_i + \boldsymbol\delta), \quad (2)$

where $\mathbf e$ is a unit vector that determines the polarization of the electric field of the light pulse which causes the perturbation and $\boldsymbol\delta$ connects nearest-neighbour spins. This perturbation has been employed in several works to model the setup of impulsive stimulated Raman scattering [38,40-44], for which $\Delta J_{ex}(t) = \delta(t)$. Here instead, in order to be closer to typical setups of non-equilibrium quantum dynamics, we consider a global quench, namely we set $\Delta J_{ex}(t) = 0.1 J_{ex}\,\Theta(t)$, where $\Theta(t)$ is the Heaviside step function. Nevertheless, as previously found [39,45], the two protocols feature qualitatively similar dynamics. In the following we set $\hbar = 1$, the lattice constant $a = 1$, and we choose $\mathbf e$ along the y-axis of the square lattice.

Magnon-pair dynamics can be treated analytically in the linear spin-wave approximation (LSWT). To this end, we perform the linear Holstein-Primakoff transformations $\hat S^z_i = S - \hat a^\dagger_i \hat a_i$, $\hat S^+_i = \sqrt{2S}\,\hat a_i$, $\hat S^-_i = \sqrt{2S}\,\hat a^\dagger_i$, which yield

$\hat H = \frac{zJ_{ex}S}{2} \sum_k \left[ \hat a^\dagger_k \hat a_k + \hat a_{-k} \hat a^\dagger_{-k} - \eta\gamma_k \left( \hat a^\dagger_k \hat a^\dagger_{-k} + \hat a_k \hat a_{-k} \right) \right], \quad (3)$

where $z = 4$ is the coordination number and $\gamma_k = \frac{1}{z}\sum_{\boldsymbol\delta} e^{i\mathbf k\cdot\boldsymbol\delta}$. In this step we have added a staggered field to Eq. (1), which results in a factor $\eta$ in front of $\gamma_k$. The value of $\eta$ is then adjusted to guarantee that, for finite systems, the staggered magnetization is zero.
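To make the k-dependence of the excitation explicit, $\gamma_k$ and the magnon dispersion can be tabulated on the finite lattice (a sketch of ours; we use the standard LSWT result $\omega_k = zJ_{ex}S\sqrt{1-\gamma_k^2}$ with $\eta = 1$, whereas the text adjusts $\eta$ self-consistently and includes the Oguchi correction):

```python
import numpy as np

L, z, S, Jex = 24, 4, 0.5, 1.0
k = 2 * np.pi * np.arange(L) / L                 # allowed momenta on an L x L lattice
KX, KY = np.meshgrid(k, k, indexing="xy")
gamma = 0.5 * (np.cos(KX) + np.cos(KY))          # gamma_k = (1/z) sum_delta e^{i k.delta}
omega = z * Jex * S * np.sqrt(1.0 - gamma ** 2)  # bare LSWT dispersion (eta = 1)

# gamma_k vanishes at the zone edge k = X = (0, pi), where omega_k is maximal,
# consistent with the hierarchy of V_k favoring short-wavelength magnon pairs
print(gamma[L // 2, 0], omega[L // 2, 0], omega.max())
```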
This modified spin-wave theory [46] restores the sublattice invariance of the Heisenberg Hamiltonian (otherwise broken in the conventional spin-wave treatment), and in a previous work was found to yield accurate results for the ground-state entanglement of the 2D Heisenberg model [47]. The Hamiltonian Eq. (3) can be diagonalized via a Bogoliubov transformation $\hat\alpha_k = u_k \hat a_k - v_k \hat a^\dagger_{-k}$, yielding

$\hat H + \delta\hat H(t) = \sum_k \left[ \frac{1}{2}\left(\omega_k + \delta\omega_k\right)\left(\hat\alpha^\dagger_k \hat\alpha_k + \hat\alpha_{-k} \hat\alpha^\dagger_{-k}\right) + V_k \left(\hat\alpha^\dagger_k \hat\alpha^\dagger_{-k} + \hat\alpha_k \hat\alpha_{-k}\right) \right]. \quad (4)$

Here $\omega_k$ is the single-magnon dispersion renormalized by the Oguchi correction [48], while $\delta\omega_k$ and $V_k$ are proportional to $\Delta J_{ex}(t)$ and depend on the details of the perturbation; see Appendix A for further information. The first term describes the bare magnon spectrum, which is renormalized due to the perturbation of $J_{ex}$. The second term originates only from Eq. (2) and is responsible for the creation and annihilation of pairs of counterpropagating magnons. We want to emphasize that the spin-flip excitation of Eq. (2) is homogeneous (i.e. translation invariant) in the lattice space, but highly localized to nearest-neighbour bonds. This favours the excitation of high-k magnons, which is reflected in a hierarchy of $V_k$ from a vanishing value at the center of the Brillouin zone to the largest value at the edge, where $\mathbf k = X \equiv (0, \pi)$.

III. ENTANGLEMENT OF MAGNON-PAIRS

To quantify the entanglement dynamics following the quench we employ the second-order Renyi entropy, Eq. (5) below. In the linear spin-wave approximation the Hamiltonian is quadratic, and this allows the reduced density matrices to be expressed solely in terms of single-particle correlation functions, yielding a simple procedure to evaluate $S_2(\rho_A)$ [49].
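As a minimal single-mode illustration of this correlation-function route (our own example, using the f, g, X, P conventions of Eqs. (6)-(8) below): one mode of a two-mode squeezed vacuum is a thermal state with $\bar n = \sinh^2 r$, so $f = \bar n + 1/2$ and $g = 0$, and the covariance-matrix recipe should reproduce the known value $S_2 = \ln(2\bar n + 1) = \ln\cosh 2r$:

```python
import numpy as np

def renyi2(X, P):
    """S2 = sum_p ln(2 zeta_p), with zeta_p the square roots of the eigenvalues of Q = X.P."""
    zeta = np.sqrt(np.linalg.eigvals(X @ P).real)
    return float(np.sum(np.log(2.0 * zeta)))

r = 0.8                        # squeezing parameter (illustrative)
nbar = np.sinh(r) ** 2         # mean occupation of the reduced thermal state
f = np.array([[nbar + 0.5]])   # f = <b† b> + 1/2 for the single reduced mode
g = np.array([[0.0]])          # no anomalous correlations survive the partial trace
X, P = f + g, f - g

print(renyi2(X, P), np.log(np.cosh(2 * r)))   # the two values coincide
```

For the vacuum (f = 1/2, g = 0) the same recipe gives zeta = 1/2 and hence zero entropy, which is the lower bound quoted below Eq. (8).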
$S_2(\rho_A) = -\log \mathrm{Tr}\,\rho_A^2 = -\log \mathrm{Tr}\,\rho_B^2 = S_2(\rho_B). \quad (5)$

Specifically, we define the matrices $X_{ij} = f(\mathbf r_i - \mathbf r_j) + g(\mathbf r_i - \mathbf r_j)$ and $P_{ij} = f(\mathbf r_i - \mathbf r_j) - g(\mathbf r_i - \mathbf r_j)$, with

$f(\mathbf r_i - \mathbf r_j) = \tfrac{1}{2}\langle \hat b^\dagger_i \hat b_j + \hat b_i \hat b^\dagger_j \rangle, \quad (6)$

$g(\mathbf r_i - \mathbf r_j) = \tfrac{1}{2}\langle \hat b^\dagger_i \hat b^\dagger_j + \hat b_i \hat b_j \rangle, \quad (7)$

where $\mathbf r_{i,j} \in A(B)$ and $\hat b_i, \hat b^\dagger_i$ are the annihilation/creation operators of a chosen basis. If $\zeta_p^2$ are the eigenvalues of the matrix $Q = X \cdot P$, then the Renyi entropy can be evaluated from

$S_2(\rho_{A(B)}) = \sum_p \ln 2\zeta_p, \quad (8)$

with $\zeta_p \geq 1/2$. While $f(\mathbf r_i - \mathbf r_j)$ and $g(\mathbf r_i - \mathbf r_j)$ can be evaluated analytically, the eigenvalues of $Q$ have to be evaluated numerically for the system sizes considered. In the following we calculate the Renyi entropy for three different partitions of the square lattice. This is done both in the Holstein-Primakoff and in the Bogoliubov basis, by choosing the corresponding bosons in Eqs. (6)-(7). The former is closely related to the entanglement in the spin basis of Eqs. (1)-(2), while the latter measures the magnon entanglement, for which it is possible to obtain an analytical expression for $S_2(\rho)$ that will be used to interpret the results.

The dynamics of the entanglement entropy in the Holstein-Primakoff basis, $S^{HP}(t) = S_2^{HP}(\rho_A(t)) - S_2^{HP}(\rho_A(0))$, is shown in Figs. 1(a)-(c) for three different partitions of the lattice and L = 16, 20, 24. The striking feature that emerges is that the entanglement dynamics greatly deviates from the evolution expected by a straightforward application of the quasiparticle picture, which predicts an entanglement growth on the time scale of $L/(2v_{max})$, where $v_{max} = d\omega_k/dk|_{k=0} \approx 1.64\,J_{ex}$ corresponds to the highest magnon group velocity. Here, instead, the initial growth happens in a time window independent of system size and is followed by periodic oscillations at a frequency determined only by the exchange interaction. In particular, for the partition shown in Fig.
1(a), the dynamics of entanglement resembles the oscillations of the nearest-neighbour spin correlations C_x(t) = ⟨Ŝ(r_i)(t) · Ŝ(r_i + δ_x)(t)⟩ (grey dotted line), while for the other partitions S^HP oscillates at twice the frequency of C_x(t). This suggests that the short-wavelength magnons at the edge of the Brillouin zone, which dominantly contribute to the dynamics of C_x(t), play a prominent role in the time-evolution of the entanglement entropy. To further support the origin of this rapid entanglement dynamics and its relation with high-k magnons, we next consider the entanglement directly in terms of the Bogoliubov bosons. In this case the initial state is the vacuum state and therefore carries no entanglement. This setting is thus closer to conventional quench scenarios in one dimension. Nevertheless, the dynamics of the entanglement entropy S^bog(t) (Fig. 1(d,f)) still deviates from the quasiparticle picture and closely resembles the corresponding dynamics in the HP basis. Importantly, in this basis an analytical expression can be obtained for the checkerboard partition of Fig. 1(f). This relies on the fact that the counter-propagating magnons of each excited pair originate from opposite sublattices, corresponding to sublattices A and B of the checkerboard partition [38, 42], and the reduced density matrix for one sublattice can be obtained by tracing out the degrees of freedom of one of the two bosons. This can easily be done by restricting the Hilbert space to a collection of independent two-level systems, each of them encompassing the vacuum state and a state with a pair of magnons [50]. The wavefunction of the entire system can then be written as |ψ(t)⟩ = ⊗_k |ψ_k(t)⟩, with |ψ_k(t)⟩ = c_k(t) |0_k 0_{−k}⟩ + d_k(t) |1_k 1_{−k}⟩. The time-dependent coefficients c_k(t) and d_k(t) are found by solving the Schrödinger equation i∂_t |ψ(t)⟩ = (Ĥ + δĤ) |ψ(t)⟩ with the initial conditions c_k(0) = 1 and d_k(0) = 0 (see Appendix B).
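The Schrödinger dynamics of a single two-level pair can be integrated directly. The sketch below is not the paper's code: it freezes ω_k + δω_k and V_k to illustrative constants w and V, propagates i∂_t ψ = H_2 ψ with a fourth-order Runge-Kutta step, and checks that the pair occupation oscillates as (V/a_k)² sin²(a_k t) with a_k = √(w² + V²), the closed-form result derived in Appendix B.

```python
import numpy as np

# basis {|0_k 0_-k>, |1_k 1_-k>}; w = omega_k + delta_omega_k, V = V_k
w, V = 1.0, 0.4                         # illustrative frozen values
H2 = np.array([[0.0, V], [V, 2.0 * w]], dtype=complex)

def rhs(y):
    return -1j * (H2 @ y)               # i dpsi/dt = H2 psi

dt, tmax = 1e-3, 10.0
psi = np.array([1.0, 0.0], dtype=complex)   # c_k(0) = 1, d_k(0) = 0
ts, ds = [], []
t = 0.0
while t < tmax:
    k1 = rhs(psi); k2 = rhs(psi + 0.5*dt*k1)
    k3 = rhs(psi + 0.5*dt*k2); k4 = rhs(psi + dt*k3)
    psi = psi + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
    t += dt
    ts.append(t); ds.append(abs(psi[1])**2)
ts, ds = np.array(ts), np.array(ds)

a = np.hypot(w, V)                      # a_k = sqrt(w^2 + V^2)
assert abs(np.vdot(psi, psi).real - 1.0) < 1e-8     # unitarity preserved
assert np.allclose(ds, (V/a)**2 * np.sin(a*ts)**2, atol=1e-6)

# Renyi entropy of one mode, Eq. (9): bounded by ln 2, period pi/a_k
S = -np.log((1.0 - ds)**2 + ds**2)
```

The entropy S oscillates at frequency 2a_k, the same frequency at which |d_k|² returns to zero, which is the microscopic origin of the exchange-scale oscillations discussed in the main text.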
With this ansatz, the second order Renyi entropy between the two modes of each two-level system reads

S^bog_k(t) = − ln ( |c_k(t)|⁴ + |d_k(t)|⁴ ), (9)

and the entanglement of the entire system is given by S^bog_{2L}(t) = Σ_k S^bog_k(t), where "2L" indicates that the Renyi entropy is calculated in the two-level system approximation. The dynamics of S^bog_{2L}(t) is shown in Fig. 2(b), and we find perfect overlap with the calculation performed using Eq. (8) with Bogoliubov bosons. However, we can now employ Eq. (9) to gain an analytic understanding of the entanglement dynamics. In particular, we note that at each time the largest terms in the sum are those given by the zone-edge magnons with k ≈ X (see Fig. 2(c)), which follows from the dominance of V_k in the vicinity of the zone edge. Moreover, due to a Van Hove singularity in the magnon density of states at the zone edge, the number of short-wavelength magnons greatly exceeds that of long-wavelength magnons. As a result, the entanglement dynamics is fully dominated by the oscillations of the zone-edge magnon entanglement and can be approximated as S^bog_{2L}(t) ≈ −D_X ln(|c_X|⁴ + |d_X|⁴), where D_X is the amplitude of the magnon density of states at k ≈ X. Similar dynamics is found as well in Fig. 1(c), since for these wave vectors the Bogoliubov transformation is nearly the identity [42]. Pairs of long-wavelength magnons, which determine the light-cone dynamics of correlations, also contribute to the growth of entanglement, but their contribution is eclipsed by the intrinsic entanglement dynamics of each high-energy magnon pair. Therefore, our results are not in contradiction with the quasiparticle picture, but reveal a new scenario where the spreading of correlations and the growth of entanglement originate from excitations with different energy scales. Interestingly, we find that the scaling of entanglement is also altered by the localized character of the spin-flip excitation considered. In Fig.
1(a) we observe that the entanglement dynamics follows an area law, while a volume law is found in the other cases. This can be qualitatively explained as follows. From the analytical calculation above, we expect the entanglement to scale with the number of magnons in close vicinity to the zone edge, and this increases with the system size, resulting in volume-law entanglement dynamics. At the same time, the dominance of high-k magnons causes exceptionally large nearest-neighbour correlations in the spin basis, which suggests an important area-law contribution to S^HP. Due to the symmetry of the perturbation [39], the nearest-neighbour correlations along orthogonal directions of the lattice oscillate out of phase and therefore yield opposite contributions to S_2(ρ_A) (see Appendix C; similar results are also found in photo-doped Mott insulators [51]). The area-law contribution from boundary correlations vanishes for partitions with an equal number of correlations along the x and y bonds, but can even dominate when an excess is present, which explains the qualitative differences between the scaling found in Fig. 1(a) and Fig. 1(b-f). In Appendix C we show that for larger system sizes (L ∼ 120) the volume-law contribution becomes visible also in the geometry studied in Fig. 1(a). This heuristic argument is not directly transferable to Fig. 1(d), since the magnon entanglement vanishes in the ground state. Instead, in this case we observe a cancellation of the area-law contributions coming from the nearest-neighbour correlations.

IV. EFFECT OF MAGNON-MAGNON INTERACTIONS

So far we have considered the dynamics of entanglement only for non-interacting magnon quasiparticles.
Although this is generally a good description at long wavelengths, the results above reveal a dominance of short-wavelength magnons, for which magnon-magnon interactions can significantly influence the dynamics of correlations, especially in the limit of strong quantum fluctuations that we consider here. To investigate whether magnon-magnon interactions qualitatively change the entanglement dynamics, we numerically solve Eqs. (1)-(2) with the recently proposed neural-network quantum states (NQS) [52], which were previously adopted to investigate the spreading of correlations in the same setup employed here [39, 45]. In this approach, the wavefunction of the system is approximated with a restricted Boltzmann machine:

ψ_M(σ) = e^{i Σ_i a_i S^z_i} × Π_{i=1}^{M} 2 cosh( b_i + Σ_j W_ij S^z_j ),

where {a_i, b_i, W_ij} are the variational parameters, whose number is N_var = α N² + α N + N, with α = M/N. These are trained by means of variational Monte Carlo techniques and are time-evolved by employing the time-dependent variational principle [53]. To evaluate the entanglement dynamics with NQS, we exploit the fact that Tr ρ_A² can be expressed as the expectation value of a swap operator between two copies of the system, which can be computed with Monte Carlo techniques [54]. This allows an efficient computation of the second-order Renyi entropy. The dynamics of S_2(t) is shown in Fig. 3 for system sizes up to 24×24 spins. Similarly to what is observed within LSWT, the time-scale of the entanglement growth is independent of system size, with the first peak appearing in concomitance with the first peak of the nearest-neighbour correlations. The rise of entanglement is slightly delayed with respect to LSWT, in close agreement with [39]. Such a renormalization is consistent with what is found in spontaneous Raman spectroscopy [55-57] and follows from the appearance of a quasi-bound spin-flip state due to magnon-magnon interactions.
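The swap-operator identity behind the Monte Carlo evaluation of Tr ρ_A² [54] can be verified exactly on a toy system. The 4-spin random state below is an illustrative stand-in for the NQS wavefunction; in the actual Monte Carlo scheme the quadruple sum is sampled stochastically over two independent copies rather than evaluated exhaustively.

```python
import numpy as np

rng = np.random.default_rng(0)

# random pure state of 4 spins; A = first two spins, B = last two,
# stored as psi[a, b] = <a, b|Psi> with dim(H_A) = dim(H_B) = 4
psi = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
psi /= np.linalg.norm(psi)

# direct route: rho_A = Tr_B |Psi><Psi|, then purity Tr rho_A^2
rho_A = psi @ psi.conj().T
purity_direct = np.trace(rho_A @ rho_A).real

# swap route: Tr rho_A^2 = sum psi(a,b) psi(c,d) psi*(c,b) psi*(a,d),
# i.e. the expectation value of Swap_A between two copies of the state
purity_swap = np.einsum('ab,cd,cb,ad->', psi, psi,
                        psi.conj(), psi.conj()).real

assert abs(purity_direct - purity_swap) < 1e-12
assert 0.25 <= purity_direct <= 1.0     # bounds for dim(H_A) = 4
```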
Moreover, as can be observed for the largest system, the oscillations are strongly damped with time, as opposed to LSWT. Although the NQS results are limited to a small time-window, the relaxation of entanglement may be a signature of thermalization, triggered by the strong magnon-magnon interactions present in the model. The dynamics of entanglement shown in Fig. 3 depends weakly on system size, and therefore does not follow the area law found within LSWT for the same geometry. Understanding whether this is an artifact of the numerical approximation (due to the finite system size and/or the small α employed, especially for the largest system size) or an effect of magnon-magnon interactions requires more elaborate investigations at even larger systems than considered here. However, we emphasize that the nearest-neighbour correlations that dominate the entanglement entropy are already converged.

V. CONCLUSION

We evaluated the entanglement dynamics after a spatially anisotropic quench of the exchange interaction in 2D Heisenberg antiferromagnets. We found that the entanglement is determined by the localized zone-edge magnons and evolves periodically with a frequency solely determined by the exchange energy, independent of the system size. By employing neural-network quantum states, we numerically found that the effect of magnon-magnon interactions is two-fold: at short times they delay the entanglement growth, in agreement with the renormalization of the two-magnon mode observed in Raman spectroscopy; at longer times, magnon-magnon interactions lead to a quick relaxation of the oscillations. Whether this is a signature of thermalization is an open question that deserves further investigation. We notice that oscillations of the entanglement entropy and/or the absence of linear growth have been found in other works in one dimension [58-61]. For instance, in [58] it is shown that meson-like quasiparticles can appear in the post-quench dynamics of the Ising chain.
In this case, confinement leads to the suppression of light-cone spreading, with correlations and entanglement oscillating at frequencies compatible with the masses of the excited mesons. The present case, however, differs from this scenario, as domain walls are absent and there is no suppression of the light cone [39]. Moreover, we found that the dynamics of entanglement is closely related to the dynamics of the nearest-neighbour correlations. The latter are accessible in state-of-the-art time-resolved Raman scattering experiments [38, 62, 63], thus providing a direct way to access the dynamics of entanglement in quantum materials. To conclude, we notice that the quench employed here is relatively weak. Within linear spin wave theory, increasing ΔJ_ex does not lead to a qualitative difference in the time-evolution, as the dynamics remains dominated by the zone-edge magnon pairs and therefore features oscillations at the frequency of the exchange interaction. Moreover, although magnon-magnon interactions renormalize the magnon-pair mode frequency, this qualitative picture is expected to survive even for stronger quenches beyond LSWT. Naturally, distinct physics is expected for very strong quenches to totally different Hamiltonians, such as quenches to a set of 1D chains with negligible inter-chain interaction. We leave this for future studies.

In this appendix we give further details concerning the expansion of Eqs. (1)-(2) of the main text in terms of magnon operators within the linear spin wave approximation. In particular, we consider the modified spin wave theory approach of [46]. This gets rid of spurious divergences on finite lattices by restoring the sublattice invariance of the Heisenberg Hamiltonian, otherwise broken in conventional spin wave treatments. To this end, we add a staggered field to Eq.
(1) of the main text, of the form

Ĥ_s = −h Σ_{r_i} e^{iπ·r_i} Ŝ^z_{r_i}, (A1)

where e^{iπ·r_i} = +1 (−1) if r_i belongs to sublattice A (B) of the checkerboard decomposition of the square lattice. This term is treated in a variational fashion to guarantee that the sublattice invariance is restored, as explained below. Moreover, in the following we perform a unitary transformation on the spin operators, consisting of a π rotation about the y-axis of sublattice A. This yields the transformed operators

S̃^z_i = e^{iπ·r_i} Ŝ^z_i, S̃^x_i = e^{iπ·r_i} Ŝ^x_i, S̃^y_i = Ŝ^y_i. (A2)

This transformation allows us to define only one species of boson operators. Here we consider the first-order Holstein-Primakoff transformation

S̃^z_i = S − â†_i â_i, S̃^+_i = √(2S) â_i, S̃^−_i = √(2S) â†_i. (A3)

It is convenient to work in momentum space, where the Holstein-Primakoff bosons are expressed as

â_k = (1/√N) Σ_i e^{−ik·r_i} â_i, â_i = (1/√N) Σ_k e^{ik·r_i} â_k, (A4)

where the i-sum runs over the full lattice and the k-sum over the full Brillouin zone. With these transformations, Ĥ_T = Ĥ + Ĥ_s becomes (up to constant terms)

Ĥ_T = (z J_ex S / 2) Σ_k [ (â†_k â_k + â_{−k} â†_{−k}) − γ_k (â†_k â†_{−k} + â_k â_{−k}) ] + (h/2) Σ_k (â†_k â_k + â_{−k} â†_{−k}),

where z = 4 is the coordination number and γ_k = (1/z) Σ_δ e^{ik·δ}. This Hamiltonian is diagonalized by a Bogoliubov transformation α̂_k = cosh θ_k â_k − sinh θ_k â†_{−k}, where tanh 2θ_k = η γ_k and η = (1 + h/(z J_ex S))^{−1}, yielding

Ĥ_T = (1/2) Σ_k ω_k (α̂†_k α̂_k + α̂_{−k} α̂†_{−k}), (A5)

with

ω_k = z S J_ex η^{−1} √(1 − (η γ_k)²). (A6)

In the spin wave calculation of the main text we employed the Oguchi correction to the single-magnon spectrum [48], given by the renormalization ω_k → Z_c ω_k, where Z_c = 1 + (1/2S) (1/N) Σ_k [1 − √(1 − (η γ_k)²)] ≈ 1.158. This captures the simplest effect of magnon-magnon interactions. The perturbation δĤ(t) can be written in terms of the same bosons. This basis is convenient because it allows us to express the initial state at t = 0 as a vacuum state.
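The numbers quoted above can be reproduced on a finite momentum grid. The sketch below assumes h = 0 (η = 1), S = 1/2, z = 4 and J_ex = 1 (illustrative setup, not the paper's code): it evaluates the Oguchi factor Z_c ≈ 1.158 from the Brillouin-zone sum and recovers the maximal group velocity v_max ≈ 1.64 J_ex used in the main text.

```python
import numpy as np

S, z, J = 0.5, 4, 1.0
L = 200                                   # linear grid size (assumption)
k = 2.0*np.pi*np.arange(L)/L - np.pi      # full Brillouin zone
kx, ky = np.meshgrid(k, k, indexing='ij')
gamma = 0.5*(np.cos(kx) + np.cos(ky))     # gamma_k = (1/z) sum_d e^{ik.d}

omega = z*S*J*np.sqrt(1.0 - gamma**2)     # Eq. (A6) at eta = 1
assert abs(omega.max() - z*S*J) < 1e-6    # band maximum where gamma_k = 0

# Oguchi factor, Z_c = 1 + (1/2S)(1/N) sum_k [1 - sqrt(1 - gamma_k^2)]
Zc = 1.0 + (1.0/(2.0*S))*np.mean(1.0 - np.sqrt(1.0 - gamma**2))
assert abs(Zc - 1.158) < 0.01

# group velocity at k -> 0 along x: omega ~ sqrt(2) * Zc * J * |k|
dk = 1e-4
g = 0.5*(np.cos(dk) + 1.0)
v_max = Zc*z*S*J*np.sqrt(1.0 - g**2)/dk
assert abs(v_max - 1.64) < 0.02
```

The renormalized dispersion is isotropic at long wavelength, so the same v_max is obtained along any direction.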
However, as a consequence the perturbation is not diagonal in this basis. Up to a constant term, we obtain

δĤ(t) = (1/2) Σ_k [ δω_k (α̂†_k α̂_k + α̂_{−k} α̂†_{−k}) + V_k (α̂†_k α̂†_{−k} + α̂_k α̂_{−k}) ], (A7)

with

δω_k = z S ΔJ_ex(t) (1 − η ξ_k γ_k) / √(1 − (η γ_k)²), V_k = −z S ΔJ_ex(t) (ξ_k − η γ_k) / √(1 − (η γ_k)²), (A8)

where we defined ξ_k = (1/2) Σ_δ (e·δ)² e^{ik·δ}. The value of h is adjusted such that the staggered magnetization m_z = Σ_{r_i} e^{iπ·r_i} ⟨Ŝ^z_{r_i}⟩ = 0, in order to restore the sublattice invariance of Eqs. (1)-(2). This is obtained by requiring

(1/2N) Σ_k [ ( 2⟨K̂^z_k⟩ − η γ_k ⟨K̂^+_k + K̂^−_k⟩ ) / √(1 − (η γ_k)²) ] − 1/2 = S, (A9)

where we have defined the magnon-pair operators [42]

K̂^z_k = (1/2)(α̂†_k α̂_k + α̂_{−k} α̂†_{−k}), K̂^+_k = α̂†_k α̂†_{−k}, K̂^−_k = α̂_k α̂_{−k}.

Note that ⟨K̂^z_k⟩ and ⟨K̂^±_k⟩ are time-dependent, and therefore η acquires a time-dependence through Eq. (A9), which has to be solved self-consistently at each step of the time-evolution. The dynamics of ⟨K̂^z_k⟩ and ⟨K̂^±_k⟩ is found by numerically solving the equations of motion

d⟨K̂^z_k⟩/dt = i V_k ( ⟨K̂^−_k⟩ − ⟨K̂^+_k⟩ ), (A10)
d⟨K̂^±_k⟩/dt = ±2i (ω_k + δω_k) ⟨K̂^±_k⟩ ± 2i V_k ⟨K̂^z_k⟩, (A11)

together with Eq. (A9) and the initial conditions ⟨K̂^z_k⟩ = 1/2 and ⟨K̂^±_k⟩ = 0. The solutions of these equations are qualitatively similar to the corresponding solutions for h = 0 found in [39]. The dynamics of the correlations is given by C(R, t) = (2S/N) Σ_k C_k, where

C_k = cos(k·R) [ ( ⟨K̂^z_k⟩ + (η γ_k / 2) ⟨K̂^+_k + K̂^−_k⟩ ) / √(1 − (η γ_k)²) − 1/2 ], (A12)

for R connecting spins in the same sublattice; a similar expression can be obtained when R connects spins in different sublattices.

Appendix B: Two-level system approximation

The calculations of the main text based on the two-level system approximation are reviewed here and complemented with further details. This approximation assumes that the system can be regarded as a collection of independent two-level systems, each of them consisting of a superposition of the vacuum state and a state describing a pair of magnons with wavevectors {k, −k}.
This yields the following ansatz for the wavefunction: |ψ(t)⟩ = ⊗_k |ψ_k(t)⟩, where

|ψ_k(t)⟩ = c_k(t) |0_k 0_{−k}⟩ + d_k(t) |1_k 1_{−k}⟩. (B1)

All the time-dependence of the wavefunction is captured by the coefficients c_k(t) and d_k(t). At t = 0 the system is in the ground state of Eq. (A5), which is the vacuum state of the magnon operators α̂_k, and therefore c_k(0) = 1 and d_k(0) = 0. The time-evolution of each |ψ_k(t)⟩ can be found by projecting the Schrödinger equation onto the states |0_k 0_{−k}⟩ and |1_k 1_{−k}⟩. This yields the following set of equations:

i (d/dt) c_k(t) = V_k d_k(t), (B2)
i (d/dt) d_k(t) = V_k c_k(t) + 2(ω_k + δω_k) d_k(t). (B3)

These coupled equations can be solved exactly, yielding

c_k(t) = [ cos(a_k t) + i ((ω_k + δω_k)/a_k) sin(a_k t) ] e^{−i(ω_k + δω_k)t},
d_k(t) = −i (V_k/a_k) e^{−i(ω_k + δω_k)t} sin(a_k t),

with a_k = √((ω_k + δω_k)² + V_k²). The form of the wavefunction Eq. (B1) suggests that the magnons of each two-level system are highly entangled. Following the main text, we calculate the second order Renyi entropy between the mode with wavevector +k and the mode with wavevector −k. To this purpose, we introduce the density matrix ρ_k = |ψ_k⟩⟨ψ_k|; the reduced density matrix ρ̃_k of the mode with wavevector −k is then obtained by tracing out the first boson (with wavevector +k) from ρ_k:

ρ̃_k = ⟨0_k|ρ_k|0_k⟩ + ⟨1_k|ρ_k|1_k⟩ = |c_k(t)|² |0_{−k}⟩⟨0_{−k}| + |d_k(t)|² |1_{−k}⟩⟨1_{−k}| = diag( |c_k(t)|², |d_k(t)|² ). (B4)

Note that we could alternatively trace out the boson with wavevector −k; this would yield the same reduced density matrix. Finally, we get the second order Renyi entropy, Eq. (B5) below. The nearest-neighbour correlations along the x and y directions of the lattice oscillate out of phase, as expected from the symmetry of the perturbation [39] (see Fig. 4). Note that the nearest-neighbour correlations have converged with system size already for the small systems considered here, and therefore no deviations from L = 24 are found at larger L.
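The closed-form coefficients can be checked against Eqs. (B2)-(B3) by direct differentiation. The snippet below uses frozen illustrative values for w = ω_k + δω_k and V = V_k, verifies both equations via centered finite differences, and confirms normalization together with the bounds 0 ≤ S^bog_k ≤ ln 2 that follow from Eq. (B5).

```python
import numpy as np

w, V = 1.3, 0.5                       # illustrative frozen values
a = np.hypot(w, V)                    # a_k = sqrt(w^2 + V^2)

def c(t):
    return (np.cos(a*t) + 1j*(w/a)*np.sin(a*t)) * np.exp(-1j*w*t)

def d(t):
    return -1j*(V/a) * np.exp(-1j*w*t) * np.sin(a*t)

t = np.linspace(0.0, 8.0, 2001)
h = 1e-6                              # step for centered differences
idc = 1j*(c(t+h) - c(t-h))/(2.0*h)    # i dc/dt
idd = 1j*(d(t+h) - d(t-h))/(2.0*h)    # i dd/dt
assert np.allclose(idc, V*d(t), atol=1e-6)              # Eq. (B2)
assert np.allclose(idd, V*c(t) + 2.0*w*d(t), atol=1e-6) # Eq. (B3)

p = np.abs(d(t))**2                   # pair occupation |d_k|^2
assert np.allclose(np.abs(c(t))**2 + p, 1.0)            # normalization
S = -np.log((1.0 - p)**2 + p**2)      # Eq. (B5)
assert S.min() >= -1e-12 and S.max() <= np.log(2) + 1e-12
```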
From Eq. (B4),

S^bog_k(t) = − ln Tr ρ̃_k² = − ln ( |c_k(t)|⁴ + |d_k(t)|⁴ ). (B5)

In contrast to the correlations, the entanglement dynamics depends strongly on system size. In the main text it was observed that if one considers a partition of the lattice with an excess of nearest-neighbour correlations along one direction, the dynamics of the entanglement entropy in the Holstein-Primakoff basis is mainly determined by such nearest-neighbour correlations, resulting in an area-law scaling of the entanglement dynamics. To better understand this, let us write the reduced density matrix ρ_{A(B)} in terms of spin operators,

ρ_{A(B)} = 2^n Σ_{µ,...,ν=0}^{3} ⟨Ŝ^µ_{i_1} · · · Ŝ^ν_{i_n}⟩ Ŝ^µ_{i_1} ⊗ · · · ⊗ Ŝ^ν_{i_n}, (C1)

where n is the number of spins in A(B), Ŝ^0_i = (1/2) I, and µ = {1, 2, 3} corresponds to {x, y, z}. Recalling that the second-order Renyi entropy is defined as S_2(ρ_{A(B)}) ≡ S_2 = − ln Tr ρ²_{A(B)}, it is possible to show that the entanglement entropy can be expressed in terms of the sum of all possible correlations within A(B), which enter S_2 squared. Since nearest-neighbour correlations are much larger than the other correlations [39], the leading-order contribution to the entanglement entropy can be written as

S_2 ∼ − ln [ a_0 + a_1 Σ_{i∈A(B)} ( ⟨Ŝ_i · Ŝ_{i+δ_x}⟩² + ⟨Ŝ_i · Ŝ_{i+δ_y}⟩² ) ], (C2)

for some constants a_0 and a_1. Since at leading order in ΔJ_ex it holds that ⟨Ŝ_i · Ŝ_{i+δ_{x(y)}}⟩ = const +(−) ΔJ_ex s(t) for some oscillating function s(t), at leading order ⟨Ŝ_i · Ŝ_{i+δ_x}⟩² and ⟨Ŝ_i · Ŝ_{i+δ_y}⟩² also oscillate out of phase. Therefore, it follows from Eq. (C2) that if there is an equal amount of correlations along x and y, the leading-order contribution to S_2 vanishes. On the contrary, when there is an excess of correlations along one direction, say x, then S_2 ∼ − ln [ a_0 + a_1 Σ_{i∈∂A(B)} ⟨Ŝ_i · Ŝ_{i+δ_x}⟩² ], where ∂A(B) is the boundary of the A(B) partition.
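The cancellation argument can be illustrated with a toy parametrization: writing the two bond correlations as C ± ε s(t), with s(t) = sin(ω_2M t) and illustrative numbers below (not values from the paper), a balanced partition cancels the O(ε) oscillation and leaves a residue at 2ω_2M, while an unbalanced boundary keeps the full oscillation at ω_2M.

```python
import numpy as np

w2M, eps, C = 2.0, 0.1, 0.35          # illustrative toy parameters
t = np.linspace(0.0, 20.0, 4001)
s = np.sin(w2M * t)                   # out-of-phase part of the bonds

balanced = (C + eps*s)**2 + (C - eps*s)**2   # equal x and y bonds
boundary = (C + eps*s)**2                     # excess x bonds only

# balanced: the O(eps) terms cancel; the residue oscillates at 2*w2M
assert np.allclose(balanced,
                   2.0*C**2 + eps**2*(1.0 - np.cos(2.0*w2M*t)),
                   atol=1e-12)
# unbalanced: the leading oscillation at w2M survives, amplitude 4*C*eps
assert abs((boundary.max() - boundary.min()) - 4.0*C*eps) < 1e-3
```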
Hence, the symmetry of the perturbation and the dominance of nearest-neighbour correlations explain the qualitative difference in the dynamics of the entanglement entropy between the partition considered in Fig. 1(a) of the main text, which has an excess of bonds along the x-direction, and the other partitions, where the number of x and y bonds is balanced. In addition, for the other partitions a volume-law scaling was found, which we ascribe to next-to-leading-order terms, as well as to the correlations neglected in Eq. (C2). Even though nearest-neighbour correlations dominate and are converged already for small L, the relative weight of the area-law and volume-law contributions to the entanglement strongly depends on system size, which suggests that in large systems the volume-law contributions will become visible even in the partition with an excess of bonds along one direction. In the following we show that this is indeed the case by reporting additional results on the finite-size scaling of the entanglement within LSWT. In Fig. 5 we plot the entanglement dynamics in the Holstein-Primakoff basis for the same partitioning shown in the main text, but for system sizes up to L = 120. By comparing Fig. 1(a) of the main text and Fig. 5(a), we observe that while for small systems the entanglement neatly follows an area-law scaling, upon increasing the system size a volume-law contribution builds up. The effect of this term comprises an additional oscillation with double the frequency of those seen at small system sizes, similar to what is observed for the other partitions where the area-law contribution is absent. Indeed, in these other partitions a volume-law scaling is maintained also at large system sizes, as can be observed in Figs. 5(b)-(c). We conclude by noticing that the appearance of such double-frequency oscillations is due to the structure of the matrix Q = X · P introduced in the main text.
Both X and P are dominated by nearest-neighbour correlations, which oscillate with the characteristic two-magnon frequency ω_2M. As such, Q contains both terms oscillating at ω_2M (at leading order in ΔJ_ex) and at 2ω_2M (at next-to-leading order in ΔJ_ex), as also follows from the discussion below Eq. (C2). We numerically verified that in the Bogoliubov basis the leading-order terms of Q_ij(t) = Σ_k X_ik(t) P_kj(t) oscillating at ω_2M cancel out, due to the vanishing of the correlations at t = 0, which explains the difference between Fig. 1(a) and Fig. 1(d) of the main text.

FIG. 1. (Color online) Dynamics of the entanglement entropy for three different partitions in the Holstein-Primakoff (a)-(c) and Bogoliubov basis (d)-(f) for L = 16 (black), L = 20 (red), L = 24 (blue). The three partitions have respectively L × L/2 (a,d), L/2 × L/2 (b,e) and L/2 × L spins (c,f). Dotted gray line: nearest-neighbour correlations C_x(t). S_2 is evaluated between two complementary partitions A and B of the square lattice, where ρ_{A(B)} = Tr_{B(A)} |Ψ⟩⟨Ψ| and Tr_A (Tr_B) denotes the trace over the A (B) degrees of freedom of the Hilbert space H = H_A ⊗ H_B.

FIG. 2. (Color online) Dominance of short-wavelength magnons in the entanglement entropy. (a) Two-level system representation. (b) Dynamics of the entanglement entropy in the Bogoliubov basis for L = 24: comparison between S^bog_{2L}/N (orange solid line) and S^bog/V(A) for the checkerboard partition with V(A) = L × L/2 (black diamonds). (c) Time-evolution of S^bog_k along the symmetry line of the Brillouin zone shown in the inset (green arrows), for a system with L = 120.

FIG. 3. (Color online) Dynamics of the second order Renyi entanglement entropy S_2(t) evaluated with neural-network quantum states for the partition shown at the bottom right, compared with the nearest-neighbour correlations C_x(t) × 16 (grey dots). S_2(t) is shown for L = 16 (α = 16, black), L = 20 (α = 12, red) and L = 24 (α = 8, blue), while C_x(t) is shown for L = 24 (α = 8).
The Monte Carlo estimations are converged with sampling size.

ACKNOWLEDGMENTS

This work is part of the Shell-NWO/FOM-initiative "Computational sciences for energy research" of Shell and Chemical Sciences, Earth and Life Sciences, Physical Sciences, FOM and STW, and received funding from the European Research Council, ERC grant agreement No. 856538 (3D-MAGiC).

Appendix A: Modified spin wave theory of magnon-pair dynamics

FIG. 4. (Color online) Dynamics of the nearest-neighbour correlations with R = (1, 0) and R = (0, 1) for L = 24 (solid orange line) and L = 16 (black dots), evaluated within the linear spin wave approximation.

Appendix C: Additional results on correlation and entanglement dynamics

Here we provide additional results about the correlation dynamics and the scaling of entanglement with system size in the linear spin wave approximation. The dynamics of the nearest-neighbour correlations evaluated with Eq. (A12) is shown in Fig. 4 for L = 16, 24 and R = [1, 0], R = [0, 1].

FIG. 5. (Color online) (a)-(c) Dynamics of the Renyi entanglement entropy in the Holstein-Primakoff basis for L = 20, 40, 80, 120, for the partitioning shown in the inset of each plot.
J Eisert, M Friesdorf, C Gogolin, Nature Phys. 11124J. Eisert, M. Friesdorf, C. Gogolin, Nature Phys 11, 124 (2015). . G De Chiara, S Montangero, P Calabrese, R Fazio, J. Stat. Mech. 3001G. De Chiara, S. Montangero, P. Calabrese and R. Fazio, J. Stat. Mech. P03001 (2006). . A M Läuchli, C Kollath, J. Stat. Mech. 5018A. M. Läuchli and C. Kollath, J. Stat. Mech. P05018 (2008). . M Fagotti, P Calabrese, Phys. Rev. A. 7810306M. Fagotti and P. Calabrese, Phys. Rev. A 78, 010306(R) (2008). . P Calabrese, J Cardy, J. Phys. A. 42504005P. Calabrese and J. Cardy, J. Phys. A 42, 504005 (2009). . M Kormos, L Bucciantini, P Calabrese, EPL. 10740002M. Kormos, L. Bucciantini and P. Calabrese, EPL 107, 40002 (2014). . M Collura, M Kormos, P Calabrese, J. Stat. Mech. 1009M. Collura, M. Kormos and P. Calabrese, J. Stat. Mech. P01009 (2014). . K R A Hazzard, M Van Den Worm, M Foss-Feig, S R Manmana, E G Dalla Torre, T Pfau, M Kastner, A M Rey, Phys. Rev. A. 9063622K. R. A. Hazzard, M. van den Worm, M. Foss-Feig, S. R. Manmana, E. G. Dalla Torre, T. Pfau, M. Kastner, and A. M. Rey, Phys. Rev. A 90, 063622 (2014). . A S Buyskikh, M Fagotti, J Schachenmayer, F Essler, A J Daley, Phys. Rev. A. 9353620A. S. Buyskikh, M. Fagotti, J. Schachenmayer, F. Essler and A. J. Daley, Phys. Rev. A 93, 053620 (2016). . M Kormos, M Collura, G Takács, P Calabrese, Nat. Phys. 13246M. Kormos, M. Collura, G. Takács and P. Calabrese, Nat. Phys. 13, 246 (2016). . W W Ho, D A Abanin, Phys. Rev. B. 9594302W. W. Ho and D. A. Abanin, Phys. Rev. B 95, 094302 (2017). . V Alba, P Calabrese, Scipost Phys, 417V. Alba and P. Calabrese, SciPost Phys. 4, 017 (2018). . P Calabrese, SciPost Phys. Lect. Notes. 20P. Calabrese, SciPost Phys. Lect. Notes 20, (2020). . P Calabrese, J Cardy, Phys. Rev. Lett. 96136801P. Calabrese and J. Cardy, Phys. Rev. Lett. 96, 136801 (2006). . J H Mentink, M Eckstein, Phys. Rev. Lett. 11357201J. H. Mentink and M. Eckstein, Phys. Rev. Lett. 113, 057201 (2014). . 
J H Mentink, K Balzer, M Eckstein, Nat. Commun. 66708J. H. Mentink, K. Balzer, and M. Eckstein, Nat. Com- mun. 6, 6708 (2015). . M Claassen, H.-C Jiang, B Moritz, Thomas P Devereaux, Nat. Commun. 81192M. Claassen, H.-C. Jiang, B. Moritz, and Thomas P. De- vereaux, Nat. Commun. 8, 1192 (2017). . S Kitamura, T Oka, H Aoki, Phys. Rev. B. 9614406S. Kitamura, T. Oka, and H. Aoki, Phys. Rev. B 96, 014406 (2017). . J Liu, K Hejazi, Leon Balents, Phys. Rev. Lett. 121107201J. Liu, K. Hejazi, and Leon Balents, Phys. Rev. Lett. 121, 107201 (2018). . S Chaudhary, D Hsieh, G Refael, Phys. Rev. B. 100220403S. Chaudhary, D. Hsieh, and G. Refael, Phys. Rev. B 100, 220403(R) (2019). . M M S Barbeau, M Eckstein, M I Katsnelson, J H Mentink, SciPost Phys. 627M. M. S. Barbeau, M. Eckstein, M. I. Katsnelson, and J. H. Mentink, SciPost Phys. 6, 027 (2019). . R V Mikhaylovskiy, E Hendry, A Secchi, J H Mentink, M Eckstein, A Wu, R V Pisarev, V V Kruglyak, M I Katsnelson, Th Rasing, A V Kimel, Nat. Commun. 68190R.V. Mikhaylovskiy, E. Hendry, A. Secchi, J.H. Mentink, M. Eckstein, A. Wu, R.V. Pisarev, V.V. Kruglyak, M.I. Katsnelson, Th. Rasing and A.V. Kimel, Nat. Commun. 6, 8190 (2015). . R V Mikhaylovskiy, T J Huisman, V A Gavrichkov, S I Polukeev, S G Ovchinnikov, D Afanasiev, R V Pisarev, Th, A V Rasing, Kimel, Phys. Rev. Lett. 125157201R. V. Mikhaylovskiy, T. J. Huisman, V. A. Gavrichkov, S. I. Polukeev, S. G. Ovchinnikov, D. Afanasiev, R. V. Pisarev, Th. Rasing, and A. V. Kimel Phys. Rev. Lett. 125, 157201 (2020). . J M Losada, A Brataas, A Qaiumzadeh, Phys. Rev. B. 10060410J. M. Losada, A. Brataas, and A. Qaiumzadeh, Phys. Rev. B 100, 060410(R) (2019). . A Sriram, M Claassen, arXiv:2105.01062A. Sriram and M. Claassen, arXiv:2105.01062 . M Ke, M M Asmar, W.-K Tse, Phys. Rev. Research. 233228M. Ke, M. M. Asmar, and W.-K. Tse, Phys. Rev. Re- search 2, 033228 (2020). . M A Sentef, J Li, F Künzel, M Eckstein, Phys. Rev. Research. 233033M. A. Sentef, J. Li, F. Künzel, and M. Eckstein, Phys. 
Rev. Research 2, 033033 (2020). . A Ono, S Ishihara, Phys. Rev. Lett. 119207202A. Ono and S. Ishihara, Phys. Rev. Lett. 119, 207202 (2017). . Y Wang, Y Chen, T P Devereaux, B Moritz, M Mitrano, Commun. Phys. 4212Y. Wang, Y. Chen, T. P. Devereaux, B. Moritz, and M. Mitrano, Commun. Phys. 4, 212 (2021) . A Kamra, E Thingstad, G Rastelli, R A Duine, A Brataas, W Belzig, A Sudbø, Phys. Rev. B. 100174407A. Kamra, E. Thingstad, G. Rastelli, R. A. Duine, A. Brataas, W. Belzig, and A. Sudbø, Phys. Rev. B 100, 174407 (2019). . D Wuhrer, N Rohling, W Belzig, Phys. Rev. B. 10554406D. Wuhrer, N. Rohling, and W. Belzig, Phys. Rev. B 105, 054406 (2022). . J Zhao, A V Bragas, D J Lockwood, R Merlin, Phys. Rev. Lett. 93107203J. Zhao, A. V. Bragas, D. J. Lockwood, and R. Merlin, Phys. Rev. Lett. 93, 107203 (2004). . G Fabiani, M D Bouman, J H Mentink, Phys. Rev. Lett. 12797202G. Fabiani, M. D. Bouman, J. H. Mentink, Phys. Rev. Lett. 127, 097202 (2021). . T P Devereaux, R Hackl, Rev. Mod. Phys. 79175T. P. Devereaux and R. Hackl, Rev. Mod. Phys. 79, 175 (2007). . D Bossini, S Conte, Y Hashimoto, A Secchi, R V Pisarev, Th, G Rasing, A V Cerullo, Kimel, Nat. Commun. 710645D. Bossini, S. Dal Conte, Y. Hashimoto, A. Secchi, R. V. Pisarev, Th. Rasing, G. Cerullo, and A. V. Kimel, Nat. Commun. 7, 10645 (2016). . D Bossini, S Conte, G Cerullo, Phys. Rev. B. 10024428D. Bossini, S. Dal Conte, G. Cerullo et al., Phys. Rev. B 100, 024428 (2019). . P A Fleury, R Loudon, Phys. Rev. 166514P. A. Fleury and R. Loudon, Phys. Rev. 166, 514 (1968). Raman scattering in materials science. W H Weber, R Merlin, SpringerBerlin HeidelbergW. H. Weber and R. Merlin, Raman scattering in mate- rials science, Springer Berlin Heidelberg (2000). . G Fabiani, J H Mentink, SciPost Phys. 74G. Fabiani and J. H. Mentink, SciPost Phys. 7, 004 (2019). . J E Hirsch, S Tang, Phys. Rev. B. 404769J. E. Hirsch and S. Tang, Phys. Rev. B 40, 4769 (1989). . H F Song, N Laflorencie, S Rachel, K Le Hur, Phys. Rev. B. 83224410H. F. 
Song, N. Laflorencie, S. Rachel, and K. Le Hur, Phys. Rev. B 83, 224410 (2011). . T Oguchi, Phys. Rev. 117117T. Oguchi, Phys. Rev. 117, 117 (1960). . I Peschel, J.Phys.A: Math.Gen. 36205I. Peschel, J.Phys.A: Math.Gen. 36, L205 (2003). . J H Mentink, J. Phys.: Condens. Matter. 29453001J H Mentink 2017 J. Phys.: Condens. Matter 29, 453001 (2017). . K Tsutsui, K Shinjo, T Tohyama, Phys. Rev. Lett. 126127404K. Tsutsui, K. Shinjo, and T. Tohyama, Phys. Rev. Lett. 126, 127404 (2021). . G Carleo, M Troyer, Science. 355602G. Carleo and M. Troyer, Science 355, 602 (2017). . G Carleo, F Becca, M Schiró, M Fabrizio, Sci. Rep. 2243G. Carleo, F. Becca, M. Schiró, and M. Fabrizio, Sci. Rep. 2, 243 (2012). . M B Hastings, I Gonzalez, A B Kallin, R G Melko, Phys. Rev. Lett. 104157201M. B. Hastings, I. Gonzalez, A. B. Kallin, and R. G. Melko, Phys. Rev. Lett. 104, 157201 (2010). . R J Elliott, M F Thorpe, J. Phys. C. 21630R. J. Elliott and M. F. Thorpe, J. Phys. C 2, 1630 (1969). . C M Canali, S M Girvin, Phys. Rev. B. 457127C. M. Canali and S. M. Girvin, Phys. Rev. B 45, 7127 (1992). . J Lorenzana, G A Sawatzky, Phys. Rev. B. 529576J. Lorenzana and G. A. Sawatzky, Phys. Rev. B 52, 9576 (1995). . M Kormos, M Collura, G Takács, P Calabrese, Nature Physics. 13M. Kormos, M. Collura, G. Takács and P. Calabrese, Nature Physics 13, 246-249 (2017). . K Hódsági, M Kormos, G , Takács, SciPost Phys. 527K. Hódsági, M. Kormos, G. Takács, SciPost Phys. 5, 027 (2018) · . O A Castro-Alvaredo, M Lencs Es, I M Sz, J Viti, JHEP. 201979O. A. Castro-Alvaredo, M. Lencs es, I. M. Sz ecs enyi, and J. Viti, JHEP 2019, 79 (2019). . O A Castro-Alvaredo, M Lencsés, I M Szécsényi, J Viti, Phys. Rev. Lett. 124230601O. A. Castro-Alvaredo, M. Lencsés, I. M. Szécsényi, and J. Viti, Phys. Rev. Lett. 124, 230601 (2020). . J.-A Yang, N Pellatz, T Wolf, R Nandkishore, D Reznik, Nat. Commun. 112548J.-A. Yang, N. Pellatz, T. Wolf, R. Nandkishore, and D. Reznik Nat. Commun. 11, 2548 (2020). . 
D G Mazzone, D Meyers, Y Cao, J G Vale, C D Dashwood, Y Shi, A J A James, N J Robinson, J Lin, V Thampy, PNAS. 118D. G. Mazzone, D. Meyers, Y. Cao, J. G. Vale, C. D. Dashwood, Y. Shi, A. J. A. James, N. J. Robinson, J. Lin, V. Thampy, et al. PNAS 118 (2021).
Distribution of Brownian coincidences

Alexandre Krajenbrink, Bertrand Lacroix-A-Chez-Toine, Pierre Le Doussal

Laboratoire de Physique de l'Ecole Normale Supérieure, PSL University, CNRS, Sorbonne Universités, 24 rue Lhomond, 75231 Paris Cedex 05, France
LPTMS, CNRS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay, France

15 Mar 2019 · arXiv:1903.06511v1 [cond-mat.stat-mech] · doi:10.1007/s10955-019-02360-x

Abstract. We study the probability distribution, $P_N(T)$, of the coincidence time $T$, i.e. the total local time of all pairwise coincidences of $N$ independent Brownian walkers.
We consider in detail two geometries: Brownian motions all starting from 0, and Brownian bridges. Using a Feynman-Kač representation for the moment generating function of this coincidence time, we map this problem onto observables in three related models: (i) the propagator of the Lieb-Liniger model of quantum particles with pairwise delta-function interactions, (ii) the moments of the partition function of a directed polymer in a random medium, (iii) the exponential moments of the solution of the Kardar-Parisi-Zhang equation. Using these mappings, we obtain closed formulae for the probability distribution of the coincidence time, its tails and some of its moments. Its asymptotics at large and small coincidence time are also obtained for arbitrary fixed endpoints. The universal large-$T$ tail, $P_N(T) \sim \exp(-3T^2/(N^3 - N))$, is obtained and is independent of the geometry. We investigate the large deviations in the limit of a large number of walkers through a Coulomb gas approach. Some of our analytical results are compared with numerical simulations.

1. Introduction

In this article, we study independent, identical, diffusive Brownian processes. We want to characterise in detail the statistics of the time spent by identical diffusing particles in the vicinity of each other, which we call the coincidence time. This problem is relevant, for instance, to the analysis of reaction networks of the type
$$A + A \to B. \qquad (1)$$
Indeed, for chemical species to react, they first have to encounter each other and then overcome the potential barriers. This takes a finite amount of time, and the coincidence time therefore plays a crucial role in the kinetics of the reaction. The behaviour of this coincidence time depends strongly on the spatial dimension: it is clear that the higher the dimension, the harder it is for diffusing particles to encounter one another. We limit our study of this problem to the case of dimension $d = 1$, where the number of encounters is maximal.
We model the identical diffusing particles via independent one-dimensional Brownian motions $x_i(\tau)$ on the time interval $\tau \in [0, t]$,
$$\dot x_i(\tau) = \eta_i(\tau), \quad \text{with} \quad \langle \eta_i(\tau) \rangle = 0, \quad \langle \eta_i(\tau)\eta_j(\tau') \rangle = 2D\,\delta_{i,j}\,\delta(\tau - \tau'), \qquad (2)$$
where below we choose units such that $D = 1$. We consider the case where the $N$ diffusing particles are emitted at $t = 0$ from a single point-like source, $x_i(0) = x_0$ for $i = 1, \ldots, N$. We define the total time spent by these particles within a close vicinity of width $\epsilon$ of each other as
$$T_N(\epsilon; t) = \sum_{i \neq j}^{N} \int_0^t \Theta\Big(\frac{\epsilon}{2} - |x_i(\tau) - x_j(\tau)|\Big)\, d\tau, \qquad (3)$$
where $\Theta(x)$ is the Heaviside step function. We are particularly interested in the limit where the length $\epsilon$ is small compared to all the other scales of the problem, and therefore define, in the limit $\epsilon \to 0$,
$$T_N(t) = \lim_{\epsilon \to 0} \frac{1}{\epsilon}\, T_N(\epsilon; t) = \sum_{i \neq j}^{N} \int_0^t \delta[x_i(\tau) - x_j(\tau)]\, d\tau. \qquad (4)$$
We will refer to this observable as the coincidence time of $N$ diffusing particles, i.e. the amount of time that these independent particles spend crossing each other‡. Note however that $T_N(t)$ does not have the dimension of a time, which is a usual property of the local time of a stochastic process [1, 2, 3]. Using the Brownian scaling, the process has a trivial rescaling to the time interval $\tau \in [0, 1]$, and we obtain the equality
$$T_N(t = 1) = T_N = \frac{T_N(t)}{\sqrt{t}}. \qquad (5)$$
We will see in the following that the $N$-dependence is however highly non-trivial. The case $N = 2$ can be solved quite easily by considering the process $z(\tau) =$
‡ For calculational convenience below, each pair appears twice in the sum in our definition (4).
$\frac{1}{\sqrt{2}}\,(x_1(\tau) - x_2(\tau))$, which is also a simple diffusive process with the same diffusion coefficient $D = 1$,
$$\dot z(\tau) = \xi(\tau), \quad \text{with} \quad \langle \xi(\tau) \rangle = 0, \quad \langle \xi(\tau)\xi(\tau') \rangle = 2D\,\delta(\tau - \tau'). \qquad (6)$$
The coincidence time $T_2$ of the two Brownian particles can be expressed in terms of the local time $L_t(x)$ of the process $z(\tau)$, defined as
$$L_t(x) = \int_0^t d\tau\, \delta(z(\tau) - x). \qquad (7)$$
Setting $x = 0$, we obtain
$$L_t(0) = \int_0^t d\tau\, \delta(z(\tau)) = \int_0^t d\tau\, \delta\Big(\frac{1}{\sqrt{2}}(x_1(\tau) - x_2(\tau))\Big) = \sqrt{2} \int_0^t d\tau\, \delta(x_1(\tau) - x_2(\tau)) = \frac{T_2(t)}{\sqrt{2}}. \qquad (8)$$
The joint PDF of the local time $L_t(0) = L$ and the final position $z(t) = x_f$ was obtained by Borodin et al. [2] and exploited by Pitman [3], and reads
$$P_{\rm joint}(L, x_f) = \frac{1}{2} \sqrt{\frac{1}{D\pi t^3}}\, (|x_f| + 2DL)\, e^{-\frac{(|x_f| + 2DL)^2}{4Dt}}. \qquad (9)$$
We set $t = 1$ and recall that $D = 1$ in our convention. Using now the identities $L_1(0) = T_2/\sqrt{2}$ together with $z(1) = (x_1(1) - x_2(1))/\sqrt{2}$, we obtain the joint PDF of the coincidence time $T_2 = T$ and the final algebraic distance between the diffusive particles, $d = x_1(1) - x_2(1)$, as
$$P_{\rm joint}(T, d) = \frac{1}{2}\, P_{\rm joint}\Big(\frac{T}{\sqrt{2}}, \frac{d}{\sqrt{2}}\Big) = \frac{|d| + 2T}{2\sqrt{8\pi}}\, e^{-\frac{(|d| + 2T)^2}{8}}. \qquad (10)$$
However, as soon as $N = 3$, one cannot define independent processes for which our coincidence time is a simple observable. Here we will study the Probability Distribution Function (PDF) of $T_N$ for two cases, Brownian motions and Brownian bridges. In the first case (Brownian motion), one defines the Moment Generating Function (MGF) $\langle e^{-cT_N(t)} \rangle_{x_0}$, where $\langle \cdots \rangle_{x_0}$ denotes the expectation value with respect to the PDF of $T_N(t)$ for given initial conditions $\mathbf{x}(\tau = 0) = \mathbf{x}_0 = (x_0, \ldots, x_0)$. This MGF can be expressed in the Feynman-Kač framework as an $N$-dimensional path integral
$$\langle e^{-cT_N(t)} \rangle_{x_0} = \int_{\mathbb{R}^N} d\mathbf{y}\, Z_N(\mathbf{y}, t | \mathbf{x}_0; c), \qquad (11)$$
with
$$Z_N(\mathbf{y}, t | \mathbf{x}_0; c) = \int_{\mathbf{x}(0) = \mathbf{x}_0}^{\mathbf{x}(t) = \mathbf{y}} \mathcal{D}\mathbf{x}(\tau)\, \exp\Bigg(-\int_0^t \Big[\sum_{i=1}^N \frac{\dot x_i(\tau)^2}{4} + 2c \sum_{i<j}^N \delta[x_i(\tau) - x_j(\tau)]\Big]\, d\tau\Bigg), \qquad (12)$$
where $\mathbf{x}(\tau) = (x_1(\tau), x_2(\tau), \cdots, x_N(\tau))$.
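The $\epsilon$-window definition (3)-(4) lends itself directly to simulation. Below is a minimal sketch (assuming NumPy; the time step, window width $\epsilon = 0.2$ and sample count are arbitrary illustrative choices) estimating $T_2$ for two free Brownian motions via the difference process $d = x_1 - x_2$, whose variance is $4t$ in the conventions of Eq. (2):

```python
import numpy as np

rng = np.random.default_rng(1)
M, n = 4000, 2000              # number of samples and time steps on [0, 1]
dt, eps = 1.0 / n, 0.2         # time step and window width (arbitrary choices)

# Difference process d = x_1 - x_2: each walker has <x_i^2(t)> = 2t (D = 1),
# so Var[d(t)] = 4t and the increments of d are N(0, 4 dt).
d = np.cumsum(rng.normal(0.0, np.sqrt(4 * dt), size=(M, n)), axis=1)

# Eqs. (3)-(4): T_2 ~ (1/eps) * (occupation time of |d| < eps/2); the factor 2
# is because each pair appears twice in the sum over i != j.
T2 = (2.0 / eps) * dt * np.count_nonzero(np.abs(d) < eps / 2, axis=1)

print(T2.mean())   # close to sqrt(2/pi) ~ 0.80, cf. Eq. (23) below
```

The finite $\epsilon$ and finite time step introduce a small (few percent) downward bias, visible when comparing against the exact mean $\sqrt{2/\pi}$ of the half-normal distribution of Eq. (23).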
In this expression we wrote the sum $\sum_{i \neq j}$ as $2\sum_{i<j}$ to represent the coincidence as a delta interaction between each pair of Brownian motions. As the final point $\mathbf{y}$ is arbitrary, we integrated here over all the possible realisations. This situation corresponds to the right panel in Fig. 1. In the second case (Brownian bridges) we define the MGF as the following expectation value with fixed initial and final positions,
$$\langle e^{-cT_N(t)} \rangle_{\mathbf{x}_0, \mathbf{y}} = \frac{Z_N(\mathbf{y}, t | \mathbf{x}_0; c)}{Z_N(\mathbf{y}, t | \mathbf{x}_0; c = 0)}, \qquad (13)$$
where $Z_N(\mathbf{y}, t | \mathbf{x}_0; c = 0) = (4\pi t)^{-N/2}\, e^{-(\mathbf{y} - \mathbf{x}_0)^2/(4t)}$ is the free propagator. This situation corresponds to the left panel in Fig. 1. The function $Z_N(\mathbf{y}, t | \mathbf{x}_0; c)$ can be interpreted as the imaginary-time $N$-body propagator of the bosonic Lieb-Liniger Hamiltonian [4],
$$-\partial_t Z_N = \hat H_N(c)\, Z_N \quad \text{with} \quad \hat H_N(c) = \sum_{i=1}^N \hat p_i^2 + 2c \sum_{i<j}^N \delta(\hat x_i - \hat x_j). \qquad (14)$$
We therefore see that this MGF, Eq. (13), for non-interacting diffusing classical particles is mapped onto the propagator of a quantum problem of identical bosons with a contact interaction. This problem is also related to other well-studied models: the directed polymer in a random potential, equivalent to the Kardar-Parisi-Zhang (KPZ) stochastic growth equation. Indeed, consider an elastic polymer of length $t$ and fixed endpoints $x_0$ and $y$, in a random potential $\xi(x, t)$. Its partition function reads
$$Z(y, x_0, t) = \int_{x(0) = x_0}^{x(t) = y} \mathcal{D}x(\tau)\, \exp\Bigg(-\int_0^t \Big[\frac{\dot x(\tau)^2}{4} + \xi(x(\tau), \tau)\Big]\, d\tau\Bigg). \qquad (15)$$
One may then compute the $N$-point correlation function of the partition function, averaged with respect to the random potential (denoted as $\overline{\cdots}$), and obtain
$$\overline{\prod_{i=1}^N Z(y_i, x_0, t)} = \int_{\mathbf{x}(0) = \mathbf{x}_0}^{\mathbf{x}(t) = \mathbf{y}} \mathcal{D}\mathbf{x}(\tau)\, \overline{\exp\Bigg(-\int_0^t \sum_{i=1}^N \Big[\frac{\dot x_i(\tau)^2}{4} + \xi(x_i(\tau), \tau)\Big]\, d\tau\Bigg)}. \qquad (16)$$
Taking now $\xi(x, t)$ as a Gaussian white noise of variance $\bar c > 0$,
$$\overline{\xi(x, t)} = 0 \quad \text{and} \quad \overline{\xi(x, t)\,\xi(x', t')} = 2\bar c\, \delta(x - x')\, \delta(t - t'), \qquad (17)$$
the replica average in Eq. (16) reduces to the Lieb-Liniger propagator in Eq. (12) [5, 6, 7] with the value of the parameter $c = -\bar c$, and satisfies the same evolution equation (14)§.
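Before moving on, the $N = 2$ joint law (10) can be sanity-checked numerically: integrating it over $T \in [0, \infty)$ and $d \in \mathbb{R}$ must give 1. A minimal quadrature sketch (NumPy assumed; grid extents and steps are arbitrary choices, and the trapezoidal helper avoids relying on `np.trapz`, which was removed in NumPy 2.0):

```python
import numpy as np

def trap(y, h):
    """Composite trapezoidal rule with uniform step h along the last axis."""
    return h * (y[..., 1:] + y[..., :-1]).sum(axis=-1) / 2

# Eq. (10): P(T, d) = (|d| + 2T)/(2*sqrt(8*pi)) * exp(-(|d| + 2T)^2 / 8)
hT, hd = 0.01, 0.01
T = np.arange(0.0, 20.0 + hT / 2, hT)
d = np.arange(-20.0, 20.0 + hd / 2, hd)
A = np.abs(d)[None, :] + 2 * T[:, None]
P = A / (2 * np.sqrt(8 * np.pi)) * np.exp(-(A**2) / 8)

norm = trap(trap(P, hd), hT)   # integrate over d, then over T
print(norm)                    # ~ 1.0
```

The kink of $|d|$ at $d = 0$ is harmless for the trapezoidal rule at this resolution, and the Gaussian tails beyond the grid are negligible.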
The directed polymer partition function $Z(y, x_0, t)$ is also equal to the droplet solution of the KPZ equation, through a Cole-Hopf transformation,
$$\log Z(y, x_0, t) = h(y, x_0, t), \quad \text{where} \quad \partial_t h = \partial_y^2 h + (\partial_y h)^2 + \sqrt{2\bar c}\; \xi(y, t). \qquad (18)$$
Both the directed polymer and the KPZ equation correspond to the choice $c = -\bar c < 0$, i.e. to the attractive Lieb-Liniger model. The moments (16) of the directed polymer problem, which correspond to exponential moments of the KPZ height field, have been studied extensively recently, and exact formulae have been obtained for the droplet solution at arbitrary time [7, 8, 9]. The MGF of our diffusion problem, for $N$ independent Brownian bridges and negative value of the parameter $c = -\bar c$, is thus related to these moments as follows,
$$\langle e^{\bar c T_N(t)} \rangle_{0,0} = \frac{\overline{Z(0, 0, t)^N}}{Z_0(0, 0, t)^N}, \qquad (19)$$
where $Z_0(x, 0, t) = \frac{1}{\sqrt{4\pi t}}\, e^{-x^2/(4t)}$ is the free Brownian propagator (for $c = 0$). Explicit expressions for these moments were derived from the Bethe ansatz. Note that all initial and final points here are set to 0. In this paper, we obtain a complementary exact formula for the MGF for positive values $c > 0$, i.e. for the Laplace transform of the coincidence time $\langle e^{-cT_N(t)} \rangle$. We study in particular the case of $N$ independent Brownian bridges, denoted $\langle e^{-cT_N(t)} \rangle_{\rm BB} \equiv \langle e^{-cT_N(t)} \rangle_{0,0}$, and the case of $N$ Brownian motions starting from the same point, $x_0 = 0$, with free final points, denoted $\langle e^{-cT_N(t)} \rangle_{\rm BM} \equiv \langle e^{-cT_N(t)} \rangle_{0}$. We also use the formula (19) for negative $c = -\bar c < 0$ to obtain the large-$T$ asymptotics of the PDF of $T_N(t)$. For that particular asymptotic behaviour we obtain some more results for arbitrary fixed final points, using a formula for $\langle e^{-cT_N(t)} \rangle_{0, \mathbf{y}}$. We now summarize our main results.
Summary of the main results

Brownian bridges. We obtain an exact formula for the PDF of the rescaled coincidence time $T_N$ for $N$ Brownian bridges as
$$P_{N,\rm BB}(T) = \partial_T^{N+1} \int_{\mathbb{R}_+^N} d\mathbf{r}\; \Theta\Big(T - \sum_\ell r_\ell\Big)\, \det_{1 \le i,j \le N}\Big[e^{-\frac{(r_i - r_j)^2}{4}}\Big]. \qquad (20)$$
§ The constraint $i \neq j$ in (12) originates from the Itô prescription in the (Feynman-Kač) stochastic heat equation satisfied by $Z$, equivalently in the definition of the path integral.
For $N = 2, 3$, we obtain explicit formulae for $P_{N,\rm BB}(T)$, respectively in Eq. (42) and Eq. (46). For general $N$, we extract the small- and large-$T$ asymptotic behaviours as
$$P_{N,\rm BB}(T) = \begin{cases} \dfrac{2^{-\frac{N(N-1)}{2}}\, G(N+2)}{\Gamma(N(N-1))}\, T^{N(N-1)-1} + O(T^{N(N-1)+1}), & T \to 0, \\[2ex] \dfrac{N!\, 2^{N-1} \pi^{\frac{N}{2}-1}}{N^{3/2}}\, \alpha_N^{N/2}\, H_{N-1}(\sqrt{\alpha_N}\, T)\, e^{-\alpha_N T^2} + O(e^{-\beta_N T^2}), & T \to \infty, \end{cases} \qquad (21)$$
where $G(n) = \prod_{k=1}^{n-2} k!$ is the Barnes $G$-function, $\Gamma$ is the usual Gamma function, $H_p(x) = e^{x^2} (-\partial_x)^p e^{-x^2}$ is the Hermite polynomial of degree $p$, and the exponential factors are
$$\alpha_N = \frac{3}{(N-1)N(N+1)}, \qquad \beta_N = \frac{3}{(N-2)(N-1)N}. \qquad (22)$$
We also obtain the mean coincidence time in Eq. (56) and the variance in Eq. (60).

Brownian motions. We obtain an exact formula for the MGF of the rescaled coincidence time $T_N$ for $N$ Brownian motions (i.e. with free final points), which depends on the parity of $N$. It is given in Eq. (95) for even values of $N$ and in Eq. (96) for odd values.
For $N = 2, 3$, we obtain explicit formulae for the PDF $P_{N,\rm BM}(T)$,
$$P_{2,\rm BM}(T) = \sqrt{\frac{2}{\pi}}\, e^{-\frac{T^2}{2}}, \qquad P_{3,\rm BM}(T) = \sqrt{\frac{2}{\pi}}\, e^{-\frac{T^2}{2}} \Big(e^{\frac{3}{8} T^2} - 1\Big). \qquad (23)$$
For arbitrary values of $N$, we obtained the expression of the PDF as follows. For $N$ odd,
$$P_{N,\rm BM}(T) = N\, \partial_T^{N+1} \int_{\mathbb{R}_+^N} d\mathbf{r}\; \Theta\Big(T - \sum_{\ell=1}^N r_\ell\Big) \prod_{1 \le k < \ell \le N} \mathrm{sign}(r_k - r_\ell)\; \mathrm{Pf}_{1 \le k, \ell \le N-1}\Big[\mathrm{erf}\Big(\frac{r_k - r_\ell}{\sqrt{2}}\Big)\Big]; \qquad (24)$$
for $N$ even,
$$P_{N,\rm BM}(T) = \partial_T^{N+1} \int_{\mathbb{R}_+^N} d\mathbf{r}\; \Theta\Big(T - \sum_{\ell=1}^N r_\ell\Big) \prod_{1 \le k < \ell \le N} \mathrm{sign}(r_k - r_\ell)\; \mathrm{Pf}_{1 \le k, \ell \le N}\Big[\mathrm{erf}\Big(\frac{r_k - r_\ell}{\sqrt{2}}\Big)\Big]. \qquad (25)$$
We further extracted the small- and large-$T$ behaviours as
$$P_{N,\rm BM}(T) = \begin{cases} I_N\, T^{\frac{N(N-1)}{2} - 1} + O\big(T^{\frac{N(N-1)}{2} + 1}\big), & T \to 0, \\[1ex] 2^{N-1} \sqrt{\dfrac{3}{\pi N (N^2 - 1)}}\; e^{-\alpha_N T^2} + O(e^{-\beta_N T^2}), & T \to \infty, \end{cases} \qquad (26)$$
where $I_N$ is given in Eq. (135). The coefficients $\alpha_N$ and $\beta_N$ are the same as for the Brownian bridge, Eq. (22). We also obtain the mean coincidence time in Eq. (119) and the variance in Eq. (121). Finally, we extended our study to $N$ Brownian walkers with arbitrary fixed endpoints. The small-$T$ asymptotics of the PDF for arbitrary final points $\mathbf{y}$ (all initial points being at 0) is given in (133). The large-$T$ asymptotics of the PDF for any choice of initial and final points is displayed in (139). A trivial consequence of our work is the PDF $P^c_N(T)$ of the coincidence time for $N$ Brownian walkers interacting pairwise via a delta interaction of strength $c$ (of either sign). It is given in terms of the PDF for non-interacting Brownian walkers $P_N(T)$ calculated here, simply as
$$P^c_N(T) = \frac{e^{-cT}\, P_N(T)}{\int_0^{+\infty} dT'\; e^{-cT'}\, P_N(T')}. \qquad (27)$$
The remainder of the paper is organised as follows. Section 2 is dedicated to the results for the case of Brownian bridges. In subsection 2.1, we describe how the Bethe ansatz applied to the Lieb-Liniger model allows us to obtain a formula for the MGF. In subsection 2.2, we derive the exact formula in Eq. (20) for the PDF of the coincidence time $T_N = T$ for arbitrary $N$. In subsection 2.3, we analyse this formula for $N = 2, 3$. In subsection 2.4, we derive the asymptotic behaviours for $T \to 0$ and $T \to \infty$ for arbitrary $N$.
In subsection 2.5, we obtain the mean and variance for arbitrary $N$. Finally, in subsection 2.6, we analyse the large-$N$ limit via Coulomb gas techniques. In Section 3, we extend all previous results (except the Coulomb gas) to the Brownian motions in the same fashion, and derive the exact PDF for arbitrary $N$ Brownian motions given in Eqs. (25) and (24). In Section 4, we study some properties of the coincidence time for $N$ Brownian walkers with arbitrary fixed endpoints. We obtain the exact expression of its PDF for $N = 2$ and $N = 3$, the small-$T$ asymptotics for any $N$, and the large-$T$ asymptotics for arbitrary initial and final points. The latter is obtained using the properties of the ground state of the Lieb-Liniger model.

2. Coincidence time for Brownian bridges and KPZ equation with droplet initial conditions

2.1. Bethe ansatz solution of the Lieb-Liniger model for droplet initial conditions

The Lieb-Liniger Hamiltonian in Eq. (14) is exactly solvable using the Bethe ansatz. Here we need only the symmetric eigenstates of $\hat H_N(c)$ which, in the repulsive case $c > 0$, are single-particle states. The eigen-energies $E_{\mathbf{k}}$ of this system, defined on a circle of radius $L$, form in the large-$L$ limit a continuum indexed by $N$ real momenta $\{k_i\}$, such that
$$\hat H_N(c)\, |\Psi^c_{\mathbf{k}}\rangle = E_{\mathbf{k}}\, |\Psi^c_{\mathbf{k}}\rangle, \quad \text{with} \quad E_{\mathbf{k}} = \mathbf{k}^2 = \sum_{i=1}^N k_i^2, \qquad (28)$$
$$\langle \mathbf{x} | \Psi^c_{\mathbf{k}} \rangle = \frac{C^c_{\mathbf{k}}}{N!} \sum_{\sigma \in S_N} \prod_{i<j} \Big(1 - \frac{i c\; \mathrm{sign}(x_j - x_i)}{k_{\sigma(j)} - k_{\sigma(i)}}\Big)\, e^{i \sum_{j=1}^N x_j k_{\sigma(j)}}, \qquad (29)$$
$$C^c_{\mathbf{k}} = \Bigg[\prod_{i<j} \frac{(k_i - k_j)^2}{(k_i - k_j)^2 + c^2}\Bigg]^{1/2}, \qquad (30)$$
where the $|\Psi^c_{\mathbf{k}}\rangle$ denote the eigenstates of $\hat H_N(c)$. The calculation is similar to the one in Refs. [7, 8], except that we retain here for $c > 0$ only the particle states.
Consider the imaginary-time propagator at coinciding endpoints,
$$Z_N(\mathbf{0}, t | \mathbf{0}; c) = \langle \mathbf{0} | e^{-\hat H_N(c) t} | \mathbf{0} \rangle = \int_{\mathbb{R}^N} \frac{d\mathbf{k}}{(2\pi)^N}\; \langle \mathbf{0} | \Psi^c_{\mathbf{k}} \rangle \langle \Psi^c_{\mathbf{k}} | \mathbf{0} \rangle\, e^{-E_{\mathbf{k}} t}. \qquad (31)$$
We now use that $\langle \mathbf{0} | \Psi^c_{\mathbf{k}} \rangle = C^c_{\mathbf{k}}$, and we arrive at the following Laplace transform for the PDF of the coincidence time for $N$ Brownian bridges,
$$\langle e^{-cT_N(t)} \rangle_{\rm BB} = (4\pi t)^{\frac{N}{2}} \int_{\mathbb{R}^N} \frac{d\mathbf{k}}{(2\pi)^N}\; e^{-\mathbf{k}^2 t} \prod_{i<j} \frac{(k_i - k_j)^2}{(k_i - k_j)^2 + c^2}, \qquad (32)$$
where we have divided by the result at $c = 0$, which coincides with the free propagator $Z_N(\mathbf{0}, t | \mathbf{0}; 0) = (4\pi t)^{-\frac{N}{2}}$. We note that there is an alternative formula for the moments of the directed polymer problem, which was derived using Macdonald processes [9, 10]. Using this formula we can write the MGF of $N$ Brownian bridges equivalently as
$$\langle e^{-cT_N(t)} \rangle_{\rm BB} = (4\pi t)^{\frac{N}{2}} \int_{(i\mathbb{R})^N} \frac{d\mathbf{z}}{(2i\pi)^N}\; e^{t \mathbf{z}^2} \prod_{i<j} \frac{z_i - z_j}{z_i - z_j + c}. \qquad (33)$$
Note that this formula is valid only for $c > 0$; otherwise the contours are different. Using the symmetrization identity
$$\sum_{\sigma \in S_n} \prod_{i>j} \frac{z_{\sigma(i)} - z_{\sigma(j)} + c}{z_{\sigma(i)} - z_{\sigma(j)}} = n!, \qquad (34)$$
one can show that this formula is in agreement with (32).

2.2. Probability Distribution Function of the coincidence time

In this section we obtain from Eq. (32) an expression for the PDF of the coincidence time for the $N$ Brownian bridges. First, we use the Cauchy identity
$$\prod_{i<j} \frac{(k_i - k_j)^2}{(k_i - k_j)^2 + c^2} = \det_{1 \le \ell, m \le N}\Big[\frac{c}{c + i(k_\ell - k_m)}\Big] = \sum_{\sigma \in S_N} \mathrm{sign}(\sigma) \prod_{\ell=1}^N \frac{c}{c + i(k_\ell - k_{\sigma(\ell)})}. \qquad (35)$$
We then introduce the auxiliary variables $\mathbf{r} = (r_1, \cdots, r_N)$ such that
$$\prod_{i<j} \frac{(k_i - k_j)^2}{(k_i - k_j)^2 + c^2} = c^N \sum_{\sigma \in S_N} \mathrm{sign}(\sigma) \int_{\mathbb{R}_+^N} d\mathbf{r}\; e^{-c \sum_\ell r_\ell} \prod_{\ell=1}^N e^{-i r_\ell (k_\ell - k_{\sigma(\ell)})} \qquad (36)$$
$$= c^N \sum_{\sigma \in S_N} \mathrm{sign}(\sigma) \int_{\mathbb{R}_+^N} d\mathbf{r}\; e^{-c \sum_\ell r_\ell} \prod_{\ell=1}^N e^{-i k_\ell (r_\ell - r_{\sigma(\ell)})}. \qquad (37)$$
Replacing in Eq.
(32), we are now able to compute the integrals over $\mathbf{k}$, yielding
$$\langle e^{-cT_N(t)} \rangle_{\rm BB} = c^N \int_{\mathbb{R}_+^N} d\mathbf{r}\; e^{-c \sum_\ell r_\ell} \sum_{\sigma \in S_N} \mathrm{sign}(\sigma) \prod_{\ell=1}^N e^{-\frac{(r_\ell - r_{\sigma(\ell)})^2}{4t}} \qquad (38)$$
$$= c^N \int_{\mathbb{R}_+^N} d\mathbf{r}\; e^{-c \sum_\ell r_\ell}\; \det_{1 \le i,j \le N}\Big[e^{-\frac{(r_i - r_j)^2}{4t}}\Big]. \qquad (39)$$
Inverting the Laplace transform of this expression, we obtain the PDF $P_{N,\rm BB}(T)$ of the rescaled coincidence time $T_N = T_N(t = 1)$ as
$$P_{N,\rm BB}(T) = \partial_T^N \int_{\mathbb{R}_+^N} d\mathbf{r}\; \delta\Big(T - \sum_\ell r_\ell\Big) \det_{1 \le i,j \le N}\Big[e^{-\frac{(r_i - r_j)^2}{4}}\Big] = \partial_T^{N+1} \int_{\mathbb{R}_+^N} d\mathbf{r}\; \Theta\Big(T - \sum_\ell r_\ell\Big) \det_{1 \le i,j \le N}\Big[e^{-\frac{(r_i - r_j)^2}{4}}\Big], \qquad (40)$$
where $\Theta(x)$ is the Heaviside step function. We start by analysing the cases $N = 2, 3$, where the PDF can be computed explicitly, before considering the general behaviour for arbitrary values of $N$. This formula (40) is the most compact general expression that we could find. We show in the next section that it can be used to derive explicitly the PDF for small values of $N = 2, 3$.

2.3. Full distribution of the coincidence time for $N = 2, 3$ Brownian bridges

Distribution for $N = 2$. In the case of $N = 2$, we will see how to extract the full PDF from Eq. (40). Setting $N = 2$, we may compute exactly the determinant in the integrand, yielding
$$P_{2,\rm BB}(T) = \partial_T^3 \int_0^T dr_1 \int_0^{T - r_1} dr_2\; \Big[1 - e^{-\frac{(r_1 - r_2)^2}{2}}\Big] = -\partial_T^3 \int_0^T dr_1 \int_0^{T - r_1} dr_2\; e^{-\frac{(r_1 - r_2)^2}{2}}. \qquad (41)$$
The first term in the integrand vanishes when deriving with respect to $T$. The PDF can then be derived in a few steps as
$$P_{2,\rm BB}(T) = -\partial_T^2 \int_0^T dr\; e^{-\frac{(2r - T)^2}{2}} = -\partial_T^2 \int_0^T du\; e^{-\frac{u^2}{2}} = T\, e^{-\frac{T^2}{2}}. \qquad (42)$$
The asymptotic behaviours of this PDF are simple to extract as
$$P_{2,\rm BB}(T) = \begin{cases} T + O(T^3), & T \to 0, \\ T\, e^{-\frac{T^2}{2}}, & T \to \infty. \end{cases} \qquad (43)$$
In Fig. 2, we compare our analytical formula, Eq. (42), with numerical simulations of Brownian bridges (see Appendix C for the details of the simulations), showing an excellent agreement. Note that in the case $N = 2$, this result can be obtained directly from Eq.
(10) by setting the final distance between the diffusive particles to $d = 0$ and ensuring the normalisation of the probability,
$$P_{2,\rm BB}(T) = \frac{P_{\rm joint}(T, d = 0)}{\int_0^\infty P_{\rm joint}(T, d = 0)\, dT} = T\, e^{-\frac{T^2}{2}}. \qquad (44)$$
As noted previously, this method cannot be generalised to larger values of $N$, in contrast with our exact formula in Eq. (40).

Distribution for $N = 3$. Starting from $N = 3$, the distribution cannot be obtained from a simple rescaling of the local time of a single Brownian motion, and we need to use our result in Eq. (40) for the Brownian bridge. In the case of $N = 3$, the determinant appearing in the integrand can still be computed quite easily, yielding
$$P_{3,\rm BB}(T) = \partial_T^4 \int_0^T dr_1 \int_0^{T - r_1} dr_2 \int_0^{T - r_1 - r_2} dr_3 \Bigg[1 - 3\, e^{-\frac{(r_1 - r_2)^2}{2}} + 2 \prod_{i=1}^3 e^{-\frac{(r_i - r_{i+1})^2}{4}}\Bigg], \qquad (45)$$
where we used the symmetry between the $r_i$'s (with $r_4 \equiv r_1$ in the cyclic product). After some computations (see Appendix B for details), we finally obtain the PDF
$$P_{3,\rm BB}(T) = T^2 e^{-\frac{T^2}{2}} + \frac{1}{8} \sqrt{\frac{2\pi}{3}}\; e^{-\frac{T^2}{8}}\, (T^2 - 4)\; \mathrm{erf}\Big(\sqrt{\frac{3}{8}}\, T\Big). \qquad (46)$$
Its asymptotic behaviours read
$$P_{3,\rm BB}(T) = \begin{cases} \dfrac{T^5}{80} + O(T^7), & T \to 0, \\[1ex] \dfrac{1}{8} \sqrt{\dfrac{2\pi}{3}}\; (T^2 - 4)\, e^{-\frac{T^2}{8}} + O(e^{-\frac{T^2}{2}}), & T \to \infty. \end{cases} \qquad (47)$$

Small $T$ limit of the Probability Distribution Function. To study the small-$T$ limit of $P_{N,\rm BB}(T)$, one needs to investigate the $c \to +\infty$ limit of the expectation $\langle e^{-cT_N} \rangle_{\rm BB}$, which is given in Eq. (32). The trajectories which contribute to a small coincidence time are repelled from each other. In the Lieb-Liniger picture, this is consistent with the repulsive case, where the states are described as single-particle states rather than bound states. Taking the large $c \to +\infty$ limit of Eq. (32), we obtain
$$\langle e^{-cT_N} \rangle_{\rm BB} \approx \frac{1}{\pi^{\frac{N}{2}}}\; c^{-N(N-1)} \int_{\mathbb{R}^N} d\mathbf{k}\; e^{-\mathbf{k}^2} \prod_{i<j} (k_i - k_j)^2 = \frac{2^{-\frac{N(N-1)}{2}}\, G(N+2)}{c^{N(N-1)}}, \qquad (48)$$
where $G(n) = \prod_{k=1}^{n-2} k!$ is the Barnes $G$-function. We have used the Mehta integral formula, e.g. (1.5)-(1.6) in [11]. Inverting the Laplace transform, we obtain the small-$T$ behaviour
$$P_{N,\rm BB}(T) \underset{T \to 0}{=} \frac{2^{-\frac{N(N-1)}{2}}\, G(N+2)}{\Gamma(N(N-1))}\, T^{N(N-1)-1} + O(T^{N(N-1)+1}). \qquad (49)$$
Note that setting $N = 2, 3$ and using $G(4) = 2$, $G(5) = 12$ and $\Gamma(6) = 120$, one recovers explicitly the result of the first line of Eqs. (43) and (47).
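Two independent checks of the $N = 2$ bridge result can be run in a few lines. The sketch below (NumPy assumed; grid sizes, the seed, and the window width are arbitrary illustrative choices, not the paper's Appendix C code) first verifies by quadrature that the Bethe-ansatz MGF (32) at $N = 2$, $t = 1$ coincides with the Laplace transform of $P_{2,\rm BB}(T) = T e^{-T^2/2}$ of Eq. (42), then simulates two Brownian bridges directly, in the spirit of the Fig. 2 comparison:

```python
import numpy as np

# --- (i) Eq. (32) for N = 2, t = 1 vs the Laplace transform of Eq. (42) ---
c = 1.0
k = np.linspace(-8.0, 8.0, 1601)
K1, K2 = np.meshgrid(k, k, indexing="ij")
h = k[1] - k[0]
integrand = np.exp(-(K1**2 + K2**2)) * (K1 - K2) ** 2 / ((K1 - K2) ** 2 + c**2)
mgf_bethe = 4 * np.pi / (2 * np.pi) ** 2 * integrand.sum() * h * h

T = np.linspace(0.0, 15.0, 150001)
pdf = T * np.exp(-(T**2) / 2)
hT = T[1] - T[0]
mgf_pdf = (np.exp(-c * T) * pdf).sum() * hT

# --- (ii) direct simulation of two Brownian bridges (cf. Fig. 2 / App. C) ---
rng = np.random.default_rng(2)
M, n = 4000, 2000
dt, eps = 1.0 / n, 0.2
tau = np.arange(1, n + 1) * dt
d = np.cumsum(rng.normal(0.0, np.sqrt(4 * dt), size=(M, n)), axis=1)
db = d - tau[None, :] * d[:, -1:]      # pin the difference process: db(1) = 0
T2 = (2.0 / eps) * dt * np.count_nonzero(np.abs(db) < eps / 2, axis=1)

print(mgf_bethe, mgf_pdf)   # the two MGF evaluations agree, ~ 0.344 at c = 1
print(T2.mean())            # ~ sqrt(pi/2) ~ 1.25, the mean of T exp(-T^2/2)
```

The empirical mean sits slightly below $\sqrt{\pi/2}$ because of the finite window $\epsilon$ and time step, as expected for an occupation-time estimator of a local time.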
Large $T$ limit of the Probability Distribution Function. To study the large-$T$ limit of $P_{N,\rm BB}(T)$, one needs to investigate the $\bar c \to +\infty$ limit of the expectation $\langle e^{\bar c T_N} \rangle_{\rm BB}$, which is dominated by large values of $T_N$. The trajectories which contribute to a large coincidence time are attracted to each other. In the Lieb-Liniger picture, this corresponds to the attractive case, where particles form bound states called strings. Upon increasing $\bar c = -c$, the potential becomes more and more attractive, and the trajectories become dominated by the configuration where all the bosons are bound into a single string. Mathematically, for $c = -\bar c < 0$ the expression of the moments of the directed polymer partition function is more involved, as it contains a sum over string states [7, 8, 9, 12]. Nonetheless, in the limit $\bar c \to +\infty$, the energy spectrum of the Lieb-Liniger model is dominated by its ground state, which is a single string containing all particles, with energy $E_0(N) = -\bar c^2\, \frac{N(N^2 - 1)}{12}$. It follows from the contribution of the ground state to the moments (see e.g. (65)-(66) in the Supp. Mat. of [13], replacing $t$ by $\bar c^2 t$ and multiplying by $\bar c^N$) that
$$\langle e^{\bar c T_N} \rangle_{\rm BB} = \frac{N!\, (4\pi)^{\frac{N-1}{2}}\, \bar c^{\,N-1}}{N^{3/2}}\; e^{\bar c^2 \frac{N(N^2 - 1)}{12}}\, \Big[1 + O\big(e^{-\frac{1}{4} N(N-1) \bar c^2}\big)\Big]. \qquad (50)$$
One can check that this behaviour is consistent with the following $T \to \infty$ asymptotics,
$$P_{N,\rm BB}(T) = \frac{N!\, 2^{N-1} \pi^{\frac{N}{2}-1}}{N^{3/2}}\, \sqrt{\alpha_N}\, (-\partial_T)^{N-1}\, e^{-\alpha_N T^2} + O(e^{-\beta_N T^2}) \qquad (51)$$
$$= \frac{N!\, 2^{N-1} \pi^{\frac{N}{2}-1}}{N^{3/2}}\, \alpha_N^{N/2}\, H_{N-1}(\sqrt{\alpha_N}\, T)\, e^{-\alpha_N T^2} + O(e^{-\beta_N T^2}), \qquad (52)$$
where $H_p(x) = e^{x^2} (-\partial_x)^p e^{-x^2}$ is the Hermite polynomial of degree $p$. The exponential factors are equal to
$$\alpha_N = \frac{3}{N(N^2 - 1)}, \qquad \beta_N = \frac{3}{N^3 - 3N^2 + 2N}. \qquad (53)$$
This is checked by calculating the Laplace transform of (51) using a saddle-point approximation (with a saddle point at $T^* = \bar c/(2\alpha_N) > 0$ for $\bar c = -c > 0$). For completeness, we provide this asymptotics explicitly for $N = 2, 3$.

• $N = 2$: $P_{2,\rm BB}(T) = T\, e^{-\frac{T^2}{2}}$. This matches Eq. (42).
• $N = 3$: $P_{3,\rm BB}(T) = \frac{1}{8} \sqrt{\frac{2\pi}{3}}\; e^{-\frac{T^2}{8}}\, (T^2 - 4) + O(e^{-\frac{T^2}{2}})$. This matches Eq. (47).

2.5. Mean and variance of the coincidence time for $N$ Brownian bridges

In this section, we compute the first two cumulants of the Brownian bridge coincidence time $T_N$ for arbitrary values of $N$.

Mean value of $T_{N,\rm BB}$. To compute the first moment $\langle T_N \rangle_{\rm BB}$, there are two alternative methods: we may obtain it by expanding Eq. (33) for small $c$, or we may use the Brownian bridge propagator to compute it directly. In this section, we present both methods. We start by considering Eq. (33). We take a derivative of this equation with respect to $c$ to obtain
$$\langle T_N\, e^{-cT_N} \rangle_{\rm BB} = (4\pi)^{\frac{N}{2}} \int_{(i\mathbb{R})^N} \frac{d\mathbf{z}}{(2i\pi)^N}\; e^{\mathbf{z}^2} \prod_{i<j} \frac{z_i - z_j}{z_i - z_j + c}\; \sum_{i<j} \frac{1}{z_i - z_j + c}. \qquad (54)$$
We introduce the auxiliary variable $r$ and take the limit $c \to 0^+$ to recast this integral as
$$\langle T_N \rangle_{\rm BB} = (4\pi)^{\frac{N}{2}} \int_{(i\mathbb{R})^N} \frac{d\mathbf{z}}{(2i\pi)^N}\; e^{\mathbf{z}^2} \sum_{i<j} \int_0^\infty dr\; e^{-(z_i - z_j + 0^+) r}, \qquad (55)$$
where the $0^+$ term is written explicitly for convergence. We may then change variables $z \to i k$ and compute the integrals for $\ell \neq i, j$, yielding
$$\langle T_N \rangle_{\rm BB} = 4\pi\, \frac{N(N-1)}{2} \int_0^\infty dr \int_{\mathbb{R}^2} \frac{dk_1\, dk_2}{(2\pi)^2}\; e^{-k_1^2 - k_2^2}\, e^{-i(k_1 - k_2) r - 0^+ r} = \frac{N(N-1)}{2} \int_0^\infty dr\; e^{-\frac{r^2}{2}} = \frac{N(N-1)}{2} \sqrt{\frac{\pi}{2}}. \qquad (56)$$
Alternatively, this computation can be done by introducing the Brownian bridge propagator
$$P_{\rm BB}(x, \tau | x_0, 0, 1) = \frac{e^{-\frac{(x - x_0)^2}{4D\tau(1 - \tau)}}}{\sqrt{4\pi D\, \tau (1 - \tau)}}, \qquad (57)$$
where $P_{\rm BB}(x, \tau | x_0, 0, 1)$ denotes the propagator of the Brownian motion from $x_0$ at time $t = 0$ to $x$ at time $\tau$, conditioned on being at $x_0$ at time $t = 1$. The mean value of the coincidence time of our process can then simply be computed as (now taking $D = 1$)
$$\langle T_N \rangle_{\rm BB} = \int_0^1 d\tau \sum_{i \neq j} \langle \delta(x_i(\tau) - x_j(\tau)) \rangle = \int_0^1 d\tau \sum_{i \neq j} \int_{-\infty}^\infty P_{\rm BB}(x_i, \tau | x_0, 0, 1)^2\, dx_i = N(N-1) \int_0^1 \frac{d\tau}{\sqrt{\tau(1 - \tau)}} \int_{-\infty}^\infty \frac{dx}{4\pi}\; e^{-\frac{x^2}{2}} = \frac{N(N-1)}{2} \sqrt{\frac{\pi}{2}}, \qquad (58)$$
which coincides with the above result.

Variance of $T_{N,\rm BB}$. The second moment of the distribution can be obtained in a similar fashion, either using the cumulant generating function in Eq.
(33) or the Brownian bridge propagator in Eq. (57). Using the propagator, we obtain three different kinds of contributions,
$$\langle T_N^2 \rangle = \int_0^1 d\tau_1 \int_0^1 d\tau_2 \sum_{i \neq j,\; m \neq \ell} \langle \delta(x_i(\tau_1) - x_j(\tau_1))\, \delta(x_m(\tau_2) - x_\ell(\tau_2)) \rangle \qquad (59)$$
$$= \frac{N!}{(N-4)!} \Bigg[\int_0^1 d\tau \int_{-\infty}^\infty P_{\rm BB}(x, \tau | 0, 0)^2\, dx\Bigg]^2 + 8\, \frac{N!}{(N-3)!} \int_0^1 d\tau_1 \int_{\tau_1}^1 d\tau_2 \int_{-\infty}^\infty \int_{-\infty}^\infty P_{\rm BB}(x, \tau_1 | 0, 0, 1)^2\; P(y - x, \tau_2 - \tau_1 | 0, 1)\; P_{\rm BB}(y, \tau_2 | 0, 0, 1)\, dx\, dy + 4\, \frac{N!}{(N-2)!} \int_0^1 d\tau_1 \int_{\tau_1}^1 d\tau_2 \int_{-\infty}^\infty \int_{-\infty}^\infty P(x, \tau_1 | 0, 0)^2\; P(y - x, \tau_2 - \tau_1 | 0, 1)^2\, dx\, dy,$$
where $P(x, \tau | 0, 0) = (4\pi\tau)^{-1/2}\, e^{-x^2/(4\tau)}$ is the free Brownian propagator. The first term in this expansion gives a disconnected contribution $\frac{N!}{4(N-4)!}\, \langle T_2 \rangle^2$. After a careful computation, we obtain the variance for the Brownian bridge,
$$\mathrm{Var}(T_N)_{\rm BB} = \langle [T_N - \langle T_N \rangle_{\rm BB}]^2 \rangle_{\rm BB} = \frac{N!}{(N-3)!} \Big(\frac{8\pi}{9\sqrt{3}} - \frac{\pi}{2}\Big) + \frac{N!}{(N-2)!} \Big(1 - \frac{\pi}{4}\Big). \qquad (60)$$
This is the last result that we obtain for arbitrary $N$ Brownian bridges. In the limit of a large number of walkers, $N \to +\infty$, we find
$$\langle T_N \rangle_{\rm BB} \simeq \frac{1}{2} \sqrt{\frac{\pi}{2}}\, N^2, \qquad \mathrm{Var}(T_N)_{\rm BB} \simeq \Big(\frac{8\pi}{9\sqrt{3}} - \frac{\pi}{2}\Big)\, N^3. \qquad (61)$$
This scaling with $N$ is characteristic of the regime of typical fluctuations. We will now explore another regime, the large deviation regime, dominated by rare fluctuations.

2.6. Coulomb gas method for $N \to \infty$

We now study the limit of a large number of walkers, $N \to \infty$. We will study the moment generating function of the coincidence time $T_N$ in that limit for $c > 0$. As we show below, there is a natural scaling regime, $c \sim \sqrt{N}$, which allows the use of a Coulomb gas approach. This regime corresponds to large positive $c$, hence to events where the coincidence time $T_N$ is much smaller than its average $\langle T_N \rangle$. Exponentiating the integrand of Eq. (32), we get
$$\langle e^{-cT_N(t)} \rangle_{\rm BB} = \pi^{-N/2} \int_{\mathbb{R}^N} d\mathbf{k}\; e^{-S_N(\mathbf{k})}, \qquad (62)$$
where the action is given by
$$S_N(\mathbf{k}) = \sum_i k_i^2 + \sum_{i<j} \log\Big(1 + \frac{c^2}{(k_i - k_j)^2}\Big). \qquad (63)$$
For the two terms to be of the same order, we must scale $k_i = \sqrt{N}\, p_i$ and $c = \sqrt{N}\, \tilde c$.
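Before turning to the Coulomb gas, the cumulant formulas just obtained (Eqs. (56), (60) and (61)) can be tabulated directly. A useful consistency check is $N = 2$: since $N!/(N-3)!$ vanishes there as the polynomial $N(N-1)(N-2)$, Eq. (60) reduces to $2 - \pi/2$, the exact variance of $T e^{-T^2/2}$. A minimal script (Python assumed):

```python
import math

# Eq. (56): <T_N>_BB = N(N-1)/2 * sqrt(pi/2);  Eq. (60): Var(T_N)_BB.
def mean_bb(N):
    return N * (N - 1) / 2 * math.sqrt(math.pi / 2)

def var_bb(N):
    c3 = 8 * math.pi / (9 * math.sqrt(3)) - math.pi / 2   # coefficient of N!/(N-3)!
    c2 = 1 - math.pi / 4                                   # coefficient of N!/(N-2)!
    return N * (N - 1) * (N - 2) * c3 + N * (N - 1) * c2

print(mean_bb(2))               # sqrt(pi/2) ~ 1.2533, mean of T exp(-T^2/2)
print(var_bb(2))                # 2 - pi/2 ~ 0.4292, variance of T exp(-T^2/2)
print(var_bb(10**4) / 10**12)   # ~ 8 pi/(9 sqrt(3)) - pi/2, cf. Eq. (61)
```

The last line illustrates the $N^3$ scaling of the variance in Eq. (61).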
Introducing the empirical density $\rho(p) = \frac{1}{N} \sum_i \delta(p - p_i)$, we rewrite the action as
$$S_N(\mathbf{k}) = N^2\, S_{\tilde c}[\rho] + o(N^2), \qquad (64)$$
where we have defined the Coulomb gas energy functional
$$S_{\tilde c}[\rho] = \int_{\mathbb{R}} dp\; p^2 \rho(p) + \frac{1}{2} \int_{\mathbb{R}^2} dp\, dp'\; \rho(p)\rho(p') \log\Big(1 + \frac{\tilde c^2}{(p - p')^2}\Big). \qquad (65)$$
We can evaluate the integral in (62) using a saddle-point approximation and obtain
$$\langle e^{-cT_N(t)} \rangle_{\rm BB} \propto \exp\Big[-N^2\, \Psi\Big(\frac{c}{\sqrt{N}}\Big)\Big], \qquad (66)$$
where, from its definition, the rate function $\Psi(\tilde c)$ is defined on $\tilde c \in [0, +\infty)$ and must be positive, increasing, $\Psi'(\tilde c) \ge 0$, and concave, $\Psi''(\tilde c) \le 0$, with $\Psi(0) = 0$. It is determined by minimizing the Coulomb gas energy,
$$\Psi(\tilde c) = \min_\rho \Big\{ S_{\tilde c}[\rho] - \mu \Big(\int_{\mathbb{R}} \rho(p)\, dp - 1\Big) \Big\}, \qquad (67)$$
with respect to the density $\rho(p)$, where we have introduced a Lagrange multiplier $\mu$ to enforce the normalization constraint $\int_{\mathbb{R}} dp\, \rho(p) = 1$. We denote $\rho^*(\tilde c; p)$ the minimizer of the action $S_{\tilde c}[\rho]$ under the normalisation constraint. Note that for $\tilde c = 0$ there is no normalised density minimizing $S[\rho]$. Computing the functional derivative, we obtain an integral equation for $\rho^*(\tilde c; p)$, valid for $p$ in the support of $\rho^*(\tilde c; \cdot)$,
$$p^2 + \int_{\mathbb{R}} dp'\; \rho^*(\tilde c; p') \log\Big(1 + \frac{\tilde c^2}{(p - p')^2}\Big) = \mu(\tilde c). \qquad (68)$$
Multiplying this equation by $\rho^*(\tilde c; p)$ and integrating with respect to $p$, we obtain
$$\int_{\mathbb{R}^2} dp\, dp'\; \rho^*(\tilde c; p)\, \rho^*(\tilde c; p') \log\Big(1 + \frac{\tilde c^2}{(p - p')^2}\Big) = \mu(\tilde c) - \int_{\mathbb{R}} dp\; \rho^*(\tilde c; p)\, p^2. \qquad (69)$$
Replacing in Eq. (65), this yields a simpler expression, which can be interpreted as a virial theorem, for the rate function $\Psi(\tilde c)$ for general values of $\tilde c$,
$$\Psi(\tilde c) = \int_{\mathbb{R}} dp\; \frac{p^2}{2}\, \rho^*(\tilde c; p) + \frac{\mu(\tilde c)}{2}. \qquad (70)$$
Note also that taking a derivative of Eq. (68) with respect to $p$, we obtain an equation that does not depend on the Lagrange multiplier $\mu$ and reads, for any $p$ in the support of $\rho^*(\tilde c; \cdot)$,
$$p = \int_{\mathbb{R}} \rho^*(\tilde c; p')\; \frac{dp'}{p - p'}\; \frac{\tilde c^2}{\tilde c^2 + (p - p')^2}. \qquad (71)$$
Solving this integral equation for arbitrary values of $\tilde c$ is quite non-trivial and left for future studies. Here we will only solve this equation perturbatively in the regime $\tilde c \to \infty$.
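In the $\tilde c \to \infty$ limit, the kernel in Eq. (71) becomes a principal-value kernel, and (as derived below, Eqs. (73) and (76)) the solution is the Wigner semicircle $\rho_{\rm sc}(p) = \sqrt{2 - p^2}/\pi$. This can be checked numerically by regularising the principal value with a small width $\eta$; a sketch (NumPy assumed; the grid size and $\eta$ are arbitrary choices, and the substitution $p' = \sqrt{2}\sin\theta$ removes the edge singularities):

```python
import numpy as np

n = 200001
theta = np.linspace(-np.pi / 2, np.pi / 2, n)
dth = theta[1] - theta[0]
pp = np.sqrt(2) * np.sin(theta)
w = 2 * np.cos(theta) ** 2 / np.pi     # rho_sc(p') dp' = (2/pi) cos^2(theta) dtheta

def hilbert(p, eta=1e-3):
    # eta-regularised principal value of  int rho_sc(p')/(p - p') dp'
    y = w * (p - pp) / ((p - pp) ** 2 + eta**2)
    return ((y[1:] + y[:-1]) / 2).sum() * dth   # trapezoidal rule in theta

for p in (0.3, 0.7, 1.1):
    print(p, hilbert(p))   # second column ~ p, as required by the limiting equation
```

The regularisation error is $O(\eta^2)$ away from the edges, so the output reproduces the identity to high accuracy for $p$ inside the support $(-\sqrt{2}, \sqrt{2})$.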
The density $\rho^*(\tilde c; p)$ can be expressed as a perturbative series in $\tilde c^{-2}$,
$$\rho^*(\tilde c; p) = \rho_{\rm sc}(p) + \frac{1}{\tilde c^2}\, \delta\rho_1(p) + O\Big(\frac{1}{\tilde c^4}\Big), \qquad (72)$$
where the leading order of the density, $\rho_{\rm sc}(p)$, does not depend on $\tilde c$. It is the solution of the integral equation
$$\mathrm{PV}\!\int_{\mathbb{R}} \frac{\rho_{\rm sc}(p')\, dp'}{p - p'} = p. \qquad (73)$$
The general solution of the equation $\mathrm{PV}\int_a^b \frac{\rho(p')\, dp'}{p - p'} = g(p)$ (i.e. assuming a single-interval support) can be obtained by the Tricomi formula [14],
$$\rho(p) = \frac{1}{\pi \sqrt{p - a}\, \sqrt{b - p}} \Bigg[\int_a^b \rho(p')\, dp' - \mathrm{PV}\!\int_a^b \frac{dp'}{\pi}\; \frac{\sqrt{p' - a}\, \sqrt{b - p'}}{p - p'}\; g(p')\Bigg]. \qquad (74)$$
We expect in this case that $a = -b$ and that the density $\rho_{\rm sc}(p)$ is normalised. Inserting these two conditions, we obtain
$$\rho(p) = \frac{2 + a^2 - 2p^2}{2\pi \sqrt{a^2 - p^2}}, \qquad -a \le p \le a. \qquad (75)$$
Imposing that this density vanishes at the edges $\pm a$, we obtain $a = \sqrt{2}$. The solution of Eq. (73) is then the Wigner semicircle law,
$$\rho_{\rm sc}(p) = \frac{\sqrt{2 - p^2}}{\pi}, \qquad -\sqrt{2} \le p \le \sqrt{2}. \qquad (76)$$
Computing the next-order term, we obtain, for $p \in [-\sqrt{2}, \sqrt{2}]$,
$$\mathrm{PV}\!\int_{-\sqrt{2}}^{\sqrt{2}} \frac{\delta\rho_1(p')\, dp'}{p - p'} = \int_{-\sqrt{2}}^{\sqrt{2}} \rho_{\rm sc}(p')\, (p - p')\, dp' = p. \qquad (77)$$
The solution at first order of perturbation must have a vanishing integral, $\int_{-\sqrt{2}}^{\sqrt{2}} \delta\rho_1(p)\, dp = 0$, for the density to remain normalised. Using again the Tricomi formula in Eq. (74), this yields
$$\delta\rho_1(p) = \frac{1 - p^2}{\pi \sqrt{2 - p^2}}, \qquad -\sqrt{2} \le p \le \sqrt{2}. \qquad (78)$$
We may then compute the rate function at order $\tilde c^{-2}$. Evaluating the Lagrange multiplier from Eq. (68) at, e.g., $p = 0$, we obtain
$$\mu(\tilde c) = 2 \log \tilde c - 2 \int_{\mathbb{R}} dp\; \rho_{\rm sc}(p) \log |p| + \frac{1}{\tilde c^2} \Bigg[\int_{\mathbb{R}} dp\; \rho_{\rm sc}(p)\, p^2 - 2 \int_{\mathbb{R}} dp\; \delta\rho_1(p) \log |p|\Bigg] + O(\tilde c^{-4}) = 2 \log \tilde c + 1 + \log 2 + \frac{1}{\tilde c^2} \Big(\frac{1}{2} + 1\Big) + O(\tilde c^{-4}), \qquad (79)$$
where we used the large-$\tilde c$ expansion
$$\log\Big(1 + \frac{\tilde c^2}{(p - p')^2}\Big) = 2 \log \tilde c - 2 \log |p - p'| + \frac{1}{\tilde c^2}\, (p - p')^2 + O\Big(\frac{1}{\tilde c^4}\Big). \qquad (80)$$
Gathering the different terms, we obtain from (70) the large-$\tilde c$ behaviour
$$\Psi(\tilde c) = \log \tilde c + \frac{3}{4} + \frac{1}{2} \log 2 + \frac{1}{2 \tilde c^2} + O\Big(\frac{1}{\tilde c^4}\Big), \qquad (81)$$
which is consistent, to leading order, with $\Psi(\tilde c)$ being increasing and concave. This behaviour, obtained here in the regime $N, c \to +\infty$ with $\tilde c = c/\sqrt{N}$ large, can be compared with the large-$c$ expansion at fixed $N$ that we obtained in Eq.
(48),

−(1/N²) log⟨e^{−cT_N}⟩_BB ≃_{c→+∞} −log G(N + 2)/N² + (N(N − 1)/N²)(log c + (1/2) log 2)    (82)
 ≃_{N→+∞} log(c/√N) + 3/4 + (1/2) log 2,    (83)

where we used the asymptotic behaviour of the Barnes function, log G(z + 2) = (z²/2) log z − 3z²/4 + O(z log z). This estimate coincides precisely with the first three leading terms at large c̃ in (81). The matching of these two regimes is illustrated in Figure 4. If we assume that the above variational problem has a well-behaved solution for any c̃, then the MGF has the large deviation form of Eq. (66). That is then consistent with the PDF of T having the following large deviation tail in the regime T ∼ N^{3/2},

P_{N,BB}(T) ∼ e^{−N² G(T̃)},  T̃ = T/N^{3/2}.    (84)

Indeed, substituting this form in the definition of the Laplace transform, the integral is dominated at large N by the maximum of its integrand, and one finds the small-T̃ expansion

G(T̃) = log(1/T̃) − 1/4 + (1/2) log 2 + T̃²/2 + o(T̃²).    (87)

Again, the first three terms match exactly the large N limit of (49). The large deviation regime studied above thus corresponds to events of probability −log P = O(N²) in the double limit N, T → +∞ with T ∼ N^{3/2}, i.e. to coincidence times T much smaller than the average (and typical) one ⟨T⟩ = ⟨T_N⟩_BB. It equivalently corresponds to configurations of Brownian paths which strongly repel each other; in the Laplace variable this is the regime c ∼ √N > 0. This is illustrated in Figure 4. It is then clear from the figure that the result (49) for the PDF, which corresponds to small T = O(1) and fixed N, should match for large N the large-c̃ limit. One cannot exclude, however, that there exist intermediate deviation regimes, e.g. for T ∼ ⟨T_N⟩_BB ∼ N². One notes for instance that the upper tail at fixed N and large T behaves as P_N(T) ∼ e^{−T²/N³} (see formula (51)). Inserting T ∼ ⟨T_N⟩_BB we see that this factor in the upper tail can be of order e^{−O(N)} at most.
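The matching of (82)-(83) with the Coulomb-gas expansion (81) can be tested numerically, using log G(N + 2) = Σ_{k=1}^N log k! for the Barnes function at integer argument (a stdlib-only sketch of ours; the values N = 100, 400 and c̃ = 5 are arbitrary choices):

```python
import math

def log_barnes_G(n):
    # log G(n+2) = sum_{k=1}^{n} log(k!) for the Barnes G-function at integers
    return sum(math.lgamma(k + 1) for k in range(1, n + 1))

c_tilde = 5.0
rhs = math.log(c_tilde) + 0.75 + 0.5 * math.log(2)   # large-c̃ limit, Eqs. (81)/(83)
diffs = []
for N in (100, 400):
    c = c_tilde * math.sqrt(N)
    # fixed-N, large-c expression of Eq. (82)
    lhs = -log_barnes_G(N) / N**2 + (N * (N - 1) / N**2) * (math.log(c) + 0.5 * math.log(2))
    diffs.append(abs(lhs - rhs))
print(diffs)   # the discrepancy shrinks as N grows
```
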
Studying this regime at large N would require extending the above Coulomb gas to the c < 0 side, which is left for future studies.

3. Coincidence time for Brownian motions and KPZ equation with flat initial condition

Let us now turn to the case of the Brownian motion, where the final points are not fixed (i.e. they are integrated upon). This is connected to the directed polymer with one free endpoint, equivalently to the Kardar-Parisi-Zhang equation with a flat initial condition. A solution for the latter was given in [15, 16]. We will use this solution here both for c > 0 and c < 0. Since it was obtained by first calculating the moments for the half-flat initial condition, we start with the latter, which corresponds to N Brownians all starting at 0 and ending on a given half line.

3.1. Bethe ansatz solution of the Lieb-Liniger model for half-flat initial conditions

Consider the partition sum of the directed polymer with half-flat initial condition

Z_w(x, t) = ∫_{−∞}^0 dy e^{wy} Z(x, y, t).    (88)

From Eq. (52) in [17], or Eq. (88) in [16], we have the following: restoring the factors of c̄ (by a change of units), with c = −c̄, restricting to c ≥ 0 and retaining only the single particle states (as they are the only states of the repulsive Lieb-Liniger model),

(Figure 4 caption, continued:) ⟨T_N⟩ and has rate N². We have shown that its left part, which corresponds to T ≪ N^{3/2} (and c̃ ≫ 1 for the corresponding MGF), matches the regime T ∼ 1, which is also indicated. The matching of its right part, i.e. N^{3/2} ≪ T ≪ N² (and c̃ ≪ 1 for the corresponding MGF), towards the typical regime remains to be studied. We note that the right tail of the regime of typical fluctuations is consistent with the existence of a right large deviation regime for |T − ⟨T_N⟩| ∼ N² of speed N, which remains to be studied.
we obtain

⟨Z_w(x, t)^N⟩ = Π_{j=1}^N ∫_R (dk_j/2π) e^{−k_j² t − i x k_j}/(i k_j + w) Π_{1≤i<j≤N} [(k_i − k_j)²/((k_i − k_j)² + c²)] [(i k_i + i k_j + 2w + c)/(i k_i + i k_j + 2w)].    (89)

From this we obtain the MGF for the coincidence time of N Brownian walkers which start at x and reach the half line ]−∞, 0], as

⟨e^{−cT_N}⟩_half−BM = ⟨Z_w(x, t)^N⟩/Z_w^0(x, t)^N,  Z_w^0(x, t) = ∫_R (dk/2π) e^{−k² t − i x k}/(i k + w).    (90)

It is written here in the presence of an extra weight e^{wy}, which can then be considered in the limit w = 0⁺.

3.2. Flat initial conditions as a limit of the half-flat

The moments of the partition sum of the directed polymer with one free endpoint, i.e. Z_flat(t) = ∫_R dy Z(x, y, t), associated to the Kardar-Parisi-Zhang equation with flat initial condition, can be obtained from the ones for the half-flat initial condition in the double limit x → −∞ and w → 0⁺, as performed in [16]. In that limit only paired strings and strings with zero momenta remain: this provides the MGF for the coincidence time of unconstrained Brownian walkers all starting at 0.

3.2.1. Moment generating function and probability distribution function for N = 2, 3

We start with the lowest moments, from Eqs. (52) and (56) in [16], and consider c > 0, in which case we keep only the contribution of single particle states. The first moment is simply ⟨Z_flat(t)⟩ = 1 for all times t. We consequently focus on the moments at time t = 1.

N = 2. From the second moment we obtain the result for two walkers

⟨e^{−cT_2}⟩_BM = ⟨Z_flat(1)²⟩ = 4c ∫_R (dk/2π) e^{−2k²}/(4k² + c²) = e^{c²/2}(1 − erf(c/√2)).    (91)

The inverse Laplace transform then yields

P_{2,BM}(T) = L^{−1}⟨e^{−cT_2}⟩_BM = ∫_R (dk/2π) e^{−2k²} 4 cos(2kT) = √(2/π) e^{−T²/2}.    (92)

This result compares very well with the numerical simulations in Fig. 5.
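Equations (91) and (92) can be cross-checked against each other by quadrature: the Laplace transform of the PDF (92) must reproduce the MGF (91), and the PDF must be normalised. A stdlib-only sketch (the Simpson routine, cutoff T = 30 and test values of c are our own choices):

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

p2 = lambda T: math.sqrt(2 / math.pi) * math.exp(-T**2 / 2)   # PDF of Eq. (92)

norm = simpson(p2, 0.0, 30.0)                                  # should be 1
for c in (0.0, 0.5, 2.0):
    laplace = simpson(lambda T: math.exp(-c * T) * p2(T), 0.0, 30.0)
    mgf = math.exp(c**2 / 2) * (1 - math.erf(c / math.sqrt(2)))  # MGF of Eq. (91)
    assert abs(laplace - mgf) < 1e-8
print("normalization:", norm)
```
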
e −cT3 BM = Z flat (1) 3 = 12c R dk 2π k 2 e −2k 2 (k 2 + c 2 )(4k 2 + c 2 )(93) The inverse Laplace transform then yields P 3,BM (T ) = L −1 e −cT3 BM = R dk 2π 4e −2k 2 (cos(kT ) − cos(2kT )) = 2 π e − T 2 2 (e 3 8 T 2 − 1)(94) This result compares very well with the numerical simulations in Fig. 6. For N even e −cT N BM = (2c) N 2 N ! ( N 2 )! N 2 p=1 R dk p 2π e −2k 2 p 4k 2 p + c 2 (95) × 1 p<q N 2 (k p − k q ) 2 (k p − k q ) 2 + c 2 (k p + k q ) 2 (k p + k q ) 2 + c 2 For N odd e −cT N BM = (2c) N −1 2 N ! ( N −1 2 )! N −1 2 p=1 R dk p 2π k 2 p e −2k 2 p (k 2 p + c 2 )(4k 2 p + c 2 ) (96) × 1 p<q N −1 2 (k p − k q ) 2 (k p − k q ) 2 + c 2 (k p + k q ) 2 (k p + k q ) 2 + c 2 These results are obtained from Eq. (108) in [16] by noting that for c > 0 and N even only particle states with paired momenta (k 1 , −k 1 , . . . , k N/2 , −k −N/2 ) contribute, while for N odd there are N − 1 paired momenta and one zero momentum. Probability Distribution Function of the coincidence time In this section, we obtain from Eqs. (95) and (96) an expression for the PDF of the coincidence time for N Brownian motions. For N even For p ∈ [1, N/2], define the conjugate variables X 2p = c − 2ik p , X 2p−1 = c + 2ik p(97) and observe that (k p − k q ) 2 (k p − k q ) 2 + c 2 (k p + k q ) 2 (k p + k q ) 2 + c 2 = (X 2q − X 2p )(X 2q−1 − X 2p−1 )(X 2q−1 − X 2p )(X 2q − X 2p−1 ) (X 2p + X 2q )(X 2p−1 + X 2q−1 )(X 2p−1 + X 2q )(X 2p + X 2q−1 ) (98) Then one rewrites Eq. (95) as e −cT N BM = (2c) N 2 N ! ( N 2 )! N 2 p=1 R dk p 2π e −2k 2 p N 2 p=1 1 X 2p X 2p−1 X 2p + X 2p−1 X 2p − X 2p−1 1 p<q N X q − X p X q + X p (99) Using the explicit values of X 2p and X 2p−1 we simplify the expression as e −cT N BM = c N N ! ( N 2 )! 
N 2 p=1 R dk p 2π ie −2k 2 p k p N p=1 1 X p 1 p<q N X q − X p X q + X p(100) Using formulae (9.1) and (9.4) of Bruijn [18], we write the last part of (100) as N p=1 1 X p 1 p<q N X q − X p X q + X p = R N + dr e −c r 1 k< N sign(r k −r ) N 2 p=1 e −2ikp(r2p−1−r2p) (101) From this identity, we obtain after anti-symmetrizing the last exponential w.r.t r 2p and r 2p−1 e −cT N BM = c N N ! ( N 2 )! R N + dr e −c r 1 k< N sign(r k − r ) N 2 p=1 R dk p 2π e −2k 2 p sin(2k p (r 2p−1 − r 2p )) k p = c N N ! 2 N 2 ( N 2 )! R N + dr e −c r 1 k< N sign(r k − r ) N 2 p=1 erf r 2p−1 − r 2p √ 2(102) where we computed the integral w.r.t {k p }'s. The final trick consists in (i) changing the labels in the r variables in the error function from r 2p to r σ(2p) where σ belongs to the symmetric group S N , (ii) using the fact that there are N !/(2 N/2 (N/2)!) ways of pairing N objects (N is even) and (iii) using the definition of the Pfaffian of an antisymmetric matrix Pf(A) = σ∈S N ,σ(2p−1)<σ(2p) sign(σ) N/2 p=1 A σ(2p−1),σ(2p) to finally obtain e −cT N BM = c N R N + dr e −c r 1 k< N sign(r k − r ) Pf 1 k, N erf r k − r √ 2 (103) Inverting the Laplace transform of this expression, we obtain the PDF P N,BM (T ) of the rescaled coincidence time T N = T N (t = 1) as P N,BM (T ) = ∂ N T R N + dr δ T − N =1 r 1 k< N sign(r k − r ) Pf 1 k, N erf r k − r √ 2 = ∂ N +1 T R N + dr Θ T − N =1 r 1 k< N sign(r k − r ) Pf 1 k, N erf r k − r √ 2(104) where Θ(x) is the Heaviside step function. Even though the product of sign functions can itself be written as a Pfaffian, we have not tried to simplify further Eq. (104). For N odd For p ∈ [1, (N − 1)/2], define the conjugate variables X 2p = c − 2ik p , X 2p−1 = c + 2ik p(105) and define X N = c. Then, in a similar fashion as the even N case, one rewrites Eq. (96) as e −cT N BM = c N N ! ( N −1 2 )! 
N −1 2 p=1 R dk p 2π ie −2k 2 p k p N p=1 1 X p 1 p<q N X q − X p X q + X p(106) By the same argument as in the even N case, using Bruijn's results [18], the moment generating function reads e −cT N BM = c N N ! 2 N −1 2 ( N −1 2 )! R N + dr e −c r 1 k< N sign(r k −r ) N −1 2 p=1 erf r 2p−1 − r 2p √ 2(107)= N ∂ N T R N + dr δ T − N =1 r 1 k< N sign(r k − r ) Pf 1 k, N −1 erf r k − r √ 2 = N ∂ N +1 T R N + dr Θ T − N =1 r 1 k< N sign(r k − r ) Pf 1 k, N −1 erf r k − r √ 2(109) where Θ(x) is the Heaviside step function. We have verified that for N = 2, 3, these formulae give back the results for the PDF and the MGF for the coincidence time of Brownian motions. For N = 2, 3 we only have to use a 2×2 Pfaffian, Pf 1 i,j 2 A ij = A 12 , hence the calculation is elementary. Asymptotic behaviour of P N,BM (T ) for arbitrary N Small T limit of the Probability Distribution Function We now discuss the small T behavior of the PDF P N,BM (T ) of T N = T N (t)/ √ t. It can be extracted from the c → +∞ limit of e −cT N BM . When c is increased, the interaction in the Lieb-Liniger model becomes more repulsive and the corresponding Brownian trajectories with small coincidence time are those where none of the Brownian are bounded together. Taking the large c → +∞ limit of Eqs. (95) and (96) we see that the leading term is in all cases c −N (N −1)/2 . Inverting the Laplace transform, we obtain the small T behavior of P N,BM (T ) as P N,BM (T ) = T →0 I N T N (N −1) 2 −1 + O(T N (N −1) 2 +1 ),(111) where I N depends on the parity of N . For N even I N = 2 N 2 N ! ( N 2 )!Γ N (N −1) 2 N 2 p=1 R dk p 2π e −2k 2 p 1 p<q N 2 (k 2 p − k 2 q ) 2 = Γ(N + 1) 2 N (N −2) 2 (2π) N 4 Γ N (N −1) 2 N 2 −1 k=1 Γ(2k + 1)(112) For N odd I N = 2 N −1 2 N ! 
/(((N−1)/2)! Γ(N(N−1)/2)) Π_{p=1}^{(N−1)/2} ∫_R (dk_p/2π) k_p² e^{−2k_p²} Π_{1≤p<q≤(N−1)/2} (k_p² − k_q²)² = Γ(N/2 + 1)/(2^{N(N−3)/2} (2π)^{(N+1)/4} Γ(N(N−1)/2)) Π_{k=1}^{(N−1)/2} Γ(2k + 1).    (113)

The k integrals are typical examples of Selberg integrals; further details can be found in Section 1.4 of Ref. [11], after changing variables from the {x_i}'s to {k_i = x_i/√2}. One can check that these results agree with the small T expansion of the formula for the PDF in the cases N = 2, 3 given above, i.e. I_2 = √(2/π) and I_3 = 3/(4√(2π)).

3.4.2. Large T limit of the Probability Distribution Function

Similarly to Section 2.4.2, we determine the large T tail of the PDF P_{N,BM}(T) of T̃_N = T_N(t)/√t by investigating the c → −∞ limit of the Lieb-Liniger model. It is again determined by the same ground state as in Section 2.4.2. From Eqs. (65)-(66) in Ref. [13] it follows that the MGF in the c → −∞ limit reads

⟨e^{−cT_N}⟩_BM = 2^{N−1} e^{c̄² N(N²−1)/12} [1 + O(e^{−N(N−1)c̄²/4})].    (114)

As in Section 2.4.2 we find the T → ∞ asymptotics

P_{N,BM}(T) = 2^{N−1} √(3/(πN(N²−1))) e^{−α_N T²} + O(e^{−β_N T²}),    (115)

with the exponential factors again given by Eqs. (53), i.e. α_N = 3/(N(N²−1)) and β_N = 3/(N³−3N²+2N). For completeness, we give this asymptotics explicitly for N = 2, 3.

• N = 2:  P_{2,BM}(T) = √(2/π) e^{−T²/2}.    (116)

This matches Eq. (92).

• N = 3:  P_{3,BM}(T) = √(2/π) e^{−T²/8} + O(e^{−T²/2}).    (117)

This matches Eq. (94).

3.5. Mean and variance of the coincidence time for N Brownian motions

Mean value of T_{N,BM}

To compute the first moment, we may use the propagator for a single Brownian motion with diffusion coefficient D,

P(x, τ|x_0, 0) = e^{−(x−x_0)²/(4Dτ)}/√(4πDτ),    (118)

together with the fact that we consider i.i.d. variables.
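As a cross-check of the results derived below (Eqs. (119) and (121)), the first two moments of the explicit N = 2, 3 densities (92) and (94) can be computed by quadrature; the stdlib-only sketch below (the Simpson routine and cutoffs are our own choices) confirms both the means and the variances:

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

pref = math.sqrt(2 / math.pi)
p2 = lambda T: pref * math.exp(-T**2 / 2)                            # Eq. (92)
p3 = lambda T: pref * (math.exp(-T**2 / 8) - math.exp(-T**2 / 2))    # Eq. (94)

mean = {N: N * (N - 1) / 2 * pref for N in (2, 3)}                   # Eq. (119)
var = {2: 2 * (1/2 - 1/math.pi),                                     # Eq. (121); first term absent for N=2
       3: 6 * (2/3 - 2/math.pi) + 6 * (1/2 - 1/math.pi)}

for N, p in ((2, p2), (3, p3)):
    m1 = simpson(lambda T: T * p(T), 0.0, 40.0)
    m2 = simpson(lambda T: T**2 * p(T), 0.0, 40.0)
    assert abs(m1 - mean[N]) < 1e-6
    assert abs(m2 - m1**2 - var[N]) < 1e-6
print(mean, var)
```
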
The mean value of the coincidence time of the Brownian motion can be obtained (setting D = 1) as

⟨T_N⟩_BM = ∫_0^1 dτ Σ_{i≠j} ⟨δ(x_i(τ) − x_j(τ))⟩ = Σ_{i≠j} ∫_0^1 dτ ∫_{−∞}^∞ P(x, τ|0, 0)² dx = N(N−1) ∫_0^1 dτ/(2√(2πτ)) = (N(N−1)/2) √(2/π).    (119)

As expected, we have ⟨T_N⟩_BM < ⟨T_N⟩_BB.

Variance of T_{N,BM}

We may still use the free propagator to compute the second moment of the distribution. Using again the Brownian propagator, we obtain

⟨T_N²⟩ = ∫_0^1 dτ_1 ∫_0^1 dτ_2 Σ_{i≠j} Σ_{m≠ℓ} ⟨δ(x_i(τ_1) − x_j(τ_1)) δ(x_m(τ_2) − x_ℓ(τ_2))⟩    (120)
 = (N!/(N−4)!) [∫_0^1 dτ ∫_{−∞}^∞ P(x, τ|0, 0)² dx]² + 8 (N!/(N−3)!) ∫_0^1 dτ_1 ∫_{τ_1}^1 dτ_2 ∬ P(x, τ_1|0, 0)² P(y−x, τ_2−τ_1|0, 0) P(y, τ_2|0, 0) dx dy + 4 (N!/(N−2)!) ∫_0^1 dτ_1 ∫_{τ_1}^1 dτ_2 ∬ P(x, τ_1|0, 0)² P(y−x, τ_2−τ_1|0, 0)² dx dy.

After a careful computation, we finally obtain

Var(T_N)_BM = ⟨[T_N − ⟨T_N⟩_BM]²⟩_BM = (N!/(N−3)!)(2/3 − 2/π) + (N!/(N−2)!)(1/2 − 1/π).    (121)

4. Coincidence time for arbitrary fixed final points

As mentioned in Section 2.1, there is an alternative formula (33) to the one from the Bethe ansatz for the moments of the directed polymer problem, obtained from the study of Macdonald processes. It turns out that this formula can be extended to arbitrary final points y [9, 10] (while the starting points are still all at zero, i.e. x_0 = 0) and to arbitrary values of c. From this formula, specialized to the case c > 0, we immediately obtain that for y_1 ≤ y_2 ≤ · · · ≤ y_N the Laplace transform of the PDF of the coincidence time for N such Brownian walkers is

⟨e^{−cT_N(t)}⟩_{0,y} = (4πt)^{N/2} ∫_{(iR)^N} (dz/(2iπ)^N) e^{t Σ_j (z_j + y_j/(2t))²} Π_{i<j} (z_i − z_j)/(z_i − z_j + c).    (122)

We note that the r.h.s. of (122) is invariant under a global shift of the endpoints y_i → y_i + w. This global shift corresponds to adding a linear drift to each of the Brownians, which has no effect on the coincidence time.

4.1. Exact distribution for N = 2, 3

Distribution for N = 2. We start from Eq. (122), where we have set N = 2 and t = 1.
This yields e −cT2 0,y = 4π iR 2 dz 1 dz 2 (2iπ) 2 e (z1+ y 1 2 ) 2 +(z2+ y 2 2 ) 2 z 1 − z 2 z 1 − z 2 + c , y 1 y 2 .(123) Taking the inverse Laplace transform and introducing the change of variables z j → ik j one obtains P 2 (T, y 1 , y 2 ) = 4π R 2 dk 1 dk 2 (2π) 2 (ik 1 − ik 2 )e −(k1− iy 1 2 ) 2 −(k2− iy 2 2 ) 2 −T (ik1−ik2) = −∂ T R 2 dk 1 dk 2 π e −(k1− iy 1 2 ) 2 −(k2− iy 2 2 ) 2 −iT (k1−k2) .(124) The two Gaussian integrals can now be computed and one obtains P 2 (T, y 1 , y 2 ) = 1 2 (2T + y 2 − y 1 )e − T 2 (T +y2−y1) , y 1 y 2 .(125) Note that setting y 2 = y 1 , we recover the result for the Brownian bridge (42). Alternatively, the result for the Brownian motion (92) is obtained by computing Finally, symmetrising the problem by releasing the constraint y 1 y 2 , by defining P 2,sym (T, y 1 , y 2 ) = 1 2 (2T + |y 2 − y 1 |)e − T 2 (T +|y2−y1|) ,(127) we may obtain the joint PDF of the coincidence time, the algebraic distance d between the Brownian defined as d = y 1 − y 2 (with d ∈ R) and their center of mass position u = (y 1 + y 2 )/2. It reads P 2 (T, d, u) = e − ( d 2 +u ) 2 + ( − d 2 +u ) 2 4 4π P 2,sym T, − d 2 + u, d 2 + u = e − u 2 2 8π (2T + |d|)e − 1 8 (2T +|d|) 2 .(128) We may then recover the joint PDF of d ∈ R and T ∈ R + in Eq. (10) by integrating over the center of mass position u, P joint (T, d) = ∞ −∞P 2 (T, d, u)du = (2T + |d|) 2 √ 8π e − 1 8 (2T +|d|) 2(129) Distribution for N = 3. The distribution for N = 3 can be obtained from Eq. (122) but its expression is quite cumbersome and not very enlightening. Therefore, we only reproduce here its asymptotic behaviours for y 1 < y 2 < y 3 P 3 (T, y 1 , y 2 , y 3 ) =            (y 2 − y 1 )(y 3 − y 2 )(y 3 − y 1 ) 16 T 2 + O(T 3 ) , T → 0 1 4 π 6 e (y 1 +y 3 −2y 2 ) 2 24 − T 4 (y3−y1)− T 2 8 (T + y 3 − y 1 ) 2 − 4 , T → ∞ . 
(130) Note that taking two equal points y 2 = y 3 , the PDF behaves for small T as P 3 (T, y 1 , y 2 , y 2 ) = (y 1 − y 2 ) 2 T 3 /24 + O(T 4 ), while for all equal points y 1 = y 2 = y 3 , we recover the expression in Eq. (47). Small T asymptotics of the PDF for any N and for any fixed final points From (122) one can extract the small T asymptotics for any N and any fixed final points y. In the large c limit it becomes e −cT N 0,y (4π) N 2 c − N (N −1) 2 iR N dz (2iπ) N e (z+ y 2 ) 2 i<j (z i − z j ) . (131) = (4π) N 2 c − N (N −1) 2 iR N dz (2iπ) N e z 2 i<j (− y i 2 + y j 2 ) . = (2c) − N (N −1) 2 i<j (y j − y i )(132) The second line follows from the fact that the first line is antisymmetric in the y i and upon the shift z i → z i − y i /2, that is is a polynomial of degree N (N − 1)/2 in the y i . Taken together these properties imply that it is proportional to the Vandermonde product i<j (y j − y i ). Upon inverse Laplace inversion we obtain P N (T, y) 1 2 N (N −1) 2 Γ( N (N −1) 2 ) i<j |y j − y i | T N (N −1) 2 −1(133) where we use that it must be a symmetric function of the endpoints. It agrees with the above results for N = 2, 3. The results (111), (112), (113) for the BM is recovered from the identity P N,BM (T ) = R N dy (4π) N/2 e − i y 2 i 4 P N (T, y)(134) Using again the Mehta integral formula e.g. (1.5)-(1.6) in Ref. [11] we recover (111) with an amplitude I N = 1 2 N (N −1) 4 Γ( N (N −1) 2 ) ( 2 √ π ) N N j=1 Γ(1 + j 2 )(135) One checks that this (simpler) expression for I N is equivalent to those in (112), (113) obtained by a quite different method. To prove the equivalence, one takes Eqs. (112) and (113) and insert the duplication formula Γ(2k + 1) = Γ(k + 1 2 )Γ(k + 1)2 2k / √ π and the identity (135) follows. Large T asymptotics of the PDF for any N and arbitrary final and initial points A formula for P N (T ) for any N and arbitrary initial and final points seems out of reach at present. 
Indeed, it would require handling all eigenstates of the Lieb-Liniger Hamiltonian (14), i.e. with arbitrary symmetry, and not just the fully symmetric (i.e. bosonic) ones. For attempts at solving this problem in the KPZ/directed polymer context using the so-called generalized Bethe ansatz, see Refs. [19, 20, 21, 22, 23, 24]. It is however possible to extract the leading asymptotics at large T for arbitrary fixed final and initial points. Indeed, as discussed above, this asymptotics is controlled in the MGF by large negative values c = −c̄ < 0. In that limit, all moments are dominated by the ground state of the Lieb-Liniger Hamiltonian (14). The key fact is that this ground state is the bosonic one which we used before, where all particles are bound in a single string. Eigenstates with other symmetries have a higher energy (and, at zero total momentum, differ by a finite gap from the ground state). To treat the case of arbitrary endpoints we now use the known form of the ground state eigenfunction

Ψ_{0,k}(x) = N! e^{−(c̄/2) Σ_{1≤i<j≤N} |x_i − x_j| + i k Σ_{j=1}^N x_j},    (136)

where k is the momentum of the center of mass of the string. Its energies are E_{0,k}(N) = −c̄² N(N²−1)/12 + N k². Note that we must sum over all values of k, i.e. it is a ground state manifold, but this is already what we did above to obtain the large T asymptotics for the BB and the BM. We can now write, in the large c̄ limit,

Z_N(x, t|y; c) = ⟨x|e^{−Ĥ_N(c) t}|y⟩ ≃ e^{c̄² N(N²−1) t/12} ∫_R (dk/2π) (N L e^{−N k² t}/||Ψ_{0,k}||²) Ψ_{0,k}(x) Ψ*_{0,k}(y),    (137)

where the norm is ||Ψ_{0,k}||² = N! N² c̄^{1−N} L. As in Ref. [7], we keep the leading order in large L, but the factors of L cancel in the final result.
This leads to the moment generating function ec T N 0,y = Z N (x, 1|y; c) Z N (x, 1|y; c = 0) = (4π) N 2 e 1 4 i (xi−yi) 2 Z N (x, 1|y; c) (138) (4π) N 2c N −1 N !e N 3 −N 12c 2 e −c 2 i<j (|yi−yj |+|xi−xj |) R dke −N k 2 2πN e j (ik(xj −yj )+ (x j −y j ) 2 4 ) = (4π) N −1 2c N −1 N !N − 3 2 e N 3 −N 12c 2 e −c 2 i<j (|yi−yj |+|xi−xj |) e − 1 4N [ j (xj −yj )] 2 + j (x j −y j ) 2 4 From this we obtain the following leading large T asymptotics for the PDF of the coincidence time with fixed initial and final points P N (T, x, y) N ! 2 N −1 π N 2 −1 √ α N N 3/2 e N i=1 (y i −ȳ−x i +x) 2 4 e − α N 4 (d[y]+d[x]) 2 (139) × (−∂ T ) N −1 e −α N (d[y]+d[x])T −α N T 2 d[y] := 1 i<j N |y i − y j | ,ȳ := 1 N N i=1 y i , α N = 3 N (N 2 − 1)(140) Hence to this order in the large T expansion, the PDF depends only on (i) the sum of the total distance of the final points, d[y], and of the initial points, d[x] and (ii) the sum of the squares of the centered variables y i − x i − (ȳ −x). We thus note that the PDF is invariant by the global shift y i −x i → y i −x i +w, an exact property, not restricted to the tail, as previously discussed. We also note that the PDF is symmetric in the simultaneous exchange of all initial and final points y ↔ x, which again should be an exact property from the time reversibility of Brownian motion. Mean coincidence time for arbitrary final points We can extend the calculation of Section 2.5 using formula (122) for arbitrary final points y. The generalization being straightforward we give only a few steps. One finds for y 1 y 2 · · · y N . 1 − erf y j − y i 2 √ 2(142) Note that it decays as T N 0,y ∼ i<j 2 yj −yi when the final points are all taken very far apart. Conclusion We have investigated the probability distribution function P N (T ) of the total pairwise coincidence time T N = T of N independent Brownian walkers in one dimension. 
Our main results have been obtained for two special geometries: (i) Brownian motions (BM) all starting from the same point 0, and (ii) Brownian bridges (BB). We have obtained explicit expressions for the moment generating function (MGF), i.e. the expectation of e^{−cT_N}. We have mapped, through a Feynman-Kac path integral representation, the determination of the MGF to the calculation of a Green function in the Lieb-Liniger model of quantum particles interacting with a pairwise delta interaction. Restricting to Brownians all starting at 0 allows us to consider only bosonic states, i.e. the delta Bose gas. For c > 0 the MGF is the standard Laplace transform of P_N(T), for which we obtained a formula for any N using the eigenstate (spectral) decomposition of the repulsive Bose gas. Laplace inversion then led us to a compact formula for P_N(T) for each geometry, one involving a determinant (for the BB) and the other a Pfaffian (for the BM). We found that at small T the PDF vanishes with two different sets of exponents for the BM and the BB, and we obtained their exact amplitudes, related to Selberg integrals. We have displayed very explicit formulae for N = 2, 3, which we checked with excellent accuracy using extensive numerical simulations of Brownian motions. We also obtained the mean and the variance of the coincidence time T_N as a function of N. At large N, the mean grows as N² and the variance as N³. We then considered the double limit of large N and large T and, using a Coulomb gas approach, we showed the existence of a large deviation tail P_N(T) ∼ e^{−O(N²)} for T ∼ N^{3/2}. Although we obtained an explicit formula for small T/N^{3/2}, obtaining the full solution remains a challenge. Furthermore, the investigation of other possible regimes in this double large T, N limit remains for future work.
We have shown that for c < 0 the MGF is related to the exponential moments of the one-dimensional Kardar-Parisi-Zhang (KPZ) equation, equivalently to the moments of the directed polymer in a random potential. These moments are calculated using a summation over the eigenstates of the attractive Lieb-Liniger model, which include bound states called strings. Here we obtained the large T asymptotics of P_N(T) from the contribution of the ground state of the Lieb-Liniger model, for the BB and the BM. We were able to extend this asymptotics to arbitrary fixed initial and final endpoints for the BM. Our main result is that the PDF of the coincidence time has a universal decay at large T, of the form P_N(T) ∼ exp(−3T²/(N³ − N)), and only the pre-exponential factor depends on the geometry. For the BB we used the connection to the droplet solution of the KPZ equation, and for the BM to the flat initial condition. It would be interesting to use other known solutions of the KPZ equation, such as the stationary initial condition or KPZ in a half-space, to obtain properties of the coincidence time of Brownian walkers with different constraints. More generally, it would be interesting to establish other universal properties of the distribution of the coincidence time using the knowledge of the spectral properties of the Lieb-Liniger model. We hope that this work will motivate further studies of the coincidence properties of multiple diffusions.

(Acknowledgements, continued:) G. Schehr for interesting discussions. We acknowledge support from ANR grant ANR-17-CE30-0027-01 RaMaTraF.

A. Expected value of the coincidence time for non-identical diffusing particles

Considering independent particles with different diffusion coefficients D_i and initial velocities v_i, we may compute the mean coincidence time using the independence together with the Brownian propagator

P_i(x, t|0, 0) = e^{−(x − v_i t)²/(4 D_i t)}/√(4π D_i t).
(A.1)

We obtain for the mean coincidence time

⟨T_N⟩ = Σ_{i≠j} ∫_0^1 dτ ∫_{−∞}^∞ dx P_i(x, τ|0, 0) P_j(x, τ|0, 0) = Σ_{i≠j} (1/(v_i − v_j)) erf[(v_i − v_j)/(2√(D_i + D_j))].    (A.2)

We verify that this result only depends on the differences of speeds v_i − v_j and not on the speed of the centre of mass v̄ = (1/N) Σ_{i=1}^N v_i. It is therefore not affected by a global shift of the speeds v_i → v_i + v.

B. Details of the calculations of the distribution for N = 3 Brownian bridges

In this section we detail the steps to obtain Eq. (46), starting from Eq. (45).

Figure 1. Plot of a simulation of N = 3 diffusive particles with diffusion coefficient D = 1 starting from x_0 = (0, 0, 0). Left: Brownian bridges with endpoints y = (0, 0, 0). Right: Brownian motions with arbitrary final points.

Figure 2. Comparison between the PDF of the coincidence time obtained numerically for N = 2 Brownian bridges on the time interval τ ∈ [0, 1] with diffusion coefficient D = 1 and the analytical result in Eq. (42) (some details on the simulations are provided in Appendix C). Left: linear scale; right: logarithmic scale.

In Fig. 3 we compare the analytical formula of Eq. (46) with the numerical simulations of Brownian bridges (see Appendix C for the details of the simulations), showing an excellent agreement.

2.4.
Asymptotic behaviours of P_{N,BB}(T)

We now come back to the case of N Brownian bridges and analyse the tails of the PDF P_{N,BB}(T) of the rescaled coincidence time T̃_N = T_N(t)/√t, respectively for T → 0 and T → ∞.

2.4.1. Small T limit of the Probability Distribution Function

To obtain the small T limit of P_{N,BB}(T), it is convenient to consider the c → +∞ limit of ⟨e^{−cT_N}⟩_BB,

∫_0^{+∞} dT P_{N,BB}(T) e^{−cT} ∼ ∫_0^{+∞} dT̃ e^{−N²(G(T̃) + c̃T̃)} ∼ e^{−N² min_{T̃≥0}[G(T̃) + c̃T̃]},

with saddle points T̃*(c̃) and c̃*(T̃); from this we obtain the small-T̃ expansion of G(T̃).

Figure 3. Comparison between the coincidence time obtained numerically for N = 3 Brownian bridges on the time interval τ ∈ [0, 1] with diffusion coefficient D = 1 and the analytical result in Eq. (46). Left: linear scale; right: logarithmic scale.

Figure 4. Schematic representation of the different regimes for the PDF of the coincidence time T, P_N(T), in the limit of large N. The average of T is ⟨T_N⟩ ∼ N², and the typical fluctuations live in a window |T − ⟨T_N⟩| ∼ N^{3/2} around it. The large deviation regime studied in the text via the Coulomb gas is represented in blue and corresponds to T ∼ N^{3/2}.

Figure 5. Comparison between the coincidence time obtained numerically for N = 2 Brownian motions on the time interval τ ∈ [0, 1] with diffusion coefficient D = 1 and the analytical result in Eq. (92) (some details on the simulations are provided in Appendix C). Left: linear scale; right: logarithmic scale.

N = 3. The third moment gives the result for three walkers (Eq. (93) above).

3.2.2. Moment generating function for arbitrary N. In the case of Brownian motions, the expression of the MGF of the coincidence time of N walkers depends on the parity of N; see Eqs. (95) and (96) above.

Figure 6. Comparison between the coincidence time obtained numerically for N = 3 Brownian motions on the time interval τ ∈ [0, 1] with diffusion coefficient D = 1 and the analytical result in Eq. (94). Left: linear scale; right: logarithmic scale.

We have to be careful in the odd case, as there are N variables r_ℓ but only N − 1 of them are involved in the error functions, whereas all N variables are involved in the product of sign functions. We now wish to employ the same Pfaffian trick as in the even N case and hence have to consider all (N−1)!/(2^{(N−1)/2}((N−1)/2)!) ways of pairing N − 1 objects. Hence, we obtain our final expression for the moment generating function in the odd N case, ⟨e^{−cT_N}⟩_BM = N c^N (···). Inverting the Laplace transform of this expression, we obtain the PDF P_{N,BM}(T) of the rescaled coincidence time T_N = T_N(t = 1).

We note that the variance ratio Var(T_N)_BM/Var(T_N)_BB decreases monotonically from 0.846638... for N = 2 to 0.724549... for infinite N. The ratio of relative variances ⟨T_N⟩²_BB Var(T_N)_BM/(Var(T_N)_BB ⟨T_N⟩²_BM) decreases from 2.088996... for N = 2 to 1.7877536... for infinite N.

... P_2(T, y_1, y_2). One term in the integrand gives a zero contribution since, after integration, it is proportional to T³ and is differentiated four times with respect to T. The second term can be ... 3Θ(T − u_1 − u_2 − u_3) e^{−(u_1−u_2...

Acknowledgements

We are greatly indebted to G. Barraquand for numerous interactions during the preparation of this manuscript. B.LACT would like to thank D. Mavroyiannis for bringing this problem to his attention. We are also grateful to S. N. Majumdar and

C. Details of the numerical simulations

Our numerical simulations of the Brownian motion are realized from independent random walks with i.i.d. increments η_{i,j}, for T = 10⁴ steps and j = 1, ..., N, where the η_{i,j}'s are Gaussian centered random variables of unit variance. To simulate the Brownian bridge, we consider the process x_{BB,i} = x_i − (i/T) x_T, which ensures x_{BB,T} = x_{BB,0} = 0. To measure the local time we simply add, for each pair j ≠ k, the number of steps i where |x_{i,j} − x_{i,k}| < ε and divide by 2ε√T. We chose ε = 0.01 for the simulations.
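This procedure can be condensed into a short self-contained sketch for N = 2, written directly in continuum units with the paper's convention D = 1 (the window half-width h, the step number and the seed are our own choices; the estimator is slightly biased downward by the finite window):

```python
import math
import random

# Monte-Carlo estimate of <T_2>_BM = sqrt(2/pi), Eq. (119).
# The difference d = x_1 - x_2 diffuses with <d^2> = 4*tau for two D = 1 walkers,
# so it suffices to simulate d and count its visits to a small window around 0.
rng = random.Random(12345)
nsteps, nsamp, h = 1000, 2000, 0.15
dt = 1.0 / nsteps
step = math.sqrt(4.0 * dt)

total = 0.0
for _ in range(nsamp):
    d, hits = 0.0, 0
    for _ in range(nsteps):
        d += step * rng.gauss(0.0, 1.0)
        if abs(d) < h:
            hits += 1
    # ordered-pair convention of Eq. (119): factor 2
    total += 2.0 * hits * dt / (2.0 * h)
est = total / nsamp
print(est, math.sqrt(2 / math.pi))   # est slightly below sqrt(2/pi)
```
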
Principle of Detailed Balance and Convergence Assessment of Markov Chain Monte Carlo Methods and Simulated Annealing

Ioana A. Cosma (Department of Statistics, University of Oxford, 1 South Parks Road, Oxford OX1 3TG, United Kingdom; [email protected]) and Masoud Asgharian (Department of Mathematics and Statistics, McGill University, Burnside Hall, 805 Sherbrooke W., Montreal, Quebec H3A 2K6, Canada)

July 20, 2008. arXiv:0807.3151v1 [stat.CO]

Keywords: Metropolis-Hastings; slice sampling; Markov chain Central Limit Theorem; detailed balance; ergodic Markov chain; equilibrium; stationary distribution

Author note: Ioana A. Cosma is a doctoral student; Masoud Asgharian is Associate Professor. This research was partially supported by research grants from NSERC and FQRNT. The authors thank Russell Steele for insightful discussions on the topic.

Abstract: Markov Chain Monte Carlo (MCMC) methods are employed to sample from a given distribution of interest, π, whenever either π does not exist in closed form, or, if it does, no efficient method to simulate an independent sample from it is available. Although a wealth of diagnostic tools for convergence assessment of MCMC methods have been proposed in the last two decades, the search for a dependable and easy to implement tool is ongoing. We present in this article a criterion based on the principle of detailed balance which provides a qualitative assessment of the convergence of a given chain. The criterion is based on the
behaviour of a one-dimensional statistic, whose asymptotic distribution under the assumption of stationarity is derived; our results apply under weak conditions and have the advantage of being completely intuitive. We implement this criterion as a stopping rule for simulated annealing in the problem of finding maximum likelihood estimators for parameters of a 20-component mixture model. We also apply it to the problem of sampling from a 10-dimensional funnel distribution via slice sampling and the Metropolis-Hastings algorithm. Furthermore, based on this convergence criterion we define a measure of efficiency of one algorithm versus another.

INTRODUCTION

Let π be a given distribution such that either π does not exist in closed form or no efficient method to simulate an independent sample from it is available. Suppose that interest lies in the expected value of a random variable h(X), denoted by E_π[h(X)], where X has distribution π. Monte Carlo sampling methods (Hammersley and Handscomb 1964) such as rejection sampling, importance sampling or sampling-importance resampling (SIR) approximate the value of E_π[h(X)] by sampling from a distribution g that closely resembles π (Smith and Gelfand 1992). Although for low-dimensional distributions π it is oftentimes possible to find sampling distributions g that provide estimates to within given accuracy with low computational cost, these sampling methods suffer greatly from the curse of dimensionality. The need to approximate the value of high-dimensional integrals arising in statistical mechanics led to the development of MCMC sampling methods. The first MCMC method, known today as the Metropolis Monte Carlo algorithm, was proposed by Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller (1953) as a general method for studying the equilibrium properties of systems consisting of many interacting particles.
The algorithm simulates the behaviour of the system under equilibrium, and the expected value of a given property is approximated by ergodic averages based on these simulations. In statistical terms, the Metropolis Monte Carlo algorithm constructs an ergodic Markov chain {X_t, t = 1, ..., n} with stationary distribution π, i.e. as the number of iterations n tends to ∞, the conditional distribution of X_n given the value of X_1 converges to π regardless of the starting distribution g, where X_1 has distribution g (in notation: X_1 ∼ g). Hastings (1970) generalized the procedure of proposing the next move X_t given X_{t−1} = x_{t−1}. His algorithm, known as the Metropolis-Hastings algorithm, transforms an arbitrary stochastic matrix into a π-reversible one, and only requires that π be known up to a normalizing constant. An equally popular MCMC algorithm is the Gibbs sampler, introduced by Geman and Geman (1984) with an application to image restoration. This algorithm proposes the next move by sampling from the full conditional distributions and, unlike the Metropolis-Hastings algorithm, accepts each proposal with probability 1. Two well-known variants on Gibbs sampling are the data-augmentation algorithm of Tanner and Wong (1987) and the substitution sampling algorithm of Gelfand and Smith (1990). The goal of MCMC methods is to produce an approximate i.i.d. sample X_{K+1}, X_{K+2}, ..., X_{K+n} from π, where K, n > 1, and K is known as the number of 'burn-in' iterations to be removed from the beginning of the chain. Analysing the output of an MCMC method consists of assessing convergence to sampling from π, convergence to i.i.d. sampling, and convergence of empirical averages of the form (1/n) Σ_{i=1}^n h(X_{K+i}) to E_π[h(X)] = ∫ h(x)π(x)dx as n → ∞. Robert and Casella (2004) argue that while convergence to π is not of major concern since it can only be achieved asymptotically, the issues of convergence to i.i.d.
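To make the Metropolis-Hastings recipe concrete, here is a minimal random-walk sampler for a discrete target known only up to a normalizing constant; the target, the proposal, and all names below are our own illustrative choices, not the samplers used later in the paper.

```python
import random

def metropolis_hastings(pi, n_iter, seed=1):
    """Random-walk Metropolis on states 0..m-1 with a symmetric +/-1 proposal
    on a ring.  Because the acceptance ratio only uses pi[y]/pi[x], the
    normalizing constant of the target is never needed."""
    rng = random.Random(seed)
    m = len(pi)
    x = 0
    counts = [0] * m
    for _ in range(n_iter):
        y = (x + rng.choice([-1, 1])) % m          # symmetric proposal
        if rng.random() < min(1.0, pi[y] / pi[x]):  # Metropolis acceptance
            x = y
        counts[x] += 1                              # record the current state
    return [c / n_iter for c in counts]

weights = [1, 2, 3, 2, 1]                           # unnormalised target
freqs = metropolis_hastings(weights, 200_000)
```

With a symmetric proposal the acceptance probability reduces to min(1, π(y)/π(x)), and the empirical state frequencies approach weights/9.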
sampling and of convergence of empirical averages are strongly interrelated and depend on the mixing speed of the chain. By definition, a chain whose elements converge rapidly to weakly correlated draws from the stationary distribution is said to possess good mixing speed. Therefore, the mixing speed of a chain is determined by the degree to which the chain escapes the influence of the starting distribution and by the extent to which it explores the high density regions of the support of π. Recent research in MCMC methodology has focused on developing, on one hand, samplers that escape quickly the attraction of the starting distribution as well as that of local modes, and, on the other hand, convergence assessment criteria for analysing the mixing speed of a given chain. A recent sampling algorithm which exploits the idea of jumping between states of similar energy to facilitate efficient sampling is the equi-energy sampler of Kou et al. (2006). Robert (1995, 1998), Cowles and Carlin (1996), and Brooks and Roberts (1998) present comprehensive reviews of the practical implementation of convergence criteria and the mathematics underlying them. Liu (2001), Neal (1993), Brooks (1998), and Kass, Carlin, Gelman, and Neal (1998) offer an in-depth introduction to MCMC methodology and its applications, as well as discussions on the issues surrounding it. The common view among researchers and practitioners is that developing a good sampler or a reliable convergence criterion is problem-specific. A sampler with good mixing speed when sampling from a relatively smooth, low-dimensional distribution might become trapped in a well of low probability when sampling from a distribution having many local modes. Similarly, a convergence criterion which proves reliable for analysing a given MCMC output might incorrectly assess the convergence of a chain that has only explored a subset of the entire support space.
Our interest lies in convergence assessment, in particular in identifying lack of convergence. We define a one-dimensional statistic and derive an intuitive criterion based on the principle of detailed balance that provides a qualitative assessment of the convergence of a given MCMC chain. In Section 2 we recall basic notions and results from the theory of Markov chains, which we subsequently use in Section 3 to derive the asymptotic distribution of our proposed statistic under the assumption of stationarity. In the same section, we discuss two possible implementations of our criterion, one using the asymptotic distribution, the other experimental as a qualitative tool. Section 4 discusses two applications: one as a stopping rule for simulated annealing, an algorithm for function maximization applied to the problem of finding maximum likelihood estimators (Azencott 1992), the second as a graphical tool for comparing the performances of Metropolis-Hastings versus slice sampling for the problem of sampling from a 10-dimensional funnel distribution. All computations were performed using code written in C++. We conclude in Section 5 with general remarks, comparisons, and criticisms.

PRELIMINARIES

Let X = {X_t, t = 1, 2, ...} be a Markov chain with state space S and transition probability matrix P = (p_{ij}). We refer the reader to Medhi (1994), Norris (1997), and Jones (2004) for details and proofs. For the purpose of the convergence criterion we present in this article, we restrict our attention to finite Markov chains. Let p_{ij}^{(n)} be the transition probability from state i to state j in n steps. The Ergodic Theorem states that if X is irreducible and aperiodic, then the limits π_j := lim_{n→∞} p_{ij}^{(n)} exist and are independent of the initial state i for all i, j ∈ S, and (π_j, j ∈ S) is the stationary distribution of X. The chain X is called ergodic.
Definition 1 (Principle of detailed balance). A transition probability matrix P and a probability distribution π are said to be in detailed balance, or, equivalently, the principle of detailed balance is said to hold, if π_i p_{ij} = π_j p_{ji} for all i, j ∈ S.

Definition 2. A Markov chain X with irreducible transition probability matrix P and initial distribution g, i.e. X_1 ∼ g, is reversible if, for all N ≥ 2, the chain {X_N, X_{N−1}, ..., X_2, X_1} is a Markov chain with transition probability matrix P and initial distribution g.

Norris (1997) proves that if X is irreducible, then it is reversible if and only if P and g are in detailed balance, where g is the initial distribution of X. The following definitions are needed to introduce the Markov chain Central Limit Theorem (Jones 2004). Suppose there exist a function M on S and a nonnegative sequence γ(n) such that

‖P^n(i, ·) − π(·)‖ ≤ M(i) γ(n).   (1)

Let X be a Markov chain on state space S with transition probability P and stationary distribution π. If (1) holds for all i ∈ S with γ(n) = t^n for some t < 1, then X is geometrically ergodic. If, moreover, M is bounded, then X is uniformly ergodic. If (1) holds for all i ∈ S with γ(n) = n^{−m} for some m ≥ 0, then X is polynomially ergodic of order m.

Theorem 1 (Markov chain Central Limit Theorem; Jones 2004). Suppose that one of the following conditions holds:

2. X is polynomially ergodic of order m, E_π[M] < ∞ and E_π|h(X)|^{2+δ} < ∞, where mδ > 2 + δ;
3. X is geometrically ergodic and E_π|h(X)|^{2+δ} < ∞ for some δ > 0;
4. X is geometrically ergodic and E_π[h²(X) log⁺|h(X)|] < ∞;
5. X is geometrically ergodic, satisfies detailed balance, and E_π[h²(X)] < ∞;
6. X is uniformly ergodic and E_π[h²(X)] < ∞.

Then for any initial distribution,

√n (h̄_n − E_π[h(X)]) → Normal(0, σ²_h) in distribution as n → ∞,

where h̄_n = (1/n) Σ_{i=1}^n h(X_i) and σ²_h = var_π(h(X_1)) + 2 Σ_{i=2}^∞ cov_π(h(X_1), h(X_i)) < ∞.

DETAILED BALANCE AND CONVERGENCE DIAGNOSTICS

Let π = (π_i, i ∈ S) be a discrete distribution with finite state space S, m = |S|. Let {X_t, t = 1, ..., n} be an irreducible, aperiodic Markov chain with transition probability matrix P = (p_{ij}) and stationary distribution π.
We say that a chain has reached equilibrium by step t if P^t(i, j) = π_j for all i, j ∈ S and there exist i, j ∈ S such that P^{t−1}(i, j) ≠ π_j. Our convergence assessment criterion is based on the principle of detailed balance from statistical mechanics (Chandler 1987). Statistical mechanics is concerned with the study of physical properties of systems consisting of very large numbers of particles, for example liquids or gases, as these systems approach the equilibrium state, i.e. a uniform, time-independent state. In these terms, the principle of detailed balance states that a physical system in equilibrium satisfies

π_i / π_j = p_{ji} / p_{ij} = exp( −(E_i − E_j) / (kT) ),  for all i, j ∈ S,

where E_i is the energy of the system in state i, k is Boltzmann's constant, T is the temperature, and π_i and p_{ij} have the usual interpretation. We assume that the Markov chain {X_t, t = 1, ..., n} is constructed to satisfy detailed balance. This is oftentimes the case, since the principle of detailed balance implies that π is the stationary distribution of the chain, and it is easier to check the former than the latter; see for example the discussions on the Metropolis-Hastings (Hastings 1970) and slice sampling algorithms (Neal 2003). We introduce the notion of an energy function E_i ∝ −log(π_i), for all i ∈ S. When implementing simulated annealing, the stationary distribution at temperature T_k is π^{1/T_k}, so the energy function becomes E_i = −log(π_i)/T_k, where {T_k, k = 1, 2, ...} is a sequence of decreasing temperatures. Therefore, the equilibrium probability of being in state i equals π_i = (1/Z) exp(−E_i), where the normalizing constant is defined as Z := Σ_{i∈S} exp(−E_i). Define the following approximation to π_i based on a Markov chain of n iterations:

π̂_i = (1/n) Σ_{j=1}^n I(X_j = i),  for all i ∈ S.

The idea of working with indicator functions is similar to that of Raftery and Lewis (1992), who develop a convergence assessment method based on the sequence {I(X_t ≤ i), t = 1, ...}, for fixed i ∈ S.
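As a sanity check of Definition 1, the identity π_i p_{ij} = π_j p_{ji}, and the stationarity πP = π that it implies, can be verified numerically for a small hand-built Metropolis chain; the three-state target below is purely our own illustration.

```python
import numpy as np

pi = np.array([0.2, 0.3, 0.5])   # illustrative three-state target

def metropolis_matrix(pi):
    """Turn a symmetric proposal (uniform over the other states) into a
    pi-reversible transition matrix via the Metropolis acceptance rule."""
    m = len(pi)
    q = (np.ones((m, m)) - np.eye(m)) / (m - 1)      # symmetric proposal
    P = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if i != j:
                P[i, j] = q[i, j] * min(1.0, pi[j] / pi[i])
        P[i, i] = 1.0 - P[i].sum()                   # rejected moves stay put
    return P

P = metropolis_matrix(pi)
flow = pi[:, None] * P                      # flow[i, j] = pi_i * p_ij
balance_gap = np.max(np.abs(flow - flow.T)) # 0 iff detailed balance holds
stationary_gap = np.max(np.abs(pi @ P - pi))  # 0 iff pi is stationary
```

Both gaps are zero up to floating-point error, illustrating why checking detailed balance is the convenient route to establishing that π is the stationary distribution.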
We point out that, for fixed i ∈ S, the sequence {I(X_t = i), t = 1, ...} is itself a two-state Markov chain. Our criterion assesses the convergence of the chain by comparing the behaviour of the functions f_i = π̂_i / exp(−E_i), i ∈ S, to their average f̄ = (1/m) Σ_{j∈S} f_j, via the statistic

V_n := (n/m) Σ_{i∈S} (f_i − f̄)².

Theoretical approach

We proceed to derive the distribution of the statistic V_n under the hypothesis that the chain has reached stationarity, i.e. that X_i ∼ π for all i = 1, ..., n. Write

V_n = (n/m) Σ_{i∈S} ( f_i − (1/m) Σ_{j∈S} f_j )² = (n/m) Σ_{i∈S} ( (1 − 1/m) f_i − (1/m) Σ_{j≠i} f_j )² = (n/m) Σ_{i∈S} (a_i' f)²,

where f = (f_i, i ∈ S)' and a_i = (−1/m, ..., −1/m, 1 − 1/m, −1/m, ..., −1/m)' is the m-dimensional column vector with ith entry equal to 1 − 1/m and the remaining entries equal to −1/m. Define the (m × m)-dimensional matrix A whose rows are a_1', ..., a_m', i.e. the matrix with diagonal entries 1 − 1/m and off-diagonal entries −1/m, so that V_n = (n/m) (Af)'(Af).

First, we observe that, for all i ∈ S,

a_i' (f_j − E_π[f_j], j ∈ S)' = (1 − 1/m)(f_i − E_π[f_i]) − (1/m) Σ_{j≠i} (f_j − E_π[f_j]) = f_i − f̄,   (2)

since E_π[f_j] = 1/Z for all j ∈ S. Second, we notice that

f_i − E_π[f_i] = π̂_i / e^{−E_i} − 1/Z = (π̂_i − π_i) / (Z π_i),  for all i ∈ S.   (3)

Define W_{i,n} := √n (π̂_i − π_i) for all i ∈ S, and the m-dimensional column vector W_n := (W_{i,n}, i ∈ S)'. From (2) and (3) we obtain that V_n = (C W_n)' (C W_n), where

C = A · diag( 1/(√m Z π_1), ..., 1/(√m Z π_m) ),

a matrix whose (i, i) entry equals (m − 1)/(m^{3/2} Z π_i) = (m − 1)/(m^{3/2} e^{−E_i}) and whose (i, j) entry, for j ≠ i, equals −1/(m^{3/2} e^{−E_j}).
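Taking E_i = −log π_i (so that Z = 1 and f_i = π̂_i/π_i), the statistic V_n is straightforward to compute from a sample path. The sketch below, our own toy example, contrasts a chain that is effectively in equilibrium with one that never leaves a subset of S; the stuck chain inflates V_n by orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(2)

def v_stat(chain, pi):
    """V_n = (n/m) * sum_i (f_i - fbar)^2 with f_i = pihat_i / exp(-E_i),
    taking E_i = -log(pi_i), i.e. exp(-E_i) = pi_i and Z = 1."""
    n, m = len(chain), len(pi)
    pihat = np.bincount(chain, minlength=m) / n
    f = pihat / pi
    return n / m * np.sum((f - f.mean()) ** 2)

pi = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
n = 100_000
# a chain already in equilibrium (i.i.d. draws stand in for a well-mixed sampler)
good = rng.choice(len(pi), size=n, p=pi)
# a chain stuck on a subset of the support: state 4 is never visited
bad = rng.choice(4, size=n, p=np.array([0.1, 0.2, 0.4, 0.2]) / 0.9)

v_good = v_stat(good, pi)
v_bad = v_stat(bad, pi)
```

For the equilibrium chain V_n stays O(1), consistent with the weighted chi-squared limit of Theorem 2, while for the stuck chain it grows linearly in n.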
. . , λ k are the characteristic roots of CΣC ′ and Z 1 , . . . , Z k are i.i.d. Normal(0, 1) random variables. proof: We begin by pointing out that irreducible and aperiodic Markov chains on finite state spaces are uniformly ergodic (Roberts and Rosenthal 2004), so condition (6) of Theorem 1 is satistifed. It follows that for every i ∈ S, W i,n = √ n π i − π i = √ n 1 n n j=1 I(X j = i) − E π I(X 1 = i) D → Normal 0, σ 2 i as n → ∞, where σ 2 i = π i (1 − π i ) + 2 ∞ j=2 P I(X j = i) = 1|I(X 1 = i) = 1 π i − π 2 i < ∞. By the Cramér-Wold Device (Billingsley 1968, Varadarajan 1958, it follows that W n D → Normal 0, Σ as n → ∞, where 0 is an m-dimensional column vector of zeros and Σ is an (m × m) variance-covariance matrix whose entries are given Σ(i, i) = σ 2 i Σ(i, j) = lim n→∞ cov π W i,n , W j,n = lim n→∞ 1 n n k=1 n l=1 cov π I(X k = i), I(X l = j) = lim n→∞ 1 n n k=1 P X k = i, X k = j − π i π j + 1 n n k,l=1 k<l P X k = i, X l = j −π i π j + 1 n k,l=1 l<k P X k = i, X l = j − π i π j So, for all i, j ∈ S, i = j Σ(i, j) = −π i π j + lim n→∞ π i n n k,l=1 k<l P X l = j|X k = i − π j + n k,l=1 l<k P X l = j|X k = i − π j = −π i π j + 2π i ∞ k=2 P X k = j|X 1 = i − π j < ∞, The last equality follows from the fact that if a Markov chain satisfies detailed balance, then it is reversible, i.e. for k > 1, P X k = j|X 1 = i = P X 1 = j|X k = i . Finally, the conditions of the Markov chain Central Limit Theorem guarantee that the infinite summation in the last line is finite. It then follows that CW n D → Normal 0, CΣC ′ as n → ∞. Lastly, since V n = CW n ′ CW n , it follows from Lemma 1 in Chernoff and Lehmann (1953) that V n D → k i=1 λ i Z 2 i as n → ∞, where λ 1 , . . . , λ k are the characteristic roots of CΣC ′ and Z 1 , . . . , Z k are i.i.d. Normal(0, 1) random variables. Q.E.D. Example 1 Let the Markov chain be generated by the Metropolis-Hastings algorithm with symmetric proposal probability matrix P = (p ij ). 
The expressions for Σ(i, i) and Σ(i, j) can be simplified as follows. Consider the Markov-Bernoulli chain {I(X_j = i), j = 1, ..., n}, for fixed i ∈ S, with transition probability matrix

P_i = [ 1 − a   a ; b   1 − b ].

It is shown in Medhi (1994, pp. 101-102) that

P_i^{j−1} = 1/(a + b) [ b   a ; b   a ] + (1 − a − b)^{j−1}/(a + b) [ a   −a ; −b   b ],  for all j ≥ 2.

Now,

a = Σ_{j≠i} P(X_1 = j, X_2 = i) / (1 − P(X_1 = i)) = ( P(X_2 = i) − P(X_1 = i, X_2 = i) ) / (1 − P(X_1 = i)) = π_i (1 − p_{ii}) / (1 − π_i),
b = 1 − P(X_2 = i | X_1 = i) = 1 − p_{ii}.

Then, provided that max{0, 2π_i − 1} < p_{ii} < 1 for all i ∈ S,

Σ(i, i) = π_i (1 − π_i) + 2 Σ_{j=2}^∞ π_i (1 − π_i) ( (p_{ii} − π_i)/(1 − π_i) )^{j−1} = π_i (1 − π_i)(1 + p_{ii} − 2π_i) / (1 − p_{ii}),
Σ(i, j) = −π_i π_j + 2 π_i Σ_{k=2}^∞ ( P^{k−1}(i, j) − π_j ),  for i ≠ j.

Implementation

Let {X_{K+1}, X_{K+2}, ..., X_{K+n}} be an irreducible and aperiodic Markov chain with finite state space S and stationary distribution π that satisfies detailed balance. A burn-in of K draws is discarded, where K depends on the rate of convergence of the sampling algorithm on π (Brooks 1998). We implement our convergence assessment criterion as a test of hypothesis under the null hypothesis that the chain has reached stationarity by iteration K + 1. For n large enough, V_n is distributed approximately as Σ_{i=1}^k λ_i Z_i², and we estimate its distribution using Lyapunov's Central Limit Theorem (Loève 1963). Since Z_i is Normal(0, 1), Z_i² is χ²(1), so E[λ_i Z_i²] = λ_i and var(λ_i Z_i²) = 2λ_i², for i = 1, ..., k. Define Y_i = λ_i Z_i² − λ_i; then E[Y_i] = 0 and var(Y_i) = E[Y_i²] = 2λ_i² < ∞ for i = 1, ..., k. Moreover, E|Y_i|³ < ∞ for i = 1, ..., k.
So, provided that condition (4) is satisfied, Lyapunov's Central Limit Theorem gives the following result for k and n large enough: V n D = k i=1 λ i Z 2 i ∼ Normal k i=1 λ i , 2 k i=1 λ 2 i approximately.(5) For the computation of the mean and variance in (5), we resort to the following simplifications k i=1 λ i = trace CΣC ′ = m i=1 C(i, i) 2 Σ(i, i),(6)k i=1 λ 2 i = k i=1 λ i 2 − 2 k i,j=1 i<j λ i λ j ,(7) where the first summation in equation (7) is given in (6), and the second is the sum of all the 2-square principal subdeterminants of CΣC ′ (Marcus and Ming 1964, p. 22). We propose a quantitative assessment of convergence via a test of hypothesis at confidence level (1 − α) using the approximate distribution of V n given in (5) as follows. 1. Obtain an aperiodic, irreducible Markov chain which satisfies the principle of detailed balance: {X 1 , X 2 , . . . , X K , . . . , X K+n }; discard the first K draws. 2. Compute the statistic V n = n m i∈S f i −f 2 from the remaining n draws and the (1 − α/2) quantile v α/2 = k i=1 λ i + z α/2 2 k i=1 λ 2 i . 3. If V n < v α/2 , conclude that the chain has reached stationarity at level (1 − α) and stop; else, continue for an additional n iterations and return to step 2, replacing n by 2n. In this article we implement the criterion in the form of a qualitative tool for convergence assessment. We iterate the chain and plot the absolute value of the relative difference, V (k−1)n − V kn /V (k−1)n , against the number of iterations kn, every n iterations, k = 1, 2, . . .. We claim that the chain has reached equilibrium if the relative difference drops below some problem-specific, pre-specified constant ǫ > 0. The value of the constant ǫ is problem-specific because it depends on the distribution of interest π. 
For a high-dimensional, multi-modal distribution, the value of ε might need to be very small in order for this analysis to correctly detect lack of convergence to π, whereas the same value might be too conservative for a one-dimensional, unimodal distribution. Based on this implementation of the criterion as a qualitative tool, we can define a measure of efficiency of one algorithm versus another. Let ε > 0 be given, and for algorithms 1 and 2 define

V(ε)_{1,2} = min{ kn_1 : |V^{(1)}_{(k−1)n_1} − V^{(1)}_{kn_1}| / V^{(1)}_{(k−1)n_1} < ε } / min{ kn_2 : |V^{(2)}_{(k−1)n_2} − V^{(2)}_{kn_2}| / V^{(2)}_{(k−1)n_2} < ε }.

If V(ε)_{1,2} < 1, we conclude that algorithm 1 is more efficient than algorithm 2 at level ε; if V(ε)_{1,2} > 1, algorithm 2 is more efficient than algorithm 1.

APPLICATIONS

Application 1: multipath changepoint problem

The following application is taken from Asgharian and Wolfson (2001). Let Y_{ij} denote the jth measurement on patient i, where 1 ≤ i ≤ 100, 1 ≤ j ≤ 20. To each patient there is associated a possibly distinct changepoint τ_i such that measurements Y_{i1}, Y_{i2}, ..., Y_{iτ_i} are i.i.d. Normal(0, 1) random variables and measurements Y_{iτ_i+1}, ..., Y_{i20} are i.i.d. Normal(4, 1). Let Z_i = (1, Z_{i1})' and θ = (θ_0, θ_1)' denote the covariate vector and the regression coefficient vector, respectively, for patient i, i.e. Y_{ij} = θ_0 + θ_1 Z_{i1} for all j. Define parameters α = θ_0 + θ_1 and β = θ_0 − θ_1. The goal is to find the maximum likelihood estimators (MLEs) of α and β, denoted by α̂ and β̂, respectively. We simulate the data with θ_0 = 0 and θ_1 = 1; the joint log likelihood is bimodal. We let the parameter space be (−10, 10)², assuming zero mass is placed outside this region, and we discretize the space over a grid of width 0.01. We apply the algorithm of simulated annealing, introduced by Kirkpatrick, Gelatt, and Vecchi (1983), which performs function optimization through an iterative improvement approach.
The algorithm was developed via an analogy with thermodynamics where a substance is melted by a slow annealing process and equilibrium is attained at each temperature until eventually the substance stabilizes at its lowestenergy state. Similarly, in simulated annealing, a global temperature parameter controls the effects of high probability regions under the distribution of interest π. For each T k in a sequence such that T k → 0 as k → ∞, an MCMC chain with stationary distribution π 1/T k is generated until equilibrium. As the temperature is lowered following a pre-specified schedule, known as the cooling schedule, the effects become more pronounced and the chain stabilizes at its global maximum value or equivalently, lowest energy state (Neal 1993, Brooks andMorgan 1995). Geman and Geman (1984) show that this convergence is guaranteed under a logarithmic cooling schedule, which unfortunately is too slow to be followed in practice. We implement the algorithm with a geometric cooling schedule T k+1 = T k /2, k = 0, . . . , 5, and T 0 = 50 and zero burn-in. Simulated annealing with a very fast cooling schedule is known as simulated quenching; refer to Catoni (1992) for a discussion on the design of cooling schedules. For (α, β) ∈ (−10, 10) 2 , the function f (k) α,β at temperature T k is given by f (k) α,β =π α,β / exp(−E (α,β) ). The aim is to compare the performance of the Metropolis-Hastings sampler in determining the MLE's via simulated annealing with two different methods for proposing the next move. In the first method, we draw uniformly from a cube of length w centered at the current position, where w has the values: {12, 7, 4, 2.5, 1.7, 1.2, 0.9, 0.6} for k = 1, . . . , 8. These values are set retrospectively to obtain an acceptance rate of approximately 0.4. In the second method, we propose the next move via univariate slice sampling applied to each variable in turn; this algorithm is described briefly in Subsection 4.2. 
We use the "stepping-out" procedure with an initial interval size of 0.1 at each temperature. At each temperature, we perform 1000 iterations of the Metropolis-Hastings algorithm, computing the value of V n every 25 iterations. We obtain the following results: (1) ,β (1) ) = 247.645, and α (2) ,β (2) = (1.19, −1.15), E (α (2) ,β (2) ) = 247.645 for the first and second methods, respectively, which equal the lowest energy value obtained by a systematic grid search. We conclude that both methods correctly identified the MLE's. Figures 1 and 2 display the relative difference in variance; sharp drops indicate that the sampler has jumped to previously unexplored regions of the parameter space, i.e. to points (α, β) for whichπ α,β is significantly different from π α,β , thus increasing the value of the variance. α (1) ,β (1) = (1.18, −1.17), E (α We proceed to simulate 50 datasets; for each, we initialize the two chains from the same randomly chosen point. At each temperature level, we compute the value of V n every 25 iterations until V (k−1)n − V kn /V (k−1)n < ǫ, with ǫ = 0.05. We remark that this value of ǫ is very conservative; ideally, a different value would be employed at each temperature level. We make the following two observations: first, for any given dataset, the lowest energy values reported by the two algorithms differ by at most 0.011 units in magnitude, and, second, the difference between the lowest energy values found by a systematic search and by simulated annealing is at X t = x t , it samples a value y uniformly from the interval 0, π(x t ) . Given y, the next position X t+1 is sampled from an appropriately chosen subset of the horizontal "slice" {x; π(x) > y}. Neal (2003) shows that the algorithm produces an ergodic Markov The plots show the decreasing trend of the relative difference in V n as the number of iterations increases; the increases in V n are more frequent than in Figure 1. 
chain with stationary distribution π, and that, moreover, due to its adaptive nature, the algorithm sometimes outperforms Metropolis-Hastings and the Gibbs sampler. Let X be a Normal(0, 9) random variable, and let Y 1 , . . . , Y 9 be independent Normal random variables, which, conditional on X = x, have mean 0 and variance exp(x). The goal is to obtain an approximate independent sample from the joint distribution of X, Y 1 , . . . , Y 9 . We initialize the chain as follows: X = 0 and Y i = 1, Numbers are rounded to the closest value on the grid. Second, we implement the slice sampling algorithm with single-variable updates; each iteration consists of 120 updates for each variable in sequence. We use the "stepping-out" procedure with an initial interval of size 1. We compute V n every 100 iterations until the absolute value of the relative difference is below ǫ = 0.01. for i = 1, The left column of Figure slice sampling poses signs of concern regarding convergence to stationarity (notice the frequent increases in value from iteration 17500 onwards), whereas the value of V n under Metropolis-Hastings appears stable towards the end of the run. Therefore the behaviour of V n under slice sampling across eleven chains with overdispersed starting points indicates lack of convergence to stationarity, whereas the behaviour of V n under Metropolis-Hastings, which is known to allow a more restrictive exploration of the support space, gives misleading results. CONCLUSION The last fifty years have witnessed the development and rise in popularity, in particular in Bayesian statistical inference, of Markov Chain Monte Carlo methods for simulating from complex probability distributions (Smith and Roberts 1993). For a practitioner who has a finite MCMC output, questions arise regarding how reliable the sample is as a representation of π. 
Although a wealth of convergence diagnostic tools for analysing MCMC output have been proposed over the past decades, their performance, in general, is problem-specific, and developing a dependable, easy to implement tool for convergence assessment continues to be a challenge. This article presents a new convergence assessment method for irreducible, aperiodic Markov chains on discrete spaces obtained by MCMC samplers that satisfy the principle of detailed balance and requirement (4). We introduce a one-dimensional test statistic whose behaviour under the assumption of stationarity is analyzed both theoretically and experimentally, and present a possible implementation of our criterion as a graphical tool for convergence assessment. In low dimensional problems, the proposed criterion as a qualitative tool assesses convergence satisfactorily; however, in high dimensional problems, the criterion is unreliable for convergence assessment, but can provide useful insight into lack of convergence of the chain to stationarity. In particular, if the variance function experiences sharp increases in value, then it can be concluded that stationarity has not yet been reached; however, if the value of the variance function is stable, then the results are inconclusive. The advantage of our method lies in its attempt to analyse the behaviour of an MCMC chain travelling through a possibly high dimensional space by monitoring the behaviour of a one-dimensional statistic. Lack of convergence to stationarity is correctly assessed by the behaviour of the statistic to the extent to which the sampler explores freely the underlying space. Particularly in high dimensional problems with irregularly shaped distribution functions, we recommend that the MCMC output be analyzed using different ǫ values, compared across multiple chains, and that several diagnostic tools be employed. 
There exist in the literature at least two convergence assessment criteria based on weighting functions that are very similar to our approach. Ritter and Tanner (1992) propose to detect convergence to the full joint distribution by monitoring convergence of the importance weight w t = π(x)/g t (x), where g t is the joint distribution of the observations sampled at iteration t. They estimate g t (x) by 1 m m i=1 p x|x (i) t−1 , where x (i) t−1 , i = 1, . . . , m is a sample from g t−1 . If the chain has converged, the distribution of the weights w t , based on multiple replications of the chain, will be degenerate about a constant. Zellner and Min (1995) propose a convergence criterion for the Gibbs sampler in the special case that x can be partitioned into x (1) , x (2) . They define two criteria based on the weight functions W 1 = p(x (1) )p(x (2) |x (1) ) − p(x (2) )p(x (1) |x (2) ) and W 2 = p(x (1) )p(x (2) |x (1) ) / p(x (2) )p(x (1) |x (2) ) , where p (1) is estimated by 1 m m i=1 p x (1) |x j (2) , and x j (2) , j = 1, . . . , m is the sequence of draws of x (2) obtained by Gibbs sampling. They compute the value of these weights at many points in the parameter space and argue that if the chain has converged, then the values of W 1 will be close to 0 and those of W 2 close to 1. Zellner and Min use asymptotic results from the stationary time series literature to calculate posterior odds for the hypothesis H 0 : W 1 = 0 vs. H 1 : W 1 = 0 for the k-dimensional case, k ≥ 1, when the weights are computed at k different points in the parameter space. The main drawback of these methods is the assumption that the transition probability p(x|x t−1 ), in the method of Ritter and Tanner, and the conditionals p(x (1) |x (2) ) and p(x (2) |x (1) ), in the method of Zellner and Min, exist explicitly. Our method, how-ever, makes no such assumption and estimates π i , the probability of being in state i, by the empirical distribution function. 
All three methods have the disadvantage of being computationally expensive; the ergodic averages used to approximate various marginal and conditional probabilities (in our method,π i ) require a large number of summands in order to provide good estimates, so large numbers of iterations, and possibly many replicates of the chain, are needed. Furthermore, since the normalizing constant of π is unknown, the functions f i and the weights w t of the criterion of Ritter and Tanner might stabilize around an incorrect value if the sampler has failed to explore all the high density regions of the space. For this reason, we recommend to run multiple replicates of the chain started from different regions of the space. The criterion of Zellner and Min also gives misleading results if the space is poorly explored and the weights are computed at points that come from low density regions. Finally, our criterion has an intuitive graphical representation, very similar to that proposed by Ritter and Tanner, and, whereas the criterion of Zellner and Min uses multivariate weight functions, our criterion is based on a one-dimensional statistic regardless of the dimension of the underlying space, thus offering a dimensionality reduction approach to the problem of convergence assessment in high dimensional spaces. An interesting alternative to approximating a continuous state space by a discrete grid is to sample the continuous state-space Markov chain and to apply the discretization method developed by Guihenneuc-Jouyaux and Robert (1998). Provided that the continous chain is Harris-recurrent, the method defines renewal times based on the visiting times to one of m disjoint small sets in the support space. By subsampling the underlying chain at the renewal times, the method builds a homogeneous Markov chain on the finite state space {1, . . . , m}. 
Our propoposed criterion can then be applied to the finite chain; it would be interesting to explore whether the convergence assessment extends to the continous Markov chain. Definition 3 3Let M(i) be a nonnegative function and γ(n) a nonnegative decreasing function on the positive integers such that forms a Markov chain, whereas the sequence defined by Raftery and Lewis does not. Brooks et al.(2003) use a similar approach of estimating the stationary distribution by the empirical distribution function obtained from the MCMC output; they derive nonparametric convergence assessment criteria for MCMC model selection by monitoring the distance, as the number of simulations increases, between the empirical mass functions obtained from multiple independent chains. value of the statistic after n iterations of algorithm i, i = 1, 2. Let n i represent the interval, in iterations, at which the statistic is computed for algorithm i. The measure of efficiency is defined as V Figure 1 : 1Relative difference in V n versus n using uniform proposal distributions for application 1. The plots show the decreasing trend of the relative difference in V n as the number of iterations increases, interrupted by sharp increases in V n . most 0.614909. Moreover, we note that the methods required on average 5605 iterations, and 3162 iterations, respectively. Averaged over 50 tests, the measure of efficiency of simulated annealing using Metropolis-Hastings with uniform proposals versus Metropolis-Hastings with slice sampling is approximately 1.77, i.e. MCMC with slice sampling is almost twice as efficient as MCMC with uniform proposals. 4.2 Application 2: 10-dimensional funnel Neal (2003) illustrates the advantage of slice sampling over Metropolis-Hastings in sampling from a 10-dimensional funnel distribution. Slice sampling is an adaptive MCMC method which proceeds in two alternating steps. 
Given the current position Figure 2 : 2Relative difference in V n versus n using slice sampling for application 1. Figure 3 : 3. . . , 9. For each variable, the parameter space is taken to be (−30.0, 30.0) and it is discretized over a grid of width 0.01. First, we implement the Metropolis-Hastings algorithm with single-variable updates applied to each variable in sequence; one iteration of the chain consists of 1300 updates. For each variable, the proposal distribution is Normal, centered at the current value, with standard deviation of 1.0, truncated on the interval (−30.Sampled values and relative difference in V n in application 2. The left column displays histograms of the sampled values of X superimposed on the Normal(0, 9) density function. The right column displays the relative difference in V n versus n. Figure 4 : 43 compares the histograms of the sampled values of X with the true probability distribution function; the histograms are based on chains of 4600 and 17200 iterations, respectively. Metropolis-Hastings oversamples negative values of X and undersamples positive ones; slice sampling samples correctly in the left tail of the distribution, but undersamples positive values. The right column displays the behaviour of the relative difference in V n ; the variance function undergoes sharp increases in value under both sampling methods, but stabilizes towards the end Autocorrelation of X in application 2. Slice sampling has a faster rate of convergence than Metropolis-Hastings evidenced by the smaller autocorrelation. Figure 5 : 5Relative difference in V n versus n for eleven parallel chains in application 2. The value of V n under Metropolis-Hastings sampling seems to be more stable than under slice sampling. of the run. The behaviour of the variance function fails to reflect the incorrect sampling in the tails of the distribution. 
The plot of the relative difference in variance for the Metropolis-Hastings algorithm indicates that a smaller value of ǫ would be more appropriate for assessing convergence. The plots in Figure 4 show that the autocorrelation obtained by slice sampling remains close to zero after 100 iterations, whereas that obtained by Metropolis-Hastings continues to fluctuate even after 1000 iterations. This indicates that the Metropolis-Hastings algorithm converges more slowly than slice sampling. We compute the Raftery and Lewis (1992) convergence diagnostic using the Coda package in R (http://www.r-project.org) obtaining dependence factors of 14 and 18.7 for the Metropolis-Hastings and the slice sampling algorithms, respectively, indicating strong autocorrelation. Finally, we run eleven parallel chains started from the following quantiles of the marginal distribution of X : {0.1, 0.2, 0.3, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8, 0.9}; we employ the value ǫ = 0.01. We expect the parameter space to be insufficiently explored by both algorithms; however, we are interested in whether this insufficient exploration can be detected from the behaviour of V n across chains with overdispersed starting points. Pooling the sampled values results in chains of 30800 and 19800 draws, respectively; thus the measure of efficiency of Metropolis-Hastings versus slice sampling is 1.56. Trace plots and histograms indicate that negative values of X are oversampled and positive ones are undersampled by both algorithms.Figure 5 is obtained by pooling the sampled values across the eleven chains; the behaviour of V n under Theorem 1 The Central Limit Theorem (finite state space) Let X be an ergodicMarkov chain on state space S with stationary distribution π. Let h : S → R be aBorel function. Assume that one of the following conditions holds: 1. 
X is polynomially ergodic of order m > 1, E_π M < ∞, and there exists B < ∞ such that |h(X)| < B almost surely;

REFERENCES

Asgharian, M. and Wolfson, D. B. (2001) "Modeling covariates in multipath changepoint problems: Modeling and consistency of the MLE," The Canadian Journal of Statistics, 29, 4, 515-528.

Azencott, R. (ed.) (1992) Simulated Annealing: Parallelization Techniques, New York: Wiley.

Billingsley, P. (1968) Convergence of Probability Measures, New York: John Wiley & Sons, Inc.

Brooks, S. P. (1998) "Markov chain Monte Carlo method and its application," The Statistician, 47, 69-100.

Brooks, S. P., Giudici, P., and Philippe, A. (2003) "Nonparametric Convergence Assessment for MCMC Model Selection," Journal of Computational and Graphical Statistics, 12, 1, 1-22.

Brooks, S. P., and Morgan, B. J. T. (1995) "Optimization using simulated annealing," The Statistician, 44, 241-257.

Brooks, S. P., and Roberts, G. O. (1998) "Convergence assessment techniques for Markov chain Monte Carlo," Statistics and Computing, 8, 319-335.

Catoni, O. (1992) "Rough large deviation estimates for simulated annealing: application to exponential schedules," The Annals of Probability, 20, 3, 1109-1146.

Chandler, D. (1987) Introduction to Modern Statistical Mechanics, New York: Oxford University Press.

Chernoff, H., and Lehmann, E. L. (1954) "The use of maximum likelihood estimates in χ² tests for goodness of fit," The Annals of Mathematical Statistics, 25, 3, 579-586.

Cowles, M. K., and Carlin, B. P. (1996) "Markov Chain Monte Carlo Convergence Diagnostics: A Comparative Review," Journal of the American Statistical Association, 91, 883-904.

Gelfand, A. E., and Smith, A. F. M. (1990) "Sampling-Based Approaches to Calculating Marginal Densities," Journal of the American Statistical Association, 85, 398-409.

Geman, S., and Geman, D. (1984) "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721-741.

Guihenneuc-Jouyaux, C., and Robert, C. P. (1998) "Discretization of Continuous Markov Chains and Markov Chain Monte Carlo Convergence Assessment," Journal of the American Statistical Association, 93, 443, 1055-1067.

Hammersley, J. M., and Handscomb, D. C. (1964) Monte Carlo Methods, London: Methuen.

Hastings, W. K. (1970) "Monte Carlo sampling methods using Markov chains and their applications," Biometrika, 57, 97-109.

Jones, G. (2004) "On the Markov chain central limit theorem," Probability Surveys, 1, 299-320.

Kass, R. E., Carlin, B. P., Gelman, A., and Neal, R. M. (1998) "Markov Chain Monte Carlo in Practice: A Roundtable Discussion," The American Statistician, 52, 93-100.

Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. (1983) "Optimization by Simulated Annealing," Science, 220, 671-680.

Kou, S. C., Zhou, Q., and Wong, W. H. (2006) "Equi-energy Sampler with Applications in Statistical Inference and Statistical Mechanics," The Annals of Statistics, 34, 4, 1581-1619.

Liu, J. S. (2001) Monte Carlo Strategies in Scientific Computing, New York: Springer.

Loève, M. (1963) Probability Theory, Toronto: D. Van Nostrand Company (Canada), Ltd.

Marcus, M., and Ming, H. (1964) A Survey of Matrix Theory and Matrix Inequalities, New York: Dover Publications, Inc.

Medhi, J. (1994) Stochastic Processes, second edition, New Delhi: New Age International (P) Ltd.

Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. (1953) "Equation of State Calculations by Fast Computing Machines," The Journal of Chemical Physics, 21, 1087-1092.

Neal, R. M. (1993) Probabilistic Inference Using Markov Chain Monte Carlo Methods, Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto.

Neal, R. M. (2003) "Slice Sampling," The Annals of Statistics, 31, 3, 705-767 (with discussion and a rejoinder by the author).

Norris, J. R. (1997) Markov Chains, New York: Cambridge University Press.

Raftery, A. E., and Lewis, S. (1992) "How Many Iterations in the Gibbs Sampler?", in Bayesian Statistics 4, eds. J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, Oxford, U.K.: Oxford University Press, 763-773.

Ritter, C., and Tanner, M. A. (1992) "Facilitating the Gibbs Sampler: The Gibbs Stopper and the Griddy-Gibbs Sampler," Journal of the American Statistical Association, 87, 861-868.

Robert, C. P. (1995) "Convergence Control Methods for Markov Chain Monte Carlo Algorithms," Statistical Science, 10, 3, 231-253.

Robert, C. P. (ed.) (1998) Discretization and MCMC Convergence Assessment, Lecture Notes in Statistics, 135, New York: Springer.

Robert, C. P., and Casella, G. (2004) Monte Carlo Statistical Methods, second edition, New York: Springer-Verlag.

Roberts, G. O., and Rosenthal, J. S. (2004) "General state space Markov chains and MCMC algorithms," Probability Surveys, 1, 20-71.

Smith, A. F. M., and Gelfand, A. E. (1992) "Bayesian Statistics Without Tears: A Sampling-Resampling Perspective," The American Statistician, 46, 84-88.

Smith, A. F. M., and Roberts, G. O. (1993) "Bayesian Computation via the Gibbs Sampler and Related Markov Chain Monte Carlo Methods," Journal of the Royal Statistical Society, Ser. B, 55, 3-23.

Tanner, M. A., and Wong, W. H. (1987) "The Calculation of Posterior Distributions by Data Augmentation," Journal of the American Statistical Association, 82, 528-540.

Varadarajan, V. S. (1958) "A Useful Convergence Theorem," Sankhya, 20, 221-222.

Zellner, A., and Min, C. (1995) "Gibbs Sampler Convergence Criteria," Journal of the American Statistical Association, 90, 921-927.
AFTERGLOW POLARIZATIONS IN A STRATIFIED MEDIUM WITH EFFECT OF THE EQUAL ARRIVAL TIME SURFACE

Mi-Xiang Lan ([email protected]), Center for Theoretical Physics, College of Physics, Jilin University, 130012 Changchun, China
Xue-Feng Wu, Purple Mountain Observatory, Chinese Academy of Sciences, 210023 Nanjing, China; School of Astronomy and Space Sciences, University of Science and Technology of China, 230026 Hefei, China
Zi-Gao Dai, Department of Astronomy, School of Physical Sciences, University of Science and Technology of China, 230026 Hefei, China

Abstract: The environment of a gamma-ray burst (GRB) has an important influence on the evolution of the jet dynamics and of its afterglow. Here we investigate afterglow polarizations in a stratified medium with the equal arrival time surface (EATS) effect. Polarizations of multi-band afterglows are predicted, and the effects of the parameters of the stratified medium on the afterglow polarizations are also investigated. We find that the influence of the EATS effect on the afterglow polarizations becomes important for off-axis detections and that PD bumps move to later times with the EATS effect. Even when the magnetic field configurations, jet structure, and observational angles are fixed, the polarization properties of the jet emission can still evolve. Here, we assume a large-scale ordered magnetic field in the reverse-shock region and a two-dimensional random field in the forward-shock region. The PD evolution is then mainly determined by the evolution of the f_32 parameter (the flux ratio between the reverse-shock region and the forward-shock region) at early stages and by the evolution of the bulk Lorentz factor γ at late stages. Through their influence on f_32 or γ, the observational energy band, the observational angles, and the parameters of the stratified medium ultimately affect the afterglow polarizations.

arXiv: 2305.10590
https://export.arxiv.org/pdf/2305.10590v1.pdf
AFTERGLOW POLARIZATIONS IN A STRATIFIED MEDIUM WITH EFFECT OF THE EQUAL ARRIVAL TIME SURFACE

17 May 2023. Draft version May 19, 2023. Preprint typeset using LaTeX style AASTeX6 v. 1.0.

Mi-Xiang Lan ([email protected]; Center for Theoretical Physics, College of Physics, Jilin University, 130012 Changchun, China), Xue-Feng Wu (Purple Mountain Observatory, Chinese Academy of Sciences, 210023 Nanjing, China; School of Astronomy and Space Sciences, University of Science and Technology of China, 230026 Hefei, China), and Zi-Gao Dai (Department of Astronomy, School of Physical Sciences, University of Science and Technology of China, 230026 Hefei, China)

Keywords: Gamma-ray bursts (629); magnetic fields (994)

1. INTRODUCTION

Short-duration gamma-ray bursts (sGRBs) have been shown to originate from mergers of double compact objects (Abbott et al. 2017), while long-duration gamma-ray bursts (lGRBs) are associated with the collapse of massive stars (Mazzali et al. 2003). The environments of sGRBs would therefore be the interstellar medium (ISM), while for lGRBs they could be the stellar wind. However, Yi et al. (2013) suggested that the environments of GRBs would be neither a uniform ISM with a constant density nor a stellar wind with a number density n(r) proportional to r^{−2}. Assuming a power-law profile of the environment density, i.e., n(r) ∝ r^{−k}, they found that the typical value of k is ∼ 1 (Yi et al. 2013). Since there might be a large-scale ordered magnetic field in the reverse-shock region, carried out with the outflow from the central engine, the emission from the reverse-shock region would be highly polarized. The profile of the GRB environment affects the dynamics and the emission during the afterglow phase. Besides the forward-shock emission, the contribution of the emission from the reverse-shock region to the total jet emission is also affected by the environment (Kobayashi 2000; Chevalier & Li 2000; Wu et al. 2003). Therefore, the environment may affect the polarizations of the early afterglow.

Because of relativistic motion, radiation emitted at different radii r arrives at the observer at the same observational time t, and the locus of these radii forms the equal arrival time surface (EATS; Sari 1998). The EATS effect would become important for a stratified medium. Therefore, afterglow polarizations with the EATS effect should be investigated. Afterglow polarization has been investigated widely in the literature.
Polarization of the jet emission in a three-dimensional anisotropic random magnetic field was considered by Sari (1999) and Gruzinov (1999). The afterglow polarizations with various jet structures were studied by Rossi et al. (2004), Lazzati et al. (2004), Wu et al. (2005), and Lan et al. (2018). Granot & Königl (2003) discussed the afterglow polarizations with a large-scale ordered magnetic field component in the ambient medium. Lazzati et al. (2004) also discussed the late-time afterglow polarization with a toroidal magnetic field component in the shocked ISM. Afterglow polarizations considering both the reverse-shock and the forward-shock emission were investigated by Lan et al. (2016). Recently, the afterglow polarizations of an off-axis top-hat jet with lateral expansion in a stratified medium were discussed (Pedreira et al. 2022). However, these authors did not include the EATS effect in their treatment, which might be important for off-axis detections (Huang et al. 2007). In this paper, we consider the afterglow polarizations with the EATS effect in an arbitrary outer medium with a power-law number density. Two cases are studied: the reverse-shock emission dominates the early afterglow (Case I), and the forward-shock radiation dominates the emission during the whole afterglow phase (Case II). The paper is arranged as follows. In Section 2, we describe our model. Our new results with the EATS effect and other revisions, compared with the results in Lan et al. (2016), are shown in Section 3. Afterglow polarizations in a stratified medium are presented in Section 4. Finally, our conclusions and discussion are given in Section 5. Throughout the paper, a flat universe with Ω Λ = 0.73, Ω M = 0.27, and H 0 = 71 km s −1 Mpc −1 is adopted.
THE MODEL
The dynamics
An ultrarelativistic outflow from a GRB central engine is usually thought to be collimated. With the propagation of this collimated outflow (i.e.,
the jet) in an outer medium, two shocks (the reverse and forward shocks) will be formed. So there are four regions in the system: the unshocked outflow (Region 4), the shocked outflow (Region 3, or the reverse-shock region), the shocked medium (Region 2, or the forward-shock region), and the unshocked medium (Region 1). Observations of early optical flashes suggest that the magnetization of the GRB outflow cannot be high (σ ≤ 1) (Zhang & Kobayashi 2005; Lan et al. 2016); therefore, the magnetic field is dynamically unimportant. The dynamical model here follows that in Lan et al. (2016), where the energy conservation of the system is used to derive the dynamics. A homogeneous interstellar medium (ISM) was considered in Lan et al. (2016); here we generalize our study to a stratified medium with a power-law density distribution n(r) = n 0 (r/r 0 ) −k , where r is the radius from the central engine. We fix n 0 = 1 cm −3 throughout this paper and consider the effects of different values of r 0 and k on the dynamics. Lateral expansion is not considered, and the half-opening angle of the top-hat jet is fixed to θ j = 0.1 rad. With the EATS effect, it is convenient to express the dynamical quantities (e.g., the bulk Lorentz factor γ) as functions of the radius r. The EATS used here reads (Sari 1998) t b − (r cos θ)/c = t/(1 + z), (1) where t b is the dynamic time in the burst source frame, θ is the angle between the local velocity direction and the line of sight, c is the speed of light, and z is the redshift of the source. We assume that before the initial emission radius r b the outflow moves with a constant initial Lorentz factor η, so that t b (r b ) = r b /(β 0 c), where the initial dimensionless velocity is β 0 = (1 − η −2 ) 1/2 . In this paper, we set r b = 10 13 cm.
Afterglow polarizations
High-level PDs observed in the GRB afterglow phase (Steele et al. 2009; Mundell et al. 2013) indicate that there would be large-scale magnetic field remnants in the afterglow emission region.
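As a side note on the dynamics, the EATS relation of Eq. (1) can be sketched numerically for the coasting phase (constant Lorentz factor, before deceleration). The function below is a minimal illustration, not the paper's full dynamical code; the rounded value of c is an assumption of this sketch.

```python
import numpy as np

C = 3.0e10  # speed of light [cm/s]; rounded value, an assumption of this sketch

def observer_time(r, theta, gamma, z):
    """Observer time t from Eq. (1), t = (1+z) * (t_b - r*cos(theta)/c),
    for a coasting shell with constant Lorentz factor gamma, so that
    t_b = r / (beta0 * c)."""
    beta0 = np.sqrt(1.0 - 1.0 / gamma**2)
    t_b = r / (beta0 * C)
    return (1.0 + z) * (t_b - r * np.cos(theta) / C)
```

For an on-axis photon (θ = 0) this reduces to the familiar t ≈ (1 + z) r/(2γ 2 c); with η = 100, r = 10^16 cm, and z = 1 it gives t ≈ 33 s, and photons emitted off the line of sight arrive later, which is what generates the EATS.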
Theoretically, a large-scale ordered magnetic field can be carried with the outflow from the GRB central engine (Blandford & Znajek 1977; Spruit et al. 2001; Drenkhahn 2002). Because of the conservation of magnetic flux, the radial and transverse components of the ordered magnetic field decrease as r −2 and r −1 , respectively. At large radii (e.g., in the emission region of an afterglow), the transverse component will dominate. In the literature, there are two kinds of transverse field (Spruit et al. 2001; Drenkhahn 2002). One is the toroidal configuration, corresponding to a parallel rotator (e.g., a black hole). The other is the aligned field, usually related to an inclined central engine (e.g., a magnetar). Here, the radiation mechanism in the afterglow phase is assumed to be synchrotron emission, and inverse-Compton scattering is not considered. As in Lan et al. (2016), we assume the magnetic field in Region 3 is large-scale ordered and neglect the random field, so our PD results are upper limits for the reverse-shock-emission-dominated cases at the early stage. In the forward-shock region, a two-dimensional random field confined in the shock plane is assumed, which is likely to be generated or amplified by the forward shock. Therefore, the emission from the reverse-shock region with a large-scale ordered magnetic field will be highly polarized, while the emission from the forward-shock region with a two-dimensional random field will be lowly polarized unless it is viewed off-axis (Waxman 2003; Lan et al. 2016). For an aligned field configuration in Region 3 (its orientation is assumed to be π/4 throughout this paper), the final PDs of the system are always positive (Π a = (Q ν 2 + U ν 2 ) 1/2 /f ν ) and the polarization directions are described by the polarization angles (PAs, χ a = (1/2) arctan(U ν /Q ν )), where Q ν , U ν , and f ν are the total Stokes parameters Q and U and the flux density of the system, including contributions from both Regions 2 and 3 (e.g.,
f ν = f ν,2 + f ν,3 , where f ν,2 and f ν,3 are the flux densities of Regions 2 and 3, respectively). For a toroidal configuration in Region 3, because of the axial symmetry, the Stokes parameter U ν of the system is zero. The PD of such a system is defined as Π t = Q ν /f ν . So, depending on the sign of Q ν , Π t can be positive or negative, and the polarization direction for Π t > 0 differs by 90 • from that for Π t < 0. The concrete expressions for the Stokes parameters and the related quantities can all be found in Lan et al. (2016). However, the polarization calculation for a two-dimensional random magnetic field in Lan et al. (2016) was not correct; it has been revised in Lan et al. (2019a). Here, the integration is performed on the EATS, while it was on the jet surface in Lan et al. (2016). In the reverse-shocked or forward-shocked region, the injected energy spectrum of the shock-accelerated electrons is N (γ e ) ∝ γ e −p i , where γ e is the Lorentz factor of the shock-accelerated electrons. In this paper, we fix the spectral index of the injected electrons in Region i (i = 2 for Region 2 and i = 3 for Region 3) to p i = 2.5. The minimum and maximum Lorentz factors of the injected electrons in Region i can be expressed as γ m,i = (p i − 2)/(p i − 1) × ǫ e,i e i /(n i m e c 2 ) + 1 and γ max,i = [6πe/(σ T B ′ i )] 1/2 , respectively. The internal energy density and the number density of Region i are denoted as e i and n i , respectively; m e and e are the mass and charge of the electron, and σ T is the Thomson scattering cross section. The strength of the magnetic field in the comoving frame of Region i is denoted as B ′ i . The cooling of the electrons in the shocked region is also included, and the cooling Lorentz factor of the electrons is γ c,i = 6πm e c/(σ T B ′ i 2 t ′ ), where t ′ is the time in the comoving frame of the shocked region, t ′ = ∫_{r b}^{r} dr/(βγc), with β = (1 − γ −2 ) 1/2 the dimensionless velocity of the jet.
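To fix ideas, the characteristic electron Lorentz factors above can be evaluated for representative forward-shock conditions. The snippet below is a rough sketch: the strong-shock jump conditions used to build e i and n i (n 2 = 4γn 1 , e 2 = (γ − 1)n 2 m p c 2 ) and the rounded CGS constants are assumptions of this illustration, not taken from the paper.

```python
import numpy as np

# Rounded CGS constants (assumptions of this sketch)
M_P, M_E = 1.6726e-24, 9.1094e-28   # proton / electron mass [g]
C = 3.0e10                           # speed of light [cm/s]
E_CH = 4.8032e-10                    # electron charge [esu]
SIGMA_T = 6.6524e-25                 # Thomson cross section [cm^2]

def electron_lorentz_factors(gamma, n1, eps_e, eps_B, p, t_comoving):
    """Characteristic Lorentz factors gamma_m, gamma_c, gamma_max in the
    forward-shock region, using standard strong-shock jump conditions:
    n2 = 4*gamma*n1 and e2 = (gamma - 1)*n2*m_p*c^2 (an assumption here)."""
    n2 = 4.0 * gamma * n1
    e2 = (gamma - 1.0) * n2 * M_P * C**2
    B = np.sqrt(8.0 * np.pi * eps_B * e2)   # comoving magnetic field strength
    g_m = (p - 2.0) / (p - 1.0) * eps_e * e2 / (n2 * M_E * C**2) + 1.0
    g_max = np.sqrt(6.0 * np.pi * E_CH / (SIGMA_T * B))
    g_c = 6.0 * np.pi * M_E * C / (SIGMA_T * B**2 * t_comoving)
    return g_m, g_c, g_max
```

With γ = 100, n 1 = 1 cm −3 , ǫ e = 0.1, ǫ B = 0.01, p = 2.5, and t ′ = 10 4 s, this gives γ m of a few thousand and γ max well above 10 7 , the expected ordering for afterglow conditions.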
COMPARISONS
The Setup
We compare the results obtained with the EATS effect with those without this effect in Lan et al. (2016). The dynamical and emission parameters for the four cases considered here are the same as those in Lan et al. (2016). The fixed dynamical parameters for Cases 1 and 3 are the same (corresponding to the thick shell case): the isotropic equivalent energy E iso = 10 52 erg, the initial Lorentz factor η = 300, and the initial width of the outflow ∆ 0 = 3 × 10 12 cm. The fixed dynamical parameters for Cases 2 and 4 are also the same (corresponding to the thin shell case): E iso = 10 50 erg, η = 100, and ∆ 0 = 3 × 10 10 cm. The shocks in each case, including both the reverse shock and the forward shock, are assumed to be adiabatic. The redshift of the source is fixed to z = 1. We parameterize the partition of the shocked energy in Regions 2 and 3: fractions ǫ e,i and ǫ B,i of the shocked energy go to the electrons and the magnetic field, respectively. We fix ǫ e,3 = ǫ B,3 = 0.1, ǫ e,2 = 0.05, and ǫ B,2 = 0.002 in Cases 1 and 4; ǫ e,3 = 0.015, ǫ B,3 = 0.01, ǫ e,2 = 0.02, and ǫ B,2 = 0.005 in Case 2; and ǫ e,3 = 0.01, ǫ B,3 = 0.005, ǫ e,2 = 0.02, and ǫ B,2 = 0.01 in Case 3. We summarize the parameters of the four cases in Table 1. The other parameters are the same for the four cases: θ j = 0.1 rad, p 2 = p 3 = 2.5, n 1 = 1 cm −3 , and z = 1. For comparison, an ISM environment (k = 0) is considered, as in Lan et al. (2016). It should be noted that the settings used to reproduce the results of Lan et al. (2016) here are the same as in that paper, except that the polarization calculation for a two-dimensional random magnetic field is corrected. For the new settings here, the differences are that the EATS effect is considered, the polarization calculation for a two-dimensional random magnetic field is corrected (see Lan et al.
(2019a)), and the local PD in an ordered field is expressed as π 0 = ∫ G(x)N (γ e ) dγ e / ∫ F (x)N (γ e ) dγ e , (2) where F (x) = x ∫_x^∞ K 5/3 (t) dt and G(x) = x K 2/3 (x). Here x = ν ′ /ν ′ c , where ν ′ = (1 + z)ν/D is the observational frequency in the comoving frame and ν ′ c is the critical frequency of electrons with Lorentz factor γ e . K 5/3 (x) and K 2/3 (x) denote the modified Bessel functions of orders 5/3 and 2/3. It should be noted that in the forward-shock-dominated cases (i.e., Cases 2 and 3) the emission from the reverse-shock region is also included, although it is unimportant for the whole jet emission.
The results
Here, as representative examples, we consider two observational angles: one on-axis observation (q ≡ θ V /θ j = 0.6) and one off-axis detection (q = 2.0). Our results are shown in Figs. 1 and 2. In each case, roughly around the jet break time when 1/γ = θ V + θ j , there are two small PD bumps for on-axis observation and the PA changes abruptly by 90 • between the two PD bumps (Sari 1999; Rossi et al. 2004), while there is only one large PD bump for large off-axis detection of the forward-shock emission (Rossi et al. 2004). For off-axis detection, the PDs of the four cases at late times are all larger than 0 here, while they were negative in Lan et al. (2016) owing to the incorrect polarization treatment for the two-dimensional random field. In each case, the light curves and PD curves of an aligned field are similar to those of a toroidal field for both q = 0.6 and q = 2.0. Independent of the observational angle, PA changes (not necessarily 90 • ) usually happen around the minimum values of the PD curves (Lan et al. 2018, 2019b). For on-axis observation, the profiles of the light curves, PD curves, and PA curves are all similar for the calculations with and without the EATS effect. Because of the EATS effect considered and the local PD π 0 (Eq. 2) used here, the PD values of our new results are larger than those in Lan et al. (2016).
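Equation (2) can be evaluated numerically. The sketch below assumes a pure power-law electron distribution N (γ e ) ∝ γ e −p over a broad range, so that the observing frequency sits deep inside the power-law spectral segment; in that regime π 0 should approach the classical value (p + 1)/(p + 7/3). Frequencies and Lorentz factors are in arbitrary units, an assumption of this sketch.

```python
import numpy as np
from scipy.integrate import quad, trapezoid
from scipy.special import kv

def F(x):
    # Synchrotron function F(x) = x * int_x^inf K_{5/3}(t) dt,
    # split at t = 1 for quadrature robustness at small x.
    a = quad(lambda t: kv(5.0 / 3.0, t), x, 1.0)[0] if x < 1.0 else 0.0
    b = quad(lambda t: kv(5.0 / 3.0, t), max(x, 1.0), np.inf)[0]
    return x * (a + b)

def G(x):
    # G(x) = x * K_{2/3}(x)
    return x * kv(2.0 / 3.0, x)

def local_pd(p, nu=1.0e4):
    """Local PD pi_0 of Eq. (2) for N(gamma_e) ~ gamma_e^-p, with
    x = nu/nu_c and nu_c ~ gamma_e^2 (arbitrary units)."""
    gam = np.logspace(1.0, 4.0, 300)   # nu_c spans x from 1e2 down to 1e-4
    x = nu / gam**2
    weight = gam**(-p)
    num = trapezoid(np.array([G(xi) for xi in x]) * weight, gam)
    den = trapezoid(np.array([F(xi) for xi in x]) * weight, gam)
    return num / den
```

For p = 2.5 this gives π 0 ≈ 0.72, and for an effectively cooled slope p + 1 = 3.5 it gives π 0 ≈ 0.77, which matches the late-time PD 3 ≈ 0.77 quoted later in the paper.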
The PD peaks at late times, mainly due to the forward-shock emission, will be larger with the EATS effect for q < 1, which is consistent with Rossi et al. (2004), while for q > 1 they are comparable and only a temporal shift is expected with the EATS effect. With the EATS effect, the peak times of the reverse-shock emission move to later times for off-axis detection, which differs from the fixed (i.e., q-independent) peak time of the reverse-shock radiation without the EATS effect in Lan et al. (2016). For large off-axis detection (q = 2.0), with the EATS effect, the light curves become steeper before their peak times (Huang et al. 2007; Rossi et al. 2004), the peak times of both the reverse-shock and the forward-shock emission shift to later observational times and increase with q, and the peak times of the PA curves with an aligned magnetic field are also delayed. With the EATS effect, the two peaks in the PD curves of Cases 1 and 4 (the reverse-shock-emission-dominated cases) and the only peaks of the PD curves in Cases 2 and 3 (the forward-shock-emission-dominated cases) all shift to later observational times. The first peaks in the PD curves of Cases 1 and 4 arise because the flux from Region 3 (which is highly polarized) is comparable to that from Region 2. The second peaks in the PD curves of Cases 1 and 4 and the only peaks of the PD curves in Cases 2 and 3 are mainly because of off-axis observations of the forward-shock region, and the peak times are around the jet break time when 1/γ = θ V + θ j . Because of the EATS effect, there are two small peaks in the PD bump of Cases 2 and 3 with q = 2.0 at late times, compared with the single peak in the PD bump for the non-EATS cases.
Because the dynamics for Cases 1 and 3 are the same, the peak time of the second PD bump in Case 1 and that of the only PD bump in Case 3 are both around the time when 1/γ = θ V + θ j and hence are almost the same; the same holds for the peak time of the second PD bump in Case 4 and that of the only PD bump in Case 2.
AFTERGLOW POLARIZATIONS IN A STRATIFIED MEDIUM
In this section, the parameters of the ejecta are fixed to E iso = 10 50 erg, η = 100, and ∆ 0 = 3 × 10 10 cm. The shocks are also assumed to be adiabatic. The effects of the parameters of the stratified medium (k and r 0 ) on the dynamics are studied. The redshift of the source is fixed to z = 0.3. The dynamics used in this section are shown in Fig. 3. In the upper panel, we fix r 0 = 10 17 cm and let k vary. The number density n(r) at r < 10 17 cm increases with k, leading to stronger shocks with higher velocities. Therefore, the reverse-shock crossing radius decreases with increasing k. In the lower panel, we fix k = 1 and let r 0 vary. The number density n(r) at the same radius r is larger for a larger r 0 , so the shocks are also stronger for the dynamics with larger r 0 . As for the variable k parameter, the reverse-shock crossing radius decreases with increasing r 0 . If the Lorentz factor of Region 3 relative to Region 4 satisfies γ 34 ≫ 1, the reverse shock is ultrarelativistic, corresponding to the thick shell case; if γ 34 − 1 ≪ 1, the reverse shock is Newtonian, corresponding to the thin shell case. Sari & Piran (1995) had discussed the analytical solution of the outflow dynamics and pointed out that if η 2 ≫ f ≡ n 4 /n 1 the reverse shock is ultrarelativistic and otherwise it is Newtonian, where n 4 and n 1 are the number densities of Region 4 and Region 1, respectively. From this calculation, we get (f /η 2 ) k=2 (r) = (f /η 2 ) k=0 (r) (r/r 0 ) 2 . At the reverse-shock crossing radius of the k = 2 case (R c = 1.56 × 10 14 cm), (f /η 2 ) k=0 (R c ) = 4.8 × 10 4 ≫ 1 and (f /η 2 ) k=2 (R c ) = (f /η 2 ) k=0 (R c )(R c /r 0 ) 2 = 0.1 ≪ 1.
So at r = R c = 1.56 × 10 14 cm, the reverse shock for k = 0 is Newtonian while it is relativistic for k = 2. Since whether or not the reverse shock is relativistic also depends on the parameters of the outer medium (i.e., k and r 0 ), in the following we will not distinguish between the thick shell and the thin shell. In this section, the local PD in an ordered field is given by Eq. 2, the EATS effect is included, and the polarization treatment for a two-dimensional random field is corrected. In the following, we consider two cases: the reverse-shock emission dominates the early afterglow (Case I), and the forward-shock radiation dominates the emission during the whole afterglow phase (Case II). We take ǫ e,3 = ǫ B,3 = ǫ e,2 = 0.1 and ǫ B,2 = 0.01 for Case I, and ǫ e,3 = ǫ B,3 = 10 −6 and ǫ e,2 = ǫ B,2 = 0.1 for Case II. With the parameters we take for Case II, the ratio f 32 ≡ f ν,3 /f ν,2 will be smaller than 0.001, so we neglect the contributions of the reverse-shock region to the total Stokes parameters for Case II. Therefore, with the assumed parameters, there are two emission regions (both the reverse-shock region and the forward-shock region) for Case I, and effectively only one emission region (the forward-shock region) for Case II. As shown in Section 3, PD curves with an aligned field are similar to those with a toroidal field, so in the following we only consider an aligned field in Region 3 and a two-dimensional random field in Region 2. The sets of the parameters in this section are presented in Table 2. (Figure 3 caption, continued: In the upper panel, r 0 is fixed to be 10 17 cm. The red-solid, green-dashed, and blue-dotted lines correspond to k = 0, 1, and 2, respectively. In the lower panel, we fix k = 1. The magenta-dash-dot and olive-dash-dot-dot lines correspond to r 0 = 10 15 cm and 10 16 cm, respectively. The vertical lines show the corresponding reverse-shock crossing radii.)
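The thick/thin-shell criterion just used can be checked with a rough estimate. The sketch below approximates n 4 from a coasting shell of isotropic energy E iso , Lorentz factor η, and constant lab-frame width ∆ 0 (a standard order-of-magnitude estimate that ignores shell spreading, and therefore an assumption of this sketch rather than the paper's full dynamics). It reproduces the orders of magnitude quoted above: f /η 2 ≫ 1 for k = 0 and ≪ 1 for k = 2 at r = R c .

```python
import numpy as np

M_P = 1.6726e-24   # proton mass [g] (rounded, an assumption of this sketch)
C = 3.0e10         # speed of light [cm/s]

def f_over_eta2(r, E_iso, eta, delta0, n0, r0, k):
    """f/eta^2 with f = n4/n1: n4 is estimated for a coasting shell of
    isotropic energy E_iso, Lorentz factor eta and lab-frame width delta0;
    n1(r) = n0 * (r/r0)**(-k)."""
    n4 = E_iso / (4.0 * np.pi * r**2 * delta0 * eta**2 * M_P * C**2)
    n1 = n0 * (r / r0) ** (-k)
    return (n4 / n1) / eta**2
```

By construction, the k = 2 value equals the k = 0 value times (r/r 0 ) 2 , as used in the text; the simple estimate gives the same signs of the inequalities as the paper's quoted 4.8 × 10 4 and 0.1, though not the exact numbers.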
Multi-band
With our improved model, we give polarization predictions for the multi-band GRB afterglows, including the optical (R-band), X-ray (2 keV), and γ-ray (200 keV) bands. The parameters of the stratified medium are set to the typical values k = 1 and r 0 = 10 17 cm (Yi et al. 2013), and the dynamics used in this subsection is shown as the green-dashed lines in Fig. 3. The results of the polarization evolutions are shown in Fig. 4. (Table 2, Case II row: E iso = 10 50 erg, η = 100, ∆ 0 = 3 × 10 10 cm, ǫ e,3 = 10 −6 , ǫ B,3 = 10 −6 , ǫ e,2 = 0.1, ǫ B,2 = 0.1. The other parameters are the same for the two cases: θ j = 0.1 rad, p 2 = p 3 = 2.5, n 0 = 1 cm −3 , and z = 0.3.) Independent of the observational angle and of the case considered, the polarization curves (including both the PD and PA curves) almost coincide for the 2 keV and 200 keV observational energy bands. For on-axis observations in Case I, the reverse-shock emission, which is highly polarized, dominates the total flux density during the early reverse-shock crossing stage in the optical R-band (f 32 ∼ 20 ≫ 1), so there is a high-level PD plateau before the reverse-shock crossing time. The small PD peaks in both the X-ray and γ-ray bands at the reverse-shock crossing time are due to the small flux contributions of the highly polarized emission from Region 3 (f 32 ∼ 0.01 ≪ 1). Therefore, the PD evolution before the reverse-shock crossing time is mainly determined by f 32 for on-axis observations. For off-axis detection in Case I, the first PD peaks in the three observational energy bands are due to the highly polarized emission from Region 3, and the second PD peaks are because of the off-axis observations of the forward-shock emission. The differences in the PD evolutions between the three observational energy bands are tiny. For the on-axis and off-axis observations in Case II, the PD curves almost coincide for the three observational energy bands.
As in Section 3, if the forward-shock emission dominates the whole afterglow phase, there are two small PD bumps for on-axis observation and only one large PD bump for large off-axis detection. For on-axis observation, the first PD bump begins around 200 s, roughly corresponding to 1/γ = θ j − θ V ; the second PD bump begins around 10 4 s, roughly corresponding to the jet break time at 1/γ = θ j + θ V . For off-axis detection, the only PD bump begins around 10 3 s (roughly corresponding to 1/γ = θ V − θ j ) and reaches its peak around 2 × 10 4 s (roughly corresponding to 1/γ = θ j + θ V ). Therefore, the PD bumps in Case II (i.e., when the forward-shock emission dominates the whole afterglow radiation) are due to the evolution of the bulk Lorentz factor.
Various observational angles
The observational geometry also significantly affects the polarization properties of the jet emission (Waxman 2003). The light curves and polarization properties are similar for on-axis observations (i.e., q ≤ 1) of an aligned field (Lan et al. 2016), and the EATS effect on the light curves and on the polarization curves becomes important for off-axis detections, as shown in Section 3. Therefore, we consider four observational angles: one on-axis observation with q = 0.6 and three off-axis detections with q = 1.2, 2.0, and 3.0. The results are shown in Fig. 5. In this subsection, we set the parameters of the stratified medium to their typical values k = 1 and r 0 = 10 17 cm (Yi et al. 2013), and the dynamics is shown as the green-dashed line in Fig. 3. The observational frequency is set at the optical R-band. The peak times of the light curves from the reverse-shock region increase with q, as do those from the forward-shock region. Depending on the ratio f 32 and on the decrease of the bulk Lorentz factor γ, the situation is very complicated for the slightly off-axis detection q = 1.2. In Case I, a relatively large f 32 leads to relatively high PDs (∼ 30%) at early times.
The decrease of the bulk Lorentz factor leads to two effects, corresponding to two PD peaks at late observational times: one PD peak appears around the observational time when 1/γ = θ V − θ j (Waxman 2003), and the other around the jet break time when 1/γ = θ j + θ V . In our calculation, the dynamics are the same for Cases I and II. Because the forward-shock radiation dominates the total flux during the whole afterglow phase in Case II, we focus our analysis on Case II to study the effect of the decaying bulk Lorentz factor on the polarization properties. For q = 1.2, the first PD bump begins to rise around 10 s, just around the observational time when 1/γ = θ V − θ j (corresponding to γ ∼ 50 around 16 s). The second PD bump peaks around 4.2 × 10 4 s, after the jet break time when 1/γ = θ j + θ V (corresponding to γ ∼ 4.5 around 1.9 × 10 4 s). For large off-axis observational angles (i.e., q = 2.0 and 3.0) in Case II, the times of the only PD peaks are always slightly after the peak times of the corresponding light curves. The PD peaks rise around the time when 1/γ = θ V − θ j and reach their maximum values around the time when 1/γ = θ j + θ V for both q = 2.0 and 3.0. Because θ V is larger for q = 3.0 than for q = 2.0, the γ value at 1/γ = θ j + θ V is smaller for q = 3.0, so the PD peaks around 1/γ = θ j + θ V shift to later observational times with increasing q. In Case II (i.e., the forward-shock-dominated case), there are two PD bumps in the PD curve for slightly off-axis observations, while there is only one PD bump for large off-axis observations. The first and second PD bumps of the slightly off-axis observations begin around 1/γ = θ V − θ j and 1/γ = θ j + θ V , respectively, so the difference of the bulk Lorentz factor between the two onset times of the PD bumps reads ∆γ = 2/[θ j (q 2 − 1)]. With the increase of the observational angle q, ∆γ decreases.
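The ∆γ relation above follows directly from the two onset conditions and can be sketched as (the function name is illustrative):

```python
def delta_gamma(theta_j, q):
    """Difference of the bulk Lorentz factor between the onsets of the two
    PD bumps for slightly off-axis views (q = theta_V / theta_j > 1):
    1/gamma_1 = theta_V - theta_j and 1/gamma_2 = theta_V + theta_j,
    which gives delta_gamma = 2 / (theta_j * (q**2 - 1))."""
    gamma_1 = 1.0 / (theta_j * (q - 1.0))
    gamma_2 = 1.0 / (theta_j * (q + 1.0))
    return gamma_1 - gamma_2
```

For θ j = 0.1 rad, ∆γ drops from about 45 at q = 1.2 to about 6.7 at q = 2.0 and 2.5 at q = 3.0, illustrating the shrinking Lorentz-factor interval between the two bump onsets.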
Because the dynamics for Case II is independent of the q value, the time interval between the two PD bumps becomes shorter with increasing q. So, when the onset time of the second PD bump becomes smaller than the end time of the first PD bump, the two PD bumps of the slightly off-axis observations merge into one PD bump. From our calculation, the convergence of the two PD bumps into one happens when 1.5 < q < 1.8 under the dynamics used here, and the trend is consistent with the results shown in Fig. 8 of Rossi et al. (2004). This is also consistent with the statement just above that the single PD bump begins around 1/γ = θ V − θ j and reaches its peak around 1/γ = θ j + θ V for large off-axis observations. Granot & Königl (2003) also considered the PD evolutions for q < 1 with a two-dimensional random magnetic field in the GRB afterglow phase. The PD signs of our results are opposite to theirs, but consistent with those in Sari (1999) and Ghisellini & Lazzati (1999). For large q with q > 1, the only PD bump in the PD curve moves to later observational times and its peak value increases with increasing q. This trend is consistent with the case of a two-dimensional random magnetic field in the emission region in Rossi et al. (2004) and Pedreira et al. (2022).
The effects of the k parameter
In this subsection, we investigate the effects of the k parameter on the light curves and on the polarization properties. We fix r 0 = 10 17 cm and the observational frequency is set at the optical R-band. The dynamics for k = 0, 1, and 2 are shown as the red-solid, green-dashed, and blue-dotted lines in Fig. 3. The results are shown in Fig. 6. Since we take n 0 = 1 cm −3 and r 0 = 10 17 cm, a larger k leads to a larger number density of the outer medium at r < r 0 and hence to stronger shocks. Therefore, the flux density at early observational times increases with the k value.
Because the reverse-shock crossing time becomes shorter with a stronger reverse shock, the peaks of the light curves in Case I (where the reverse-shock emission dominates the early radiation) shift to earlier times for a larger k value. For on-axis observation (q = 0.6) of Case I, depending on the ratio f 32 , the early PD curve can be a plateau (for k = 1 and 2) or a bump (for k = 0). The early emission in Case I with q = 0.6 is dominated by the reverse-shock region; we therefore calculate the PD of the emission from Region 3, which reads PD 3 ≡ (Q ν,3 2 + U ν,3 2 ) 1/2 /f ν,3 . (3) We find that for each k value there is initially a PD plateau of the PD 3 curve before the reverse-shock crossing time; then the PD curve decreases, and finally it increases to a roughly constant value of 0.77 at late observational times. The values of PD 3 in the plateau phase are 0.705 (for k = 0), 0.686 (for k = 1), and ∼ 0.650 (for k = 2), respectively; PD 3 in the plateau phase and k are negatively correlated. From Fig. 3, the bulk Lorentz factor decays faster with a larger k before the reverse-shock crossing radius, which means that at the same radius the bulk Lorentz factor is smaller for a larger k value (i.e., a larger 1/γ cone for a larger k value). Because the cancellation of the polarization over a larger 1/γ cone becomes more important, PD 3 is smaller for a larger 1/γ cone. Therefore, the values of PD 3 in the early plateau phase are negatively correlated with the k value. To interpret the evolution of the PD 3 curve, we calculate the f̄ 3 parameter, defined as follows (Lan & Dai 2020): f̄ 3 ≡ ∫_{θ min}^{1/γ} df ν,3 / ∫_{1/γ}^{θ max} df ν,3 , (4) where θ min = 0 for on-axis observations and θ min = θ V − θ j for off-axis detections, while θ max = min(θ V + θ j , θ(r b )) and θ(r b ) corresponds to the θ value at r b on one EATS. In Fig.
7, we find that PD 3 and the f̄ 3 parameter are positively correlated for q = 0.6; hence the evolution of the PD 3 curve is mainly determined by the f̄ 3 parameter for on-axis observations. For the off-axis detection q = 2.0, after t ∼ 10 4 s (after the time when 1/γ = θ V − θ j ), PD 3 is positively correlated with f̄ 3 , while before t ∼ 10 4 s, f̄ 3 equals 0 and yet PD 3 still evolves with time. The reason for the PD 3 evolution when f̄ 3 = 0 may be the change of the asymmetry of the system as γ evolves; however, we do not have a proper parameter to depict this. Finally, f̄ 3 reaches infinity (totally low-latitude emission) after 3.6 × 10 4 s for q = 0.6 and after 2.5 × 10 5 s for q = 2.0. For off-axis detection (q = 2.0), PA 3 is initially positive with a value of 0.7774 rad, and then changes gradually to a negative value of −0.6949 rad, a change of ∆PA 3 ∼ π/2. At the beginning of the evolution, f̄ 3 = 0 (meaning that all the emission comes from the high-latitude region, outside the 1/γ cone), while at the late stage it is ∞ (meaning that all the emission comes from the low-latitude region, within the 1/γ cone). So the PAs of the low- and high-latitude emission have a roughly 90 • difference. PD 3 is roughly 0.77 for the low-latitude emission (with f̄ 3 = ∞). Its value for the high-latitude emission (with f̄ 3 = 0) is about 0.6 and is relatively high, which is very different from the low PD value of the high-latitude emission during the GRB prompt phase (Lan & Dai 2020; Lan et al. 2021). For Case I with q = 2.0, the first PD bumps for different k values are around the peak times of the f ν,3 curves, corresponding to the reverse-shock crossing times. Because the shocks become stronger with increasing k, the crossing time of the reverse shock becomes shorter, and thus the positions of the first PD bumps decrease with k. The second PD bump is because of the off-axis detection of the forward-shock radiation.
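The role of the f̄ 3 parameter can be illustrated with a toy beamed emitter. The sketch below weights the flux per unit polar angle as df ∝ D 3 sin θ dθ with Doppler factor D = 1/[γ(1 − β cos θ)] for a thin shell with isotropic comoving emissivity; EATS curvature and the polarized weighting of the real calculation are ignored, so this only shows how the 1/γ cone partitions the received flux. These simplifications are assumptions of the sketch, not the paper's method.

```python
import numpy as np
from scipy.integrate import quad

def f_bar(gamma, theta_min, theta_max):
    """Toy version of Eq. (4): ratio of the flux received from inside the
    1/gamma cone to that received from outside it, for a thin shell with
    isotropic comoving emissivity and df ~ D^3 * sin(theta) * dtheta."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    df = lambda th: (1.0 / (gamma * (1.0 - beta * np.cos(th))))**3 * np.sin(th)
    inner, _ = quad(df, theta_min, 1.0 / gamma)
    outer, _ = quad(df, 1.0 / gamma, theta_max)
    return inner / outer
```

In the small-angle limit with θ max ≫ 1/γ the ratio tends to 3 for an on-axis view, i.e., most of the flux comes from within the 1/γ cone; shrinking θ max (a narrower visible jet surface) pushes f̄ even higher, toward the "totally low-latitude emission" limit discussed above.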
For the same 1/γ value, the corresponding r or t is smaller for larger k. Therefore, the second PD bump begins and peaks at earlier observational times for a larger k value. For Case II with q = 0.6 and 2.0, the bulk Lorentz factor γ decreases faster at r < 10 17 cm for a larger k value, leading to smaller observational times both when 1/γ = θ j − θ V and when 1/γ = θ j + θ V for larger k. Therefore, both the two PD bumps for q = 0.6 and the only PD bump for q = 2.0 shift toward shorter observational times with increasing k. With the increase of k, the polarization evolution is slower for both q = 0.6 and 2.0. Our results for the wind environment with k = 2 are consistent with those shown in Fig. 1 of Lazzati et al. (2004).
The effects of the normalization parameter r 0
The effects of the normalization parameter r 0 on the polarization evolutions are also considered, and the results are shown in Fig. 8. We take k = 1 and the observational frequency is set at the optical R-band. Three values of r 0 are considered (i.e., 10 15 cm, 10 16 cm, and 10 17 cm). The dynamics for r 0 = 10 15 cm, 10 16 cm, and 10 17 cm are shown as the magenta-dash-dot, olive-dash-dot-dot, and green-dashed lines in the lower panel of Fig. 3. The number density n(r) at a given radius r is larger for a larger r 0 , which leads to stronger shocks (corresponding to a shorter reverse-shock crossing time) and then to a higher flux density. Therefore, in Case I the flux density increases and the peak time of the light curve decreases with r 0 for on-axis observations. For on-axis observation (q = 0.6) of Case I, as for the different k values in Section 4.3, depending on the ratio f 32 , the PD curves can show a bump (r 0 = 10 15 cm) or a plateau (r 0 = 10 17 cm) at early times.
For off-axis observation (q = 2.0) of Case I, a larger r 0 also leads to a stronger shock and hence to a shorter reverse-shock crossing time, so the first PD bumps (corresponding to the peak of the emission from Region 3) with larger r 0 shift to earlier observational times. A larger r 0 leads to a smaller bulk Lorentz factor γ(r), so for the same 1/γ, r (or the corresponding observational time t) is smaller for a larger r 0 . Therefore, the second PD bumps (due to the off-axis observation of the forward-shock emission) also shift to earlier observational times for a larger r 0 .
CONCLUSIONS AND DISCUSSION
We compare our new results with those in Lan et al. (2016) without the EATS effect. We then apply our model to predict the multi-band afterglow polarizations in a stratified medium with a power-law number density distribution, and we also discuss the effects of the observational angle and of the parameters of the stratified medium on the afterglow polarizations. The dynamics in a stratified medium, including both the reverse-shock and forward-shock regions, are studied. We found that for fixed n 0 and r 0 , the bulk Lorentz factor for a larger k value decreases faster at the early stage and then more slowly at the late stage; for fixed n 0 and k, the bulk Lorentz factor for a larger r 0 value begins to decrease earlier. The reverse-shock crossing radius shifts to smaller radii with the increase of k or r 0 . We found that the EATS effect on the afterglow polarizations becomes important for off-axis observations. For the forward-shock-dominated case, there is only one large PD bump for large off-axis detection, and this PD bump begins roughly at 1/γ = θ V − θ j and peaks roughly around the jet break time when 1/γ = θ j + θ V . Compared with the non-EATS case, the amplitude of the PD bump is similar with the EATS effect, but the peak shifts to later observational times (Rossi et al. 2004).
For the reverse-shock-dominated case, there will be two PD bumps for off-axis detection, the PD value of the first PD bump is determined by f 32 , while its value at the second PD bump is usually determined by both the q value and the evolution of the bulk Lorentz factor γ. Compared with a negative PD values in Lan et al. (2016) for off-axis detections at late observational times, PDs will be larger than 0 with the corrected polarization treatment for two-dimensional random magnetic field in the forward-shock region. For on-axis observations, assuming the large-scale ordered field in the outflow carried from a central engine, PD value at early afterglow phase is mainly determined by the value of f 32 , i.e., the larger the f 32 , the higher the PD of the jet emission, and vice versa. When f 32 ≫ 1, PD of the jet emission will reach its maximum value of P D 3 . When f 32 ≪ 1, PD will be roughly 0. There are two small PD bumps around the jet break time due to the forward-shock emission (Sari 1999). The first and second PD bumps begin around 1/γ = θ j − θ V and 1/γ = θ V + θ j (the jet break time), respectively. And there will be an abrupt 90 • PA change between the two PD bumps (Sari 1999). We found given the magnetic field configurations in the emission regions, the jet structure, and the observational angles, the evolutions of the P D 3 with an ordered magnetic field in Region 3 is mainly determined byf 3 parameter (positively correlated), while the evolutions of the afterglow polarizations of the whole jet emission are mainly determined by both the value of f 32 or the evolutions of bulk Lorentz factor γ. Through the influences on f 32 or γ, various observational energy band, observational angles, k values, and r 0 values will finally affect the evolutions of the PD curves. Figure 1 . 
1had discussed the analytical solution of the outflow dynamics and pointed out that if η 2 ≫ f ≡ n 4 /n 1 Light curves (upper panel), PD curves (middle panel), and PA curves (lower panel) in four cases with an aligned magnetic field in Region 3. The red lines correspond to old results inLan et al. (2016) but with corrected polarization treatment for two-dimensional random magnetic field, the black lines represent our new results with EATS effect. The solid and dashed lines correspond to on-axis (q = 0.6) and off-axis (q = 2.0) observations, respectively. Figure 2 . 2Light curves (upper panel) and PD curves (lower panel) in four cases with a toroidal magnetic field in Region 3. The red lines correspond to old results inLan et al. (2016) but with corrected polarization treatment for two-dimensional random magnetic field, the black lines represent our new results with the EATS effect. The solid and dashed lines correspond to on-axis (q = 0.6) and off-axis (q = 2.0) observations, respectively. Figure 3 . 3The effects of the parameters of the stratified medium on the dynamics. The upper and lower panels show the dynamics with different k and different r0 values, respectively. Figure 4 . 4Light curves (upper panel), PD curves (middle panel), and PA curves (lower panel) for different observational energy band of Case I are shown in the first row. Light curves (upper panel) and PD curves (middle panel) for various observational energy band of Case II are presented in the second row. The red-solid, green-dashed, and blue-dotted lines correspond to the observational energy band of optical R-band, X-ray (2 keV), and γ-ray (200 keV), respectively. Figure 5 . 5Light curves (upper panel), PD curves (middle panel), and PA curves (lower panel) for different q value of Case I are shown in the first row. Light curves (upper panel) and PD curves (middle panel) for various q value of Case II are presented in the second row. 
The red-solid, green-dashed, blue-dotted, and magenta-dash-dot lines correspond to q = 0.6, 1.2, 2.0, and 3.0, respectively. For illustration, the blue and orange solid lines in the second row for Case II correspond to q = 1.5 and 1.8, respectively. Figure 6 . 6Light curves (upper panel), PD curves (middle panel), and PA curves (lower panel) for different k value of Case I are shown in the first row. Light curves (upper panel) and PD curves (middle panel) for various k value of Case II are presented in the second row. The green-dashed, red-solid, and blue-dotted lines represent k = 0, 1, and 2, respectively. Figure 7 . 7Light curves (upper panel), PD curves (middle panel), and PA curves (lower panel) of the reverse-shock radiation in Case I with q = 0.6 (solid lines) and 2.0 (dashed-lines). And also the evolutions of thef3 parameter (blue lines) are shown with respect to the right axis of the middle panel. Table 1 . 1Dynamical and Emission Parameters of the Cases 1, 2, 3, and 4cases Eiso η ∆0 ǫe,3 ǫB,3 ǫe,2 ǫB,2 Table 2 . 2Dynamical and Emission Parameters of the Cases I and IIcases Eiso η ∆0 ǫe,3 ǫB,3 ǫe,2 ǫB,2 Case I 10 50 erg 100 3 × 10 10 cm 0.1 0.1 0.1 0.01 The peak times of the light curves of the emission from Region 3 are around the first peaks in the PD curves of Cases 1 and 4. of the forward-shock emission) also shift to early observational time with an increasing r 0 .Because the bulk Lorentz factor γ decreases earlier for larger r 0 value (see the lower panel ofFig. 3), leading to smaller observational times both when 1/γ = θ j − θ V and when 1/γ = θ j + θ V for larger r 0 . Therefore, both the two PD bumps for q = 0.6 and the only PD bumps for q = 2.0 shift toward short observational time with the increasing r 0 value.CONCLUSIONS AND DISCUSSIONIn this paper, we consider the EATS effect to study the light curves and polarization evolutions of the GRB afterglows. 
We assume a large-scale ordered aligned field in the reverse-shock region, so our PD results at early stage give the upper . B P Abbott, R Abbott, T D Abbott, ApJL. 84813Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2017, ApJL, 848, L13 . R D Blandford, R L Znajek, MNRAS. 179433Blandford, R. D., & Znajek, R. L. 1977, MNRAS, 179, 433 . R A Chevalier, Z.-Y Li, ApJ. 536195Chevalier, R. A., & Li, Z.-Y. 2000, ApJ, 536, 195 . G Drenkhahn, A&A. 387714Drenkhahn, G. 2002, A&A, 387, 714 . G Ghisellini, D Lazzati, MNRAS. 3097Ghisellini, G., & Lazzati, D. 1999, MNRAS, 309, L7 . J Granot, A Königl, ApJL. 59483Granot, J., & Königl, A. 2003, ApJL, 594, L83 . A Gruzinov, ApJL. 52529Gruzinov, A. 1999, ApJL, 525, L29 . Y.-F Huang, Y Lu, A Y L Wong, K S Cheng, ChJA&A. 7397Huang, Y.-F., Lu, Y., Wong, A. Y. L., & Cheng, K. S. 2007, ChJA&A, 7, 397 . S Kobayashi, ApJ. 545807Kobayashi, S. 2000, ApJ, 545, 807 . M.-X Lan, Z.-G Dai, ApJ. 892141Lan, M.-X., & Dai, Z.-G. 2020, ApJ, 892, 141 . M.-X Lan, J.-J Geng, X.-F Wu, Z.-G Dai, ApJ. 87096Lan, M.-X., Geng, J.-J., Wu, X.-F., & Dai, Z.-G. 2019a, ApJ, 870, 96 . M.-X Lan, H.-B Wang, S Xu, S Liu, X.-F Wu, ApJ. 909184Lan, M.-X., Wang, H.-B., Xu, S., Liu, S., & Wu, X.-F. 2021, ApJ, 909, 184 . M.-X Lan, X.-F Wu, Z.-G Dai, ApJ. 81644ApJLan, M.-X., Wu, X.-F., & Dai, Z.-G. 2016, ApJ, 816, 73 -. 2018, ApJ, 860, 44 . M.-X Lan, R Xue, D Xiong, ApJ. 878140Lan, M.-X., Xue, R., Xiong, D., et al. 2019b, ApJ, 878, 140 . D Lazzati, S Covino, J Gorosabel, A&A. 422121Lazzati, D., Covino, S., Gorosabel, J., et al. 2004, A&A, 422, 121 . P A Mazzali, J Deng, N Tominaga, ApJL. 59995Mazzali, P. A., Deng, J., Tominaga, N., et al. 2003, ApJL, 599, L95 . C G Mundell, D Kopač, D M Arnold, Nature. 504119Mundell, C. G., Kopač, D., Arnold, D. M., et al. 2013, Nature, 504, 119 . A C C D E S Pedreira, N Fraija, S Dichiara, arXiv:2210.12904arXiv e-printsPedreira, A. C. C. d. E. S., Fraija, N., Dichiara, S., et al. 2022, arXiv e-prints, arXiv:2210.12904 . 
E M Rossi, D Lazzati, J D Salmonson, G Ghisellini, MNRAS. 35486Rossi, E. M., Lazzati, D., Salmonson, J. D., & Ghisellini, G. 2004, MNRAS, 354, 86 . R Sari, ApJL. 49443ApJLSari, R. 1998, ApJL, 494, L49 -. 1999, ApJL, 524, L43 . H C Spruit, F Daigne, G Drenkhahn, A&A. 369694Spruit, H. C., Daigne, F., & Drenkhahn, G. 2001, A&A, 369, 694 . I A Steele, C G Mundell, R J Smith, S Kobayashi, C Guidorzi, Nature. 462767Steele, I. A., Mundell, C. G., Smith, R. J., Kobayashi, S., & Guidorzi, C. 2009, Nature, 462, 767 . E Waxman, Nature. 423388Waxman, E. 2003, Nature, 423, 388 . X F Wu, Z G Dai, Y F Huang, T Lu, MNRAS. 3421131Wu, X. F., Dai, Z. G., Huang, Y. F., & Lu, T. 2003, MNRAS, 342, 1131 . MNRAS. 3571197-. 2005, MNRAS, 357, 1197 . S.-X Yi, X.-F Wu, Z.-G Dai, ApJ. 776120Yi, S.-X., Wu, X.-F., & Dai, Z.-G. 2013, ApJ, 776, 120 . B Zhang, S Kobayashi, ApJ. 628315Zhang, B., & Kobayashi, S. 2005, ApJ, 628, 315
Preparation of cavity Fock state superpositions by reinforcement learning exploiting measurement back-action

Arthur Perret and Yves Bérubé-Lauzière

Institut quantique and Département de génie électrique et de génie informatique, Faculté de génie, Université de Sherbrooke, Sherbrooke, Québec J1K 2R1, Canada
Preparation of bosonic and general cavity quantum states usually relies on using open-loop control to reach a desired target state. In this work, a measurement-based feedback approach is used instead, exploiting the non-linearity of weak measurements alongside a coherent drive to prepare these states. The extension of previous work on Lyapunov-based control is shown to fail for this task. This prompts for a different approach, and reinforcement learning (RL) is resorted to here for this purpose. With such an approach, cavity eigenstate superpositions can be prepared with fidelities over 98% using only the measurements back-action as the non-linearity, while naturally incorporating detection of cavity photon jumps. Two different RL frameworks are analyzed: an off-policy approach recently introduced called truncated quantile critic (TQC) and the on-policy method commonly used in quantum control, namely proximal policy optimization (PPO). It is shown that TQC performs better at reaching higher target state fidelity preparation.
arXiv:2305.11047v1
(Dated: May 19, 2023)

I. INTRODUCTION

Quantum states in the Hilbert space of microwave cavities have been at the forefront of recent efforts towards a fault-tolerant quantum computer [1-3]. Improvements in the coherence times of cavities, along with their well-understood and highly biased noise, hint at a scalable path using such devices.
Indeed, a whole class of bosonic codes, that is, codes encoding information in the multiple energy levels of harmonic oscillators, is being developed [4,5]. Preparation of states with such encodings can however be challenging, using for instance dissipative or adiabatic approaches for the cat states [6-8]. More general bosonic states, such as binomial code states or arbitrary superpositions, rely on open-loop protocols [9-15]. While arbitrary states can be prepared this way, these techniques do not exploit the advantages made possible by feedback approaches, namely their added robustness. Indeed, even if full unitary control is available, there is no guarantee that one can recover from an error such as a cavity jump occurring during state preparation. By repeatedly monitoring the cavity state during state preparation, one can adjust the control action as necessary in a feedback loop, without using more physical resources than for an open-loop approach. Deterministic state preparation using feedback has been demonstrated through the pioneering work of Haroche's group, whereby cavity eigenstates were prepared using the back-action from weak measurements as a decimation procedure [16,17]. Similar approaches have also been developed for both superconducting and semiconductor qubits, using either classically inspired feedback techniques [18] or reinforcement learning approaches [19,20]. In these works, however, only eigenstates were prepared. Recently, Porotti et al. [19] extended this idea using continuous-time multiplexed measurements [21] to prepare both single Fock states and superpositions. Preparing superpositions, however, proved to be more challenging, with relatively low state preparation fidelities, and was restricted to a limited set of states.

* [email protected]
The present work focuses on preparing cavity state superpositions in a circuit quantum electrodynamic (cQED) architecture [22-24], using a generalized version of the measurements used by Haroche's group. It is shown that careful choice of these measurements, and therefore of their back-action on the cavity's state, allows reaching target states with high fidelity, but at the cost of a more elaborate optimization procedure. Indeed, Haroche's method crucially relied on having the cavity Fock states be the fixed points (eigenstates) of the measurement operators. The control problem was therefore reduced to bringing the current cavity state near the targeted fixed point, with the measurement back-action playing the role of an attractor, hence allowing asymptotic convergence [25]. Extending Haroche's approach to prepare superpositions of cavity eigenstates is, however, not direct. As discussed in references [26-29], degeneracies of the measurement operators' eigenvalues, while necessary to stabilize superpositions, make whole subspaces invariant under these operators. In such cases, relying on the measurements' back-action alone will only ensure convergence towards the subspace containing the target state. To counteract this, one would need to add a second Hamiltonian control term to lift the degeneracies [28]. However, additional control introduces new error pathways, and makes the whole control protocol more complex. Such an approach will thus not be pursued here. Since previous approaches can only converge to subspaces, this raises the following question: Is it possible to find a controller that allows convergence to specific state superpositions with high fidelity without additional Hamiltonian control terms? This is the question addressed in the present work. A further limitation in the present work is that only coherent displacements can be applied to drive the cavity.
The limited control drive problem is interesting in its own right, as it puts forward the question of how to interact in an optimal way with the quantum back-action resulting from repeated weak measurements. Recent work focusing on feedback-based quantum control [19,30-33] has begun to address similar questions with deep reinforcement learning (RL) through the underlying formalism of quantum-observable Markov decision processes [34], the quantum analogue of the classical RL framework. In the present work, the large state space along with the high variance of the dynamics coming from random measurement outcomes makes the preparation of state superpositions with RL challenging. Another objective of the present paper is thus also to pinpoint an RL method able to handle such dynamics and provide for convergence to specific desired cavity state superpositions. The rest of this paper is structured as follows. In Sect. II, the cQED model considered is defined with its allowed measurements and controls. We develop the theory showing that, with the available measurements, only certain subspaces of superpositions can be stabilized. Considering this limitation, the control problem is then cast in the light of RL. Sect. III presents results for the ideal decoherence-free setting as well as when decoherence is present. While decoherence reduces the state preparation fidelities that can be achieved, state-of-the-art devices operate in a regime where our approach allows the stabilization of, and recovery from, photon jumps in the cavity. In Sect. IV, the learned behavior of the RL agent is analyzed, comparing its optimal policy with other approaches, namely an on-policy RL agent along with a Lyapunov function-based controller.

II. SYSTEM DEFINITION AND CONTROL APPROACH

A. System considered

A standard cQED architecture is considered here, in which a qubit is dispersively coupled to one mode of a microwave cavity, with the following Hamiltonian:

H_system = ω_c N + (ω_q/2) σ_z + χ N σ_z,  (1)

subject to a resonant control drive on the cavity given by

H_control = i(ε a† − ε* a).  (2)

Here, N = a†a is the number operator, with a and a† being respectively the annihilation and creation operators of the cavity mode, and χ is the dispersive coupling strength between the qubit and the cavity mode. Associated with the control Hamiltonian is the displacement operator given by

D(α) = exp(α a† − α* a),  (3)

with α = εt the coherent resonant drive amplitude. Assuming the cavity to be in state |ψ⟩, after application of the resonant control drive it becomes |ψ′⟩ = D(α)|ψ⟩. In the case the density operator ρ is used to specify the state rather than the state vector |ψ⟩, the state ρ′ after application of the drive is given by

ρ′ = D(α) ρ D(α)† = D(α) ρ D(−α).  (4)

This transformation of ρ into ρ′ can be written in superoperator form as [16]

ρ′ = D(α)ρ.  (5)

In the dispersive regime, no energy exchange occurs between the qubit and the cavity mode. Instead, the cavity experiences a light shift depending on the state of the qubit, while the latter experiences a phase shift depending on the cavity's state. As detailed below, this interaction is at the core of the present proposal, with the phase information acquired by the qubit updating the knowledge about the cavity's state. It will be seen that the back-action from qubit measurements then acts as a decimation procedure to prepare a target quantum state in the cavity [17]. This back-action is thus a resource for controlling the cavity.

The measurement scheme's full sequence, as shown in Fig. 1, starts by preparing the qubit in state |+⟩ = (|e⟩ + |g⟩)/√2 by applying a π/2 pulse on it, where |g⟩ and |e⟩ are respectively its ground and excited states, and then letting the qubit interact with the cavity for a given interaction time t_int. After the interaction, the qubit and cavity evolve to the state

|Ψ⟩ = (1/√2) [ |g⟩ ⊗ e^{−(i/2)(φ_0 a†a + δφ)} |ψ⟩ + |e⟩ ⊗ e^{(i/2)(φ_0 a†a + δφ)} |ψ⟩ ],  (6)

where φ_0 = t_int · χ is the phase shift per photon present in the cavity, which is an experimentally tunable parameter, δφ is a constant phase shift whose exact form is not important here, and |ψ⟩ is the state of the cavity. Applying a second π/2 pulse projects the qubit back onto its energy eigenbasis, with the measurement probabilities determined by the phase between the two superposed qubit eigenstates, a procedure known as Ramsey interferometry. In the case the cavity is in one of its eigenstates |n⟩ (number state or Fock state), the state of the qubit after interaction can be written as

|q⟩ = (|e⟩ + e^{i(φ_0 n + δφ)} |g⟩)/√2.  (7)

This can be seen as the cavity state |n⟩ imparting a phase φ_0 n + δφ to the initial qubit superposition (|e⟩ + |g⟩)/√2. To each cavity eigenstate |n⟩, there corresponds a direction on the equatorial plane of the qubit's Bloch sphere, this direction being specified by the angle φ_0 n + δφ. In this case, Ramsey interferometry allows the weak QND measurement of the number of photons in the cavity [35].
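The action of the truncated cavity operators and of the displacement of Eq. (3) can be made concrete with a minimal numpy sketch (an illustration, not the authors' implementation; the truncation dimension, the Taylor-series matrix exponential, and the value α = 0.5 are assumptions made for the example). Applied to the vacuum, the truncated D(α) reproduces the familiar coherent-state amplitudes:

```python
import numpy as np
from math import factorial

dim = 20                                          # Fock-space truncation (assumed)
a = np.diag(np.sqrt(np.arange(1, dim, dtype=float)), k=1)   # annihilation operator
adag = a.conj().T                                 # creation operator

def expm_taylor(M, terms=60):
    """Matrix exponential via a plain Taylor series (adequate at this size/norm)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for j in range(1, terms):
        term = term @ M / j
        out = out + term
    return out

def displacement(alpha):
    """Truncated displacement operator D(alpha) = exp(alpha a† - alpha* a), Eq. (3)."""
    return expm_taylor(alpha * adag - np.conj(alpha) * a)

alpha = 0.5
vac = np.zeros(dim, dtype=complex)
vac[0] = 1.0
psi = displacement(alpha) @ vac                   # |psi'> = D(alpha)|0>

# D(alpha)|0> is the coherent state: c_n = exp(-|alpha|^2/2) alpha^n / sqrt(n!)
coherent = np.array([np.exp(-abs(alpha) ** 2 / 2) * alpha ** k / np.sqrt(factorial(k))
                     for k in range(dim)], dtype=complex)
```

Since the coherent state for |α| = 0.5 has negligible population near the truncation edge, the truncated operator agrees with the analytic amplitudes to high precision.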
1, starts by preparing the qubit in state |+ = |e +|g √ 2 by applying a π/2 pulse on it, where |g and |e are respectively its ground and excited states, and then letting the qubit interact with the cavity for a given interaction time t int . After the interaction, the qubit and cavity evolve to the state |Ψ = 1 √ 2 |g ⊗e − i 2 (φ0a † a+δφ) |ψ +|e ⊗e i 2 (φ0a † a+δφ) |ψ ,(6) where φ 0 = t int · χ is the phase shift per photon present in the cavity, which is an experimentally tunable parameter, δφ is a constant phase shift whose exact form is not important here, and |ψ is the state of the cavity. Applying a second π/2 pulse projects the qubit back onto its energy eigenbasis, with the measurement probabilities determined by the phase between the two superposed qubit eigenstates, a procedure known as Ramsey interferometry. In the case the cavity is in one of its eigenstate |n (number state or Fock state), the state of the qubit after interaction can be written as |q = |e + e i(φ0n+δφ) |g √ 2 .(7) This can be seen as the cavity state |n imparting a phase φ 0 n + δφ to the initial qubit superposition (|e +|g ) √ 2 . To each cavity eigenstate |n , there corresponds a direction on the equatorial plane of the qubit's Bloch sphere, this direction being specified by the angle φ 0 n + δφ. In this case, Ramsey interferometry allows the weak QND measurement of the number of photons in the cavity [35]. | ⟩ |q⟩ ! " ⁄ ! " ⁄ ∅ ! Qubit Cavity FIG. 1. Schematic of the measurement scheme for a single feedback cycle. A π/2 Ramsey pulse is applied to the qubit initially in state |e to prepare the qubit's |+ state. After interaction between the qubit and cavity for a given time (interaction time), the qubit's state, which has now become dependent on the cavity's state that prevailed prior to the interaction, is projected back onto its energy eigenbasis, and subsequently measured. This Ramsey protocol implements a weak measurement of the cavity's state. 
Such weak measurements combined with coherent drive excitation of the cavity were the building block of the iterative measurement-based quantum feedback (MBQFB) control approach pioneered by Serge Haroche et al. to prepare cavity Fock states using atoms in Rydberg states as probe qubits [16,17] (also called ancilla or auxiliary qubits in the error-correction literature).

B. Stabilization scheme

Owing to the dispersive interaction between the microwave cavity and the qubit, the back-action on the cavity's state following a measurement of the state of the qubit giving as outcome e or g is obtained through the following measurement operators:

M_g = cos[(φ_0 N − ϕ_R)/2],  (8)
M_e = sin[(φ_0 N − ϕ_R)/2].  (9)

Here, ϕ_R = φ_R − δφ, where φ_R is the phase of the second π/2 Ramsey pulse relative to the first one, which is a tunable parameter. More precisely, assuming the cavity to be in state |ψ⟩ prior to a measurement of the qubit, and given that the qubit is measured to be in state s, with s = e or g, the state of the cavity after the qubit measurement is |ψ′⟩ = M_s|ψ⟩ (up to normalization). If the density operator is used instead of the state vector, this becomes

ρ′ = M_s ρ M_s† / tr(M_s ρ M_s†),  (10)

which in superoperator form is written as

ρ′ = M_s ρ.  (11)

To prepare a cavity Fock state superposition with an iterative feedback loop protocol in which a measurement operator M_e or M_g affects the cavity's state in each loop, similarly to the protocol of Haroche et al. for preparing single Fock states, it is necessary that the targeted Fock state superposition be left unchanged by the measurement's back-action so as to keep stable the state reached upon convergence of the feedback loops. This means that the Fock state superposition must be an eigenstate of both measurement operators.
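The back-action of Eqs. (8)-(10) can be checked numerically. The following numpy sketch (illustrative only; the truncation dimension, the Fock-number spacing, and the phase values are assumptions) verifies that the two outcomes form a complete measurement, that their probabilities sum to one, and that a superposition of Fock states with the right spacing is indeed left unchanged, as required by the last sentence above. Here φ_0 is chosen as 4π/Δn, anticipating the stabilization condition derived in the text:

```python
import numpy as np

dim = 12                                  # Fock-space truncation (assumed)
n = np.arange(dim)

def meas_ops(phi0, phiR):
    """M_g and M_e of Eqs. (8)-(9); both are diagonal in the Fock basis."""
    Mg = np.diag(np.cos((phi0 * n - phiR) / 2.0))
    Me = np.diag(np.sin((phi0 * n - phiR) / 2.0))
    return Mg, Me

def apply_measurement(rho, Ms):
    """Eq. (10): rho' = Ms rho Ms† / tr(Ms rho Ms†); also returns the outcome probability."""
    p = np.trace(Ms @ rho @ Ms.conj().T).real
    return (Ms @ rho @ Ms.conj().T) / p, p

dn = 3                                    # photon-number spacing Delta n (assumed)
phi0, phiR = 4 * np.pi / dn, np.pi / 2    # phase shift per photon and Ramsey phase
Mg, Me = meas_ops(phi0, phiR)

# The two outcomes form a complete measurement: Mg†Mg + Me†Me = I
# (cos^2 + sin^2 = 1, level by level)
completeness = Mg.conj().T @ Mg + Me.conj().T @ Me

# Back-action on a thermal-like diagonal state: the outcome probabilities sum to one
rho = np.diag(np.exp(-n) / np.exp(-n).sum()).astype(complex)
rho_g, p_g = apply_measurement(rho, Mg)
rho_e, p_e = apply_measurement(rho, Me)

# A superposition of |2> and |5> (spacing dn) is a joint eigenvector of Mg and Me,
# so the renormalized post-measurement state coincides with it up to a global sign
psi = np.zeros(dim, dtype=complex)
psi[2], psi[5] = np.sqrt(0.3), np.sqrt(0.7)
psi_after = Mg @ psi
psi_after = psi_after / np.linalg.norm(psi_after)
```

The overlap |⟨ψ|ψ′⟩| stays equal to one, which is exactly the fixed-point property the feedback protocol relies on.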
For simplicity, taking as target the two-state superposition

|ψ_target⟩ = c_1|n_1⟩ + c_2|n_2⟩,  (12)

where without loss of generality n_2 > n_1, the following conditions must therefore hold:

M_g|ψ_target⟩ = λ_g|ψ_target⟩,  M_e|ψ_target⟩ = λ_e|ψ_target⟩.  (13)

These conditions translate to

cos[(φ_0 n_1 − ϕ_R)/2] = cos[(φ_0 n_2 − ϕ_R)/2],  sin[(φ_0 n_1 − ϕ_R)/2] = sin[(φ_0 n_2 − ϕ_R)/2].  (14)

Since both the cosines and the sines must be equal, the arguments must be equal to within 2πk, with k an integer, which imposes that the phase shift per photon be of the following form:

φ_0 = 4πk/∆n,  (15)

with ∆n = n_2 − n_1 (k = 1 is used in the sequel). It is to be noted that only ∆n matters in φ_0. Furthermore, it is easily seen that the argument for the two-state superposition is readily generalizable to an arbitrary superposition of Fock states whose numbers differ by ∆n. Thus, entire subspaces containing Fock states with numbers differing by ∆n can be stabilized. Specifically, for a given ∆n that can be chosen, leading to a specific value of φ_0, states in the following subspaces can be stabilized by the measurement operators:

W^∆n_m = span{|m⟩, |m + ∆n⟩, |m + 2∆n⟩, . . .},  m = 0, . . . , ∆n − 1.  (16)

These subspaces W^∆n_m, m = 0, . . . , ∆n − 1, will be called the stabilizable subspaces. It is seen that each basis state in the generic subspace W^∆n_m contains a number n of photons such that n mod ∆n = m. Only superpositions of Fock states living inside one of these subspaces can be prepared using the iterative measurement feedback protocol described above. While the conditions given in Eq. (13) restrict the state superpositions that can be prepared, the modulo nature of the number of photons contained in the stabilizable subspaces is a resource that is exploited in many bosonic codes that allow for error correction [5].

Similarly to the discussion following Eq. (7) about the effect on the qubit state superposition when the cavity is in a Fock state, here any cavity state in a stabilizable subspace W^∆n_m imparts the same phase Φ_∆n(m) on a qubit superposition. Indeed, the qubit state after interaction with the cavity in such a state is given by

|q^∆n_m⟩ = (|e⟩ + e^{iΦ(W^∆n_m)} |g⟩)/√2,  (17)

with Φ(W^∆n_m) ≡ Φ(m + l∆n) = Φ_∆n(m) = 4πkm/∆n + δφ (mod 2π). Hence, each subspace W^∆n_m is mapped to its own direction in the equatorial plane of the qubit's Bloch sphere, since all basis vectors |m + l∆n⟩ in this subspace are mapped to the same angle Φ_∆n(m), which does not depend on l. Fig. 2 illustrates an example for ∆n = 5. It is to be noted that in the case where ∆n = 2p is even, with p a positive integer, the subspaces with indices m and m + ∆n/2 will be mapped to the same angle Φ_∆n(m) (mod 2π) in the equatorial plane. Furthermore, if ∆n = 2p with p even, that is, ∆n = 4r with r a positive integer (r = 1, 2, . . .), stabilizable subspaces with indices m and m + ∆n/4 will be mapped to opposing angles in the equatorial plane (i.e., angles that differ by π (mod 2π)). Since the case ∆n = 4r is a particular case of ∆n = 2p, it follows that for ∆n = 4r, four subspaces, namely those with indices m, m + ∆n/4, m + 2∆n/4 and m + 3∆n/4, will be mapped to the same or to opposing angles (in fact, two to the same angle and the other two to the opposing angle). The significance of this is that it is not possible to discriminate subspaces which are mapped to the same angle (mod 2π) through the probabilities of the measurement outcomes e or g when measuring the qubit. However, as discussed below, by an appropriate choice of ϕ_R, subspaces that are mapped to opposing angles can be discriminated through the probabilities of the measurement outcomes for the qubit (note that, for example, if ϕ_R is taken equal to π/2, that is, when the Ramsey interferometer operates at mid-fringe as is often done [16], then one cannot discriminate subspaces with opposing angles).

Odd and even stabilizable subspaces

A 4π factor appears in the numerator of Eq. (15), as opposed to the 2π more often seen in usual parity measurements [36]. This is caused by the necessity of having the same eigenvalue for all superposed states after a measurement, rather than preserving the same measurement probability for these states. Indeed, setting the prefactor to 2π would require keeping track of an alternating phase between each measurement for the corresponding Fock states which have non-zero population. Such tracking of the phase is possible in superconducting circuits, as it is always possible to know when a system measurement is carried out. To determine the value of the parameter φ_0 in the measurement operators, this implies that setting a 4π factor for odd ∆n subspaces leads to an adequate subspace stabilizer, with no overlapping problem. For even ∆n subspaces, the only option is to choose a 2π factor to prevent the overlapping, and then to resort to phase tracking. The second parameter defining the measurement operators, ϕ_R, is free. Dotsenko et al. [16] is followed here, and its value is set at mid-fringe visibility for odd ∆n subspaces. In the Bloch sphere representation, this means the projection axis is perpendicular to the target subspace. For even ∆n subspaces, each vector on the Bloch sphere always opposes another one, which prevents the use of a perpendicular projection axis. Rather, φ_R is set at an angle of 2π/5 from the target subspace, a value chosen so that all subspaces can be assigned a different probability of being measured in either |g⟩ or |e⟩.

C. Control decision problem

As mentioned in the introduction, conventional control techniques to stabilize specific superpositions, such as Lyapunov-based control, are difficult to implement and are bound to get trapped in local minima, as the measurement back-action stabilizes the whole subspace consisting of superpositions of states with n mod ∆n photons. In this context, designing a Lyapunov function that decreases monotonically towards the target state rather than the target subspace is not straightforward, if possible at all. Such an approach would prevent both large early displacements in the feedback sequence that temporarily move the state further from the target, as well as small displacement corrections within the target subspace. To overcome this difficulty, a reinforcement learning (RL) approach is used herein, which in principle can learn the system dynamics directly, in a model-free manner, from the experimental apparatus. While previous work on feedback RL-based quantum control in continuous action spaces has mainly focused on on-policy architectures [19,31,33], here use is made of the more sample-efficient off-policy learning paradigm. Specifically, an actor-critic architecture is resorted to, where the actor network updates its policy by deriving optimal state-action tuples from the Q-function learned by the critic. In contrast to Monte Carlo based methods such as proximal policy optimization (PPO), the critic here bootstraps state-action estimates in the update of its own Q-function, which reduces the variance in the reward estimate. However, this also increases sensitivity to bias in the estimator, which can be problematic in stochastic settings and lead to overestimation of the Q-function [37]. To prevent this, a distributional RL approach [38,39] is used here, which learns the Q-function by regressing over the distribution of the returns, rather than their mean. Although it is still unclear what makes such an approach more stable, it is believed to help Q-learning when using non-linear function approximation [40], by providing richer information about the environment dynamics. Additionally, a recent algorithm has been introduced, called truncated quantile critic (TQC), which builds a distributional version of the Q-function on top of a soft actor-critic algorithm. TQC allows for finer control of the overestimation bias, and has demonstrated superior performance compared to other state-of-the-art algorithms, particularly in high-dimensional stochastic environments [41].
Here, the TQC implementation from Stable-Baselines3 [42] is resorted to, whereas the specific hyperparameters used can be found in Appendix A. Finally, the reward function must be defined. Its formulation is critical in guiding the agent towards achieving maximum fidelity with respect to the target state, using as few feedback cycles as possible. Similar to Porotti et al. [19], it was found, for the present purposes, that a reward consisting of higher powers of the fidelity was most effective to prevent convergence to a local optimum. Defining the fidelity between two density matrices as F (t) = tr ρ(t)ρ target ρ(t) 2 ,(18) the reward function is chosen to be r(t) = F (t) 4 + 4 F (t) 25 .(19) The choice of the form of the reward function was guided by heuristics, and the specific numerical values of the exponents (4 and 25) and coefficients (1 and 4) were !(#) ! !"# = F(% $ , ' $ , . . , % ! , ' ! ) Buffer Actor Critic !"# = ! ! ! Critic Actor !(#) ! !"# = F(% $ , ' $ , . . , % ! , ' ! ) Buffer Actor Critic Buffer FIG. 3. Schematic of the RL procedure. The cavity state is estimated by a quantum filter F , which is fed as input to the actor. The actor then outputs a displacement amplitude to be applied to the cavity. At every step, the critic samples from a buffer of past experiences to approximate the Q-function. During policy iteration, the actor optimizes its policy with the Q-function learned by the critic. empirically chosen so as to give agents which reach highests fidelities. The right-hand side of Eq. (19) includes two terms that influence the training process. The first term helps to accelerate training by providing a dense reward to the agent, whereas the second ensures that the maximum reward available is sharply peaked near unit fidelity. 
This borrows from approaches found in curriculum learning [43,44], where the agent is given guidance in a first step on how to reach the correct subspace, and then in a second step on how to reach the actual target state inside this subspace.

D. Quantum filter

To use a neural network to process the sequence of measurement outcomes e or g, it is necessary to incorporate some form of memory of past inputs into the network. One option is to use a recurrent neural network, which can process sequential data by using feedback connections that allow information to be passed from one time step to the next. Alternatively, the density matrix of the cavity state can be used as an input vector, which encodes all past information about the system. This approach requires using a quantum filter to estimate the cavity state recursively, using the displacement drive α and the measurement outcome as inputs at each time step. A quantum filter, which provides a state estimator analogous to a Kalman filter in classical control theory, allows obtaining in the computer, in real time, an estimate of the true state of the physical system of interest. Here, as input vector, use is made of the vectorized density matrix of the cavity state, separated into two parts to account for both real and imaginary components. For target states that do not contain any imaginary part, only the real components are kept to minimize the network dimensions. The cavity state is therefore estimated recursively using a quantum filter. Following the previous work of Haroche's group [16,17], and in the absence of decoherence, this filter can be expressed in superoperator form as

ρ_{t+1} = M_t D_t ρ_t, (20)

with ρ_t being the cavity density matrix estimated at time step t, and ρ_{t+1} being the estimate at the next time step t + 1. M_t and D_t are respectively the measurement and displacement superoperators at time step t, associated with the measurement operators given in Eqs. (8) and (9) and the displacement operator given in Eq. (3).
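One cycle of the recursive filter of Eq. (20) can be sketched in numpy as below. The truncation dimension is illustrative, and since Eqs. (8) and (9) lie outside this excerpt, a simple diagonal operator with hypothetical phases stands in for the generalized-parity measurement operator.

```python
import numpy as np

N = 12                                    # Fock-space truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator

def displace(alpha):
    """D(alpha) = exp(alpha a^dag - alpha* a), computed by diagonalizing
    the Hermitian generator -i(alpha a^dag - alpha* a)."""
    K = alpha * a.conj().T - np.conjugate(alpha) * a   # anti-Hermitian
    w, V = np.linalg.eigh(-1j * K)                     # K = i H, H Hermitian
    return (V * np.exp(1j * w)) @ V.conj().T

def filter_step(rho, alpha, M):
    """One cycle of Eq. (20): rho_{t+1} = M_t D_t rho_t, renormalized."""
    D = displace(alpha)
    rho = D @ rho @ D.conj().T            # displacement superoperator D_t
    rho = M @ rho @ M.conj().T            # measurement back-action M_t
    return rho / np.real(np.trace(rho))   # renormalize the estimate

# Toy stand-in for a generalized-parity measurement operator, diagonal in
# the Fock basis with hypothetical phases phi(n) = (n - 1) * pi / 3.
phi = (np.arange(N) - 1) * np.pi / 3
M_g = np.diag(np.cos(phi / 2)).astype(complex)

rho0 = np.zeros((N, N), complex); rho0[0, 0] = 1.0    # vacuum |0><0|
rho1 = filter_step(rho0, 0.5, M_g)
```

The update preserves Hermiticity and unit trace of the estimate, which is what allows the filter to be iterated over an arbitrary number of feedback cycles.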
For states with a relative phase, the RL agent outputs two actions, corresponding to the real and imaginary parts of the displacement amplitude α, which are limited to be in the interval [−1, +1]. In the case of a state without a relative phase, the agent outputs only one action, for the real component of the displacement. The cavity is initialized, as an educated guess, to a coherent state with mean energy similar to that of the target Fock state superposition. The initial cavity state for the feedback sequence is then ρ_{t=0} = D(α_guess)|0⟩⟨0|, where the modulus of α_guess is given by the square root of the mean photon number n of the target state. Its exact form is then α_guess = √n e^{iθ}, with θ the relative phase between the superposed states. After that initial displacement, the agent is then allowed to make an additional one before the measurement sequence, in order to fine tune the initial guess. In the absence of quantum jumps and other decoherence channels, this filtered density matrix corresponds exactly to the cavity state. Below, in section III B, this filter, along with the cavity state evolution, will be modified to account for noise in the system.

III. RESULTS

A. Idealized case

An idealized case is first considered, in which measurements are assumed perfect and the cavity is not subjected to decoherence such as decay and dephasing. Simulations rely on the stochastic evolution of trajectories, updating the cavity state density matrix at every feedback cycle, according to the recursive quantum filter of Eq. (20). During training, the maximum number of feedback cycles per episode is limited to 50, and the cavity Hilbert space is truncated to n = 29 photons. Whenever the photon number population in the Fock states n = 28 or n = 29 is above a 2% threshold, the episode is stopped to prevent the RL agent from being biased by the Hilbert space truncation. Fig.
4 depicts the training curves for three different bosonic states, obtained by averaging the final state fidelities over 600 trajectories. The state |ψ⟩ = (|1⟩ + |4⟩)/√2 is significantly harder to learn for the agent, in part because it has support on only two Fock states. As such, it is harder to control leakage to other states inside the stabilized manifold. This is similar to the limitations of a Lyapunov-based controller, which stops once the state reaches the target subspace, rather than the target state. This state will thus be taken as a benchmark to explore the behavior of the feedback protocol further on. Fig. 5 shows two examples of trajectories that reach the target state |ψ⟩ = (|1⟩ + |4⟩)/√2, starting from an initial coherent state of amplitude equal to the square root of the target state's mean photon number, as mentioned previously. Both cases converge within about 5 feedback cycles. The trajectory in the top panel converges monotonically towards the target state, although with the presence of some leakage to the |n = 7⟩ Fock state. The trajectory shown in the bottom panel does not show such leakage, but requires larger displacements to converge to the target state. This illustrates a key benefit of reinforcement learning (RL): as it directly learns the control dynamics from experience, it can handle nonlinearities in the control space that are essential for fast convergence, but difficult to handle analytically. Fig. 6(a) shows the time evolution of the fidelity with a fully trained RL agent. It is seen that the mean fidelity increases slowly compared to the median. Also, 75% of the trajectories are above 98% fidelity after about 10 feedback cycles. Comparing this RL agent to a Lyapunov function-based controller (see its derivation in Appendix B), Fig. 6(b) shows the fidelity distribution at the end of a 50-cycle sequence.
The RL framework outperforms the Lyapunov controller, as its fidelity distribution is clearly above that obtained with Lyapunov control. Remarkably, it also does so without increasing the number of unsuccessful state preparations. Section IV further explores the disparities between the two approaches. Performances of fully trained RL agents for the preparation of a variety of bosonic states are shown in Fig. 6(c). In all cases, average fidelities are above 96.6%, with the median consistently above the average by about 2%. Similar to the previous case, the distribution is sharply asymmetric, with only a handful of trajectories failing to converge towards the target state after 50 cycles. In a real experiment, as the fidelity of each specific trajectory is tracked with the quantum filter, one could simply discard any trajectory that fails to converge after a given pre-fixed number of feedback cycles. Such heralded state preparation has already been realized experimentally [45]. This illustrates a main advantage of measurement-based feedback over open-loop control, and the reason why the median fidelity metric is in this case more representative of the performance than the mean for the proposed scheme.

FIG. 6. In the inset, it can be observed that the RL procedure converges to higher fidelities, preventing early stopping due to local optima. (c) Final fidelity distributions for a set of different cavity states. All states have most of their distributions above the 95% mark, even for the more difficult binomial encodings. Here, Bin corresponds to the binomial code with support on number states {0, 3, 6, 9}. The multi-component cat states are defined on the n mod 3 = 0 number states for the 3-cat state and on n mod 4 = 1 for the 4-cat state.
Unsurprisingly, states with higher mean photon population have lower fidelities, which is likely due to the need for larger displacement drives to prepare these states. These larger drives tend to populate other Fock states within the stabilized subspace. This could be improved by varying the phase shift per photon in time to decimate these in-subspace Fock states, or by directly leaving the measurement parameters as additional controls for the RL agent, as in Ref. [46]. It was also observed empirically that a smaller ∆n requires fewer feedback cycles, caused by the larger sensitivity of the measurements to changes in m (i.e., more efficient discrimination between different subspaces). This makes the back-action stronger, hence decimating the population in other subspaces faster. Interestingly, superposition states that contain a |0⟩ component have higher fidelities and less dispersion in their distribution. A possible explanation is that the |0⟩ state is a lower bound of the Hilbert space, in the sense that there cannot be negative numbers of photons. Along with the nature of the coherent drive, which has a decreasing exponential envelope, this possibly lowers the probability of a transition to another subspace. Such features of the control drive and environment may make it easier for the agent to find the best policy for preparing superpositions.

B. Realistic case with decoherence and imperfections

Decoherence adds additional loss channels to the system Hamiltonian, with the master equation in Lindblad form now governing the evolution of the state density operator, given in the rotating frame by

dρ/dt = Lρ = κ ( aρa† − (a†aρ + ρa†a)/2 ), (21)

where the cavity is assumed to be at zero temperature, so that only photon decay contributes to decoherence. Because of the negligible intrinsic dephasing rate of superconducting cavities, photon loss is therefore the only decoherence channel affecting the cavity considered here [47].
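The photon-loss dissipator of Eq. (21), and the first-order dissipative update later used in the filter (cf. Eq. (22)), can be sketched as follows; the truncation dimension, the decay rate, and the generic step size dt are placeholder assumptions.

```python
import numpy as np

N = 8
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
kappa = 1.0e3                              # decay rate 1/T_cav (assumption)

def lindblad_rhs(rho, kappa=kappa):
    """Right-hand side of Eq. (21): L rho = kappa (a rho a^dag
    - (a^dag a rho + rho a^dag a) / 2), photon loss at zero temperature."""
    ad = a.conj().T
    n = ad @ a
    return kappa * (a @ rho @ ad - 0.5 * (n @ rho + rho @ n))

def first_order_step(rho, dt):
    """First-order dissipative update in the spirit of Eq. (22):
    rho -> (1 + dt L) rho."""
    return rho + dt * lindblad_rhs(rho)

# Sanity check: the dissipator is traceless, so the update preserves tr(rho).
rho = np.zeros((N, N), complex); rho[1, 1] = 1.0      # one photon |1><1|
drho = lindblad_rhs(rho)
```

For the one-photon state the dissipator reduces to κ(|0⟩⟨0| − |1⟩⟨1|): population flows from |1⟩ to |0⟩ at rate κ while the total trace is conserved, which is the expected zero-temperature decay.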
Errors coming from the probe qubit are, however, taken into account, as these impact the resulting back-action on the cavity, given by the updated filter terms in Eq. (20) (see also the upcoming Eq. (25), which takes measurement errors into account). Since qubit T2 errors commute with the interaction Hamiltonian, they can be considered as occurring after the interaction. Such errors have an effect that is similar to measurement errors, since both are induced by the σz operator. As for qubit T1 errors, which would dephase the cavity, an error transparency method is considered, where relaxation events do not impact the cavity state [48]. So, it is assumed here that our protocol is T1 fault-tolerant. In summary, qubit errors are modeled in the simulation as an effective σz-type error, with the effective decoherence rate being the sum of measurement and T2 errors.

Quantum filter and simulation

As the agent input should consist of the best possible estimate of the true quantum state of the cavity, the filter update equation (Eq. (20)) needs to be adapted to account for the decoherence channels mentioned above. In the simulations, two distinct density matrices are evolved. One evolution concerns the true cavity state, where photon loss events correspond to discrete quantum jumps. This is thus a simulation of the true physical system. The other evolution provides a filtered estimated state of the cavity, which does not have access to these discrete jumps. This filtered estimated state is indeed the only information available about the actual state in practice for control purposes (built from measurements of the true system), since having full information about the state is impossible. The latter evolution estimates the cavity relaxation using a first-order expansion of the Lindblad dissipator given in Eq. (21), which can be added to the filtering equation with a superoperator of the form

Tρ = (1 + T_cav L) ρ, (22)

where T_cav is the cavity lifetime.
In the case of a discrete jump, the filter will then move away from the actual cavity state, while subsequent measurements obtained from the true cavity simulation will update the filter until it converges back towards the real cavity state. This is the beauty of quantum filtering, which is analogous to Kalman filtering in classical control theory, as it is able to improve the estimate of the state as information is accumulated over time, consisting of the known measurement results obtained and the actions performed on the system. Due to the finite measurement accuracy of the probe qubit, the resulting estimated state is a statistical mixture of the two possible outcomes. This is taken into account with the following superoperators

P_e ρ = (1 − P_f,e) M_e ρ + P_f,e M_g ρ, (23)
P_g ρ = (1 − P_f,g) M_g ρ + P_f,g M_e ρ, (24)

where the weights P_f,e and P_f,g of each measurement operator depend on the erroneous state assignation probabilities η_e|g and η_g|e in a measurement. Following Ref. [17], in the case of a measurement that delivers e, whereas it should have been g, the weight is given by P_f,g = η_e|g P_e / [(1 − η_e|g) P_g + η_e|g P_e], where P_e = tr(M_e ρ M_e†) and P_g = tr(M_g ρ M_g†). In the other case, that a measurement delivers g whereas it should have been e, the weight is P_f,e = η_g|e P_g / [(1 − η_g|e) P_e + η_g|e P_g]. The estimated state is then updated using the following modified recursive filtering equation

ρ_{t+1} = P_t T D_t ρ_t. (25)

It is to be noted that only the filtered state is given as input to the agent, as this would be the only information available in a real experiment.

Simulation results

Following a procedure similar to that in section III A, 3000 trajectories are simulated, each consisting of 2000 feedback cycles (the smaller number of total trajectories simulated here compared to section III A is due to the need to simulate more feedback cycles for each trajectory in the present case).
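The error-aware measurement update of Eqs. (23)-(25) above can be sketched as follows. Note that the mixing weight below uses a generic Bayesian false-assignment form, an assumption for illustration; the exact weights in the paper follow Ref. [17] and need not share this normalization. The function name and the two-level toy operators are also hypothetical.

```python
import numpy as np

def noisy_measurement_update(rho, M_e, M_g, outcome, eta_eg, eta_ge):
    """Mix both back-actions according to the chance that the recorded
    outcome was a false assignment (cf. Eqs. (23)-(24)).
    eta_eg: P(read e | true g);  eta_ge: P(read g | true e)."""
    Pe = np.real(np.trace(M_e @ rho @ M_e.conj().T))
    Pg = np.real(np.trace(M_g @ rho @ M_g.conj().T))
    if outcome == 'e':
        # posterior weight of the erroneous (true g) branch given reading 'e'
        w_false = eta_eg * Pg / ((1 - eta_ge) * Pe + eta_eg * Pg)
        rho = (1 - w_false) * M_e @ rho @ M_e.conj().T \
            + w_false * M_g @ rho @ M_g.conj().T
    else:
        w_false = eta_ge * Pe / ((1 - eta_eg) * Pg + eta_ge * Pe)
        rho = (1 - w_false) * M_g @ rho @ M_g.conj().T \
            + w_false * M_e @ rho @ M_e.conj().T
    return rho / np.real(np.trace(rho))     # renormalized estimate

# Toy two-level check with projective operators and a maximally mixed state.
M_e = np.diag([1.0, 0.0]).astype(complex)
M_g = np.diag([0.0, 1.0]).astype(complex)
rho = np.diag([0.5, 0.5]).astype(complex)
out_e = noisy_measurement_update(rho, M_e, M_g, 'e', eta_eg=0.5, eta_ge=0.0)
perfect = noisy_measurement_update(rho, M_e, M_g, 'e', 0.0, 0.0)
```

With perfect readout the update collapses to the usual projective back-action; with finite assignment errors the estimate stays a statistical mixture of the two branches, exactly the behavior the filter needs to recover from misread outcomes.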
In these simulations, a time of 1 µs is considered for one feedback cycle (300 ns for the cavity-qubit interaction time, while leaving 700 ns for qubit measurement and rotations; according to the literature, these values are well within current device performances [14,49]). The cavity lifetime considered here is 1 ms [36,47]. The errors in qubit state measurement assignments are η_e|g = 0.01 and η_g|e = 0.02. Fig. 7 shows state evolution and preparation results with decoherence, with Fig. 7(a) depicting trajectory examples with noticeable photon loss events and subsequent recovery. In this case, the agent is able to recover from photon loss events, with a delay depending on the speed at which the filter recognizes the loss event. An interesting behavior is shown in the bottom panel of Fig. 7(a), where no photon loss events are registered; the cavity state instead evolves deterministically inside the stabilized subspace, as indicated by the small and slow decay of the fidelity between jumps. This decay is due to the slow population transfer from the |4⟩ state to the |1⟩ state; this comes from the Zeno dynamics induced by the back-action of the measurements [50], whereby the evolution of the state is slowed down. The out-of-subspace leakage from the deterministic evolution of the stochastic master equation between quantum jumps is hence being suppressed by the measurements. To correct this slow fidelity decay, the RL agent has limited control. Indeed, between jumps, the controller shows a jittering behavior of growing amplitude, lasting over 0.1 ms as the fidelity decays. This is distinct from the behavior following a photon loss, where the correction applied appears as an isolated sharp peak in the control amplitude. This phenomenon will be discussed further in section IV, where it will be shown that small corrections once near the target state are not the primary means to achieve high fidelity. Fig.
7(b) shows the behavior of an ensemble of trajectories, for the same loss parameters as above. Unsurprisingly, the mean is significantly lower, as photon loss events, occurring randomly, drag it downwards. The median stabilizes around a fidelity of 95.5%, after having reached fidelities of 98%. This drop can be explained by the RL agent failing to recover fully from photon losses on average. To compare with the situation corresponding to no control (free evolution), the green curve shows the average evolution where perfect state preparation is assumed at time t = 0. This shows that the approach proposed here, even with limited control, namely measurement back-action combined with coherent driving in a feedback loop, helps stabilize the target state.

IV. DISCUSSION

A. Robustness to noise

To further analyze the robustness of the TQC agent proposed here, its performance is studied with a fully trained agent for ranges of cavity lifetimes and probe qubit imperfections. Note that all results were obtained using the same RL agent, trained on the ideal model. It was found that such a model usually performs better than one trained on a lossy system. One possible explanation is that the Markovian nature of the quantum filter combined with the exploration properties of the RL training are sufficient to learn an optimal policy. Indeed, as the RL agent learns about the environment, it explores states similar to those resulting from quantum jumps. However, contrary to the case with significant decoherence, it is also able to explore high fidelity states, which then makes it a more complete agent, able to perform well under different system dynamics. In Fig. 8, the lifetimes are expressed as the ratio of the total feedback cycle operation time (1 µs is considered here) and the cavity lifetime (which is varied to include currently experimentally realistic values).
Also, as mentioned in section III B, errors from the probe qubit are summarized into an effective error, denoted ϵ_probe, consisting of the sum of all individual qubit decoherence channels. Fig. 8 shows the maximum median and average fidelities attained during a 50-cycle state preparation procedure, as a function of both cavity decay and qubit errors, for the (|1⟩ + |4⟩)/√2 and (|0⟩ + |4⟩)/√2 states, which respectively have an odd and an even ∆n. For the median fidelity metric, it is seen in Fig. 8 that both states are robust to a large range of parameters, with a drop in fidelity being seen in the top right corner, corresponding to high error rates in both cavity and qubit. The impact of decoherence is unsurprisingly more pronounced for the mean fidelity metric, indicating that while the majority of state preparation sequences lead to high fidelity states, a small fraction, however, completely fail to reach high fidelity. The drop in the median fidelity value as measurement errors become more prominent can be attributed to a state estimation problem for the controller. When measurement errors are high, the quantum filter needs more measurement results to construct an accurate estimate of the true cavity state. On some occasions, the displacement applied to the cavity can increase the deviation between the estimated state and the true cavity state in such a way that the former fails to converge back to the cavity state. Such instances of deviation between the cavity and the estimated states happen more frequently following a photon loss, which could also explain why the state (|1⟩ + |4⟩)/√2 is more subject to decoherence, as it has a slightly larger mean photon number, and also contains the state |1⟩, which can still decay down to |0⟩, as compared to the state (|0⟩ + |4⟩)/√2, in which |0⟩ cannot further decay.

B. Understanding the trained policies

Attention will now be turned to the policies learned by different types of agents.
The TQC agent proposed herein will be compared with two other approaches: the RL-based PPO, and the Lyapunov-based controller mentioned previously. This controller chooses the best displacement drive by performing a line search over a linearization of the Lyapunov fidelity following application of the displacement operator to the current cavity state; see Appendix B. The policy space has the form of a binary tree with 2^N possible trajectories, as each measurement leads to two distinct possibilities in the decision tree. Here, N is the depth of the tree, corresponding to the number of feedback cycles, which in this case is chosen to be 10 as a compromise, so that the computation time does not become prohibitive since here all trajectories are exhaustively examined. Fig. 9 provides details on the procedure and on how the results in Fig. 10 are obtained. Starting from a given initial state, the cavity states of all possible combinations of g and e measurement outcomes are computed at every feedback cycle. The corresponding metric, that is, the fidelity between the cavity state and the target state, is given by

F( M_{s_k} D_{s_k} · · · M_{s_2} D_{s_2} M_{s_1} D_{s_1} |ψ_0⟩, ψ_target ). (26)

To each measurement outcome corresponds a probability of occurrence, with the product of these individual probabilities for a given trajectory being the probability of occurrence of the whole trajectory. Fig. 10 shows such trajectory distributions, with the color scale corresponding to the fidelity after a measurement at a given timestep (feedback cycle), lighter colors corresponding to higher fidelities. Displaying the fidelity in this manner provides a high-level view of the policies learned, and tells how far from the target state (and thus out of subspace) the agent can go in order to maximize the final fidelity.

FIG. 9. At each additional feedback cycle, the binary tree is expanded by a factor of two, corresponding to two possible measurement outcomes in each branch.
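The exhaustive expansion of the binary outcome tree described above can be sketched as below. The `policy` callable stands in for whichever controller is under study (RL agent or Lyapunov control); the do-nothing policy and the projective two-level operators in the toy check are assumptions for illustration.

```python
import numpy as np
from itertools import product

def enumerate_trajectories(rho0, policy, M_e, M_g, depth):
    """Exhaustively expand the 2^depth binary tree of measurement records
    (cf. Eq. (26)): for each outcome sequence, propagate the state through
    the controller's action plus back-action, and accumulate the record's
    probability as the product of the individual outcome probabilities."""
    results = []
    for outcomes in product('ge', repeat=depth):
        rho, prob = rho0.copy(), 1.0
        for t, s in enumerate(outcomes):
            D = policy(rho, t)                  # controller's unitary action
            rho = D @ rho @ D.conj().T
            M = M_e if s == 'e' else M_g
            rho_u = M @ rho @ M.conj().T
            p = np.real(np.trace(rho_u))        # probability of this outcome
            prob *= p
            rho = rho_u / p if p > 0 else rho_u
        results.append((outcomes, prob, rho))
    return results

# Toy check with projective outcomes and a do-nothing policy:
M_e = np.diag([1.0, 0.0]).astype(complex)
M_g = np.diag([0.0, 1.0]).astype(complex)
rho0 = np.diag([0.5, 0.5]).astype(complex)
results = enumerate_trajectories(rho0, lambda rho, t: np.eye(2, dtype=complex),
                                 M_e, M_g, depth=2)
total_prob = sum(p for _, p, _ in results)
```

Summing the trajectory probabilities over the whole tree must give one, which is a useful consistency check before plotting fidelity distributions such as those of Fig. 10.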
Starting from a coherent initial state

Trajectories are shown in the top row of Fig. 10, with the initial state of the feedback sequence being the same initial coherent state used during training of the RL agents. There are differences between the agents in the structure of their trajectories. For instance, the Lyapunov controller has well-defined paths in the fidelity space, influenced by previous measurement results. This behavior is similar for the PPO agent, although less pronounced. These branchings are much less apparent for the TQC agent. This indicates a better ability to exploit the combined effect of measurement back-action and coherent drive. In other words, RL agents, and especially the TQC agent, appear to be less influenced by the measurement outcomes than they are exploiting them. It is seen in Fig. 10, especially in the upper part corresponding to a first ground state measurement, that both RL agents learn to do penalizing displacements in the early feedback cycles, which will however allow reaching higher fidelities later on, and significantly more so for the TQC agent. Although the first few timesteps have lower fidelities compared to the other two approaches, the TQC agent is nevertheless able to reach high fidelity states early on in the control sequence.

First feedback cycles

The aforementioned behavior of the first few feedback cycles is depicted in Fig. 11, where the evolution of the cavity density matrix is shown for all approaches. In situations where the current state is far from the target, a Newton-based line-search method, such as that used in the Lyapunov control approach implemented here (see Appendix B), will not optimally determine the displacement needed to be applied to the cavity state. This is a consequence of the locally convergent behavior of line-search methods.
Indeed, it was found that the Lyapunov control was constantly selecting large displacements in such cases, which prevented further convergence to high fidelity states. This required limiting the maximum allowed displacement for the Lyapunov control. This was done here by optimizing over a range of candidate maximum amplitudes, and choosing the one maximizing the median fidelity to the target state. The α = ±0.3 displacement performed by the Lyapunov control approach shown in Fig. 11 is the result of such optimization. Compared to the RL methods, this is a serious drawback, as the choice of the large displacement to be applied early in the control sequence is crucial to adequately balance the amplitudes of the resulting state after the control sequence, and should ideally be conditioned on a specific state, rather than be optimized over all trajectories. For the RL agents, it is indeed found that they select displacements of similar amplitudes applied to the cavity after the first excited state qubit measurement, but which are better adapted, as they take into account the impact they may have in future feedback cycles. This is even more striking in the case following a ground state measurement at the beginning of the sequence. Here, the TQC agent performs a large displacement of amplitude 0.77, which effectively does an operation akin to a state reset. On a short time scale, this is penalizing, as shown in the states that follow in the sequence, which are farther from the target state than for the other approaches. However, as seen in the top row of Fig. 10, this opens the way to states with high fidelity, and indeed higher fidelities are reached than with the other approaches. The TQC agent is thus better at exploiting the effect of the measurement back-action to reach its target state. One can also notice the larger initial displacement, in absolute value, taken by the RL agents, correcting the initial α_guess value.
RL agents are able to infer, simply by retro-propagating the future fidelities observed, that the initial state, even though it is the one with the best overlap with the target state, is not the optimal one when taking into account the measurement dynamics, and when considering the global objective to be reached over a longer horizon.

FIG. 10. Comparison of policies learned by different agents to prepare the state (|1⟩ + |4⟩)/√2. Top row, left: schematic of the exhaustive trajectory search in the form of a binary tree. The initial state is a coherent state with a mean photon number of 2.5; each branch corresponds to a specific trajectory outcome. The rest of the top row shows the evolution of the fidelity along each of the 2^10 trajectories, for Lyapunov-based control and for the TQC and PPO RL agents. Also shown are the probabilities of occurrence of the different trajectories, plotted on a log scale. RL-based methods perform best, owing to their strong initial displacements, allowing them to reach higher final fidelities. Bottom row: same as in the top row, but initializing with the state (|0⟩ + |3⟩)/√2, which is in a stabilizable subspace different from that of the target state. Lyapunov and TQC agents are both able to reach the target subspace. The PPO agent fails at learning any policy, which indicates a lack of exploration during training.

Out of subspace initial state

Finally, the bottom row of Fig. 10 shows results of a sequence initialized with the state |Ψ⟩ = (|0⟩ + |3⟩)/√2, which has amplitudes similar to those of the target state, but which lies in another subspace. This case is perhaps the most interesting, as the different approaches show drastically different behaviors. PPO is not able to learn how to leave the subspace, as shown by the higher probabilities for trajectories that are associated with low fidelity states. Indeed, it was found that it only performs small displacements of about α = 0.03, which are not sufficient to transfer enough population to the target subspace, with the back-action simply annihilating all target subspace populations at every step. In this situation, Lyapunov control performs better. In this case, it applies the maximum allowed displacement α = 0.3 at the beginning (this cannot be seen from the figure; it is the value given by the algorithm), which allows it to eventually reach the target subspace. The TQC agent, however, appears to find the best policy (α = 0.54 is applied at the first feedback cycle, the value given by the algorithm), transferring 40% of subsequent trajectories towards high fidelity values. It is also able to learn how to bring a large portion of the remaining 60% towards higher fidelities. In fact, as the TQC policy evolved during training, it was seen to opt for a trade-off. In early stages, it was performing a larger initial displacement, which favored a higher probability for a measurement to bring it back to the target subspace. However, it did so at the expense of the remaining trajectories, which never reached the target state. As training evolved, it lowered the initial displacement, so as to still handle the remaining states resulting from the other measurement outcomes. It should also be emphasized that during training, the agent was never initialized in the (|0⟩ + |3⟩)/√2 state, but only in the coherent state as mentioned above. As such, the policy found by the TQC agent is a result of its higher exploration and generalization capabilities. This analysis shows that the simple extension of the task of preparing a single Fock state to that of preparing superpositions translates into a qualitatively different control problem. Whereas the former only needs the state to be steered towards the desired eigenstate of suitably chosen measurement operators, the latter requires a complex interaction between measurement outcomes and displacement operations.
In some cases, even something akin to a state reset is necessary in order to maximize the achievable fidelity. It should also be noted that while an on-policy method was compared to an off-policy method here, it is still unclear what the role is of adding a distributional approach on top of a soft actor-critic algorithm. Nevertheless, compared to the standard soft actor-critic implementation [51], it was found in the present work that the distributional version is more robust to hyperparameter tuning and more stable during training. More work would however be needed, such as ablation studies, to better understand the role of the distributional procedure in the quantum setting.

V. CONCLUSION

In this work, a measurement-based quantum-feedback protocol was proposed and analyzed to prepare and stabilize superpositions of Fock states in a superconducting cavity. By using a generalization of parity measurements, states in a target subspace with Fock basis states with equally spaced photon numbers can be stabilized, but also prepared, using a coherent drive as the only control. It was shown that a classical control technique such as Lyapunov function-based control, which was previously developed for the stabilization of Fock states [16,17], fails to prepare superpositions due to its lack of exploitation of the measurement back-action. Here, using an RL method proved useful in overcoming this limitation. Indeed, by exploring the optimal policies learned by different algorithmic methods, the measurement back-action could be emphasized as a useful resource in itself to create the non-linearities required to prepare quantum state superpositions. It also highlighted the interaction of control actions with weak measurements in a way that takes into account the impact of the control actions later in the feedback cycle, in order to reach a target state with high fidelity.
Because such measurements are the same as those used in some error-correction procedures, our proposed protocol could easily be integrated in the bosonic computation paradigm. From an RL point of view, the ability of TQC to reuse past experiences, and thus learn a more general policy than, for instance, PPO-like algorithms, might prove useful as future quantum control experiments scale up in complexity. We believe that further exploration of these different behaviors, already noticed in the low complexity settings considered here, would be an interesting and potentially fruitful avenue of investigation.

ACKNOWLEDGMENTS

This project was supported by Institut quantique (IQ) at Université de Sherbrooke (UdS) through the Canada First Research Excellence Fund. AP acknowledges support from the QSciTech program funded through an NSERC-CREATE grant. YBL acknowledges fruitful discussions with Pierre Rouchon and Rémi Azouit.

Appendix A: Hyperparameters

Tables I and II give the hyperparameter values used for both the TQC and PPO agents. All other hyperparameters not presented here are those used by default in the Stable-Baselines3 implementation.

Appendix B: Lyapunov function-based control

In Lyapunov function-based control, a positive-definite function V of the state ρ is used, so that the minimum of V, which is necessarily zero owing to positive-definiteness, is reached for a targeted state ρ_target. Control actions are performed iteratively in a feedback loop, whereby the goal of each iteration's control action on ρ is to reduce the value of V, ultimately reaching the minimum, hence the targeted state [26]. The essential details of this approach in the present specific context will now be provided.
1. Positive-definite and Lyapunov functions and their significance in control

Given two arbitrary states ρ1 and ρ2, a positive-definite function d(ρ1, ρ2) will be considered in the sequel, that is, a function with the following properties:

d(ρ1, ρ2) > 0 for ρ1 ≠ ρ2,  d(ρ1, ρ2) = 0 ⇔ ρ1 = ρ2.   (B1)

This is akin to a distance function, but it is not required here that d obey the triangle inequality (which is one of the axioms that a distance function must satisfy). Given such a function and a targeted state ρ_target, one can in turn define a positive-definite function over the set of states ρ by

V^target(ρ) = d(ρ_target, ρ).   (B2)

The significance of such a function in the present context is that if α in Eq. (5) is chosen so that

V^target(ρ′) < V^target(ρ),   (B3)

then, by performing a series of steps k = 1, 2, . . . whereby at each step this inequality is satisfied, i.e. V^target(ρ_{k+1}) < V^target(ρ_k), the state will converge to the targeted state, since the value of V^target(ρ_k) will eventually reach zero and, by hypothesis, for V^target(ρ_k) to equal zero the only possibility is that ρ_k = ρ_target. Such a function decreasing over the evolution of a system is known in the control literature as a Lyapunov function [52].

2. Fidelity-based positive-definite and Lyapunov functions

One of the simplest positive-definite functions that can be considered in quantum mechanics is based on the Frobenius scalar product between two operators, which for density operators is given by

Fr(ρ1, ρ2) = tr(ρ1 ρ2),   (B4)

and which for pure states ρ1 = |ψ1⟩⟨ψ1| and ρ2 = |ψ2⟩⟨ψ2| amounts to Fr(ρ1, ρ2) = |⟨ψ1|ψ2⟩|². The Frobenius scalar product is also sometimes simply called the fidelity [16] (note that this fidelity is different from that defined in Eq. (18)).
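These definitions are easy to verify numerically; the sketch below uses random pure states (an illustrative choice) to check the pure-state form of Fr and the positive-definiteness of 1 − Fr on pure states:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_ket(dim):
    """A random normalized pure state."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def frobenius(rho1, rho2):
    """Fr(rho1, rho2) = tr(rho1 rho2)."""
    return np.real(np.trace(rho1 @ rho2))

dim = 6
psi1, psi2 = rand_ket(dim), rand_ket(dim)
rho1 = np.outer(psi1, psi1.conj())
rho2 = np.outer(psi2, psi2.conj())

# For pure states Fr reduces to the squared overlap |<psi1|psi2>|^2
assert np.isclose(frobenius(rho1, rho2), abs(psi1.conj() @ psi2) ** 2)

# d_Fr = 1 - Fr vanishes on identical pure states and is positive otherwise
assert np.isclose(1 - frobenius(rho1, rho1), 0.0)
assert 1 - frobenius(rho1, rho2) > 0
```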
Since 0 ≤ Fr(ρ1, ρ2) ≤ 1, with Fr(ρ1, ρ2) = 1 ⇔ ρ1 = ρ2 and ρ1 pure, one can define the positive-definite function

d_Fr(ρ1, ρ2) = 1 − Fr(ρ1, ρ2),   (B5)

which satisfies the conditions given in Eq. (B1). This will be called the Frobenius distance, although it does not formally satisfy all the axioms of a distance. This leads to the positive-definite function defined over states

V^target(ρ) = d_Fr(ρ_target, ρ) = 1 − Fr(ρ_target, ρ) = 1 − tr(ρ_target ρ)   (B6)
= tr((I − ρ_target)ρ) = tr(Υ^target ρ),   (B7)

where I is the identity operator, and Υ^target = I − ρ_target. Here the superscript "target" on Υ is a reminder that Υ depends on the targeted state. Such a Lyapunov function will be called a fidelity Lyapunov function, and will be denoted V^target_Υ(ρ) in the sequel, hence

V^target_Υ(ρ) = tr(Υ^target ρ).   (B8)

3. Second order expansion of the Lyapunov function with actuator action

It will now be assumed that a Lyapunov function given in the generalized form of Eq. (B8) is defined. The objective of the feedback law can now be stated by requiring that, for a given state ρ, α must be chosen such that V^target_Υ(ρ′) < V^target_Υ(ρ), that is,

V^target_Υ(D(α)ρD(−α)) < V^target_Υ(ρ).   (B9)

Obtaining such a condition is similar to a line search in numerical minimization [53], and a common way is to develop the left-hand side of this inequality to second order in α, and to use the approximation thus obtained to find a value of α that satisfies the condition. To do this, Eq. (4) is first expanded to second order in α by resorting to an expansion of D(α) to second order. This leads to

ρ′ = D(α)ρD(−α) ≈ ρ + [αa† − α*a, ρ] + ½[[ρ, αa† − α*a], αa† − α*a].   (B10)

With this, V^target_Υ(ρ′) can be approximated to second order by

V^target_Υ(ρ′) = tr(Υ^target ρ′) ≈ V^target_Υ(ρ) + T⁽¹⁾(α) + ½ T⁽²⁾(α),   (B11)

where T⁽¹⁾(α) is the first order term, given by

T⁽¹⁾(α) = tr(Υ^target [αa† − α*a, ρ]),   (B12)

and T⁽²⁾(α) is the second order term, given by

T⁽²⁾(α) = tr(Υ^target [[ρ, αa† − α*a], αa† − α*a]).   (B13)

By expanding the commutator and reordering terms, the first order term can be rewritten as

T⁽¹⁾(α) = −tr([αa† − α*a, Υ^target]ρ).   (B14)

This form is more convenient since it leads to commutators that can be precomputed. Indeed, so expressed, the first order term can be further explicited as

T⁽¹⁾(α) = −α tr([a†, Υ^target]ρ) + α* tr([a, Υ^target]ρ).   (B15)

Setting

B = [a, Υ^target]ρ,   (B16)

one has

B† = ρ†[a, Υ^target]† = −ρ[a†, Υ^target],   (B17)

hence

tr B† = −tr(ρ[a†, Υ^target]) = −tr([a†, Υ^target]ρ).   (B18)

Now, since tr B† = (tr B)*, then

tr([a†, Υ^target]ρ) = −(tr B)*.   (B19)

With these developments, and setting

ζ = tr B = tr([a, Υ^target]ρ),   (B20)

T⁽¹⁾(α) can simply be rewritten as

T⁽¹⁾(α) = αζ* + α*ζ.   (B21)

It is convenient for the sequel to define the commutator

C_Υ^target = [a, Υ^target],   (B22)

which can be precomputed; with this,

B = C_Υ^target ρ   (B23)

and

ζ = tr(C_Υ^target ρ).   (B24)

Now, by similar reasoning as for T⁽¹⁾(α), T⁽²⁾(α) can be rewritten as

T⁽²⁾(α) = tr([αa† − α*a, [αa† − α*a, Υ^target]]ρ) = tr(Kρ),   (B25)

where

K = [αa† − α*a, [αa† − α*a, Υ^target]].   (B26)

Developing this operator, and using Jacobi's identity [A, [B, C]] + [C, [A, B]] + [B, [C, A]] = 0 along with [a, a†] = I, one obtains

K = α²[a†, [a†, Υ^target]] + α*²[a, [a, Υ^target]] − 2|α|²[a†, [a, Υ^target]].   (B27)

Setting

G_Υ^target = [a, [a, Υ^target]] = [a, C_Υ^target],   (B28)
E_Υ^target = [a†, [a, Υ^target]] = [a†, C_Υ^target],   (B29)

and using that

G_Υ^target† = [a†, [a†, Υ^target]]   (B30)

leads to

K = α² G_Υ^target† + α*² G_Υ^target − 2|α|² E_Υ^target,   (B31)

and hence

T⁽²⁾(α) = tr(Kρ) = α² tr(G_Υ^target† ρ) + α*² tr(G_Υ^target ρ) − 2|α|² tr(E_Υ^target ρ).   (B32)

It can be shown that T⁽²⁾(α) is a real quantity (the full development will not be provided here; note in particular that tr(G_Υ^target† ρ) = tr(G_Υ^target† ρ†) = tr(ρ G_Υ^target)* ). Defining

γ = tr(G_Υ^target ρ)   (B33)

and

χ = tr(E_Υ^target ρ),   (B34)

where χ can be demonstrated to be real, T⁽²⁾(α) can be written as

T⁽²⁾(α) = α²γ* + α*²γ − 2|α|²χ.   (B35)

With the previous developments, the second order expansion of the Lyapunov function can be rewritten as (refer back to Eq. (B11))

V^target_Υ(ρ′) ≈ V^target_Υ(ρ) + αζ* + α*ζ + ½(α²γ* + α*²γ − 2|α|²χ) = V^target_Υ(ρ) + q(α),   (B36)

with q(α) being the following quadratic form:

q(α) = αζ* + α*ζ + ½(α²γ* + α*²γ − 2|α|²χ).   (B37)

Recall that it is required to determine α according to the inequality given in Eq. (B9), which means that q(α) must be negative, i.e. q(α) < 0, and ideally q(α) shall be as negatively large as possible. There are different ways in which q(α) < 0 can be achieved.
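As a numerical check of this expansion (an illustrative sketch with an arbitrary state and target, using the precomputable commutators C_Υ, G_Υ = [a, C_Υ] and E_Υ = [a†, C_Υ]), q(α) should agree with the exact change of V_Υ up to third order in α:

```python
import numpy as np

dim = 15
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)       # annihilation operator
ad = a.conj().T
comm = lambda A, B: A @ B - B @ A

def D(alpha):
    """Displacement operator via eigh of the Hermitian generator."""
    X = alpha * ad - np.conj(alpha) * a
    w, U = np.linalg.eigh(-1j * X)
    return U @ np.diag(np.exp(1j * w)) @ U.conj().T

tgt = np.zeros(dim); tgt[1] = 1.0                  # illustrative target |1>
Ups = np.eye(dim) - np.outer(tgt, tgt)             # Υ = I - ρ_target
psi = D(0.4) @ np.eye(dim)[:, 0]                   # illustrative state |α=0.4>
rho = np.outer(psi, psi.conj())

C = comm(a, Ups)                                   # C_Υ  (B22), precomputed
zeta = np.trace(C @ rho)                           # ζ    (B24)
gamma = np.trace(comm(a, C) @ rho)                 # γ    (B33)
chi = np.real(np.trace(comm(ad, C) @ rho))         # χ    (B34), real

def q(al):                                         # quadratic form (B37)
    return np.real(al * np.conj(zeta) + np.conj(al) * zeta
                   + 0.5 * (al**2 * np.conj(gamma) + np.conj(al)**2 * gamma
                            - 2 * abs(al)**2 * chi))

def dV(al):                                        # exact change of V_Υ
    rho_p = D(al) @ rho @ D(-al)
    return np.real(np.trace(Ups @ (rho_p - rho)))

al = 0.01 * np.exp(0.3j)
assert abs(dV(al)) > 1e-3                          # a non-negligible change
assert abs(dV(al) - q(al)) < 1e-4                  # matched to second order
```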
One standard possibility is equivalent to gradient steepest descent, and a second one is equivalent to a Newton method in numerical optimization, which is resorted to here, as it has faster convergence properties.

4. Newton method

The quadratic form q(α) is first represented in terms of real quantities. To begin, q(α) is written as

q(α) = 2 Re{αζ*} + Re{α²γ*} − |α|²χ.

The complex quantities appearing in q(α) are written as

α = x + iy,   (B40)
ζ = u + iv,   (B41)
γ = g + ih.   (B42)

This allows writing the quadratic form as

q(α) ≡ q(x, y) = [x y] Q [x y]ᵀ + 2L[x y]ᵀ,   (B43)

with

Q = [ g − χ    h      ]
    [ h       −(g + χ)],   L = [u v].   (B44)

The Newton approach in optimization is to take the direction [x y]ᵀ which minimizes the quadratic form. This leads to Q[x y]ᵀ = −Lᵀ, which is equivalent to

[x y]ᵀ = −Q⁻¹Lᵀ.   (B45)

The inverse of Q is easily obtained and given by

Q⁻¹ = 1/(g² + h² − χ²) [ g + χ    h      ]
                       [ h       −(g − χ)].   (B46)

Re-expressing everything in terms of the complex quantities ζ, γ, and χ gives

[x y]ᵀ = 1/(χ² − |γ|²) [ Re{χζ + γζ*} ]
                       [ Im{χζ + γζ*} ],   (B47)

hence

α = 1/(χ² − |γ|²) (χζ + γζ*).   (B48)

Equation (B48) is therefore the one defining the update rule for the control amplitude parameter in the feedback procedure. For Newton's method, it is necessary that the matrix Q be positive-definite [53]. This is true if both eigenvalues of Q are positive. These eigenvalues are easily found to be

λ± = −χ ± |γ|.   (B49)

Hence, for Q to be positive-definite the following must hold:

χ < −|γ|.   (B50)

This condition is numerically verified. For the values of x and y given through Eq. (B45), the value of q(x, y) is found to be

q(x, y) = −LQ⁻¹Lᵀ,   (B51)

which is negative whenever Q is positive-definite, since in this case Q⁻¹ is also positive-definite.

W^Δn_0 = span{|0⟩, |Δn⟩, |2Δn⟩, . . .},
W^Δn_1 = span{|1⟩, |1 + Δn⟩, |1 + 2Δn⟩, . . .},
. . .
W^Δn_m = span{|m⟩, |m + Δn⟩, . . . , |m + lΔn⟩, . . .},
. . .
W^Δn_{Δn−1} = span{|Δn − 1⟩, |2Δn − 1⟩, |3Δn − 1⟩, . . .}.
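The Newton update (B48) and the positive-definiteness condition (B50) can be exercised numerically. In the illustrative sketch below, the state is the target Fock state |1⟩ displaced by a small known amount, so that a single Newton step of Eq. (B48) should essentially undo the displacement:

```python
import numpy as np

dim = 15
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)       # annihilation operator
ad = a.conj().T
comm = lambda A, B: A @ B - B @ A

def D(alpha):
    """Displacement operator via eigh of the Hermitian generator."""
    X = alpha * ad - np.conj(alpha) * a
    w, U = np.linalg.eigh(-1j * X)
    return U @ np.diag(np.exp(1j * w)) @ U.conj().T

tgt = np.zeros(dim); tgt[1] = 1.0                  # target |1>
Ups = np.eye(dim) - np.outer(tgt, tgt)             # Υ = I - ρ_target
psi = D(0.1) @ tgt                                 # |1> displaced by 0.1
rho = np.outer(psi, psi.conj())

C = comm(a, Ups)                                   # C_Υ  (B22)
zeta = np.trace(C @ rho)                           # ζ    (B24)
gamma = np.trace(comm(a, C) @ rho)                 # γ    (B33)
chi = np.real(np.trace(comm(ad, C) @ rho))         # χ    (B34)

assert chi < -abs(gamma)                           # Q positive-definite (B50)
alpha = (chi * zeta + gamma * np.conj(zeta)) / (chi**2 - abs(gamma)**2)  # (B48)

V = lambda r: np.real(np.trace(Ups @ r))           # fidelity Lyapunov fn (B8)
rho_new = D(alpha) @ rho @ D(-alpha)
assert V(rho_new) < V(rho)                         # Lyapunov decrease
assert V(rho_new) < 1e-3                           # one step nearly undoes D(0.1)
```

Close to the target, the Newton α is approximately the opposite of the residual displacement, which is why a single step suffices in this example.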
FIG. 2. Example of the mapping between stabilizable subspaces W^Δn_m and directions in the equatorial plane of the qubit's Bloch sphere in the case Δn = 5 (φ0 = 4π/5).

FIG. 4. Training curves of the TQC RL agent for: i) a 3-component cat state with mean photon number n = 3 (red), ii) an equal superposition of the two kitten binomial logical states (light blue), and the state |ψ⟩ = (|1⟩ + |4⟩)/√2 (dark blue). On the right are the Wigner functions for the prepared states, evaluated after training and compared with the target state.

Fig. 5 shows two examples of trajectories that reach the target state |ψ⟩ = (|1⟩ + |4⟩)/√2, starting from an initial coherent state of amplitude equal to the square root of the target state's mean photon number, as mentioned previously. Both cases converge within about 5 feedback cycles. The trajectory in the top panel converges monotonically towards the target state, although with the presence of some leakage to the |n = 7⟩ Fock state. The trajectory shown in the bottom panel does not show such leakage, but requires larger displacements to converge to the target state. This illustrates a key benefit of reinforcement learning (RL): as it directly learns the control dynamics from experience, it can handle nonlinearities in the control space that are essential for fast convergence, but difficult to handle analytically.

Fig. 6(a) shows the time evolution of the fidelity with a fully trained RL agent. It is seen that the mean fidelity increases slowly compared to the median. Also, 75% of the trajectories are above 98% fidelity after about

FIG. 5. Trajectory examples for the preparation of the state |ψ⟩ = (|1⟩ + |4⟩)/√2, with Hinton plots of the density matrix at different steps during the preparation sequence.
A monotonically increasing fidelity state preparation is shown in the top panel, and the bottom panel shows a trajectory that requires stronger control drives to recover from a sequence of measurements projecting away from the target state.

FIG. 6. (a) Time evolution of the state for the preparation of the (|1⟩ + |4⟩)/√2 target state, showing the median with its 25–75 percentile distribution (shaded area) as a function of time, with the mean converging slower than the median to the target state. Most of the state preparation occurs within the first 10 feedback cycles. (b) Distribution of the final fidelities in (a) for the RL agent alongside a Lyapunov function-based controller.

FIG. 7. Preparation and stabilization of the (|1⟩ + |4⟩)/√2 state under decoherence. (a) The RL agent is able to recover from photon loss events, as shown in the top panel. When no photon loss occurs (bottom panel), Zeno dynamics take place, with population transfer occurring only inside the target subspace (due to measurement back-action suppression of population leakage out of the subspace), but which does not prevent the slow decay of the fidelity. The control amplitude α (green curve) is 0 for most of the sequence, except when the filter (orange curve) detects a photon loss, or when the fidelity goes below a certain value (bottom panel). (b) Time evolution for a cavity lifetime T1 of 1 ms and a feedback cycle time of 1 µs. Median fidelities are still able to reach values similar to the ideal case, although they stabilize around 95% when multiple photon loss events occur. The mean fidelity remains at 90% throughout, pushed down by the momentary photon loss. With perfect initialization at t = 0, the average state fidelity would have decayed according to the master equation, as shown by the green curve.

FIG. 8. Maximum fidelities reached by the RL agent during state preparation as a function of decoherence parameters, with the horizontal and vertical axes corresponding respectively to the cavity lifetime and the probe qubit's errors.

FIG. 9. Structure of the trajectories calculations presented in

FIG. 11. First feedback cycles for the cavity initialized as a coherent state, showing how the cavity state evolves at each step of measurements and displacements. For each agent, two different trajectories are shown, consisting of a sequence of g-e or e-g measurement outcomes (respectively top and bottom row). The α value at the left before the first measurement corresponds to the first control, adjusting the initial α_guess value. While the bottom row is similar for all agents, it is seen in the top row that the TQC agent opts for a drastically different strategy by performing a large displacement to reset the cavity in a state similar to a coherent state.

TABLE I. Hyperparameters for TQC.

Hyperparameter               Value
Number of Layers             2
Actor Neurons per Layer      256
Critics Neurons per Layer    512
Discount (γ)                 0.95
Batch size                   1024
Activation Function          tanh
Entropy coefficient          0.09
Number of critics            5
Learning Rate                0.0001
Target update interval (τ)   0.001

TABLE II. Hyperparameters for PPO.

Hyperparameter               Value
Number of Layers             2
Neurons per Layer            256
Discount (γ)                 0.95
Number of steps              2048
Batch size                   256
Activation Function          tanh
Learning Rate                0.0001

[1] A. Joshi, K. Noh, and Y. Y. Gao, Quantum information processing with bosonic qubits in circuit QED, Quantum Science and Technology 6, 033001 (2021).
[2] W.-L. Ma, S. Puri, R. J. Schoelkopf, M. H. Devoret, S. Girvin, and L. Jiang, Quantum control of bosonic modes with superconducting circuits, Science Bulletin 66, 1789 (2021).
[3] W. Cai, Y. Ma, W. Wang, C.-L. Zou, and L. Sun, Bosonic quantum error correction codes in superconducting quantum circuits, Fundamental Research 1, 50 (2021).
[4] M. H. Michael, M. Silveri, R. T. Brierley, V. V. Albert, J. Salmilehto, L. Jiang, and S. M. Girvin, New class of quantum error-correcting codes for a bosonic mode, Phys. Rev. X 6, 031006 (2016).
[5] A. L. Grimsmo, J. Combes, and B. Q. Baragiola, Quantum computing with rotation-symmetric bosonic codes, Phys. Rev. X 10, 011058 (2020).
[6] A. Grimm, N. E. Frattini, S. Puri, S. O. Mundhada, S. Touzard, M. Mirrahimi, S. M. Girvin, S. Shankar, and M. H. Devoret, Stabilization and operation of a Kerr-cat qubit, Nature 584, 205 (2020).
[7] Z.-Y. Zhou, C. Gneiting, W. Qin, J. Q. You, and F. Nori, Enhancing dissipative cat-state generation via nonequilibrium pump fields, Phys. Rev. A 106, 023714 (2022).
[8] M. Mirrahimi, Z. Leghtas, V. V. Albert, S. Touzard, R. J. Schoelkopf, L. Jiang, and M. H. Devoret, Dynamically protected cat-qubits: A new paradigm for universal quantum computation, New Journal of Physics 16, 045014 (2014).
[9] C. K. Law and J. H. Eberly, Arbitrary control of a quantum electromagnetic field, Physical Review Letters 76, 1055 (1996).
[10] M. Hofheinz, E. M. Weig, M. Ansmann, R. C. Bialczak, E. Lucero, M. Neeley, A. D. O'Connell, H. Wang, J. M. Martinis, and A. N. Cleland, Generation of Fock states in a superconducting quantum circuit, Nature 454, 310 (2008).
[11] M. Hofheinz, H. Wang, M. Ansmann, R. C. Bialczak, E. Lucero, M. Neeley, A. D. O'Connell, D. Sank, J. Wenner, J. M. Martinis, and A. N. Cleland, Synthesizing arbitrary quantum states in a superconducting resonator, Nature 459, 546 (2009).
[12] S. Krastanov, V. V. Albert, C. Shen, C.-L. Zou, R. W. Heeres, B. Vlastakis, R. J. Schoelkopf, and L. Jiang, Universal control of an oscillator with dispersive coupling to a qubit, Phys. Rev. A 92, 040303 (2015).
[13] R. W. Heeres, B. Vlastakis, E. Holland, S. Krastanov, V. V. Albert, L. Frunzio, L. Jiang, and R. J. Schoelkopf, Cavity state manipulation using photon-number selective phase gates, Physical Review Letters 115, 137002 (2015).
[14] R. W. Heeres, P. Reinhold, N. Ofek, L. Frunzio, L. Jiang, M. H. Devoret, and R. J. Schoelkopf, Implementing a universal gate set on a logical qubit encoded in an oscillator, Nature Communications 8 (2017).
[15] T. Fösel, S. Krastanov, F. Marquardt, and L. Jiang, Efficient cavity control with SNAP gates, arXiv:2004.14256 [quant-ph] (2020).
[16] I. Dotsenko, M. Mirrahimi, M. Brune, S. Haroche, J.-M. Raimond, and P. Rouchon, Quantum feedback by discrete quantum nondemolition measurements: Towards on-demand generation of photon-number states, Physical Review A 80, 013805 (2009).
[17] C. Sayrin, I. Dotsenko, X. Zhou, B. Peaudecerf, T. Rybarczyk, S. Gleyzes, P. Rouchon, M. Mirrahimi, H. Amini, M. Brune, J.-M. Raimond, and S. Haroche, Real-time quantum feedback prepares and stabilizes photon number states, Nature 477 (2011).
[18] A. Aarab, R. Azouit, V. Reiher, and Y. Bérubé-Lauzière, State initialization of a hot spin qubit in a double quantum dot by measurement-based quantum feedback control, Phys. Rev. B 106, 235309 (2022).
[19] R. Porotti, A. Essig, B. Huard, and F. Marquardt, Deep reinforcement learning for quantum state preparation with weak nonlinear measurements, Quantum 6, 747 (2022).
[20] K. Reuer, J. Landgraf, T. Fösel, J. O'Sullivan, L. Beltrán, A. Akin, G. J. Norris, A. Remm, M. Kerschbaum, J.-C. Besse, F. Marquardt, A. Wallraff, and C. Eichler, Realizing a deep reinforcement learning agent discovering real-time feedback control strategies for a quantum system, arXiv:2210.16715 (2022).
[21] A. Essig, Q. Ficheux, T. Peronnin, N. Cottet, R. Lescanne, A. Sarlette, P. Rouchon, Z. Leghtas, and B. Huard, Multiplexed photon number measurement, Phys. Rev. X 11, 031045 (2021).
[22] A. Blais, R.-S. Huang, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf, Cavity quantum electrodynamics for superconducting electrical circuits: An architecture for quantum computation, Phys. Rev. A 69, 062320 (2004).
[23] A. Blais, A. L. Grimsmo, S. M. Girvin, and A. Wallraff, Circuit quantum electrodynamics, Rev. Mod. Phys. 93, 025005 (2021).
[24] A. Wallraff, D. I. Schuster, A. Blais, L. Frunzio, R. S. Huang, J. Majer, S. Kumar, S. M. Girvin, and R. J. Schoelkopf, Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics, Nature 431, 162 (2004).
[25] H. Amini, R. A. Somaraju, I. Dotsenko, C. Sayrin, M. Mirrahimi, and P. Rouchon, Feedback stabilization of discrete-time quantum systems subject to non-demolition measurements with imperfections and delays, Automatica 49, 2683 (2013).
[26] M. Mirrahimi and R. van Handel, Stabilizing feedback controls for quantum systems, SIAM Journal on Control and Optimization 46, 445 (2007).
[27] R. van Handel, J. Stockton, and H. Mabuchi, Feedback control of quantum state reduction, IEEE Transactions on Automatic Control 50, 768 (2005).
[28] Y. Liu, S. Kuang, and S. Cong, Lyapunov-based feedback preparation of GHZ entanglement of N qubit systems, IEEE Transactions on Cybernetics 47, 3827 (2017).
[29] S. Cong and F. Meng, A survey of quantum Lyapunov control methods, The Scientific World Journal 2013, 967529 (2013).
[30] T. Fösel, P. Tighineanu, T. Weiss, and F. Marquardt, Reinforcement learning with neural networks for quantum feedback, Phys. Rev. X 8, 031084 (2018).
[31] V. V. Sivak, A. Eickbusch, H. Liu, B. Royer, I. Tsioutsios, and M. H. Devoret, Model-free quantum control with reinforcement learning, Phys. Rev. X 12, 011059 (2022).
[32] Z. T. Wang, Y. Ashida, and M. Ueda, Deep reinforcement learning control of quantum cartpoles, Phys. Rev. Lett. 125, 100401 (2020).
[33] S. Borah, B. Sarma, M. Kewming, G. J. Milburn, and J. Twamley, Measurement-based feedback quantum control with deep reinforcement learning for a double-well nonlinear potential, Phys. Rev. Lett. 127, 190403 (2021).
[34] J. Barry, D. T. Barry, and S. Aaronson, Quantum partially observable Markov decision processes, Phys. Rev. A 90, 032311 (2014).
[35] S. Haroche and J.-M. Raimond, Exploring the Quantum: Atoms, Cavities, and Photons (Oxford University Press, 2006).
[36] L. Sun, A. Petrenko, Z. Leghtas, B. Vlastakis, G. Kirchmair, K. M. Sliwa, A. Narla, M. Hatridge, S. Shankar, J. Blumoff, L. Frunzio, M. Mirrahimi, M. H. Devoret, and R. J. Schoelkopf, Tracking photon jumps with repeated quantum non-demolition parity measurements, Nature 511, 444 (2014).
[37] S. Thrun and A. Schwartz, Issues in using function approximation for reinforcement learning, in Proceedings of the 4th Connectionist Models Summer School (Erlbaum Associates, 1993).
[38] M. G. Bellemare, W. Dabney, and R. Munos, A distributional perspective on reinforcement learning, in Proceedings of the 34th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 70, edited by D. Precup and Y. W. Teh (PMLR, 2017), pp. 449–458.
[39] M. G. Bellemare, W. Dabney, and M. Rowland, Distributional Reinforcement Learning (MIT Press, 2023), http://www.distributional-rl.org.
[40] C. Lyle, M. G. Bellemare, and P. S. Castro, A comparative analysis of expected and distributional reinforcement learning, Proceedings of the AAAI Conference on Artificial Intelligence 33, 4504 (2019).
[41] A. Kuznetsov, P. Shvechikov, A. Grishin, and D. Vetrov, Controlling overestimation bias with truncated mixture of continuous distributional quantile critics, in International Conference on Machine Learning (PMLR, 2020), pp. 5556–5566.
[42] A. Raffin, A. Hill, A. Gleave, A. Kanervisto, M. Ernestus, and N. Dormann, Stable-Baselines3: Reliable reinforcement learning implementations, Journal of Machine Learning Research 22, 1 (2021).
[43] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, Curriculum learning, in Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09 (Association for Computing Machinery, New York, NY, USA, 2009), pp. 41–48.
[44] H. Ma, D. Dong, S. X. Ding, and C. Chen, Curriculum-based deep reinforcement learning for quantum control, IEEE Transactions on Neural Networks and Learning Systems, 1 (2022).
[45] Y. Liu, S. Shankar, N. Ofek, M. Hatridge, A. Narla, K. M. Sliwa, L. Frunzio, R. J. Schoelkopf, and M. H. Devoret, Comparing and combining measurement-based and driven-dissipative entanglement stabilization, Phys. Rev. X 6, 011022 (2016).
[46] R. Porotti, V. Peano, and F. Marquardt, Gradient ascent pulse engineering with feedback, arXiv:2203.04271 (2022).
[47] M. Reagor, W. Pfaff, C. Axline, R. W. Heeres, N. Ofek, K. Sliwa, E. Holland, C. Wang, J. Blumoff, K. Chou, M. J. Hatridge, L. Frunzio, M. H. Devoret, L. Jiang, and R. J. Schoelkopf, Quantum memory with millisecond coherence in circuit QED, Phys. Rev. B 94, 014506 (2016).
[48] S. Rosenblum, P. Reinhold, M. Mirrahimi, L. Jiang, L. Frunzio, and R. J. Schoelkopf, Fault-tolerant detection of a quantum error, Science 361, 266 (2018).
[49] S. Krinner, N. Lacroix, A. Remm, A. Di Paolo, E. Genois, C. Leroux, C. Hellings, S. Lazar, F. Swiadek, J. Herrmann, G. J. Norris, C. K. Andersen, M. Müller, A. Blais, C. Eichler, and A. Wallraff, Realizing repeated quantum error correction in a distance-three surface code, Nature 605, 669 (2022).
[50] P. Facchi and S. Pascazio, Quantum Zeno subspaces, Phys. Rev. Lett. 89, 080401 (2002).
[51] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor, in Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 80, edited by J. Dy and A. Krause (PMLR, 2018), pp. 1861–1870.
[52] H. Khalil, Nonlinear Systems, 3rd ed. (Prentice Hall, 2001).
[53] R. Fletcher, Newton-like methods, in Practical Methods of Optimization (John Wiley & Sons, Ltd, 2000), Chap. 3, pp. 44–79.
[]
[ "Learning to diagnose common thorax diseases on chest radiographs from radiology reports in Vietnamese", "Learning to diagnose common thorax diseases on chest radiographs from radiology reports in Vietnamese" ]
[ "Thao Nguyen \nSmart Health Center\nVinBigData JSC\nHanoiVietnam\n", "Tam M Vo \nSmart Health Center\nVinBigData JSC\nHanoiVietnam\n", "Thang V Nguyen \nSmart Health Center\nVinBigData JSC\nHanoiVietnam\n", "Hieu H Pham Id \nSmart Health Center\nVinBigData JSC\nHanoiVietnam\n\nCollege of Engineering and Computer Science\nVinUniversity\nHanoiVietnam\n\nVinUni-Illinois Smart Health Center\nHanoiVietnam\n", "Ha Q Nguyen \nSmart Health Center\nVinBigData JSC\nHanoiVietnam\n\nCollege of Engineering and Computer Science\nVinUniversity\nHanoiVietnam\n" ]
[ "Smart Health Center\nVinBigData JSC\nHanoiVietnam", "Smart Health Center\nVinBigData JSC\nHanoiVietnam", "Smart Health Center\nVinBigData JSC\nHanoiVietnam", "Smart Health Center\nVinBigData JSC\nHanoiVietnam", "College of Engineering and Computer Science\nVinUniversity\nHanoiVietnam", "VinUni-Illinois Smart Health Center\nHanoiVietnam", "Smart Health Center\nVinBigData JSC\nHanoiVietnam", "College of Engineering and Computer Science\nVinUniversity\nHanoiVietnam" ]
[]
Deep learning, in recent times, has made remarkable strides when it comes to impressive performance for many tasks, including medical image processing. One of the contributing factors to these advancements is the emergence of large medical image datasets. However, it is exceedingly expensive and time-consuming to construct a large and trustworthy medical dataset; hence, there has been multiple research leveraging medical reports to automatically extract labels for data. The majority of this labor, however, is performed in English. In this work, we propose a data collecting and annotation pipeline that extracts information from Vietnamese radiology reports to provide accurate labels for chest X-ray (CXR) images. This can benefit Vietnamese radiologists and clinicians by annotating data that closely match their endemic diagnosis categories which may vary from country to country. To assess the efficacy of the proposed labeling technique, we built a CXR dataset containing 9,752 studies and evaluated our pipeline using a subset of this dataset. With an F1-score of at least 0.9923, the evaluation demonstrates that our labeling tool performs precisely and consistently across all classes. After building the dataset, we train deep learning models that leverage knowledge transferred from large public CXR datasets. We employ a variety of loss functions to overcome the curse of imbalanced multi-label datasets and conduct experiments with various model architectures to select the one that delivers the best performance. Our best model (CheXpert-pretrained EfficientNet-B2) yields an F1-score of 0.6989 (95% CI 0.6740, 0.7240), AUC of 0.7912, sensitivity of 0.7064 and specificity of 0.8760 for the abnormal diagnosis in general. 
Finally, we demonstrate that our coarse classification (based on five specific locations of abnormalities) yields comparable results to fine classification (twelve pathologies) on the benchmark CheXpert dataset for general anomaly detection while delivering better performance in terms of the average performance of all classes.
10.1371/journal.pone.0276545
[ "https://export.arxiv.org/pdf/2209.04794v1.pdf" ]
252,199,784
2209.04794
fa15b8eadf8c02cff8c78ca968c96d2cb75b3b56
Learning to diagnose common thorax diseases on chest radiographs from radiology reports in Vietnamese
Thao Nguyen, Tam M. Vo, Thang V. Nguyen (Smart Health Center, VinBigData JSC, Hanoi, Vietnam); Hieu H. Pham (Smart Health Center, VinBigData JSC; College of Engineering and Computer Science, VinUniversity; VinUni-Illinois Smart Health Center, Hanoi, Vietnam); Ha Q. Nguyen (Smart Health Center, VinBigData JSC; College of Engineering and Computer Science, VinUniversity, Hanoi, Vietnam)
RESEARCH ARTICLE. ☯ These authors contributed equally to this work. * [email protected]
Received: May 12, 2022; Accepted: October 7, 2022; Published: October 31, 2022.
Citation: Nguyen T, Vo TM, Nguyen TV, Pham HH, Nguyen HQ (2022) Learning to diagnose common thorax diseases on chest radiographs from radiology reports in Vietnamese. PLoS ONE 17(10): e0276545. https://doi.org/10.1371/journal.pone.0276545
Editor: Tarik A. Rashid, University of Kurdistan Hewler, Iraq.
Data Availability Statement: Data are available from the Institutional Review Board (IRB) of the Phu Tho General Hospital. Data access may be requested from Dr. Luc Quang Nguyen, Head of Radiology Department, Phu Tho General Hospital, at "[email protected]," for researchers who meet the criteria for access to confidential data.
Funding: The author(s) received no specific funding for this work.
Deep learning, in recent times, has made remarkable strides when it comes to impressive performance for many tasks, including medical image processing. One of the contributing factors to these advancements is the emergence of large medical image datasets.
However, it is exceedingly expensive and time-consuming to construct a large and trustworthy medical dataset; hence, there have been multiple research efforts leveraging medical reports to automatically extract labels for data. The majority of this labor, however, is performed in English. In this work, we propose a data collection and annotation pipeline that extracts information from Vietnamese radiology reports to provide accurate labels for chest X-ray (CXR) images. This can benefit Vietnamese radiologists and clinicians by annotating data that closely match their endemic diagnosis categories, which may vary from country to country. To assess the efficacy of the proposed labeling technique, we built a CXR dataset containing 9,752 studies and evaluated our pipeline using a subset of this dataset. With an F1-score of at least 0.9923, the evaluation demonstrates that our labeling tool performs precisely and consistently across all classes. After building the dataset, we train deep learning models that leverage knowledge transferred from large public CXR datasets. We employ a variety of loss functions to overcome the curse of imbalanced multi-label datasets and conduct experiments with various model architectures to select the one that delivers the best performance. Our best model (CheXpert-pretrained EfficientNet-B2) yields an F1-score of 0.6989 (95% CI 0.6740, 0.7240), an AUC of 0.7912, a sensitivity of 0.7064 and a specificity of 0.8760 for the abnormal diagnosis in general. Finally, we demonstrate that our coarse classification (based on five specific locations of abnormalities) yields results comparable to fine classification (twelve pathologies) on the benchmark CheXpert dataset for general anomaly detection, while delivering better performance in terms of the average performance of all classes.
Introduction
Radiography has always been one of the most ubiquitous diagnostic imaging modalities, and chest X-ray (CXR) is the most commonly performed diagnostic X-ray examination [1]. CXRs have an important role in clinical practice, effectively assisting radiologists to detect pathologies related to the airways, pulmonary parenchyma, vessels, mediastinum, heart, pleura and chest wall [2]. In recent years, great advances in GPU computing and research in the fields of machine learning have led to the trend of automating the diagnosis of CXR images [3][4][5][6][7][8][9] and many other X-ray modalities [10][11][12][13]. In addition, the availability of large-scale public datasets [14][15][16][17][18][19] has sparked interest in study and application, with some of them already being used and integrated into Computer-Aided Diagnosis (CAD) systems to reduce the rate of CXR misdiagnosis. Several datasets, including CheXpert [14], MIMIC-CXR [15], PadChest [16], ChestX-ray8 and ChestX-ray14 [17], VinDr-CXR [19,20] and VinDr-PCXR [21,22], have had a significant impact on improving labeling methods and model quality. Building a reliable CXR dataset for a specific project, on the other hand, remains a difficult and challenging task: medical data is difficult to obtain due to numerous restrictions on patient information confidentiality, and label quality is heavily influenced by the doctors' experience and subjective opinion [1]. This is costly and time-consuming but essential, especially for a task that tackles specific challenges, such as focusing on a certain set of patients or illnesses. As a result, adopting the aforesaid large-scale datasets is sometimes ineffective, possibly because the image quality, labeling, or data characteristics are no longer appropriate.
Additionally, CXR images and the medical reports corresponding to each examination are stored in hospital storage systems such as the Picture Archiving and Communication System (PACS) and the Hospital Information System (HIS) during the radiology process. This is a tremendous available resource for building large-scale CXR datasets in which the annotation can be automatically interpolated from the free-text report without any involvement of radiologists. Therefore, pipelines or methods to create datasets from available resources are always valuable. Some previous works also developed methods to relabel large public datasets or to construct new ones. Wang et al. [17] proposed a method for extracting a hospital-scale CXR dataset from the PACS via a unified weakly-supervised multi-label image classification and disease localization formulation by applying natural language processing (NLP) techniques. NegBio [23], a rule-based algorithm that utilizes universal dependencies and subgraph matching, provides regular-expression infrastructure for negation and uncertainty detection in radiology reports. Filice et al. [24] investigated the benefit of utilizing AI models to create annotations for review before adjudication, in order to speed up the annotation process while sacrificing specificity. Johnson et al. [15] extracted and classified mentions from the associated reports using two NLP tools, CheXpert and NegBio, before aggregating them to arrive at the final label. To construct structured labels for the images, Irvin et al. [14] created an automated rule-based labeler to extract observations and capture uncertainties contained in free-text radiology reports. PadChest [16] labeled the majority of the dataset using a recurrent neural network with an attention mechanism. This dataset contains excerpts from Spanish radiology reports, but the labels have been mapped to biomedical vocabulary unique identifier codes, making the resource useful regardless of the language.
RadGraph [25] introduced a new dataset of clinical entities and relations annotated in full-text radiology reports taken from CheXpert and MIMIC. This research made use of a novel information extraction schema that extracts clinically relevant information associated with a radiologist's interpretation of a medical image. More advanced NLP approaches, such as Bidirectional Encoder Representations from Transformers (BERT) [26], are used in some studies. CheXpert++ [27], a BERT-based, high-fidelity approximation of the CheXpert labeler, is significantly faster, fully differentiable, and probabilistic in its outputs. VisualCheXbert [28] utilized a biomedically pretrained BERT model to map directly from a radiology report to the image labels, with a supervisory signal determined by a computer vision model trained to detect medical conditions from chest X-ray images. CheXbert [29] is a BERT-based approach to medical image report labeling that exploits both the scale of available rule-based systems and the quality of expert annotations. Dictionary-based heuristics are another popular way of creating structured labels from free-text data. For instance, MedLEE [30] utilizes a pre-defined lexicon to convert radiology reports into a structured format. Mayo Clinic's Text Analysis and Knowledge Extraction System (cTAKES) [31] combines dictionary and machine learning methods, and uses the Unified Medical Language System (UMLS; https://www.nlm.nih.gov/research/umls/index.html) for dictionary inquiries. A key flaw of dictionary-based NLP systems is that they do not always establish high performance when handling in-house raw clinical texts, especially those with misspellings, abbreviations, and non-standard terminology. On top of that, the aforementioned systems only cover the English language and cannot handle non-English clinical texts. Languages other than English, including Vietnamese, do not have sufficient clinical materials to build a medical lexicon.
In nations where English is not the official language, this has been a huge obstacle in building clinical NLP systems. In the current work, our data pipeline can be applied to the data available in PACS and HIS, which can help minimize data labeling costs, time, and effort while reducing radiologists' involvement in the workflow. We propose a set of matching rules to convert a typical radiology report to the normal/abnormal status of the classes. Besides the above-mentioned differences in labeling methods, our label selection is also different from previous studies. So far, most studies were developed for classifying common thoracic pathologies or localizing multiple classes of lesions. For instance, most deep learning models in recent years were developed on the MIMIC-CXR [32] and CheXpert [33][34][35] datasets for classifying 14 common thoracic pathologies on CXRs. The earlier dataset ChestX-ray14 [17], an expansion of ChestX-ray8 [17] that includes the same set of 14 findings, has been used to develop deep learning models [36,37]. Nevertheless, these approaches are far from how Vietnamese radiologists work. In clinical practice, a CXR radiology report always includes four descriptions that correlate to four fixed anatomical regions of the thorax: chest wall, pleura, pulmonary parenchyma and cardiac. Therefore, it is not practical for Vietnamese radiologists to utilize a CAD system that provides suggestions for the presence of 14 diseases. Typically, when examining a CXR image, radiologists analyze that image by region; consequently, it is more convenient for the system to indicate the abnormality of each area, eliminating the need to match the lesion type with the region being viewed. To address the realistic demand of Vietnamese radiologists, we developed a system to classify CXRs into 5 classes depending on the position of pathologies: chest wall, pleura, parenchyma, cardiac abnormality, and the existence of any abnormalities in the CXRs.
When tested on the benchmark CheXpert dataset, we found that this coarse classification produces results comparable to the detailed classifier of 14 findings in terms of the abnormal class and gives better results in terms of the macro average F1 score over all classes. Our work was developed on the dataset collected at Phu Tho General Hospital, a Vietnamese provincial hospital. To develop trainable images with corresponding labels, DICOM files in PACS are matched with radiology reports retrieved from HIS. By extracting data from radiology reports, generating the normal/abnormal status of the 5 classes and treating it as the ground-truth reference, we conclude that classifying CXRs according to the 5 groups of pathologies, which are modeled after the radiologists' descriptions in their medical reports, gives positive results. Unlike the automatic data labeling methods mentioned above, our proposed method is simple yet accurate: it first filters the descriptions alluding to no findings, then searches for phrases implying abnormalities in each position. Therefore, the labeling process is strictly controlled through stages, making it easy to detect and correct errors. In addition, adding a manual step to the labeling process helps us deal with misspellings, which were neglected by previous methods. In this step, we also find infrequent phrases and add them to our list of phrases indicating abnormality to make it more complete. Furthermore, a report always includes descriptions corresponding to four fixed anatomical regions of the thorax; thus, by generating a set of labels matching these regions, we can minimize the chance that a label is uncertain.
Material and method
Dataset building pipeline
Our proposed pipeline consists of five steps: (1) data collection, (2) PA-view filtering, (3) XML parsing, (4) data matching and (5) data annotation. Fig 1 illustrates the above five steps in detail.
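The two-stage labeling rule sketched above (an exact template match for "no findings" first, then a per-region phrase search, with the overall abnormal flag interpolated from the four region labels) can be illustrated in Python. The English templates and keywords below are hypothetical stand-ins for the hospital-specific Vietnamese phrase sets, not the lists actually used in this work.

```python
# Illustrative sketch of the two-stage rule-based labeler.
# NORMAL_TEMPLATES and KEYWORDS are hypothetical stand-ins for the
# hospital-specific Vietnamese phrase sets.
NORMAL_TEMPLATES = {
    "no findings",
    "lungs and heart within normal limits",
}
KEYWORDS = {
    "chest_wall": ["fracture", "osteoporosis", "bone fusion surgery"],
    "pleura": ["pleural effusion", "pneumothorax"],
    "parenchyma": ["consolidation", "nodule", "opacity"],
    "cardio": ["cardiomegaly", "enlarged heart"],
}

def label_report(description: str) -> dict:
    """Map a free-text description to binary labels for the 5 classes."""
    text = description.lower().strip()
    labels = {cls: 0 for cls in KEYWORDS}
    labels["abnormal"] = 0
    # Stage 1: an exact template match implies a normal study.
    if text in NORMAL_TEMPLATES:
        return labels
    # Stage 2: per-region keyword search.
    for cls, words in KEYWORDS.items():
        if any(w in text for w in words):
            labels[cls] = 1
    # The abnormal flag is interpolated from the four region labels.
    labels["abnormal"] = int(any(labels[c] for c in KEYWORDS))
    return labels
```

Reports matching neither a template nor any keyword would be routed to manual review, as the pipeline described here does.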
Firstly, DICOM files stored in PACS will be acquired and filtered to retain only posterior-anterior (PA) view CXRs by the PA classifier application programming interface (API). Meanwhile, radiology reports stored in HIS as XML files will be parsed to extract specific information. Afterward, DICOM files and radiology reports belonging to the same patient will be matched to generate pairs of DICOM-XML files of the same examination. Once a DICOM file has been determined to match an XML file, that DICOM file will be converted to JPG format and the XML file will be the subject of a labeling tool to generate a set of corresponding labels. At the end of the procedure, we obtain a trainable dataset which includes JPG images and their corresponding labels.
Data collection. We retrospectively collected chest radiography studies from Phu Tho General Hospital, which were performed within five months from November 2020 to March 2021, along with their associated radiology reports. The ethical clearance of these studies was approved by the Institutional Review Board (IRB) of Phu Tho General Hospital. With this approval, the IRB allows us to access their data and analyze raw chest X-ray scans using our VinDr platform, which will be used for data filtering. The need for obtaining informed patient consent was waived because this retrospective study did not impact clinical care or workflow at the hospitals, and all patient-identifiable information in the data has been removed. We decided to select four types of pathologies because of their prevalence in the medical reports and clinical practice. An example of a typical description extracted from a radiology report is shown in Fig 2. The description is divided by most Vietnamese radiologists into four main categories: lungs, cardiac, pleura and chest wall. From the four groups of pathology, we
From the four groups of pathology, we PLOS ONE create an annotation set consisting of five classes, with the first four classes corresponding to these four groups and the other indicating the presence of abnormalities on CXRs, if any. PA-view filtering. The collected data was mostly of Posterior-Anterior (PA)-view CXR, but also included a large number of outliers such as images of body parts other than chest, low-quality images or images with different views than PA-view. To guarantee that only CXRs of PA-view will be retained, we ran an API that is powered by VinDr's platform https://vindr. ai/vindr-lab. The API takes a DICOM file as an input and returns the probability that the image saved in that file is a PA-view CXR. The DICOM file will proceed to the next stage of data pre-processing if this probability exceeds 0.5-a normalized threshold; else, the file will be marked as ignored. XML parser. We use the same procedure for the XML parsing and data matching process as in our previous study [38], shown in Fig 3. The figure illustrates the procedure of extracting radiology reports from HIS. Each assessment and treatment session was saved in the Extensible Markup Language (XML) file format by HIS. A session includes all information of the patient between check-in and check-out time. The XML parser can read the header of a session that includes SESSION_ID, PATIENT_ID, CHECK_IN_TIME, and CHECK_OUT_TIME. These attributes are shared among all radiology reports belonging to the same session and will be used to link to the corresponding DICOM file. All reports are also interpreted using the XML parser to obtain the SERVICE_ID, REPORT_TIME, and DESCRIPTION properties. Only reports with a SERVICE ID matching the values expressly assigned by the Vietnamese Ministry of Health for chest radiography were preserved to exclude extraneous reports. Data matching. 
To match the DICOM file with the corresponding XML file, we simulated the algorithm in [38], which is depicted in Fig 4. Since the HIS and PACS are linked by PATIENT_ID, this key is used by the matching algorithm to determine whether the DICOM file and the radiology report belong to the same patient. Moreover, REPORT_TIME must be within 24 hours of STUDY_TIME, which is a regulated protocol of the hospital. Finally, STUDY_TIME has to be between CHECK_IN_TIME and CHECK_OUT_TIME. If all of the conditions are fulfilled, the DICOM file and the radiology report are matched. One problem we encountered here is that one DICOM file matched multiple reports and vice versa, because their STUDY_TIME attributes were separated by a period of less than 24 hours. In such a short period of time, the examination results are often the same; the reason for taking additional radiographs may be the poor quality of the first image. Therefore, the description in the reports is usually the same, and the DICOM file is assigned to one of the matched reports. In the few cases where the descriptions in the reports differ, the DICOM file is given to a radiologist to review and match with the correct report.
Data annotation. After extracting the descriptions that match the DICOM files, we developed a simple labeling algorithm that takes the radiologists' description as input and returns a list of five binary elements, corresponding to the presence or absence of abnormalities belonging to the 5 classes. Fig 5 illustrates the major steps of data annotation, which is implemented in a semi-automated manner, including (1) pattern filtering, (2) keyword detection, (3) abnormality interpolating and (4) manual labeling.
Pattern Filtering
The dataset we obtained from Phu Tho General Hospital is unbalanced, with the majority of the images exhibiting no pathology. We obtained 1,568 different templates from all the descriptions.
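The three matching conditions described above (a shared PATIENT_ID, REPORT_TIME within 24 hours of STUDY_TIME, and STUDY_TIME falling between CHECK_IN_TIME and CHECK_OUT_TIME) can be sketched as follows; the dictionary-based records are an illustrative assumption, not the actual PACS/HIS schema.

```python
from datetime import datetime, timedelta

def is_match(dicom: dict, report: dict) -> bool:
    """Decide whether a DICOM file and a radiology report belong to
    the same examination, following the three conditions in the text."""
    # Condition 1: HIS and PACS records refer to the same patient.
    if dicom["PATIENT_ID"] != report["PATIENT_ID"]:
        return False
    # Condition 2: report written within 24 hours of the study.
    if abs(report["REPORT_TIME"] - dicom["STUDY_TIME"]) > timedelta(hours=24):
        return False
    # Condition 3: study taken during the hospital session.
    return report["CHECK_IN_TIME"] <= dicom["STUDY_TIME"] <= report["CHECK_OUT_TIME"]
```

A one-to-many match under these conditions is exactly the ambiguity discussed above, which the pipeline resolves by assigning the DICOM file to one report or deferring to a radiologist.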
Filtering descriptions that are elements of the predetermined set of templates (specifically, 11 templates implying no findings) helps us save a significant amount of time when it comes to data labeling. A CXR is considered normal if one of the 11 templates appears exactly in the DESCRIPTION of the corresponding radiology report.
Keyword detection
After pattern filtering, most of the instances without pathologies have been handled. In this step, we have to handle most of the abnormality descriptions and some remaining normality ones. Keyword detection is divided into four sub-stages, which can be performed simultaneously, to detect keywords indicating abnormalities in the chest wall, pleura, parenchyma, and mediastinum. To find keywords for each class, e.g. chest wall, we break down the radiologist's description into 4 categories (categories are separated by a dash, "-", in the radiology descriptions). From the sentences in the chest wall category, we gather keywords indicating abnormalities, such as "fracture", "osteoporosis" and "bone fusion surgery", to create a fixed set of keywords. Descriptions containing keywords in the chest wall set will be annotated as 1 for the corresponding class, and similarly for the pleura, parenchyma, and cardio classes. Some common keyword settings for the four classes are listed in Table 1.
Abnormality interpolating
The first four classes have been annotated at the keyword detection stage; here, the abnormality class is labeled by inference from the others. The abnormality value is set to 1 (positive) if any of the other classes is noted as anomalous, or if the study has any other anomaly even though it does not belong to the four groups above.
Manual Labeling
Descriptions that neither belong to the 11 normality templates nor contain any of the keywords in the four fixed sets have a high probability of being misspelled, describing rare pathologies, or including pathologies that cannot be assigned to one of the four main regions.
To handle such cases, we inspected them to correct spelling mistakes manually, then forwarded confusing descriptions to a radiologist of Phu Tho General Hospital for annotation. These cases account for less than 0.5% of the total descriptions, so labeling the remainder is not a time-consuming task, which minimizes the doctor's involvement in data labeling. Over five months, we obtained a total of 12,367 XML files and 12,376 DICOM files corresponding to 11,088 studies. 10,847 DICOM files were PA chest radiographs, and 10,002 of them matched with information extracted from XML files. Table 2 details the number of positive and negative samples of the five classes in the collected dataset. For model development, we split the dataset into training and validation sets with a ratio of 7/3, with the constraint that the distribution of each class in the training and validation sets approximates the distribution in the original dataset.
Quality control
To ensure the quality of the dataset, we randomly take 5% of the data and inspect whether there are any inappropriate images or labels that do not match the corresponding report. If any incorrectness is found, we trace and correct it; the 5% selection process is then repeated until no more errors are detected. The inspection was carried out by a medical student majoring in radiology and was double-checked by a radiologist of Phu Tho General Hospital.
Labeler results
We evaluate the effectiveness of the proposed labeling procedure by manually labeling the samples and considering the result as the ground truth. The F1-score is used as the main metric to evaluate the quality of our labeling tool.
Evaluation set. The reported evaluation set consists of 3,001 radiology reports from 3,001 instances, which totally overlap with the reports in the validation set. We manually annotated these radiology reports without access to additional patient information.
We labeled whether there is any abnormality in the chest wall, pleura, pulmonary parenchyma and cardio regions, following a list of labeling conventions that we agreed upon among ourselves. After we independently labeled each of the 3,001 reports, disagreements were resolved by consensus discussion or consultation with a radiologist. The resultant annotation serves as the ground truth for the reports in the evaluation set.
Evaluation results. Comparing the radiologists' annotations with the set of labels generated by our method, the evaluation results for each class are listed in Table 3, with the metrics of precision, recall and F1 score. Overall, our labeling pipeline delivers high F1 scores in all classes, with the lowest figures of 0.9926 and 0.9985 being recorded in the pleura and parenchyma classes, respectively. In the chest wall, cardio and abnormal classes, our tool delivers the highest performance, without any mislabeled samples.
Experiment and results
Model development
Chest X-ray interpretation with deep learning methods usually relies on pre-trained models developed for ImageNet. Nevertheless, it has been shown that architectures achieving remarkable accuracy on ImageNet are unlikely to give the same performance when evaluated on the CheXpert dataset, and that for medical imaging tasks the choice of model family delivers greater improvement than image resizing within a family [39]. We decided to choose model families that have proved to be highly efficient for CXR interpretation: ResNet50 [40], DenseNet121 [41], Inception-V3 [42] and EfficientNet-B2 [43]. We also leverage large public CXR datasets such as CheXpert to develop pre-trained models and compare the use of some benchmark chest X-ray datasets for transfer learning against ImageNet pre-trained models. Furthermore, the class imbalance has a negative impact on our dataset; for example, the chest wall class has a positive/negative ratio of 0.003.
To address this problem, along with the conventional Binary Cross Entropy loss (BCE), we used and assessed other loss functions established for multi-label imbalanced datasets, such as the Asymmetric Loss (ASL) [44] and the Distribution-balanced Loss (DBL) [45]. For each model architecture, we use the Adam optimizer (beta1 = 0.9, beta2 = 0.999 and learning rate = 1e-3) combined with a cosine annealing learning rate schedule with gradual warm-up, a batch size of 16, three different loss functions (cross-entropy, distribution-balanced and asymmetric loss), and image sizes of 768 and 1024. Training was conducted on an Nvidia GTX 1080 with CUDA 10.1 and an Intel Xeon CPU ES-2609. For one run of a specific model, we train for 160 epochs and evaluate each model every 413 gradient steps. Finally, the checkpoint with the highest F1-score is considered the best model for each training procedure.
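One concrete reading of the schedule mentioned above, a gradual warm-up followed by cosine annealing, can be sketched as a pure function of the training step; the warm-up length here is an illustrative assumption, since the text does not specify it.

```python
import math

def lr_at(step, total_steps, base_lr=1e-3, warmup_steps=500):
    """Learning rate at a given step: linear warm-up to base_lr,
    then cosine annealing down toward zero over the remaining steps.
    The warm-up length of 500 steps is a hypothetical choice."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))
```

In a training loop, this value would be assigned to the optimizer's learning rate before every gradient step.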
BCE-a common loss function, ASL and DBLthe two loss functions for multi-label issue were used in our experiment. The reported metrics are macro average (Av.) F1-score, AUC, sensitivity and specificity of the five classes. We only use ResNet50 architecture to compare these aspects with the same setup hyper parameters. As we can see in Table 4, model using ASL and CheXpert dataset as pre-trained-initial parameters give the best result. All the metrics are higher than that of the others, especially when using ASL. This loss function always gives big value but is very effective because it heavily "penalizes" misclassified positive samples and hardly penalizes easy negative one. CheXpert is also useful in spite of containing similar patterns to our target data. We decide to use pre-trained model by CheXpert and ASL for later experiments. To discover which family of architectures really fits our dataset, we do more experiments with Inception-V3, DenseNet121 and EfficientNet-B2, which are reported to perform well with radiographic images; and two sizes of image 768 and 1024. The result is shown in Table 5, which indicates that bigger image sizes do not give rise to better results, but affect training time. In the matter of model architectures, EfficientNet-B2 outperforms the others. In conclusion, model with EfficientNet-B2 architecture and input size of 768 delivers the best performance. Detailed result of our best model is also presented in Table 6. By using ASL, the chest wall class has improved significantly when increasing to nearly 32% compared to the model using BCE and not using CheXpert as pre-trained. The pleura class has less samples than the chest wall, but the results do not improve much after using ASL, possibly because the chest wall class has a more diverse number of abnormal manifestations in our data, so the model focused more on this class. 
The same procedure is also applied to build two models, one for fine classification (detection of 14 pathology classes) and one for coarse classification (detection of abnormalities in 4 locations of the CXR image), in order to evaluate the effectiveness of coarse classification compared to fine classification. We use the CheXpert benchmark dataset to build and evaluate the two models, which share the same configuration to keep the comparison objective. The data in the CheXpert dataset are labeled with 14 classes, corresponding to 13 abnormalities in the chest radiograph plus a 'No Finding' label. We infer the location of a lesion among the 4 considered positions from the type of lesion indicated in the CheXpert dataset; Table 7 shows the mappings between the CheXpert labels (14 classes) and the proposed set of labels (5 classes). The comparison of coarse and fine classification is shown in Table 8, from which it can be seen that the coarse classification method gives a higher F1-score both on the abnormal class and in the macro average.

We also plot Grad-CAMs [47] to give visual explanations of how the model makes its predictions. Fig 7 illustrates the original images and their respective Grad-CAMs. In both cases, the pathologies in the collarbone (nondisplaced fracture) and in the pleura (pleural effusion) were correctly highlighted. These results were obtained with the EfficientNet-B2 architecture, a 768x768 input size, a CheXpert pre-trained model and the asymmetric loss function.

Conclusion

In this work, we propose a semi-automatic process for building an accurate CXR dataset, which takes advantage of the resources stored in PACS and HIS systems while minimizing the intervention of radiologists.
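The 14-class-to-5-class mapping of Table 7, used for the coarse/fine comparison above, can be sketched programmatically. The dictionary and function names below are ours for illustration; the mapping itself follows Table 7 (we spell out CheXpert's full label "Enlarged Cardiomediastinum" for the table's abbreviated "Enlarged Cardiom").

```python
# Mapping from CheXpert pathology labels to the 4 anatomical locations,
# following Table 7; labels without a location (e.g. Support Devices)
# still flag the overall "Abnormal" class.
CHEXPERT_TO_LOCATION = {
    "Enlarged Cardiomediastinum": "Cardio",
    "Cardiomegaly": "Cardio",
    "Lung Lesion": "Parenchyma",
    "Lung Opacity": "Parenchyma",
    "Edema": "Parenchyma",
    "Consolidation": "Parenchyma",
    "Pneumonia": "Parenchyma",
    "Atelectasis": "Parenchyma",
    "Pneumothorax": "Pleura",
    "Pleural Effusion": "Pleura",
    "Pleural Other": "Pleura",
    "Fracture": "Chest wall",
}

def coarse_labels(positive_pathologies):
    """Collapse a list of positive CheXpert labels into the 5-class scheme."""
    classes = {"Chest wall": 0, "Pleura": 0, "Parenchyma": 0,
               "Cardio": 0, "Abnormal": 0}
    for name in positive_pathologies:
        loc = CHEXPERT_TO_LOCATION.get(name)
        if loc is not None:
            classes[loc] = 1
        if name != "No Finding":  # any finding flags the Abnormal class
            classes["Abnormal"] = 1
    return classes
```

For example, a study positive only for "Fracture" maps to Chest wall = P and Abnormal = P, matching the corresponding row of Table 7.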
We also suggest a coarse classification method based on the location of abnormalities in radiographs, which addresses a realistic demand of Vietnamese radiologists and is more efficient than classification based on pathology types. Finally, we demonstrate that building pre-trained models using large CXR datasets can significantly improve performance compared to using ImageNet. The models fine-tuned from CheXpert pre-trained models with the asymmetric loss function achieve significant gains over ImageNet pre-trained models, which we believe will serve as a strong baseline for future research. We also believe that this method can be applied to other languages with similar characteristics and task requirements.

Table 7. The mappings between CheXpert data labels (14 classes) and the proposed set of labels (5 classes). P and N refer to positive and negative respectively.

Fig 1. Overview diagram of the process of collecting and building the medical image dataset. The process consists of five steps: data collection from PACS and HIS, PA-view filtering, XML parsing, data matching and data annotation. https://doi.org/10.1371/journal.pone.0276545.g001

Fig 2. The description in a typical radiology report in Vietnam. The description is divided into four main categories: chest wall, pleura, lungs (parenchyma) and cardiac. https://doi.org/10.1371/journal.pone.0276545.g002

Fig 3. Radiology reports extraction process for CXR examinations collected from HIS [38]. The original Vietnamese counterparts are put inside square brackets. https://doi.org/10.1371/journal.pone.0276545.g003

Fig 4. Algorithm for matching a DICOM file obtained from PACS with a radiology report collected from HIS. https://doi.org/10.1371/journal.pone.0276545.g004

Fig 5. Semi-automated data annotation pipeline.
The system consists of 4 steps; the first 3 steps are automatic and the last one is carried out manually. https://doi.org/10.1371/journal.pone.0276545.g005

Fig 6 illustrates the ROC plots for all tasks. The model achieves the best AUC on the pleura class (0.96) and the worst on the chest wall class (0.81). The abnormal class recorded an AUC of 0.87, while the parenchyma and cardiac classes reached 0.86 and 0.92, respectively.

Fig 7. Original images and respective Grad-CAMs. There is a collarbone lesion (nondisplaced fracture) in the first two figures, while the last two contain a pleural effusion in the pleura. Both of these pathologies were correctly highlighted. https://doi.org/10.1371/journal.pone.0276545.g007

Table 1. Examples of Vietnamese keywords indicating abnormalities in the chest wall, pleura, parenchyma and cardiac classes, and abnormalities outside these four groups. English translations are enclosed in square brackets.

Chest wall (bone): Gãy xương [Bone fracture]; Thưa xương [Osteoporosis]; Tiêu xương [Bone resorption]
Pleura: Dày màng phổi trái/phải [Left/right pleural thickening]; Mờ góc sườn hoành màng phổi trái/phải [Left/right costophrenic angle blunting]; Tù góc sườn hoành trái/phải [Loss of the left/right costophrenic angle]
Parenchyma: Dày thành phế quản [Bronchial wall thickening]; Dày tổ chức kẽ [Interstitial pulmonary thickening]; Dải mờ giữa phổi trái/phải [Opacity between the left/right lung]
Cardio: Quai động mạch chủ (đmc) vồng [Ascending aortic arch]; Hình tim trái/phải to [Left/right cardiomegaly]; Giãn cung thất trái/phải [Left/right ventricular arch dilatation]
Other abnormality: Liềm hơi dưới vòm hoành trái/phải [Sickle of air below the left/right diaphragm]
https://doi.org/10.1371/journal.pone.0276545.t001

Table 2.
Number of instances which contain the five labeled observations in the training set, the validation set and the whole dataset.

Position of pathology   Split        Positive         Negative
Chest wall              Training     166              6835
                        Validation   71               2930
                        Total        237 (2.37%)      9765 (97.63%)
Pleura                  Training     155              166
                        Validation   67               71
                        Total        222 (2.22%)      9780 (97.78%)
Parenchyma              Training     1520             6846
                        Validation   652              2934
                        Total        2172 (21.72%)    7830 (78.28%)
Cardio                  Training     548              6453
                        Validation   235              2766
                        Total        783 (7.83%)      9219 (92.17%)
Abnormal                Training     1976             5025
                        Validation   848              2153
                        Total        2824 (28.23%)    7178 (71.77%)

Table 3. Evaluation results of the proposed labeling tool. Evaluation was performed on 3001 samples of the validation set.

Class        TP    FP   TN     FN   Precision   Recall   F1 score
Chest wall   71    0    2930   0    1           1        1
Pleura       67    1    2933   0    0.9853      1        0.9926
Parenchyma   652   1    2347   1    0.9985      0.9985   0.9985
Cardio       235   0    2766   0    1           1        1
Abnormal     848   0    2153   0    1           1        1
https://doi.org/10.1371/journal.pone.0276545.t003

Table 4. Experimental results with different pre-training datasets and loss functions. The model pre-trained on the CheXpert dataset and using the asymmetric loss function yields the best performance. https://doi.org/10.1371/journal.pone.0276545.t004

Table 5. Experimental results with different backbones and input sizes. The model with the EfficientNet-B2 architecture and an input size of 768 delivers the best performance.

Table 6. Performance of EfficientNet-B2 on the five classes. https://doi.org/10.1371/journal.pone.0276545.t006

Fig 6. Area under the ROC curve. The pleura class delivered the highest AUC value, at 0.96 (95% CI 0.94, 0.97), whereas the chest wall class had the lowest, at 0.81 (95% CI 0.75, 0.85).
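The precision, recall and F1 entries of Table 3 follow directly from the confusion counts. A small helper (ours, for illustration) reproduces, for example, the pleura row (TP = 67, FP = 1, FN = 0):

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 from confusion counts, as in Table 3."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For the pleura class this gives precision 67/68 ≈ 0.9853, recall 1 and F1 ≈ 0.9926, matching the table.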
https://doi.org/10.1371/journal.pone.0276545.g006

Pre-trained + Loss   Class        F1 score   AUC      Sensitivity   Specificity
ImageNet + BCE       Bone         0.098      0.6622   0.3239        0.8713
                     Pleura       0.4196     0.9348   0.4478        0.9843
                     Parenchyma   0.5742     0.8351   0.6380        0.8378
                     Cardio       0.4513     0.8605   0.5617        0.9212
                     Abnormal     0.6366     0.8337   0.7323        0.7761
                     Average      0.4359     0.8253   0.5408        0.887
ImageNet + ASL       Bone         0.3800     0.7123   0.2676        0.9966
                     Pleura       0.4925     0.9239   0.4925        0.9884
                     Parenchyma   0.5941     0.8389   0.5982        0.8846
                     Cardio       0.5278     0.9115   0.6255        0.9367
                     Abnormal     0.6674     0.8482   0.7123        0.8337
                     Average      0.5324     0.847    0.5392        0.928
ImageNet + DBL       Bone         0.1882     0.6993   0.1010        0.9799
                     Pleura       0.2647     0.8691   0.403         0.9625
                     Parenchyma   0.5566     0.8195   0.6748        0.7918
                     Cardio       0.3929     0.8289   0.4723        0.9208
                     Abnormal     0.6123     0.8126   0.6993        0.7696
                     Average      0.4029     0.8059   0.4909        0.8894
CheXpert + BCE       Bone         0.0706     0.5412   0.0423        0.9962
                     Pleura       0.2623     0.8540   0.2388        0.9867
                     Parenchyma   0.537      0.7921   0.6396        0.794
                     Cardio       0.3872     0.8205   0.4638        0.9208
                     Abnormal     0.581      0.7789   0.6875        0.7325
                     Average      0.3676     0.7573   0.4144        0.886
CheXpert + ASL       Bone         0.4348     0.7757   0.3521        0.9935
                     Pleura       0.5323     0.9424   0.4925        0.9918
                     Parenchyma   0.6274     0.8624   0.6702        0.8706
                     Cardio       0.5536     0.9197   0.6043        0.9508
                     Abnormal     0.6777     0.8658   0.7512        0.8165
                     Average      0.5651     0.8732   0.5741        0.9247
CheXpert + DBL       Bone         0.1674     0.6912   0.2535        0.957
                     Pleura       0.4698     0.9513   0.5224        0.984
                     Parenchyma   0.5958     0.8450   0.6104        0.8782
                     Cardio       0.5094     0.9009   0.5745        0.9422
                     Abnormal     0.6498     0.8493   0.7134        0.8100
                     Average      0.4758     0.8475   0.5349        0.9143
https://doi.org/10.1371/journal.pone.0276545.t007

Table 8. Comparison of coarse and fine classification on CheXpert.
CheXpert label          Chest wall   Pleura   Parenchyma   Cardio   Abnormal
No Finding              N            N        N            N        N
Enlarged Cardiom.       N            N        N            P        P
Cardiomegaly            N            N        N            P        P
Lung Lesion             N            N        P            N        P
Lung Opacity            N            N        P            N        P
Edema                   N            N        P            N        P
Consolidation           N            N        P            N        P
Pneumonia               N            N        P            N        P
Atelectasis             N            N        P            N        P
Pneumothorax            N            P        N            N        P
Pleural Effusion        N            P        N            N        P
Pleural Other           N            P        N            N        P
Fracture                P            N        N            N        P
Support Devices         N            N        N            N        P

Architecture        5 classes: Macro F1 / F1 (Abnormal)   12 classes: Macro F1 / F1 (Abnormal)
ResNet50 [40]       0.7109 / 0.9443                       0.4849 / 0.9444
DenseNet121 [41]    0.7208 / 0.9519                       0.4650 / 0.9438
InceptionV3 [42]    0.7181 / 0.9491                       0.4846 / 0.9492
EfficientB2 [43]    0.7429 / 0.9520                       0.5044 / 0.9450
https://doi.org/10.1371/journal.pone.0276545.t008

PLOS ONE | https://doi.org/10.1371/journal.pone.0276545 October 31, 2022

Author Contributions. Conceptualization: Thang V. Nguyen, Ha Q. Nguyen.

References

Delrue L, Gosselin R, Ilsen B, Van Landeghem A, de Mey J, Duyck P. Difficulties in the interpretation of chest radiography. In: Comparative interpretation of CT and standard radiography of the chest. Springer, Berlin, Heidelberg; 2011. pp. 27-49.

American College of Radiology. ACR-SPR-STR practice parameter for the performance of chest radiography; 2011. Available at: https://www.acr.org/-/media/ACR/Files/Practice-Parameters/ChestRad.pdf. Accessed August 22, 2021.
Tran TT, Pham HH, Nguyen TV, Le TT, Nguyen HT, Nguyen HQ. Learning to automatically diagnose multiple diseases in pediatric chest radiographs using deep convolutional neural networks. In: Proceedings of the IEEE/CVF International Conference on Computer Vision; 2021. pp. 3314-3323.

Pham HH, Nguyen HQ, Nguyen HT, Le LT, Khanh L. An accurate and explainable deep learning system improves interobserver agreement in the interpretation of chest radiograph. arXiv:2208.03545; 2022.

Le KH, Tran TV, Pham HH, Nguyen HT, Le TT, Nguyen HQ. Learning from multiple expert annotators for enhancing anomaly detection in medical image analysis. arXiv:2203.10611; 2022.

Rajpurkar P, Joshi A, Pareek A, Chen P, Kiani A, Irvin J, et al. CheXpedition: Investigating generalization challenges for translation of chest X-ray algorithms to the clinical setting. arXiv:2002.11379; 2020.

Wang H, Xia Y. ChestNet: A deep neural network for classification of thoracic diseases on chest radiography. arXiv:1807.03058; 2018.
Tang YX, Tang YB, Peng Y, Yan K, Bagheri M, Redd BA, et al. Automated abnormality classification of chest radiographs using deep convolutional neural networks. NPJ Digital Medicine; 2018.

Liang G, Zheng L. A transfer learning method with deep residual network for pediatric pneumonia diagnosis. Computer Methods and Programs in Biomedicine; 2020.

Nguyen HT, Pham HH, Nguyen NT, Nguyen HQ, Huynh TQ, Dao M, et al. VinDr-SpineXR: A deep learning framework for spinal lesions detection and classification from radiographs. In: International Conference on Medical Image Computing and Computer-Assisted Intervention; 2021. pp. 291-301.

Pham HH, Do DV, Nguyen HQ. DICOM imaging router: An open deep learning framework for classification of body parts from DICOM x-ray scans. medRxiv; 2021.
Nguyen HTX, Tran SB, Nguyen DB, Pham HH, Nguyen HQ. A novel multi-view deep learning approach for BI-RADS and density assessment of mammograms. arXiv:2112.04490; 2021.

Shen L, Margolies LR, Rothstein JH, Fluder E, McBride R, Sieh W. Deep learning to improve breast cancer detection on screening mammography. Scientific Reports; 2019. pp. 1-12. https://doi.org/10.1038/s41598-019-48995-4 PMID: 31467326

Irvin J, Rajpurkar P, Ko M, Yu Y, Ciurea-Ilcus S, Chute C, et al. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In: Proceedings of the AAAI Conference on Artificial Intelligence. 2019;33(01):590-597.

Johnson AE, Pollard TJ, Greenbaum NR, Lungren MP, Deng CY, Peng Y, et al. MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs. arXiv:1901.07042; 2019.
Bustos A, Pertusa A, Salinas JM, de la Iglesia-Vayá M. PadChest: A large chest x-ray image dataset with multi-label annotated reports. Med Image Anal. 2020;66:101797. https://doi.org/10.1016/j.media.2020.101797 PMID: 32877839

Wang X, Peng Y, Lu L, Lu Z, Bagheri M, Summers RM. ChestX-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. pp. 2097-2106.

Nguyen HC, Le TT, Pham HH, Nguyen HQ. VinDr-RibCXR: A benchmark dataset for automatic segmentation and labeling of individual ribs on chest X-rays. arXiv:2107.01327; 2021.

Nguyen HQ, Lam K, Le LT, Pham HH, Tran DQ, Nguyen DB, et al. VinDr-CXR: An open dataset of chest X-rays with radiologist's annotations. arXiv:2012.15029; 2020.

Nguyen HQ, Pham HH, Tuan Linh L, Dao M, Khanh L. VinDr-CXR: An open dataset of chest X-rays with radiologist annotations (version 1.0.0). PhysioNet; 2021. https://doi.org/10.13026/3akn-b287
Nguyen NH, Pham HH, Tran TT, Nguyen TNM, Nguyen HQ. VinDr-PCXR: An open, large-scale chest radiograph dataset for interpretation of common thoracic diseases in children. arXiv:2203.10612; 2022.

Pham HH, Tran TT, Nguyen HQ. VinDr-PCXR: An open, large-scale pediatric chest X-ray dataset for interpretation of common thoracic diseases (version 1.0.0). PhysioNet; 2022. https://doi.org/10.13026/k8qc-na36

Peng Y, Wang X, Lu L, Bagheri M, Summers R, Lu Z. NegBio: a high-performance tool for negation and uncertainty detection in radiology reports. AMIA Summits on Translational Science Proceedings. 2018;2018:188. PMID: 29888070

Filice RW, Stein A, Wu CC, Arteaga VA, Borstelmann S, et al. Crowdsourcing pneumothorax annotations using machine learning annotations on the NIH chest X-ray dataset. Journal of Digital Imaging. 2020;33(2):490-496. https://doi.org/10.1007/s10278-019-00299-9 PMID: 31768897
Jain S, Agrawal A, Saporta A, Truong SQ, Bui T, Chambon P, et al. RadGraph: Extracting clinical entities and relations from radiology reports. In: Conference on Neural Information Processing Systems (NeurIPS); 2021.

Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805; 2018.

McDermott MB, Hsu TM, Weng WH, Ghassemi M, Szolovits P. CheXpert++: Approximating the CheXpert labeler for speed, differentiability, and probabilistic output. In: Machine Learning for Healthcare Conference; 2020. pp. 913-927. PMLR.

Jain S, Smit A, Truong SQ, Nguyen CD, Huynh MT, Jain M, et al. VisualCheXbert: addressing the discrepancy between radiology report labels and image labels. In: Proceedings of the Conference on Health, Inference, and Learning; 2021. pp. 105-115.

Smit A, Jain S, Rajpurkar P, Pareek A, Ng AY, Lungren MP. CheXbert: combining automatic labelers and expert annotations for accurate radiology report labeling using BERT. arXiv:2004.09167; 2020.
Friedman C, Hripcsak G, DuMouchel W, Johnson SB, Clayton PD. Natural language processing in an operational clinical information system. Natural Language Engineering. 1995;1(1):83-108. https://doi.org/10.1017/S1351324900000061

Savova GK, Masanz JJ, Ogren PV, Zheng J, Sohn S, Kipper-Schuler KC, et al. Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications. Journal of the American Medical Informatics Association. 2010;17(5):507-513. https://doi.org/10.1136/jamia.2009.001560 PMID: 20819853

Irvin J, Rajpurkar P, Ko M, Yu Y, Ciurea-Ilcus S, Chute C, et al. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33; 2019. pp. 590-597.

Rajpurkar P, Joshi A, Pareek A, Chen P, Kiani A, Irvin J, et al. CheXpedition: Investigating generalization challenges for translation of chest X-ray algorithms to the clinical setting. arXiv:2002.11379 [eess.IV]; 2020.
Pham HH, Le TT, Tran DQ, Ngo DT, Nguyen HQ. Interpreting chest X-rays via CNNs that exploit hierarchical disease dependencies and uncertainty labels. arXiv:1911.06475; 2020.

Johnson AEW, Pollard TJ, Berkowitz SJ, Greenbaum NR, Lungren MP, Deng CY, et al. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Scientific Data. 2019;6(1):317. https://doi.org/10.1038/s41597-019-0322-0 PMID: 31831740

Rajpurkar P, Irvin J, Ball RL, Zhu K, Yang B, Mehta H, et al. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Medicine. 2018;15(11):1-17. https://doi.org/10.1371/journal.pmed.1002686 PMID: 30457988

Majkowska A, Mittal S, Steiner DF, Reicher JJ, McKinney SM, Duggan GE, et al. Chest radiograph interpretation with deep learning models: Assessment with radiologist-adjudicated reference standards and population-adjusted evaluation. Radiology. 2020;294(2):421-431. https://doi.org/10.1148/radiol.2019191293 PMID: 31793848
Nguyen NH, Nguyen HQ, Nguyen NT, Nguyen TV, Pham HH, Nguyen TN. A clinical validation of VinDr-CXR, an AI system for detecting abnormal chest radiographs. arXiv:2104.02256; 2021.

Ke A, Ellsworth W, Banerjee O, Ng AY, Rajpurkar P. CheXtransfer: performance and parameter efficiency of ImageNet models for chest X-ray interpretation. In: Proceedings of the Conference on Health, Inference, and Learning; 2021. pp. 116-124.

He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016. pp. 770-778.

Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. pp. 4700-4708.

Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016. pp. 2818-2826.
Tan M, Le Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In: International Conference on Machine Learning; 2019. pp. 6105-6114. PMLR.

Ben-Baruch E, Ridnik T, Zamir N, Noy A, Friedman I, Protter M, et al. Asymmetric loss for multi-label classification. arXiv:2009.14119; 2020.

Wu T, Huang Q, Liu Z, Wang Y, Lin D. Distribution-balanced loss for multi-label classification in long-tailed datasets. In: European Conference on Computer Vision; 2020. pp. 162-178. Springer, Cham.

Efron B, Tibshirani RJ. An Introduction to the Bootstrap. No. 57 in Monographs on Statistics and Applied Probability. Boca Raton, Florida, USA: Chapman & Hall/CRC; 1993.

Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision; 2017. pp. 618-626.
[]
[ "Exact Bayesian inference for level-set Cox processes with piecewise constant intensity function", "Exact Bayesian inference for level-set Cox processes with piecewise constant intensity function" ]
[ "Flávio B Gonçalves \nUniversidade Federal de Minas Gerais\nBrazil\n", "Bárbara C C Dias \nUniversidade Federal de Minas Gerais\nBrazil\n" ]
[ "Universidade Federal de Minas Gerais\nBrazil", "Universidade Federal de Minas Gerais\nBrazil" ]
[]
This paper proposes a new methodology to perform Bayesian inference for a class of multidimensional Cox processes in which the intensity function is piecewise constant. Poisson processes with piecewise constant intensity functions are believed to be suitable to model a variety of point process phenomena and, given its simpler structure, are expected to provide more precise inference when compared to processes with non-parametric and continuously varying intensity functions. The partition of the space domain is flexibly determined by a level-set function of a latent Gaussian process. Despite the intractability of the likelihood function and the infinite dimensionality of the parameter space, inference is performed exactly, in the sense that no space discretization approximation is used and MCMC error is the only source of inaccuracy. That is achieved by using retrospective sampling techniques and devising a pseudo-marginal infinite-dimensional MCMC algorithm that converges to the exact target posterior distribution. Computational efficiency is favored by considering a nearest neighbor Gaussian process, allowing for the analysis of large datasets. An extension to consider spatiotemporal models is also proposed. The efficiency of the proposed methodology is investigated in simulated examples and its applicability is illustrated in the analysis of some real point process datasets.
10.1080/10618600.2022.2092117
[ "https://export.arxiv.org/pdf/2012.05764v6.pdf" ]
228,083,806
2012.05764
af18d68cdee10d47820ea94cd6c7aafd1b61e266
Exact Bayesian inference for level-set Cox processes with piecewise constant intensity function 15 Nov 2022 Flávio B Gonçalves Universidade Federal de Minas Gerais Brazil Bárbara C C Dias Universidade Federal de Minas Gerais Brazil arXiv:2012.05764v6 [stat.ME] Keywords: Gaussian process; pseudo-marginal MCMC; Poisson estimator; retrospective sampling; NNGP Introduction Point pattern statistical models aim at modeling the occurrence of a given event of interest in some region.
This is often a compact region in R 2, such that each data point is interpreted as the location of occurrence of a given event of interest. The most widely used point process model is the Poisson process (PP), in which the number of events in any region has Poisson distribution and is independent for disjoint regions. The Poisson process dynamics is mainly determined by its intensity function (IF) which, roughly speaking, determines the instant rate of occurrence of the event of interest across the region being considered. If the IF is assumed to vary stochastically, the resulting process is called a Cox process. Several classes of Cox process models have already been proposed in the literature, including nonparametric models in which the IF varies continuously as a function of a latent Gaussian process (Møller et al., 1998; Gonçalves and Gamerman, 2018). For several of the real examples considered to fit those models, inference results suggest that a piecewise constant IF ought to be suitable to accommodate the variability of the observed process. Figure 1 shows three examples of estimated intensity functions regarding white oaks in Lansing Woods, USA, particles in a bronze filter section profile, and fires in a region of New Brunswick, Canada. All the datasets are available in the R package spatstat (Baddeley et al., 2015) and are revisited in the analyses presented in Section 5. The IF estimates are obtained via kernel smoothing using the R package splancs (Rowlingson et al., 2012) through the function kernel2d. Results suggest that a piecewise constant IF assuming up to five different values should be suitable to fit those datasets. This is based on the variance behavior of the Poisson process given its IF which, in turn, is based on the variance of the Poisson distribution. Other motivating examples can be found in Hildeman et al. (2018).
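The kind of kernel-smoothed intensity estimate produced by kernel2d can be mimicked in a few lines of code. The sketch below is a hypothetical Python analogue (splancs uses a quartic kernel; a Gaussian kernel is used here purely for simplicity, and the data and bandwidth are illustrative): the estimated surface integrates to the number of observed points.

```python
import numpy as np

def kernel_intensity(points, grid_x, grid_y, h):
    """Kernel-smoothed estimate of a Poisson process intensity on a grid.

    Each observed point contributes a Gaussian bump of bandwidth h; the sum of
    the bumps (no edge correction) integrates to the number of observed points.
    """
    xx, yy = np.meshgrid(grid_x, grid_y, indexing="ij")
    lam = np.zeros_like(xx)
    for px, py in points:
        lam += np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2.0 * h ** 2))
    return lam / (2.0 * np.pi * h ** 2)

# Toy example on an enlarged window around the unit square (hypothetical data).
pts = [(0.5, 0.5), (0.3, 0.7), (0.6, 0.4)]
grid = np.linspace(-0.5, 1.5, 401)
lam_hat = kernel_intensity(pts, grid, grid, h=0.1)
```

A piecewise constant fit with K levels can then be eyeballed by quantizing such a surface into K bands, which is essentially how Figure 1 motivates the model.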
A piecewise constant structure for the IF also allows for the analysis of the point pattern phenomenon to be performed under a cluster analysis perspective. This may be quite useful and interpretable in some applications. Each region with a constant IF constitutes a cluster and the clustering structure may be related to some practical aspect of the problem. This paper considers a class of Cox process models with piecewise constant intensity function that is able to define the regions in which the IF is constant in a flexible way. The motivation is to have models that are suitable to explain and predict the variability of point process phenomena, yet providing more precise estimates than methodologies with continuously varying IF's. That is achieved through the level-set Cox process (LSCP), originally proposed in Hildeman et al. (2018), which is based on a structure proposed in Dunlop et al. (2016) to define a piecewise constant function in a given space by the levels of a latent Gaussian process (GP). This means that the region in which the intensity function assumes a given value is defined by the region in which a latent Gaussian process assumes values in a given interval. This construction is considerably flexible to define space partitions, allowing for various shapes and sizes of the regions, including disjoint regions with the same IF. Hildeman et al. (2018) actually proposes a more general version of the level-set Cox process in which the observed point process follows independent log-Gaussian Cox processes in each region defined by the random partition, meaning that the IF depends on an independent Gaussian process in each region. Therefore, the level-set Cox process considered in this paper is a particular case of the model proposed in Hildeman et al. (2018), in which the IF is constant inside each region. The methodology from Hildeman et al.
(2018) however considers a discretized (finite-dimensional) approximation of the originally proposed model to approach the problem of performing statistical inference based on observations of the level-set Cox process. The authors argue that "some finite-dimensional approximation of the LSCP model is needed if it is to be used for inference". The discrete approximation is based on a regular lattice that defines a joint model on the number of observations in each cell of the lattice as conditionally (on the respective rates) independent Poisson distributions. An important consequence of this approach, as mentioned by the authors, is that the information on the fine-scale behavior of the point pattern is lost. Furthermore, the latent Gaussian processes from the original LSCP are replaced by the respective multivariate normal distributions on one location inside each square of the lattice (usually the center). Finally, although the authors provide results to establish that the posterior distribution based on the discrete approximation converges (in total variation distance) to the posterior distribution under the continuous model, no bounds for the approximation error are provided. The particular case of the LSCP proposed in Hildeman et al. (2018) in which the IF is piecewise constant with only two levels is proposed in Myllymäki and Penttinen (2010), where the authors also consider a discrete approximation of the process to perform Bayesian inference. This, in turn, is a special case of the random-set-generated Cox process described in Illian et al. (2008, p. 382), in which the random dynamics of the random partition (in 2 regions) is not specified. The main aim of this paper is to devise an exact methodology to perform Bayesian inference for level-set Cox process models in which the IF is piecewise constant.
The term exact here means that no discrete approximation of any kind is assumed and MCMC error is the only source of inaccuracy, as in any standard Bayesian analysis. This is not a trivial task due to: i) the intractability of the likelihood function of the proposed model (to be made clear in Section 2); and ii) the infinite dimensionality of the model's parameter space due to the latent Gaussian process component. These two issues arise in several classes of statistical models nowadays (see, for example, Beskos et al., 2006; Gonçalves and Gamerman, 2018; Gonçalves et al., 2017) and, given the high complexity involved, it is common to only find solutions in the literature that are based on discretization of continuous processes, as is the case with LSCP. The use of such approximations however has considerable disadvantages (see Simpson et al., 2016). It induces a bias in the estimates, which is typically hard to quantify and control. Furthermore, even if limiting results guarantee some type of convergence to the continuous model when the discretization gets finer, the computational cost involved to get reasonably good approximations may be unknown and/or too high. Finally, discrete approximations may lead to serious model mischaracterization, compromising the desired properties of the model. The exact inference methodology proposed in this paper makes use of a simulation technique called retrospective sampling, which basically allows one to deal with infinite-dimensional random variables by unveiling only a finite-dimensional representation of them. A pseudo-marginal MCMC algorithm that converges to the exact posterior distribution of all the unknown quantities in the model is proposed. These quantities include the intensity function and the random partition that defines the piecewise constant structure. Also, the Monte Carlo approach makes it straightforward to sample from the posterior predictive distribution of various appealing functions.
The known high computational cost involved in algorithms that deal with the simulation of Gaussian processes is mitigated by considering the nearest neighbor Gaussian process (NNGP) (Datta et al., 2016). This has a particular conditional independence structure that leads to a sparse covariance structure and, consequently, to huge computational gains when compared to traditional Gaussian processes. An example with more than 5 thousand observations is presented in Section 5. This is, to the best of our knowledge, the first work to consider a latent NNGP within a complicated likelihood structure that does not allow for directly sampling from the posterior or full conditional distribution of the NNGP component. In this sense, this paper also offers some methodological contributions to deal with latent NNGPs. Another major contribution in this paper is the introduction of an extension of LSCP to consider spatiotemporal processes. This means that the point process is observed on the same region over discrete time and a temporal correlation structure is introduced in the model to explain the evolution of both the space partition and the IF levels. Section 2 of the paper presents the level-set Cox process model and discusses its most important properties. The proposed MCMC algorithm and the extension to consider spatiotemporal models are presented in Section 3, which also addresses some relevant computational issues. Section 4 explores some simulated examples to discuss important aspects and investigate the efficiency of the proposed methodology. Finally, Section 5 applies the methodology to some real datasets. For one of them, results are compared to those obtained with a continuously varying IF Cox process model. Level-set Cox process models Let $Y = \{Y(s) : s \in S\}$ be a Poisson process in some compact region $S \subset \mathbb{R}^n$ with intensity function $\lambda_S = \{\lambda(s) : s \in S\}$, $\lambda(s) : S \to \mathbb{R}^+$, and define $S_K = \{S_1, \ldots, S_K\}$, $K \in \mathbb{N}$, to be a finite partition of S.
We shall focus on the case where $S \subset \mathbb{R}^2$ given its practical appeal, although all the definitions and results to be presented in this paper are valid for $\mathbb{R}^n$ or any other measurable space (see Kingman, 1993). Now let $\lambda = (\lambda_1, \ldots, \lambda_K)$ be a vector of positive parameters such that the IF of Y on $S_k$ is $\lambda_k$, $k = 1, \ldots, K$. Let also $c = (c_1, \ldots, c_{K-1}) \in \mathbb{R}^{K-1}$, $-\infty = c_0 < c_1 < \ldots < c_{K-1} < c_K = \infty$, be the values that define the level sets of a latent Gaussian process β on S and, consequently, the finite partition $S_K$ of S. The level-set Cox process model is then defined as follows:

$$(Y \mid \lambda_S) \sim PP(\lambda_S), \tag{1}$$
$$\lambda(s) = \sum_{k=1}^{K} \lambda_k I_k(s), \quad s \in S, \tag{2}$$
$$S_k = \{s \in S;\ c_{k-1} < \beta(s) < c_k\}, \quad k = 1, \ldots, K, \tag{3}$$
$$\beta \sim GP(\mu, \Sigma(\sigma^2, \tau^2)), \tag{4}$$
$$\pi(c) = \mathbb{1}(c_1 < \cdots < c_{K-1}), \tag{5}$$
$$\lambda \sim \text{prior}, \tag{6}$$

where $I_k(s)$ is the indicator of $\{s \in S_k\}$ and $GP(\mu, \Sigma(\sigma^2, \tau^2))$ is a stationary Gaussian process with mean µ and covariance function $\Sigma(\sigma^2, \tau^2)$, where $\sigma^2$ is the stationary variance and $\tau^2$ is a range parameter indexing the correlation function. The prior on λ will be properly defined in Section 3.4.1. Note that the levels of the IF are not a function of the GP, which simply specifies, along with c, the partition of S that defines the piecewise constant structure. Finally, β, c and the $\lambda_k$'s are assumed to be independent a priori. One may also consider an IF of the type $\lambda(s) = \sum_{k=1}^{K} \kappa(s)\lambda_k I_k(s)$, where κ(s) is a known offset term. This is useful for example when observing cases of some human disease in a region with varying population density. Notice that the likelihood of the proposed level-set Cox process model is not identifiable. That is because, for each point in the (infinite-dimensional) parameter space, there are an uncountable number of other points that return the same likelihood value. That is basically implied by the non-identification of the scale of the GP β. In order to see that, let us redefine β as $\beta = \mu + \sigma\beta^*$, where $\beta^* \sim N(0, \Sigma(1, \tau^2))$.
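To make the generative model in (1)-(3) concrete, the sketch below simulates one realization on the unit square by thinning a homogeneous Poisson process at the maximal rate. This is an illustration, not the authors' code: for brevity a fixed smooth surface stands in for a draw of the latent GP β, and all thresholds and rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lscp(beta, c, lam, rng):
    """Simulate Y | lambda_S ~ PP(lambda_S) on [0,1]^2 by thinning.

    beta : callable latent field standing in for a GP draw
    c    : sorted thresholds defining the level sets (K-1 values)
    lam  : rates lam_1, ..., lam_K, one per region
    """
    lam = np.asarray(lam, dtype=float)
    lam_max = lam.max()
    n = rng.poisson(lam_max)                  # dominating PP(lam_max), area 1
    pts = rng.uniform(size=(n, 2))
    # region index k of each point: c_{k-1} < beta(s) <= c_k
    k = np.searchsorted(c, [beta(s) for s in pts])
    keep = rng.uniform(size=n) * lam_max < lam[k]
    return pts[keep]

# Illustrative latent surface and parameters (K = 2 regions).
beta = lambda s: np.sin(2.0 * np.pi * s[0])
y = simulate_lscp(beta, c=np.array([0.0]), lam=[50.0, 200.0], rng=rng)
```

By standard thinning arguments the retained points form a Poisson process with the piecewise constant intensity (2), so the expected total count here is 50 times the area of the low region plus 200 times the area of the high one.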
Then, any transformation of the type $\mu^* = a\mu + b$, $\sigma^* = a\sigma$ and $c^*_k = b + a c_k$, $\forall b \in \mathbb{R}$, $a \in \mathbb{R}^+$, $\forall k$, defines the same partition $S_K$ and, consequently, the same likelihood value. A simple way to solve this problem whilst not compromising the flexibility of the model is to fix either c or the hyperparameters $(\mu, \sigma^2)$. We shall adopt the latter, which also avoids the high complexity involved in estimating those parameters. Model identifiability could also be compromised, in theory, by label-switching of the coordinates of λ. Nevertheless, given the complexity of the sample space, this is not expected to happen in an MCMC context, as was the case for all the examples to be presented in this paper. A theoretical limitation of the model is the neighboring structure implied by the continuity of the latent Gaussian process. For any model with K ≥ 3, regions 1 and K share a border with only one other region, 2 and K − 1, respectively, and any other region k shares a border with regions k − 1 and k + 1. Whilst this represents a clear theoretical limitation of the model, it is not expected to be a practical problem in most cases. That is because the uncertainty around the borders is higher than the uncertainty away from them, so the need to pass through a third region to change between two other ones should typically not affect the model fitting. Furthermore, the estimated ordering of the $\lambda_k$'s will consider the likelihood of the different neighboring configurations. Figure 2 shows an example of a neighboring structure with K = 3 that is not contemplated by the proposed model and three possible structures that may be estimated. Despite this restriction, we highlight the great flexibility of the model to define the partition $S_K$ of S. Basically, given the neighboring restriction described above, any smooth partition of the space is contemplated by the model. In particular, it is possible to have disjoint regions with the same IF.
We consider the number of levels K for the IF to be fixed. The choice of this value may be based on prior information about the phenomenon, the type of structure the researcher wants to estimate, or even some empirical analysis of the data, for example, based on kernel smoothing estimates of the IF (see Rowlingson et al., 2012). The choice of K should consider the trade-off between fitting and parsimony and take into account the scale of the Poisson distribution. Finally, the piecewise constant structure allows for the analysis to be performed under a cluster analysis perspective. This may be useful and interpretable in some applications. Each region $S_k$ constitutes a cluster and the clustering structure may be related to some practical aspect of the problem. Nearest neighbor Gaussian process prior for β The computational bottleneck of the methodology proposed in this paper is the sampling of the Gaussian process β, which is performed not only when updating this component but also, retrospectively, when updating $N^*$ and λ. The cost to simulate from a d-dimensional multivariate normal distribution is $O(d^3)$ and, in our case, d will typically be in the order of $10^3$ or $10^4$. Several solutions have been proposed in the literature to deal with this problem. Some of them are exact in the sense that the approximation process defines a valid probability measure. This property is highly desirable as it guarantees that the analysis is performed under the Bayesian paradigm. In this work, we consider the use of the nearest neighbor Gaussian process (NNGP), proposed in Datta et al. (2016). The NNGP process was originally designed to approximate some Gaussian process, called the parent GP, in classical geostatistical problems in which the (discretely) observed process is either the GP itself or the GP + i.i.d. noise. In our context, the GP is latent in a more complex way. Nevertheless, it is used only to determine the partition of S and not the actual values of the IF.
For that reason and because the NNGP defines a valid Gaussian process probability measure, it is reasonable to see the NNGP simply as the GP prior for β and not as an approximation of some desirable traditional GP. The NNGP is a valid Gaussian process, devised from a parent $GP(\mu, \Sigma(\sigma^2, \tau^2))$ by imposing some conditional independence structure that leads to a sparse covariance structure. For a reference set $\mathcal{S} = \{s_1, \ldots, s_r\}$ and a maximum number m of neighbors, the NNGP factorizes the distribution of β (conditional on parameters) as follows:

$$\pi(\beta) = \pi(\beta_{\mathcal{S}})\,\pi(\beta_{S \setminus \mathcal{S}} \mid \beta_{\mathcal{S}}), \tag{7}$$
$$\pi(\beta_{\mathcal{S}}) = \pi_{PG}(\beta_{s_1})\,\pi_{PG}(\beta_{s_2} \mid \beta_{s_1})\,\pi_{PG}(\beta_{s_3} \mid \beta_{s_1}, \beta_{s_2}) \cdots \pi_{PG}(\beta_{s_{m+1}} \mid \beta_{s_1}, \ldots, \beta_{s_m})\; \pi_{PG}(\beta_{s_{m+2}} \mid \beta_{N(s_{m+2})}) \cdots \pi_{PG}(\beta_{s_r} \mid \beta_{N(s_r)}), \tag{8}$$
$$\pi_{PG}(\beta_{S_0} \mid \beta_{\mathcal{S}}) = \prod_{i=1}^{I} \pi_{PG}(\beta_{\tilde{s}_i} \mid \beta_{\tilde{N}(\tilde{s}_i)}), \quad \text{for any finite set } S_0 = \{\tilde{s}_1, \ldots, \tilde{s}_I\} \subset S \setminus \mathcal{S}, \tag{9}$$

where $\pi_{PG}$ is the respective density under the parent GP measure, $N(s_i)$ is the set of the m closest neighbors of $s_i$ in $\{s_1, \ldots, s_{i-1}\}$, for $i \geq m + 2$, and $\tilde{N}(\tilde{s}_i)$ is the set of the m closest neighbors of $\tilde{s}_i$ in $\mathcal{S}$. We shall refer to the resulting NNGP process as $NNGP(\mu, \Sigma(\sigma^2, \tau^2))$. In traditional geostatistical models, in which the GP is observed (with error) in a missing completely at random (MCAR) set of locations, the reference set is conveniently defined to be the locations of the observations. In our context however, by the very nature of the process being observed, that is not a reasonable choice. Instead, we set $\mathcal{S}$ to be a regular lattice on S. Based on the results in Datta et al. (2016) and results of several simulated examples with our model, we set r = 2500 and m = 16. The NNGP leads to massive gains in computational cost when compared to traditional GPs due to its particular conditional independence structure.
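The directed conditioning sets behind the factorization (8) are easy to construct. The sketch below is illustrative (not the authors' implementation): for each reference location in a fixed ordering, it collects the indices of its m nearest neighbours among the earlier locations, which is exactly what makes each Gaussian conditional at most (m+1)-dimensional.

```python
import numpy as np

def neighbor_sets(locs, m):
    """For ordered reference locations, return for each s_i the indices of its
    m nearest neighbours among s_1, ..., s_{i-1} (all of them when i <= m+1).

    The NNGP density is then the product of the low-dimensional Gaussian
    conditionals pi_PG(beta_{s_i} | beta_{N(s_i)}), as in (8).
    """
    sets = []
    for i in range(len(locs)):
        d = np.linalg.norm(locs[:i] - locs[i], axis=1)  # distances to predecessors
        sets.append(np.argsort(d)[:m])
    return sets

rng = np.random.default_rng(3)
locs = rng.uniform(size=(50, 2))   # hypothetical reference lattice stand-in
sets = neighbor_sets(locs, m=5)
```

Because every conditioning set only looks backwards in the ordering, the implied precision matrix is sparse, which is the source of the computational gains mentioned above.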
Moreover, the distribution $\pi(\beta_{S_0} \mid \beta_{\mathcal{S}})$ is conditionally independent among the locations in any finite set $S_0 \subset S \setminus \mathcal{S}$, where $\mathcal{S}$ denotes the reference set, which means that the algorithm to sample from this distribution can be parallelized. This is an appealing feature in our case as all the locations from Y and N are in $S \setminus \mathcal{S}$ and will be sampled from the NNGP prior on every iteration of the MCMC algorithm. The specific steps of the proposed MCMC algorithm that can be parallelized are indicated in the algorithm shown in Appendix B. Bayesian inference Inference for the level-set Cox process model is performed under the Bayesian paradigm, meaning that it is based on the posterior distribution of all the unknown quantities of the model. As mentioned before, the stationary mean and variance of the Gaussian process are fixed to identify the model. We also choose to fix the correlation parameter $\tau^2$. We believe this can be done in a reasonable way based on the scale of the domain S; this issue will be discussed in Section 3.4.1 and explored in the simulated studies in Section 4.1. Also, fixing the parameters that index the GP brings huge computational gains to the inference process. Regarding parameter c, we set a uniform improper prior with the restriction $c_1 < \ldots < c_{K-1}$. Defining $\theta = \{\lambda, c, \beta\}$ to be all the unknown quantities of the model, the likelihood function for the level-set Cox process model is obtained by writing the density of Y w.r.t. the measure of a unit intensity Poisson process on S (see Gonçalves and Franklin, 2019), which is given by

$$L(\theta; Y) \propto \exp\Big\{-\sum_{k=1}^{K} \lambda_k \mu_k\Big\} \prod_{k=1}^{K} \lambda_k^{|Y_k|}, \tag{11}$$

where $\mu_k$ is the area of $S_k$ and $|Y_k|$ is the number of events from Y on $S_k$. The posterior distribution of θ, as well as the full conditional distribution of any of its components, has density proportional to the joint density $\pi(\theta, Y) = L(\theta; Y)\pi(\theta)$, such that

$$\pi(\theta, Y) \propto \exp\Big\{-\sum_{k=1}^{K} \lambda_k \mu_k\Big\} \prod_{k=1}^{K} \lambda_k^{|Y_k|}\, \pi(\lambda_k) \prod_{k=1}^{K-1} \pi(c_k)\, \mathbb{1}(c_1 < \cdots < c_{K-1})\; \pi_{PG}(\beta), \tag{12}$$

where $\pi_{PG}(\beta)$ is written w.r.t. some suitable dominating measure, which is irrelevant for the derivation of the inference methodology. Given its complexity, the posterior distribution of θ is assessed via MCMC. This is not a trivial task for two main reasons. First, the MCMC algorithm is infinite-dimensional because of the infinite dimensionality of the coordinate β. Second, the likelihood in (11) is analytically intractable, since the areas $\mu_k$ of the regions $S_k$ cannot be computed exactly. It is then quite challenging to devise a valid and efficient MCMC algorithm that is exact in the sense of converging to the exact posterior distribution of θ. To deal with the infinite dimensionality of β we resort to a simulation technique called retrospective sampling. In the context of simulation of infinite-dimensional random variables, this means that only a finite-dimensional representation of the infinite-dimensional r.v. is simulated and this representation has the following two properties: i) it is enough to unveil only this representation to execute the algorithm in context (an MCMC in our case); ii) any finite-dimensional part of the infinite-dimensional remainder of that r.v. can be simulated conditional on this representation. This means that the GP β is to be simulated only at a finite (but random) collection of locations on each iteration of the MCMC chain. It is the particular random structure of those locations that guarantees the two properties above. The idea of retrospective sampling in the context of simulation of infinite-dimensional r.v.'s was introduced in Beskos and Roberts (2005) to perform exact simulation of diffusion paths. It was later used in a statistical context in several works (see, for example, Beskos et al., 2006; Gonçalves and Gamerman, 2018; Gonçalves et al., 2017).
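Given the region counts |Y_k| and areas µ_k, the log of the likelihood in (11) is trivial to evaluate. The sketch below (with hypothetical counts and areas) also illustrates that, for known areas, it is maximized at the natural rates λ_k = |Y_k|/µ_k; the whole inferential difficulty of the model lies in the µ_k being unavailable in closed form.

```python
import numpy as np

def log_lik(lam, counts, areas):
    """log L(theta; Y) up to a constant for a piecewise constant intensity:
    -sum_k lam_k * mu_k + sum_k |Y_k| * log(lam_k), as in (11)."""
    lam = np.asarray(lam, dtype=float)
    return float(-np.sum(lam * areas) + np.sum(counts * np.log(lam)))

counts = np.array([30, 12])      # hypothetical counts |Y_1|, |Y_2|
areas = np.array([0.5, 0.5])     # region areas mu_1, mu_2
mle = counts / areas             # per-region maximizers |Y_k| / mu_k
```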
The intractability of the likelihood function precludes us from performing standard Metropolis-Hastings (MH) steps for any coordinate of the chain since all of them appear in this function. Our solution resorts to a powerful and flexible general MCMC algorithm called the pseudo-marginal Metropolis-Hastings (PMMH), proposed in Andrieu and Roberts (2009). This algorithm allows us to replace the likelihood terms in the expression of the MH acceptance probability with a pointwise unbiased and almost surely non-negative estimator of the likelihood function. This leads to an augmented Markov chain that has the desired posterior distribution as the marginal invariant distribution of the chain, marginalized w.r.t. the random seed of the aforementioned unbiased estimator. Naturally, the efficiency of the algorithm relies on the properties of that estimator. Roughly speaking, the smaller its variance, the better (see Andrieu and Vihola, 2015). The only unknown quantities in the expression of the likelihood function in (11) are the areas $\mu_k$. This means that, in order to obtain an unbiased estimator for the likelihood, we need an unbiased estimator for $M = \exp\{-\sum_{k=1}^{K} \lambda_k \mu_k\}$. Note that this quantity does not depend on the observed Poisson process events. Although unbiased estimators for the $\mu_k$'s can be easily obtained using uniform r.v.'s on S, it is not straightforward to devise an unbiased estimator for M from this. We resort to a neat class of unbiased estimators called the Poisson estimator (see Beskos et al., 2006), which devises an unbiased estimator for M as a function of a random Poisson number of uniformly distributed r.v.'s on S. The unbiased estimator for M and some of its important properties are given in Propositions 1 and 2, respectively. Proposition 1.
Let $N^*$ be a unit rate Poisson process on the cylinder with base S and height in $[0, +\infty)$, and define $N = g(N^*, \lambda^*)$ as the projection on S of the points from $N^*$ that have height smaller than $\lambda^*$, where $\lambda^* = (\delta\lambda_M - \lambda_m)$, $\lambda_M = \max_k\{\lambda_k\}$ and $\lambda_m = \min_k\{\lambda_k\}$. Then, for any δ > 1, an unbiased and almost surely positive estimator for M is given by

$$\hat{M} = e^{-\mu(S)\lambda_m} \prod_{k=1}^{K} \left(\frac{\delta\lambda_M - \lambda_k}{\delta\lambda_M - \lambda_m}\right)^{|N_k|}, \tag{13}$$

where µ(S) is the area of S and $|N_k|$ is the number of points from N falling in $S_k$. Proposition 2. The estimator $\hat{M}$ has a finite variance which is a decreasing function of δ. Proof. See Appendix A for the proofs of Propositions 1 and 2. In our retrospective sampling context, it is N that determines the locations at which β is to be simulated, besides the locations from Y. Furthermore, the mean number of locations from N is $(\delta\lambda_M - \lambda_m)\mu(S)$, which gives the intuition for the result in Proposition 2. This establishes a trade-off related to the choice of δ, as an increase in its value reduces the variance of $\hat{M}$ (and consequently improves the mixing of the MCMC chain) but increases the computational cost per iteration of the MCMC algorithm (and vice-versa). We define a pseudo-marginal MCMC algorithm to sample from the posterior distribution of θ based on the estimator in (13). On each iteration of the Markov chain, the general algorithm proposes a move $(\theta, N^*) \to (\ddot{\theta}, \ddot{N}^*)$ from a density $q(\ddot{\theta}, \ddot{N}^* \mid \theta, N^*) = q(\ddot{\theta} \mid \theta)\, q(\ddot{N}^* \mid N^*)$, where $q(\ddot{N}^* \mid N^*) = q(\ddot{N}^*)$ is the Poisson process defined in Proposition 1, which we shall call the pseudo-marginal proposal, and accepts it with probability given by

$$1 \wedge \frac{\tilde{\pi}(\ddot{\theta}; \ddot{N}^*)}{\tilde{\pi}(\theta; N^*)} \frac{q(\theta \mid \ddot{\theta})}{q(\ddot{\theta} \mid \theta)}, \tag{14}$$

where

$$\tilde{\pi}(\theta; N^*) = e^{-\mu(S)\lambda_m} \prod_{k=1}^{K} \left(\frac{\delta\lambda_M - \lambda_k}{\delta\lambda_M - \lambda_m}\right)^{|N_k|} \lambda_k^{|Y_k|}\, \pi(\lambda_k)\; \pi(c)\, \pi_{PG}(\beta). \tag{15}$$

The algorithm above is bound to be inefficient as it is, given the complexity of the chain's coordinates. We adopt simple yet important changes to obtain a reasonably efficient algorithm.
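The unbiasedness of (13) is easy to check by simulation. The sketch below uses illustrative choices (two vertical half-squares of the unit square with rates 1 and 2, and δ = 1.2): it draws N, which by construction is a homogeneous PP(λ*) on S, evaluates one realization of the estimator, and compares the Monte Carlo mean with M = exp{−Σ_k λ_k µ_k}.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_estimator(lam, region_of, delta, rng):
    """One draw of the estimator (13) on the unit square. N, the projection of
    the points of N* with height below lam_star, is a homogeneous PP(lam_star).
    region_of maps a point to its region index k."""
    lam = np.asarray(lam, dtype=float)
    lam_M, lam_m = lam.max(), lam.min()
    lam_star = delta * lam_M - lam_m
    n = rng.poisson(lam_star)                       # unit square has area 1
    pts = rng.uniform(size=(n, 2))
    ks = np.array([region_of(s) for s in pts], dtype=int)
    counts = np.bincount(ks, minlength=len(lam))    # |N_k| for each region
    ratios = (delta * lam_M - lam) / (delta * lam_M - lam_m)
    return np.exp(-lam_m) * np.prod(ratios ** counts)

# Hypothetical example: two vertical half-squares with rates 1 and 2.
lam = [1.0, 2.0]
region_of = lambda s: int(s[0] > 0.5)
draws = np.array([poisson_estimator(lam, region_of, 1.2, rng) for _ in range(20000)])
```

Since each |N_k| is Poisson with mean λ*µ_k, the product in (13) has expectation exp{Σ_k µ_k(λ_m − λ_k)}, and the leading factor restores exp{−Σ_k λ_k µ_k}, which is what the check below verifies numerically.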
First, we split the coordinates into blocks, making this a Gibbs sampling with pseudo-marginal MH steps. This implies that the acceptance probability of any block is also given by (14). Also, the choice to define N as a function of $N^*$, with the distribution of the latter being independent of θ, instead of working directly with N, allows us to sample $N^*$ alone as one block of the Gibbs sampler. Furthermore, note that $N^*$ has an infinite collection of points but, in order to evaluate the acceptance probability in (14), it is enough to unveil N, the projection on S of the points of $N^*$ that are below $\lambda^*$. This means that, as is the case for β, $N^*$ is also sampled retrospectively. The blocks of the Gibbs sampling are: $N^*$, β, λ, c. The algorithm to sample from each block is described below. Sampling $N^*$ The standard version of the pseudo-marginal algorithm proposes a move in $N^*$ from the pseudo-marginal proposal $q(N^*)$ and accepts it with probability given by (14). Furthermore, the fact that the acceptance probability in (14) depends on $N^*$ only through the points falling below $\lambda^*$ implies that we only need to unveil N in order to update $N^*$. Nevertheless, since $N^*$ will typically have many points falling below $\lambda^*$, this proposal might have a low acceptance rate which, in turn, may compromise the mixing of the chain. Instead, we adopt a proposal distribution that updates $N^*$ below and above $\lambda^*$ separately. The latter is proposed from the pseudo-marginal proposal and accepted with probability 1, given that it does not appear in (14). For that reason, this step is only performed conceptually and points from $N^*$ above $\lambda^*$ are sampled retrospectively, if required (when the proposal value for λ leads to a higher value of $\lambda^*$), from a PP(1). For the points below $\lambda^*$, we split S into L regular squares (assuming that S is a rectangle) and update $N^*$ in each of the respective cylinders separately.
Standard properties of Poisson processes imply that, under the pseudo-marginal proposal, N is mutually independent among the L cylinders and follows a PP($\lambda^*$) in each of them. This splitting strategy imposes an optimal scaling problem w.r.t. L. Empirical analyses for several simulated examples (some of which are presented in Section 4) suggest that the value of L should be chosen so that the average acceptance rate among the L squares is around 0.8. We shall refer to N restricted to the l-th square as $N_l$. A move $N_l \to \ddot{N}_l$ is accepted with probability

$$\alpha_{N_l} = \prod_{k=1}^{K} \left(\frac{\delta\lambda_M - \lambda_k}{\delta\lambda_M - \lambda_m}\right)^{|\ddot{N}_{l,k}| - |N_{l,k}|}, \tag{16}$$

where $|N_{l,k}|$ is the number of points from $N_l$ falling in $S_k$. Sampling β The latent process β is sampled retrospectively, due to its infinite dimensionality. This means that it is sampled at a finite collection of locations which are enough to perform all the steps of the MCMC algorithm. This collection is defined by the locations of the reference set $\mathcal{S}$, Y and N, with the third one changing along the MCMC on the update steps of $N^*$ and λ. The first important fact to be noted here is that we are unable to sample β directly from its full conditional distribution. Second, the proposal for β has to be such that the expression of the acceptance probability in (14) can be analytically computed. More specifically, we need a proposal distribution that cancels out the term $\pi_{PG}(\beta)$. The conditional independence structure of the NNGP demands extra care to specify its proposal distribution. For example, it is unwise to define a proposal that, at each iteration of the MCMC, fixes β at a random finite collection of points from S and proposes the remainder from the NNGP prior. Conditional distributions under the NNGP that do not follow the ordering in $\mathcal{S}$ will not benefit from the conditional independence structure to have computational gains. We adopt a non-centered random walk proposal for β. More specifically, a move $\beta \to \ddot{\beta}$ is proposed from:

$$\ddot{\beta}(s) = \sqrt{1 - \varsigma^2}\, \beta(s) + \varsigma\, \varepsilon(s), \quad s \in S, \qquad \varepsilon \sim NNGP(0, \Sigma). \tag{17}$$
This proposal is called the preconditioned Crank-Nicolson proposal (pCN) and was introduced by Cotter et al. (2013), although not in an NNGP context. In a finite-dimensional context, the pCN proposal differs slightly from the traditional centered random walk but, unlike the latter, leads to an acceptance probability that does not depend on the prior density of the component being updated. Furthermore, the pCN proposal is valid also in the infinite-dimensional context, as defined in (17), whereas the centered random walk is not. The proposal variance $\varsigma^2$ is chosen so as to have an acceptance rate of approximately 0.234 (see Cotter et al., 2013). The acceptance probability of a move $\beta \to \ddot{\beta}$ is given by

$$\alpha_\beta = 1 \wedge \prod_{k=1}^{K} \left(\frac{\delta\lambda_M - \lambda_k}{\delta\lambda_M - \lambda_m}\right)^{|\ddot{N}_k| - |N_k|} \lambda_k^{|\ddot{Y}_k| - |Y_k|}, \tag{18}$$

where $|N_k|$ and $|Y_k|$ are the respective values obtained from β, and $|\ddot{N}_k|$ and $|\ddot{Y}_k|$ are the respective values obtained from $\ddot{\beta}$. Sampling λ and c The vector λ is sampled jointly from a proposal given by a Gaussian random walk with a properly tuned covariance matrix that is adapted, based on the respective empirical covariance matrix of the chain, up to a certain iteration (see Roberts and Rosenthal, 2009), so as to have the desired acceptance rate, varying from 0.4 to 0.234 according to the dimension of λ. The acceptance probability of a move $\lambda \to \ddot{\lambda}$ is given by

$$\alpha_\lambda = 1 \wedge \left\{ e^{-\mu(S)(\ddot{\lambda}_m - \lambda_m)} \prod_{k=1}^{K} \frac{\left(\frac{\delta\ddot{\lambda}_M - \ddot{\lambda}_k}{\delta\ddot{\lambda}_M - \ddot{\lambda}_m}\right)^{|\ddot{N}_k|}}{\left(\frac{\delta\lambda_M - \lambda_k}{\delta\lambda_M - \lambda_m}\right)^{|N_k|}} \left(\frac{\ddot{\lambda}_k}{\lambda_k}\right)^{|Y_k|} \frac{\pi(\ddot{\lambda})}{\pi(\lambda)} \right\}, \tag{19}$$

where π(λ) is the prior density of λ to be defined in Section 3.4.1. Also, $|\ddot{N}_k|$ is the respective value obtained from $\ddot{N} = g(N^*, \ddot{\lambda}^*)$ and $\ddot{\lambda}^* = (\delta\ddot{\lambda}_M - \ddot{\lambda}_m)$. The parameter vector c is jointly sampled from a uniform random walk proposal with a common (and properly tuned) length for each of its components.
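A quick sanity check of why the prior density cancels from the acceptance probability: in the standard pCN update, if β ~ N(0, Σ) and ε ~ N(0, Σ) independently, then √(1 − ς²)β + ςε ~ N(0, Σ), so the move leaves the Gaussian prior invariant. The sketch below verifies this empirically for an illustrative 2-d Σ (the covariance and step size are assumptions, not values from the paper).

```python
import numpy as np

rng = np.random.default_rng(2)

def pcn_proposal(beta, chol, varsigma, rng):
    """Preconditioned Crank-Nicolson move: sqrt(1 - varsigma^2) * beta
    + varsigma * eps, with eps ~ N(0, Sigma) drawn via the Cholesky factor."""
    eps = chol @ rng.standard_normal(beta.shape)
    return np.sqrt(1.0 - varsigma ** 2) * beta + varsigma * eps

Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])
chol = np.linalg.cholesky(Sigma)
x = chol @ rng.standard_normal((2, 50000))      # 50000 draws from N(0, Sigma)
y = pcn_proposal(x, chol, varsigma=0.3, rng=rng)
emp = np.cov(y)                                 # should stay close to Sigma
```

Because the marginal covariance after the move is (1 − ς²)Σ + ς²Σ = Σ, the Gaussian prior contributes nothing to the MH ratio, which is exactly what yields the simple acceptance probabilities (18) and (20).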
If the ordering of the proposed values is preserved, a move c → c̈ is accepted with probability

α_c = 1 ∧ ∏_{k=1}^{K} ( (δλ_M − λ_k) / (δλ_M − λ_m) )^{|N̈_k| − |N_k|} (λ_k)^{|Ÿ_k| − |Y_k|},    (20)

where N̈_k is the respective value obtained from the region S̈_k defined by c̈.

Computational aspects

Covariance function and model identifiability

The specification of the covariance function Σ(σ², τ²) of the parent process in the NNGP prior plays an important role in the proposed methodology. Empirical results from several simulated examples (omitted here) suggest that the powered exponential with exponent close to 2 is a good and robust choice. It is given by

Cov(β(s), β(s′)) = exp{ −(1/(2τ²)) |s − s′|^γ },    (21)

where |s − s′| is the Euclidean distance between s and s′, and we set γ = 1.95. The specification of the parameter τ² is related to the smoothness of the estimated IF and to model identifiability issues, as discussed next. We call the reader's attention to the fact that the Poisson process likelihood in (11) is ill-posed. Note that the likelihood function increases indefinitely as the IF increases in balls centered around the observations, with these balls getting smaller, while the IF approaches zero outside them. The Cox process formulation is a way to regularize the likelihood function by assigning a prior to the IF, which is, in our particular level-set formulation, a nonparametric prior. Naturally, this prior has a great impact on the resulting posterior distribution. In particular, Bayes' theorem implies that the posterior distribution of β is absolutely continuous w.r.t. its prior, so that all the almost sure properties under the prior are preserved under the posterior. Due to the aforementioned misbehavior of the likelihood function, the data, through the likelihood, will favor values of the IF with the characteristics described at the beginning of this paragraph.
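A sketch of the covariance in (21) helps make the role of τ² concrete (the function name is ours):

```python
import numpy as np

def powered_exp_cov(locs1, locs2, tau2, gamma=1.95):
    """Powered exponential covariance (21), with unit variance:
    Cov(beta(s), beta(s')) = exp(-|s - s'|**gamma / (2 * tau2))."""
    locs1 = np.atleast_2d(np.asarray(locs1, dtype=float))
    locs2 = np.atleast_2d(np.asarray(locs2, dtype=float))
    d = np.linalg.norm(locs1[:, None, :] - locs2[None, :, :], axis=-1)
    return np.exp(-(d**gamma) / (2.0 * tau2))
```

Larger τ² gives slower correlation decay, hence smoother sample paths of β and smoother level-set partitions.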
As a consequence, the likelihood will favor smaller values of the smoothness parameter τ², which make the IF less smooth. For that reason, fixing the value of τ² is a reasonable strategy. The value of this parameter determines the smoothness of the estimated IF and should therefore be chosen based on the researcher's preference. We believe that, typically, partitions with very small regions should be avoided. In such cases, it might be more reasonable to resort to continuously varying intensity functions. Another reason why the estimation of τ² should be avoided is the fact that the full conditional distribution of this parameter would depend on the joint density of the NNGP at all the locations of Y and N and, as the locations of N are latent (missing data) and numerous, the mixing of the MCMC chain could be seriously compromised. The choice of τ² is discussed and illustrated in the simulated examples in Section 4.1.

Model identifiability issues may arise from the existence of local modes in the posterior density, especially for small datasets, and the prior information on the IF may not be enough to avoid the existence of significant local modes. A reasonable way to mitigate this is by adding extra coherent prior information to the model through the prior distribution of λ. From a model parsimony perspective, it is more reasonable to fit LSCPs with fewer levels and clearly distinct rate values than with more levels having similar rate values. One way to introduce this information in the model and, consequently, improve model identifiability, is by adopting a joint repulsive prior for λ such that the λ_k's tend to repel each other. We define a repulsive prior based on the Rep distribution proposed in Quinlan et al. (2021). However, instead of directly penalizing the differences between the λ_k's, we consider a scaled version of those differences, as follows:

π(λ) ∝ ∏_{k=1}^{K} π_G(λ_k) R(λ; ρ, ν),
π_G(λ_k) ∝ λ_k^{α_k − 1} e^{−η_k λ_k}, α_k > 0, η_k > 0, k = 1, …, K,
R(λ; ρ, ν) = ∏_{1 ≤ k₁ < k₂ ≤ K} [ 1 − exp{ −ρ ( |λ_{k₁} − λ_{k₂}| / (√λ_{k₁} + √λ_{k₂}) )^ν } ].    (22)

We shall call this the repulsive gamma prior, with notation RG(α, η, ρ, ν). The scale factor √λ_{k₁} + √λ_{k₂} is meant to penalize the proximity of the λ_k's taking into account the scale of the Poisson distribution. For example, it is not reasonable to penalize the pairs (5, 2) and (13, 10) equally. Note that the scale factor is the sum of the standard deviations of the Poisson distributions with means λ_{k₁} and λ_{k₂}. Results from simulated studies with different combinations suggest that ρ ∈ [1, 5] and ν = 3 are reasonable choices. A plot of the penalizing factor r(x) = 1 − exp{−ρx^ν} is shown in Figure 11 in Appendix C. Note that the repulsive prior is proper, since π_G is a probability density and R(λ; ρ, ν) is bounded.

The RG prior on λ may be useful to identify the most suitable value of K. This value is typically chosen based on an empirical analysis of kernel smoothing estimates, by inspecting aspects of the estimated IF such as minimum and maximum values, homogeneity, estimated values in regions of the space domain, and variation across the spatial domain. These aspects ought to be interpreted in terms of the standard deviation of the Poisson distribution. Naturally, this is an empirical strategy, and it might be wise to fit the model for different values of K. In this case, the RG prior can be very useful to indicate whether a chosen value of K is higher than necessary. As the prior repels the values of the λ_k's, the area of one or more regions in the partition may be estimated to be zero, effectively meaning that K should be smaller. This happened in all the real examples analyzed in Section 5.1. Finally, model selection criteria can also be used to choose K, as shown in Section 4.2.
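A sketch of the (unnormalized) log-density of the RG prior in (22); the √λ scaling is our reading of the remark that the scale factor is a sum of Poisson standard deviations:

```python
import numpy as np
from itertools import combinations

def log_rg_prior(lam, alpha, eta, rho=1.0, nu=3.0):
    """Unnormalized log-density of the repulsive gamma prior (22).
    Pairwise differences are scaled by sqrt(lam_k1) + sqrt(lam_k2),
    the sum of the Poisson standard deviations, before being penalized."""
    lam = np.asarray(lam, dtype=float)
    lp = np.sum((alpha - 1.0) * np.log(lam) - eta * lam)  # gamma kernels
    for k1, k2 in combinations(range(len(lam)), 2):
        x = abs(lam[k1] - lam[k2]) / (np.sqrt(lam[k1]) + np.sqrt(lam[k2]))
        lp += np.log1p(-np.exp(-rho * x**nu))  # repulsion term
    return lp
```

Configurations with nearly equal levels are heavily penalized, which is what pushes the sampler toward parsimonious fits with clearly distinct rates.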
MCMC virtual updates

Despite the NNGP prior, the computational cost of the MCMC algorithm may still be compromised by a large accumulation of sampled points of β, resulting from successive rejections in the update step of this component and from the simulation of extra points in the update steps of λ and N*. The infinite dimensionality of β and the retrospective sampling context provide an elegant and efficient solution to this problem. We add virtual update steps to the MCMC algorithm that update β in S \ {S, Y, N}. Since the acceptance probability (14) of the pseudo-marginal algorithm does not depend on β at those locations, the proposal is accepted with probability 1. Furthermore, the retrospective sampling approach implies that those steps consist of simply deleting all the stored values of β at S \ {S, Y, N}, justifying the term "virtual" step. A virtual update is performed in between every block update of the Gibbs sampler, as long as the set of sampled locations of β in S \ {S, Y, N} is not empty at that moment of the algorithm.

Other important issues

The choice of the initial values of the MCMC algorithm plays an important role in determining the efficiency of the algorithm in terms of mixing and estimation. Results from simulation studies suggest that it is reasonable to generate the initial values of β from its NNGP prior and to set λ_k = |Y|/µ(S) for all k. Typically, the ordering of the λ_k parameters assumed in the first iterations of the MCMC does not change along the chain. This ordering depends on the initial value of the GP and may not be the best one. Therefore, it may be convenient to impose a fixed ordering on the λ_k parameters. Conditional on the initial values of λ, the initial value of N is generated from the pseudo-marginal proposal q(N*).
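The virtual update described above amounts to discarding cached GP values; a minimal sketch, assuming a dictionary-based cache of sampled locations (the data structure is ours):

```python
def virtual_update(beta_cache, keep):
    """'Virtual' update step: beta outside S, Y and N is refreshed from
    its conditional prior with acceptance probability 1, which under
    retrospective sampling reduces to deleting the cached values.
    beta_cache: dict location -> sampled value; keep: set of locations."""
    return {loc: val for loc, val in beta_cache.items() if loc in keep}
```

The deleted values are simply re-simulated retrospectively from the NNGP if a later step happens to need them again.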
The choice of δ is also investigated in simulation studies, under the trade-off defined by the fact that an increase in δ improves the mixing of the MCMC algorithm but also increases the computational cost per iteration. Results indicate that the distribution of the pseudo-marginal estimator has a very heavy right tail and that a choice of δ based on the mean number of points of N is reasonably robust across different models and datasets. In particular, results were good for a variety of examples (some of which are presented in Section 4.1) when the mean number of points of N, under the pseudo-marginal distribution, was around 6000. This guideline is valid for any scale and shape of the domain S. Note that both the function M to be estimated by the pseudo-marginal estimator and E[|N|] (under the pseudo-marginal distribution) depend on the areas of the partition regions only through the mean number of events in each region. Also, the variance of M̂ depends on those areas through the mean number of events in each region and the relative differences (δλ_M − λ_k)/(δλ_M − λ_m).

The proposed MCMC algorithm is highly parallelizable due to the conditional independence properties of the NNGP prior on β. In particular, the following two very expensive steps of the MCMC algorithm can be performed in parallel: the simulation of β at the Y and N locations (conditional on β_S), and the update of N in each of the L squares. This means that parallelization leads to substantial computational gains, and the running time of the algorithm is heavily influenced by the number of cores available. Considering all the algorithms and issues described in this section, the MCMC algorithm to sample from the posterior distribution of the level-set Cox process model is as presented in Appendix B. The parallelizable steps are indicated as such.
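Under our reading of the proposal, N is a homogeneous Poisson process with rate λ* = δλ_M − λ_m on S, so E|N| = µ(S)(δλ_M − λ_m). A sketch of choosing δ to hit the suggested target of about 6000 auxiliary points (the helper name is ours):

```python
def delta_for_target_N(lam, mu_S, target=6000.0):
    """Solve mu(S) * (delta * lam_M - lam_m) = target for delta,
    where lam_M and lam_m are the largest and smallest levels."""
    lam_M, lam_m = max(lam), min(lam)
    return (target / mu_S + lam_m) / lam_M
```

For example, levels (3, 20, 50) on a domain of area 100 give δ = (60 + 3)/50 = 1.26.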
Spatiotemporal extension

The level-set Cox process model proposed in Section 2 can be extended to a spatiotemporal context in which the data can be seen as a time series of point processes in a common space S in discrete time. The temporal dependence is defined by a spatiotemporal Gaussian process and, possibly, a temporal structure for the level parameters of the IF. Conditional on those components, the observed Poisson process is independent among different times. We consider a particular case of the well-known dynamic Gaussian processes (DGP) to model the temporal dependence among the random partitions. DGPs are a wide and flexible family of spatiotemporal Gaussian processes (see Gamerman, 2010).

Suppose that the point process Y is observed at T + 1 times 0, …, T, where Y_t = {Y_t(s) : s ∈ S} is a Poisson process with IF λ_{t,S} = {λ_t(s) : s ∈ S}. For each time t, we define a finite partition S_{t,K} = {S_{t,1}, …, S_{t,K}}, K ∈ N, of S and a sequence c = (c_1, …, c_{K−1}) ∈ R^{K−1}, with −∞ = c_0 < c_1 < … < c_{K−1} < c_K = ∞. The spatiotemporal model is defined as follows:

(Y_t | λ_{t,S}) ind.∼ PP(λ_{t,S}), t = 0, …, T,    (23)
λ_t(s) = Σ_{k=1}^{K} λ_{t,k} I_{t,k}(s), s ∈ S, t = 0, …, T,    (24)
S_{t,k} = {s ∈ S : c_{k−1} < β_t(s) < c_k}, k = 1, …, K, t = 0, …, T,    (25)
β = (β_0, …, β_T) ∼ DNNGP(µ, Σ(σ², τ²), Σ(ξ², ̺²)),    (26)
c ∼ 1(c_1 < … < c_{K−1}),    (27)
λ_k = (λ_{0,k}, …, λ_{T,k}) ind.∼ NGAR1(a_{0,k}, b_{0,k}, w_k, a_k), k = 1, …, K,    (28)

where I_{t,k}(s) is the indicator of {s ∈ S_{t,k}}, DNNGP is a dynamic NNGP and NGAR1 is an order-1 non-Gaussian non-linear autoregressive model. We consider a DNNGP of the form:

β_0 ∼ NNGP(µ, Σ(σ², τ²)),    (29)
β_t(s) = β_{t−1}(s) + ζ_t(s), s ∈ S, t = 1, …, T,    (30)
ζ_t ∼ NNGP(0, Σ(ξ², ̺²)),    (31)
(β_t(s) | β_{t,S}) ∼ NNGP(µ, Σ(σ², τ²)), s ∈ S \ S, t = 1, …, T.    (32)

As in the case of the spatial model, we set µ = 0, σ² = 1 and fix τ² at a suitable value.
Parameters ξ² and ̺² are also fixed, such that ξ² ≤ σ² and ̺² ≥ τ². The DNNGP in (29)-(32) differs from the one in Datta et al. (2016) in the distribution of (β_t(s) | β_{t,S}). Datta et al. (2016) consider the formulation in (29)-(31) for all s ∈ S. Our modification is motivated by the fact that we need to sample β at different sets of locations (N and Y) for each time t, and the DNNGP from Datta et al. (2016) would be computationally inefficient in this case. Note that, under our DNNGP, the temporal dependence is explicit only in S and, conditional on β at those locations, the remainder is conditionally independent w.r.t. time. We define the following NGAR1 model:

λ_0 ∼ RG(a_0, b_0, ρ, ν),    (33)
λ_{t,k} = w_k^{−1} λ_{t−1,k} ǫ_{t,k}, t = 1, …, T, k = 1, …, K,    (34)
ǫ_{t,k} ∼ Beta(w_k a_k, (1 − w_k) a_k),    (35)

where RG is the repulsive gamma prior defined in (22) and the parameters a_0, b_0, ρ, ν, w_k and a_k are fixed at suitable values. This NGAR1 model imposes a random walk type structure on the logarithm of λ_{t,k} and is inspired by the non-Gaussian state space model proposed in Smith and Miller (1986). A temporal structure can be considered to model both the random partitions and the levels of the IF, as in the model above, or to model only the former, in which case the λ_{t,k} parameters are all independent with repulsive gamma priors.

Inference for the spatiotemporal level-set Cox process model requires some adaptations of the MCMC algorithm proposed for the spatial model. First, we need to define one pseudo-marginal estimator for the likelihood of Y at each time t, and we shall define the respective auxiliary variables as N* = (N*_0, …, N*_T). Now, each N*_t is an independent unit rate Poisson process on the infinite-height cylinder with base S and N_t = g(N*_t, λ*_t), with λ*_t = (δ_t λ_{t,M} − λ_{t,m}), λ_{t,M} = max_k{λ_{t,k}} and λ_{t,m} = min_k{λ_{t,k}}.
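Since E[ǫ_{t,k}] = w_k under (35), the NGAR1 evolution (34) is mean-preserving: E[λ_{t,k} | λ_{t−1,k}] = λ_{t−1,k}. A simulation sketch of this dynamic (names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def ngar1_path(lam0, w, a, T, rng=rng):
    """Simulate (34)-(35): lam_t = lam_{t-1} * eps_t / w with
    eps_t ~ Beta(w*a, (1-w)*a), so that E[eps_t / w] = 1 and the
    level evolves as a multiplicative random walk."""
    path = [lam0]
    for _ in range(T):
        path.append(path[-1] * rng.beta(w * a, (1.0 - w) * a) / w)
    return np.array(path)
```

Larger a_k makes the Beta innovations more concentrated around w_k, i.e., a smoother evolution of the level over time.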
Furthermore, whenever β is to be sampled retrospectively in the update steps of the λ_k's and N*, it is sampled from the spatiotemporal NNGP prior and, therefore, conditional on all the locations of β already sampled at all times 0 to T, respecting the corresponding conditional independence structure. The sampling step of N* is performed analogously to the spatial case at each time t, independently. The parameter c uses the same proposal as in the spatial case, and a move c → c̈ is accepted with probability

α_c = 1 ∧ ∏_{t=0}^{T} ∏_{k=1}^{K} ( (δ_t λ_{t,M} − λ_{t,k}) / (δ_t λ_{t,M} − λ_{t,m}) )^{|N̈_{t,k}| − |N_{t,k}|} (λ_{t,k})^{|Ÿ_{t,k}| − |Y_{t,k}|}.    (36)

The process β is proposed from the following spatiotemporal pCN proposal:

β̈_t = √(1 − ς²) β_t + ς ε_t, t = 0, …, T,
(ε_0, …, ε_T) ∼ DNNGP(0, Σ(σ², τ²), Σ(ξ², ̺²)).

The proposal variance ς² is chosen so as to obtain an acceptance rate of approximately 0.234 (Cotter et al., 2013). The acceptance probability of a move β → β̈ is given by

α_β = 1 ∧ ∏_{t=0}^{T} ∏_{k=1}^{K} ( (δ_t λ_{t,M} − λ_{t,k}) / (δ_t λ_{t,M} − λ_{t,m}) )^{|N̈_{t,k}| − |N_{t,k}|} (λ_{t,k})^{|Ÿ_{t,k}| − |Y_{t,k}|}.    (37)

Finally, if no temporal dependence structure is considered for the λ_k's, the same algorithm from the spatial model is applied independently at each time t to sample (λ_{t,1}, …, λ_{t,K}). If, however, the NGAR1 prior is adopted, the λ_k's are jointly sampled by proposing from a properly tuned Gaussian random walk. The respective acceptance probability is given by

α_λ = 1 ∧ { ∏_{t=0}^{T} e^{−µ(S)(λ̈_{t,m} − λ_{t,m})} [ ∏_{k=1}^{K} ( (δ_t λ̈_{t,M} − λ̈_{t,k}) / (δ_t λ̈_{t,M} − λ̈_{t,m}) )^{|N̈_{t,k}|} / ( (δ_t λ_{t,M} − λ_{t,k}) / (δ_t λ_{t,M} − λ_{t,m}) )^{|N_{t,k}|} · (λ̈_{t,k} / λ_{t,k})^{|Y_{t,k}|} ] π(λ̈) / π(λ) },    (38)

where π(λ) is the density of the NGAR1 prior.

Prediction

It is often the case that the analysis of point process phenomena also aims at predicting unknown quantities conditional on the data. In the spatial context, this may include some functional of the IF in a given region of S or future replications of the observed process.
In the spatiotemporal context, prediction at future times may be considered. In both cases, it is straightforward to obtain a sample from the desired posterior predictive distribution based on the output of the MCMC. The algorithm to sample from the predictive distribution of a function h(λ_S) under the spatial model consists of computing h(λ_S) for each sampled value of λ_S in the MCMC chain (after a reasonable burn-in). If h(λ_S) is intractable, it may still be possible to obtain a sample from the predictive distribution of an unbiased estimator of h(λ_S). For example, define h(λ_S) = ∫_{S_0} λ(s) ds := Λ_{S_0}, for some known S_0 ⊂ S, and U ind.∼ Unif(S_0). Then, an unbiased estimator of h(λ_S) is given by (see Gonçalves and Gamerman, 2018, Section 4.3)

ĥ = µ(S_0) λ(U).    (39)

A sample from the predictive distribution of (39) is obtained by sampling U_i ∼ Unif(S_0) and λ(U_i) on each iteration of the MCMC. To perform prediction for replications of Y, it is enough to simulate Y conditional on each sampled value of λ_S in the MCMC using a Poisson thinning algorithm. This consists of simulating a PP(λ_M) on S and keeping each point s with probability λ(s)/λ_M, where the value of λ(s) is obtained by sampling β(s) retrospectively from the GP prior on each MCMC iteration.

Now consider the full Bayesian model of a level-set Cox process Y in S for times 0, …, T, …, T + d, d ∈ N, and let y be a realization of the process at times 0, …, T. Define h(Y, λ_d) to be some measurable function, in the probability space of the full Bayesian model, that depends on (Y_t, λ_{t,S}) only for times t ∈ {T + 1, …, T + d}. Then, prediction about h(Y, λ_d) is made through the predictive distribution of (h(Y, λ_d) | y). This is sampled by simulating h(Y, λ_d), conditional on the output of the MCMC on each iteration, based on the following identity:

π(h(Y, λ_d) | y) = ∫ π(h(Y, λ_d) | λ_{0:T}, y) π(λ_{0:T} | y) dλ_{0:T}.

Appealing examples of h(Y, λ_d) include: i.
(λ_{T+1,S}, …, λ_{T+d,S}); ii. Λ_{S,d} = ∫_S λ_t(s) ds, for t = T + 1, …, T + d; iii. (Y_{T+1}, …, Y_{T+d}).

Simulated examples

We perform a series of simulation studies to investigate the main issues regarding the methodology proposed in this paper. First, we present a sensitivity analysis w.r.t. the prior specification of the covariance function of the Gaussian process prior, for two models that differ in terms of the size of the dataset. Then, we explore the choice of the number of levels K. All the examples presented in this paper are implemented in Ox (Doornik, 2009) and run on an i7 3.50GHz processor with 6 cores (12 threads) and 16GB RAM. Codes for the spatial and spatiotemporal models are available at https://github.com/fbambirra/MCMC-LSCP. In all the simulations, we consider the initial values c = (−0.5, 0.5). The initial values of β, λ and N are set as described in Section 3.4.3. The repulsive gamma prior is adopted for λ with α_k = 1.2, η_k = 0.04, for all k, ρ = 1 and ν = 3. The value of δ is set so as to have |N| ≈ 6000. The efficiency of the proposed methodology is investigated in terms of estimation and computational cost.

Sensitivity analysis

We consider three examples with K = 3 and the same partition for the piecewise structure of the IF. The three examples differ in the values of λ. We call them examples 1, 2 and 3, having true λ equal to (1, 4, 12), (3, 20, 50) and (20, 50, 100), respectively. We consider one replication of example 3, which has 5267 observations, to illustrate the applicability of our methodology to large datasets, and 10 replications of examples 1 (around 500 observations) and 2 (around 2200 observations). Figure 3 presents the true IF for examples 1 and 2 and Figure 13 in Appendix D presents the true IF for example 3. In the first sensitivity analysis, we compare the results for one replication of examples 1 and 2, for τ² = 0.5, 1 and 2. Figure 11 in Appendix C shows a plot of the three respective correlation functions.
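Simulated datasets of this kind can be generated by the Poisson thinning scheme described in the prediction section: simulate a PP(λ_M) and keep each point s with probability λ(s)/λ_M. A sketch on the unit square, with a hypothetical three-band partition (the true partitions used in the paper differ):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_piecewise_pp(lam_levels, level_of, area=1.0, rng=rng):
    """Thinning on (0,1)^2: simulate PP(lam_M) and keep each point s
    with probability lam(s)/lam_M, where lam(s) is the level of the
    region containing s."""
    lam_levels = np.asarray(lam_levels, dtype=float)
    lam_M = lam_levels.max()
    n = rng.poisson(lam_M * area)
    pts = rng.uniform(size=(n, 2))
    lam = lam_levels[[level_of(s) for s in pts]]
    return pts[rng.uniform(size=n) < lam / lam_M]

# three vertical bands with levels (3, 20, 50), as in example 2
bands = lambda s: min(int(s[0] * 3), 2)
pts = simulate_piecewise_pp((3.0, 20.0, 50.0), bands)
```

The expected number of retained points is the integral of the IF, here (3 + 20 + 50)/3 per unit area.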
Comparison is performed in terms of the computational cost and the estimation of the intensity function. Tables 1 and 2 show some of the results and Figure 3 shows the estimated intensity function. We can clearly see that, as the value of τ² increases, the estimated IF gets smoother, as expected. Values 1 and 2 provided quite a good recovery of the true IF, with the latter performing a bit better in example 1 and the former a bit better in example 2 (see the bottom-left part of the area with the highest IF). To compare the results for the replications of both examples, we choose a common value τ² = 1 to analyze the 10 replications of examples 1 and 2 and the one replication of example 3. The estimated IFs for all the replications are presented in Figures 14 and 15 in Appendix D. Some posterior statistics are presented in Table 3. Each MCMC chain runs for 300 thousand iterations, and the average running time is around 15.5 hours for example 1 and around 19.5 hours for example 2, with very small variations among replications and different values of τ², and around 48 hours for example 3. The effective sample sizes (ESS) reported here are computed with the R package CODA (Plummer et al., 2006). Trace plots and autocorrelation plots for the case τ² = 1 are presented in Figure 12 in Appendix D and strongly suggest the convergence of the algorithm.

Model fit

We now explore the issue of choosing the number of levels K. We fit the LSCP model to one replication of example 1 and one of example 2, for two values of K, with τ² = 1. We compare the results with those obtained by the methodology proposed in Geng et al. (2021), who use a mixture of finite mixtures model to detect the number of clusters K and estimate the IF in each of them after discretizing the space. We consider 3 levels of discretization: 10 × 10, 15 × 15 and 20 × 20. For the dataset of example 1, their algorithm estimates K = 3 for 10 × 10 and K = 2 for 15 × 15 and 20 × 20.
For example 2, it estimates K = 4 for 10 × 10 and K = 3 for 15 × 15 and 20 × 20. The IF estimates are shown in Figure 16 in Appendix D. We fit the LSCP model to the dataset of example 1 for K = 2 and 3 and to the dataset of example 2 for K = 3 and 4. In our case, the models are compared via DIC (Spiegelhalter et al., 2002). The values of the DIC are -952.11 for K = 2 and -1011.844 for K = 3, for example 1, and -10319.6 for K = 3 and -10278.7 for K = 4, for example 2, which correctly indicates the respective true models. The estimates of the IF are shown in Figure 4.

Table 4: Results for the sensitivity analysis regarding the specification of K. Posterior mean and standard deviation of the λ_k parameters.

Comparison to discrete approximation method

To illustrate the advantages of the exact approach of the methodology proposed in this paper, we compare it to a discretized version of the LSCP model for different levels of discretization: 20 × 20, 50 × 50 and 100 × 100. We consider the same discrete approximation as in Hildeman et al. (2018), but with a (discretized) NNGP prior for β and the respective pCN proposal to update this coordinate via MH, and the repulsive prior for the λ_k's. We compare the results for one of the replications of example 2 and one of the applications presented in Section 5.1. Results are shown in Figures 5 and 6. The MCMC for the discretized method has to run for around 900 thousand iterations to obtain a reasonable MC sample for inference. This means a running time of around 25 hours for both examples with a lattice of 100 × 100. Results show that there is still a significant difference between the posterior distributions of the approximate and exact methods.

Applications

We apply the LSCP model to analyze some real point process datasets: three spatial and one spatiotemporal. The three spatial datasets consist of: 1. locations of white oak trees in a region in the USA; 2. locations of particles in a bronze filter; 3.
locations of fires in a region of New Brunswick, Canada, over a period of 12 years. The spatiotemporal dataset also considers the locations of fires in New Brunswick, but disaggregates the data into periods of 3 years. All the datasets are available in the R package spatstat (Baddeley et al., 2015). We consider the initial values c = 0, c = (−0.5, 0.5) and c = (−0.7, 0, 0.7), for K = 2, 3 and 4, respectively. The initial values of β, λ and N are set as described in Section 3.4.3. The repulsive gamma prior is adopted for λ with α_k = 1.2, η_k = 0.04, for all k, and ν = 3. The parameter ρ of the RG prior and the range parameter τ² vary among the examples as follows: white oak, ρ = 5 and τ² = 0.5; bronze filter, ρ = 5 and τ² = 1; fires, ρ = 1 and τ² = 0.5. Those values are based on the empirical analysis of the kernel smoothing estimates of the IF (see Figure 1), in terms of the levels and smoothness of the IF expected to provide a good fit. The value of δ is set so as to have |N| ≈ 6000. For the first example, we also present an analysis comparing the level-set Cox process model to a Cox process model in which the IF is a continuous function of a latent Gaussian process. The latter is proposed in Gonçalves and Gamerman (2018), who also present an exact methodology to perform Bayesian inference. Their model assumes λ(s) = λ* Φ(β(s)), where λ* is an unknown parameter, β is a Gaussian process and Φ is the standard normal c.d.f.

Spatial examples

We analyze a dataset regarding the locations of white oak trees in Lansing Woods, Michigan. The data consist of the locations of 448 white oaks in an area of 924 × 924 feet, which we rescale to (0, 10) × (0, 10). An empirical analysis based on the kernel smoothing estimate of the IF suggested that K = 3 would be a suitable choice. Indeed, a model with K = 4 was also fit, but the area of one of the four regions converged to zero along the MCMC chain. The largest value of λ is truncated a priori to be smaller than 30 when K = 3.
Figure 7 shows the posterior mean and mode of the IF. The latter defines the partition using its pointwise mode and colors each region with the posterior mean of the respective λ_k. We also consider the prediction of the integrated IF in the whole observed domain and in two regions, S_1 = (5, 7) × (8, 10) and S_2 = (8, 10) × (4.5, 6.5); see Table 5. The results for the two models are considerably different in some aspects of the estimated IF. Generally speaking, and as expected, the estimate is smoother for the continuous IF model. The repulsive prior for the IF values in the LSCP pushes those values apart and estimates a small cluster (with a mean area around 1.66) with a much higher IF. All the 3 predicted functions of the IF have a smaller predictive variance under the LSCP, but also a slightly larger bias for the point estimates (posterior mean). If we combine the bias and variance through the expected quadratic error, E_θ[(h(λ_S) − true)²], this is smaller for the LSCP for Λ_S (70%) and Λ_{S_1} (88%) and smaller for the continuous IF model for Λ_{S_2} (92%).

The second example considers the locations of 678 particles observed in a longitudinal plane section of 18 × 7 mm through a gradient sinter filter made from bronze powder. The original area is rescaled to (0, 10) × (0, 4) and an empirical analysis via kernel smoothing suggests K ≈ 4. We fit the model for K = 3 and 4, but the area of one of the four regions converges to zero along the MCMC when K = 4. The estimated IF for K = 3 is shown in Figure 8, which also brings the extra information about the radius of each particle. Although this information is not used in the analysis, the clear relation between radius and particle concentration was captured by the IF estimate.

Finally, the third example considers the locations of 2313 fires in a rectangular region (rotated 90° to the left and rescaled to 8.5 × 10) containing most of the area of New Brunswick, Canada, from 1992 to 2003.
We fit the LSCP for K = 3 and K = 4, but the area of one of the 4 regions converges to zero along the MCMC when K = 4. The estimated IF is presented in Figure 8. The estimated levels of the IF for all the 3 examples are presented in Table 6.

Table 6: Posterior mean and standard deviation of λ for the three spatial applications.

The results for K = 4 in all three examples show the impact of the repulsive gamma prior used for the λ_k parameters. It penalizes scenarios with similar values of the λ_k's and, as the values are pushed apart, estimates one of the areas to be zero.

Spatiotemporal example

We consider the New Brunswick fires dataset for the years 1992 to 2003, aggregating every 3 years as one time t in the model. The number of fires in each interval of 3 years is 414, 385, 450 and 415, respectively. We fit the model in (29)-(32) with K = 3 and perform prediction of the IF and its integral for the 3-year interval 2004-2006. We set ρ = 5, τ² = 0.5, ξ² = 1, ̺² = 0.5. Independent repulsive gamma priors are assumed for each λ_{t,k}, and the NGAR1 prior, with w_k = 0.5 for all k and (a_1, a_2, a_3) = (5, 15, 30), is assumed between the respective levels from times 3 and 4 in order to perform prediction for the latter. All the other specifications are as chosen for the spatial examples. Results are shown in Table 7.

Conclusions

This paper proposed a novel methodology to perform exact Bayesian inference for a class of level-set Cox processes in which the intensity function is piecewise constant. The model is flexible enough to accommodate any smooth partition structure and aims at providing a more parsimonious alternative to Cox process models with continuously varying IF. The methodology is exact in the sense of not involving discrete finite-dimensional approximations and is the first one with this feature for the class of LSCP models. The inference is performed via an infinite-dimensional pseudo-marginal MCMC algorithm.
The MCMC chain has the exact posterior distribution of all the unknown components of the model as its invariant distribution. This means that only MCMC error is involved, despite the intractability of the likelihood function and the infinite dimensionality of the parameter space. Retrospective sampling and pseudo-marginal Metropolis are used to circumvent the infinite dimensionality and intractable likelihood problems, respectively. Efficient proposal distributions are carefully devised for the latent Gaussian process component and the pseudo-marginal auxiliary variable. Computational cost issues are mitigated by adopting an NNGP approach for the latent Gaussian process and by adding a virtual retrospective sampling step to the MCMC algorithm that deletes extra sampled locations of the GP component. A variety of issues related to the efficiency of the proposed MCMC algorithm are discussed and empirically explored through simulations. Model fitting regarding the choice of the number of levels for the IF is also explored. Results show a considerably good performance of the proposed methodology. Finally, a spatiotemporal version of the level-set Cox process model is introduced and applied to a real dataset regarding fires in the province of New Brunswick, Canada.

Some interesting directions may be pursued as future work, for example, the sensitivity of the proposed methodology to the choice of the covariance function of the latent GP. Any valid covariance function can be used within the proposed methodology, so it is natural to ask whether the partition estimation may benefit from more complex structures, such as non-stationary ones. Three possible extensions of the proposed methodology may also be considered. First, estimating the number of levels K may be particularly useful in applications in which selecting K is not a trivial task.
Second, the methodology from this paper can be merged with that from Gonçalves and Gamerman (2018) so that the IF is a continuous function of independent Gaussian processes, conditional on the partition, while the inference methodology remains exact. Third, one may consider the use of spatial covariates in the intensity function, for example, λ(s) = λ_{k,0} + λ_{k,1} X_1(s), for some covariate X_1.

implying that Var(M̂) = exp{−2µ(S)λ_m} Var(M̂_1). Finally,

∂Var(M̂_1)/∂δ = −exp(κ) Σ_{k=1}^{K} µ_k λ_M [ ( (δλ_M − λ_k) / (δλ_M − λ_m) )² − 2(δλ_M − λ_k)(δλ_M − λ_m)^{−1} + 1 ] < 0,

where κ ∈ R.

Appendix C - Plots

Appendix D - Further results from the simulations

Figure 12 shows the trace plots and ACF plots of the pseudo-marginal likelihood function for the examples in Section 4.1.

Figure 12: Trace plots and ACF plots of the log pseudo-marginal likelihood function for one replication of example 1 (left) and one for example 2 (right). The ACF plots are based on a sub-sample of the chain with a lag of 100.

Figure 13 presents the true and estimated IF for example 3. Figures 14 and 15 show the estimated IF for the remaining 9 replications of examples 1 and 2, respectively, from Section 4.1. Figure 16 shows the estimated IF obtained by applying the methodology from Geng et al. (2021) for three levels of discretization.

Figure 1: Examples of continuously varying estimated intensity functions of Poisson processes. From left to right: white oaks in Lansing Woods, particles in a bronze filter and fires in New Brunswick.

Figure 2: Example of a neighboring structure with K = 3 that is not contemplated by the proposed model (far left) and three possible structures that may be estimated.

Table 1: Results for example 1. Second row reports the ESS of the log pseudo-marginal likelihood. The remaining rows show the posterior mean and standard deviation of the λ_k parameters.

Figure 3: True IF (1st column) and its posterior mean for examples 1 (top) and 2 (bottom), for τ² = 0.5 (2nd column), 1 (3rd column) and 2 (4th column).
Table 3: Results for the 10 replications of examples 1 and 2 and for the 1 replication of example 3. Second row reports the mean and s.d. of the ESS of the log pseudo-marginal likelihood over the 10 replications. The remaining rows show the mean and standard deviation, over the 10 replications, of the posterior mean and s.d. of the λ_k parameters.
Figure 4: Estimated IF for example 1 with K = 2 and K = 3 and for example 2 with K = 3 and K =
Figure 5: Discrete approximation results for example 2. Top: estimated IF for lattices 20×20, 50×50 and 100×100, and with the exact method. Bottom: empirical posterior density of λ for 100×100 (red) and for the exact method (black).
Figure 6: Discrete approximation results for the white oak example. Top: estimated IF for lattices 20×20, 50×50 and 100×100, and with the exact method. Bottom: empirical posterior density of λ for 100×100 (red) and for the exact method (black).
Figure 7: Posterior mean (left) and mode (middle) of the IF under the LSCP model and posterior mean (right) of the IF under the continuous IF model for the white oak example.
Figure 8: Posterior mean of the IF for the bronze filter example (left) and for the New Brunswick fires example (right).
Figure 9: Posterior mean of the IF, at times 0 to 3, for the spatiotemporal example.
Figure 10: Predictive mean (middle) and pointwise 95% credibility intervals (left and right) of the IF in year 2004 for the spatiotemporal example.
Figure 11: Left: penalizing factor r(x) = (1 − exp{−ρx^ν}) of the RG prior. Right: powered exponential covariance function with γ = 1.95 and τ² = 0.4 (red), 1 (blue) and 2 (black).
Figure 13: True and posterior mean of the IF of example 3.
Figure 14: Posterior mean of the IF of the replications of example 1.
Figure 15: Posterior mean of the IF of the replications of example 2.
Figure 16: Top: posterior mean of the IF for example 2 obtained with Geng et al.
(2021)'s methodology for discretizations 10×10, 15×15 and 20×20. Bottom: posterior mean of the IF for the New Brunswick fires example obtained with Geng et al. (2021)'s methodology for discretizations 10×10, 15×15 and 20×20.

Table 2: Results for example 2. Second row reports the ESS of the log pseudo-marginal likelihood. The remaining rows show the posterior mean and standard deviation of the λ_k parameters.

               Example 1                    Example 2                     Example 3
aver. ESS      596 (237)                    1103 (498)                    889
               True   Est.                  True   Est.                   True   Est.
λ_1            1      0.84(0.21) / 0.22(0.09)    3     3.47(0.64) / 0.33(0.04)    20    21.88 / 0.88
λ_2            4      4.13(0.70) / 0.42(0.15)    20    20.27(0.81) / 0.73(0.08)   50    48.66 / 1.11
λ_3            12     12.59(1.32) / 0.83(0.11)   50    50.70(1.56) / 1.42(0.14)   100   100.34 / 1.55

Table 5: Statistics of the posterior predictive distribution of the estimator in (39). Each cell shows: Mean / s.d. (95% C.I.) - expected quadratic error.

            True                                         Continuous IF model
Λ_S         448    447.22 / 20.50 (414, 482) -421.20     448.44 / 24.14 (408, 490) -594.24
Λ_{S_1}     27     29.18 / 3.62 (22.91, 35.18) -17.92    25.35 / 4.18 (18.65, 32.45) -20.19
Λ_{S_2}     9      11.47 / 1.86 (8.65, 14.87) -9.62      9.98 / 2.82 (5.64, 14.84) -8.88

Table 7: Posterior mean and standard deviation of λ for the spatiotemporal application.

Acknowledgements

The first author would like to thank FAPEMIG (Grant PPM-00745-18) and CNPq (Grant 310433/2020-7) for financial support. The second author would like to thank CAPES for financial support. The authors would like to thank Gareth Roberts for insightful discussions about the MCMC algorithm.

Appendix A - Proofs

Proof of Proposition 1
Let I_nk = I_k(s_n) be the indicator of s_n ∈ S_k, where s_n is the n-th point from N, and let I = (I_1, ..., I_|N|), where I_n = (I_n1, ..., I_nK) ∼ Mult(1, μ_1/μ(S), ..., μ_K/μ(S)). Therefore, E(I_nk) = μ_k/μ(S) and μ(S) I_nk is an unbiased estimator of μ_k.
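The proof of Proposition 2 (below) rests on the probability generating function identity E[a^X] = exp{−λ(1 − a)} for X ∼ Poisson(λ). As an illustrative numerical check (not part of the paper), the identity can be verified by truncating the Poisson series:

```python
import math

def poisson_pgf(a, lam, tol=1e-14):
    """E[a^X] for X ~ Poisson(lam), computed by summing the series
    sum_x a^x * e^{-lam} * lam^x / x!  until the terms are negligible."""
    total = 0.0
    term = math.exp(-lam)        # P(X = 0)
    x = 0
    while x < 1000:
        total += term * (a ** x)
        x += 1
        term *= lam / x          # now P(X = x)
        if x > lam and term < tol:
            break
    return total
```

For any a in (0, 1] and λ > 0, the series sum agrees with exp(−λ(1 − a)) to within the truncation tolerance.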
Then, the result follows.

Proof of Proposition 2
We shall compute the variance of M̂. We use the basic probability result that if X ∼ Poisson(λ), then E[a_n^X] = exp{−λ(1 − a_n)}, n ∈ N, together with the fact that |N_k| ∼ Poisson(μ_k λ*).

Appendix B - The MCMC algorithm

Algorithm 1: MCMC for the level-set Cox process model
Input: K, L, δ, τ², and initial values for N, β, λ, c.
Output: MCMC posterior sample of θ.
1: Simulate β_S from the pCN proposal in (17).
2: (In parallel) Simulate β at Y and N from the pCN proposal, conditional on the β_S simulated in the previous step.
5: For the l-th square, propose N from the pseudo-marginal proposal q(N*) and accept w.p. given in (16).
6: Perform virtual update (delete all the values of β in S \ {Y, N}).
7: end for
8: Propose λ from a properly tuned Gaussian random walk.
9: If the proposed value of λ yields a value λ̃* larger than the current one λ*, simulate N* between the current and proposed values of λ*: simulate the number of locations from a Poisson((λ̃* − λ*)μ(S)) and distribute them uniformly in the section of the cylinder.
10: (In parallel) If extra locations of N* are simulated in the previous step, simulate β at those locations from the NNGP prior conditional on its values at {Y, N}.
11: Accept the proposed value of λ w.p. given in (21).
12: Perform virtual update (delete all the values of β in S \ {Y, N}).
13: Propose c from a properly tuned uniform random walk and accept w.p. given in (20).
14: If enough iterations of the MCMC have been performed, stop; otherwise, go back to 1.

The virtual update is performed if the set of the unveiled locations of β in S \ {Y, N} is not empty. The tuning of the random walk proposals is performed in a pre-run of the algorithm.

References

Andrieu, C. and Roberts, G. O. (2009). The pseudo-marginal approach for efficient Monte Carlo computations. The Annals of Statistics, 697-725.
Andrieu, C. and Vihola, M. (2015). Convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms. The Annals of Applied Probability 25, 1030-1077.
Baddeley, A., Rubak, E. and Turner, R. (2015). Spatial Point Patterns: Methodology and Applications with R. Chapman and Hall/CRC.
Beskos, A., Papaspiliopoulos, O., Roberts, G. O. and Fearnhead, P. (2006). Exact and computationally efficient likelihood-based inference for discretely observed diffusion processes (with discussion). Journal of the Royal Statistical Society, Series B 68(3), 333-382.
Beskos, A. and Roberts, G. O. (2005). Exact simulation of diffusions. The Annals of Applied Probability 15(4), 2422-2444.
Cotter, S. L., Roberts, G. O., Stuart, A. M. and White, D. (2013). MCMC methods for functions: modifying old algorithms to make them faster. Statistical Science 28, 424-446.
Datta, A., Banerjee, S., Finley, A. O. and Gelfand, A. E. (2016). Hierarchical nearest-neighbor Gaussian process models for large geostatistical datasets. Journal of the American Statistical Association 111, 800-812.
Doornik, J. A. (2009). An Object-Oriented Matrix Programming Language Ox 6.
Dunlop, M. M., Iglesias, M. A. and Stuart, A. M. (2016). Hierarchical Bayesian level set inversion. Statistics and Computing, 1-30.
Gamerman, D. (2010). Dynamic spatial models including spatial time series. In Handbook of Spatial Statistics, pp. 437-448. London: CRC / Chapman & Hall.
Geng, J., Shi, W. and Hu, G. (2021). Bayesian nonparametric nonhomogeneous Poisson process with applications to USGS earthquake data. Spatial Statistics 41, 100495.
Gonçalves, F. B. and Gamerman, D. (2018). Exact Bayesian inference in spatiotemporal Cox processes driven by multivariate Gaussian processes. Journal of the Royal Statistical Society, Series B 80, 157-175.
Gonçalves, F. B., Roberts, G. O. and Latuszynski, K. G. (2017). Exact Monte Carlo likelihood-based inference for jump-diffusion processes. arXiv:1707.00332.
Gonçalves, F. B. and Franklin, P. (2019). On the definition of likelihood function. arXiv:1906.10733.
Hildeman, A., Bolin, D., Wallin, J. and Illian, J. B. (2018). Level set Cox processes. Spatial Statistics 28, 169-193.
Illian, J., Penttinen, A., Stoyan, H. and Stoyan, D. (2008). Statistical Analysis and Modelling of Spatial Point Patterns, Volume 70. John Wiley & Sons.
Kingman, J. F. C. (1993). Poisson Processes. Wiley.
Møller, J., Syversveen, A. R. and Waagepetersen, R. P. (1998). Log Gaussian Cox processes. Scandinavian Journal of Statistics 25(3), 451-482.
Myllymäki, M. and Penttinen, A. (2010). Bayesian inference for Gaussian excursion set generated Cox processes with set-marking. Statistics and Computing 20, 305-315.
Plummer, M., Best, N., Cowles, K. and Vines, K. (2006). CODA: convergence diagnosis and output analysis for MCMC. R News 6(1), 7-11.
Quinlan, J. J., Quintana, F. A. and Page, G. L. (2021). On a class of repulsive mixture models. Test 30, 445-461.
Roberts, G. O. and Rosenthal, J. S. (2009). Examples of adaptive MCMC. Journal of Computational and Graphical Statistics 18(2), 349-367.
Rowlingson, B., Diggle, P. and Bivand, M. R. (2012). Package splancs.
Simpson, D., Illian, J. B., Lindgren, F., Sørbye, S. H. and Rue, H. (2016). Going off grid: computationally efficient inference for log-Gaussian Cox processes. Biometrika 103(1), 49-70.
Smith, R. L. and Miller, J. E. (1986). A non-Gaussian state space model and application to prediction of records. Journal of the Royal Statistical Society, Series B 48, 79-88.
Spiegelhalter, D. J., Best, N. G., Carlin, B. P. and van der Linde, A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society, Series B 64, 583-639.
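Steps 1-2 of Algorithm 1 use preconditioned Crank-Nicolson (pCN) proposals (Cotter et al., 2013). For a zero-mean Gaussian prior the move is β* = sqrt(1 − ρ²)β + ρξ with ξ drawn from the prior, which is reversible with respect to the prior, so the acceptance probability involves only the likelihood. A scalar sketch (the N(0, σ²) prior here is a stand-in for the paper's NNGP prior):

```python
import math
import random

def pcn_step(beta, sigma, rho, rng):
    """One preconditioned Crank-Nicolson proposal for a N(0, sigma^2) prior:
        beta* = sqrt(1 - rho^2) * beta + rho * xi,   xi ~ N(0, sigma^2).
    If beta ~ N(0, sigma^2) then beta* ~ N(0, sigma^2) as well, since
    (1 - rho^2) * sigma^2 + rho^2 * sigma^2 = sigma^2, i.e. the proposal
    is reversible with respect to the prior."""
    return math.sqrt(1.0 - rho * rho) * beta + rho * rng.gauss(0.0, sigma)
```

Because the prior cancels in the Metropolis-Hastings ratio, a pCN move is accepted or rejected based on the likelihood alone, which is what makes it well defined in the infinite-dimensional setting of the latent GP.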
Code available at https://github.com/fbambirra/MCMC-LSCP.
Adaptive Graph Convolution for Point Cloud Analysis

Haoran Zhou (Nanjing University), Yidan Feng (Nanjing University of Aeronautics and Astronautics), Mingsheng Fang (Nanjing University), Mingqiang Wei (Nanjing University of Aeronautics and Astronautics), Jing Qin (The Hong Kong Polytechnic University), Tong Lu (Nanjing University)

Abstract: Convolution on 3D point clouds that is generalized from 2D grid-like domains is widely researched yet far from perfect. The standard convolution characterises feature correspondences indistinguishably among 3D points, presenting an intrinsic limitation of poor distinctive feature learning. In this paper, we propose Adaptive Graph Convolution (AdaptConv), which generates adaptive kernels for points according to their dynamically learned features. Compared with using a fixed/isotropic kernel, AdaptConv improves the flexibility of point cloud convolutions, effectively and precisely capturing the diverse relations between points from different semantic parts. Unlike popular attentional weight schemes, the proposed AdaptConv implements the adaptiveness inside the convolution operation instead of simply assigning different weights to the neighboring points. Extensive qualitative and quantitative evaluations show that our method outperforms state-of-the-art point cloud classification and segmentation approaches on several benchmark datasets. Our code is available at https://github.com/hrzhou2/AdaptConv-master.

DOI: 10.1109/iccv48922.2021.00492
arXiv: 2108.08035 (PDF: https://arxiv.org/pdf/2108.08035v2.pdf)
Introduction

Point cloud is a standard output of 3D sensors, e.g., LiDAR scanners and RGB-D cameras; it is considered the simplest yet most efficient shape representation for 3D objects. A variety of applications arise with the fast advance of 3D point cloud acquisition techniques, including robotics [31], autonomous driving [20, 43] and high-level semantic analysis [35, 15].
Recent years have witnessed considerable attempts to generalize convolutional neural networks (CNNs) to point cloud data for 3D understanding. However, unlike 2D images, which are organized as regular grid-like structures, 3D points are unstructured and unordered, discretely distributed on the underlying surface of a sampled object.

* Co-corresponding authors ([email protected] / [email protected])

One common approach is to convert point clouds into regular volumetric representations so that traditional convolution operations can be naturally applied [24, 30]. Such a scheme, however, often introduces excessive memory cost and has difficulty capturing fine-grained geometric details. In order to handle the irregularity of point clouds without conversions, PointNet [27] applies multilayer perceptrons (MLPs) independently on each point, and is one of the pioneering works to directly process sparse 3D points. More recently, several approaches have been proposed to utilize graph-like structures for point cloud analysis. Graph CNNs [44, 21, 41, 4, 7] represent a point cloud as graph data according to the spatial/feature similarity between points, and generalize 2D convolutions on images to 3D data. In order to process an unordered set of points with varying neighborhood sizes, standard graph convolutions harness shared weight functions over each pair of points to extract the corresponding edge feature. This leads to a fixed/isotropic convolution kernel, which is applied identically to all point pairs while neglecting their different feature correspondences. Intuitively, for points from different semantic parts of the point cloud (see the neighboring points in Fig. 1), the convolution kernel should be able to distinguish them and determine their different contributions. To address this drawback, several approaches [41, 38] have been proposed, inspired by the idea of attention mechanisms [3, 5]. As shown in Fig.
1 (b), proper attentional weights a_i corresponding to the neighboring points are assigned, trying to identify their different importance when performing the convolution. However, these methods are, in principle, still based on the fixed-kernel convolution, as the attentional weights are simply applied to features obtained in the same way (see the black arrows in Fig. 1 (b)). Considering the intrinsic isotropy of current graph convolutions, these attempts are still limited in detecting the most relevant parts of the neighborhood. Differently, we propose to adaptively establish the relationship between a pair of points according to their learned features. This adaptiveness represents a diversity of kernels unique to each pair of points, rather than relying on predefined weights. To achieve this, in this paper, we propose a new graph convolution operator, named adaptive graph convolution (AdaptConv). AdaptConv generates adaptive kernels ê_i for points in the convolution, which replace the aforementioned isotropic kernels (see Fig. 1 (c)). The key contribution of our work is that the proposed adaptiveness is employed inside the graph convolution, rather than as a weight function applied to the resulting features. Furthermore, we explore several choices for the feature convolving design, offering more flexibility in the implementation of the adaptive convolution. Extensive experiments demonstrate the effectiveness of the proposed AdaptConv, achieving state-of-the-art performance in both classification and segmentation tasks on several benchmark datasets.

Related Work

Voxelization-based and multi-view methods. The voxelization/projection strategy has been explored as a simple way to build point cloud representations suited to the powerful CNNs of 2D vision. A number of works [24, 47, 16, 42] project point clouds onto regular grids, but inevitably suffer from information loss and enormous computational cost.
To alleviate these problems, OctNet [30] and Kd-Net [14] attempt to use more efficient data structures and skip the computations on empty voxels. Alternatively, the multi-view-based methods [12, 34] treat a point cloud as a set of 2D images projected from multiple views, so as to directly leverage 2D CNNs for subsequent processing. However, it is fundamentally difficult to apply these methods to large-scale scanned data, given the struggle of covering the entire scene from single-point perspectives.

Point-based methods. In order to handle the irregularity of point clouds, state-of-the-art deep networks are designed to directly manipulate raw point cloud data, instead of introducing an intermediate representation. In this way, PointNet [27] first proposes to use MLPs independently on each point and subsequently aggregate global features through a symmetric function. Thanks to this design, PointNet is invariant to input point orders, but fails to encode local geometric information, which is important for semantic segmentation tasks. To solve this issue, PointNet++ [29] proposes to apply PointNet layers locally in a hierarchical architecture to capture regional information. Alternatively, Huang et al. [9] sort unordered 3D points into an ordered list and employ recurrent neural networks (RNNs) to extract features along different dimensions. More recently, various approaches have been proposed for effective local feature learning. PointCNN [19] aligns points in a certain order by predicting a transformation matrix for the local point set, which inevitably leads to sensitivity to point order, since the operation is not permutation-invariant. SpiderCNN [48] defines its convolution kernel as a family of polynomial functions, relying on the neighbors' order. PCNN [2] designs point kernels based on the spatial coordinates, and KPConv [36] further presents a scalable convolution using explicit kernel points.
RS-CNN [22] assigns channel-wise weights to neighboring point features according to geometric relations learned from 10-D vectors. ShellNet [51] splits the local point set into several shell areas, from which features are extracted and aggregated. Recently, [53, 6] utilize the successful transformer structures from natural language processing [37, 45] to build dense self-attention between local and global features.

Graph-based methods treat points as nodes of a graph and establish edges according to their spatial/feature relationships. A graph is a natural representation for a point cloud to model local geometric structures, but is challenging to process due to its irregularity. The notion of a graph convolutional network was proposed by [13], which generalizes convolution operations over graphs by averaging the features of adjacent nodes. Similar ideas [32, 44, 8, 19, 17] have been explored to extract local geometric features from local point sets. Shen et al. [32] define kernels according to Euclidean distances and geometric affinities among the neighboring points. DGCNN [44] gathers nearest neighboring points in the feature space, followed by EdgeConv operators for feature extraction, in order to identify semantic cues dynamically. MoNet [25] defines the convolution as Gaussian mixture models in a local pseudo-coordinate system. Inspired by the idea of attention mechanisms, several works [38, 41, 39] propose to assign proper attentional weights to different points/filters. 3D-GCN [21] develops deformable kernels, focusing on shift- and scale-invariant properties in point cloud analysis.

Convolution on point clouds. State-of-the-art research has proposed many ways to define a proper convolution on point clouds. To improve on the basic designs using fixed MLPs in PointNet/PointNet++, a variety of works [38, 41, 39, 36, 22] introduce weights based on the learned features, with more variants of convolution inputs [44, 25, 48].
Other methods [33, 46, 10] try to learn dynamic weights for the convolution. However, their idea is to approximate weight functions from the raw 3D coordinates, while AdaptConv uses features to learn the kernels, which provides more adaptiveness. In addition, their implementations are heavily memory-consuming when convolving with high-dimensional features. Thus, the main focus of this paper is to address the isotropy of point cloud convolutions by developing an adaptive kernel that is unique to each point pair in the convolution.

Method

We exploit local geometric characteristics in point cloud analysis by proposing a novel adaptive graph convolution (AdaptConv) in the spirit of graph neural networks (Sec. 3.1). Afterwards, we discuss several choices for the feature decisions in the adaptive convolution (Sec. 3.2). The details of the constructed networks are given in Sec. 3.3.

Adaptive graph convolution

We denote the input point cloud as X = {x_i | i = 1, 2, ..., N} ∈ R^{N×3}, with the corresponding features defined as F = {f_i | i = 1, 2, ..., N} ∈ R^{N×D}. Here, x_i contains the (x, y, z) coordinates of the i-th point and, in other cases, can potentially be combined with a vector of additional attributes, such as normal and color. We then compute a directed graph G(V, E) from the given point cloud, where V = {1, ..., N} and E ⊆ V × V represents the set of edges. We construct the graph by employing the k-nearest neighbors (KNN) of each point, including a self-loop. Given the input D-dimensional features, our AdaptConv layer is designed to produce a new set of M-dimensional features with the same number of points, while attempting to reflect local geometric characteristics more accurately than previous graph convolutions. Denote by x_i the central point in the graph convolution, and let N(i) = {j : (i, j) ∈ E} be the set of point indices in its neighborhood.
Due to the irregularity of point clouds, previous methods usually apply a fixed kernel function to all neighbors of x_i to capture the geometric information of the patch. However, different neighbors may reflect different feature correspondences with x_i, particularly when x_i is located at salient regions, such as corners or edges. In this regard, a fixed kernel may weaken the geometric representations generated by the graph convolution for classification and, particularly, segmentation. In contrast, we endeavor to design an adaptive kernel that captures the distinctive relationship between each pair of points. To achieve this, for each channel in the output M-dimensional feature, our AdaptConv dynamically generates a kernel using a function over the point features (f_i, f_j):

    ê_ijm = g_m(Δf_ij), j ∈ N(i).    (1)

Here, m = 1, 2, ..., M indicates one of the M output dimensions, corresponding to a single filter defined in our AdaptConv. In order to combine the global shape structure and the feature differences captured in a local neighborhood [44], we define Δf_ij = [f_i, f_j − f_i] as the input feature for the adaptive kernel, where [·, ·] is the concatenation operation. Here, g(·) is a feature mapping function, implemented as a multilayer perceptron. Like the computations in 2D convolutions, which obtain each of the M output dimensions by convolving the D input channels with the corresponding filter weights, our adaptive kernel is convolved with the corresponding point pair (x_i, x_j):

    h_ijm = σ(⟨ê_ijm, Δx_ij⟩),    (2)

where Δx_ij is defined as [x_i, x_j − x_i] similarly, ⟨·, ·⟩ represents the inner product of two vectors outputting h_ijm ∈ R, and σ is a nonlinear activation function. As shown in Fig. 2, the kernel ê_ijm is convolved with the spatial relations Δx_ij of the corresponding point x_j ∈ R^3, which means the size of the kernel must match in the dot product, i.e., the aforementioned feature mapping is g_m : R^{2D} → R^6. In this way, the spatial positions in the input space can be efficiently incorporated into each layer, combined with the feature correspondences extracted dynamically by our kernel. Stacking the h_ijm of all channels yields the edge feature h_ij = [h_ij1, h_ij2, ..., h_ijM] ∈ R^M between the connected points (x_i, x_j). Finally, we define the output feature of the central point x_i by applying an aggregation function over all the edge features in the neighborhood (see Fig. 2, right part):

    f'_i = max_{j ∈ N(i)} h_ij,    (3)

where max is a channel-wise max-pooling function. Overall, the convolution weights of AdaptConv are Θ = (g_1, g_2, ..., g_M).

Figure 3: AdaptConv network architectures for classification and segmentation tasks. The GraphConv layer denotes our standard convolution without an adaptive kernel. The segmentation model uses pooling and interpolation to build a hierarchical graph structure, while the classification model applies a dynamic graph structure [44].

Feature decisions

In our method, AdaptConv generates an adaptive kernel for each pair of points according to their individual features (f_i, f_j). Then, the kernel ê_ijm is applied to the point pair (x_i, x_j) in order to describe their spatial relations in the input space. The feature decision of Δx_ij in the convolution of Eq. 2 is an important design choice. In other cases, the input can be x_i ∈ R^E, including additional dimensions representing other valuable point attributes, such as point normals and colors. By modifying the adaptive kernel to g_m : R^{2D} → R^{2E}, our AdaptConv can also capture the relationships between feature dimensions and spatial coordinates, which come from different domains. Note that this is another option in our AdaptConv design, and we use the spatial positions as input x_i by default in the convolutions in our experiments. As an optional choice, we replace Δx_ij with Δf_ij in Eq.
2 with a modified dimension ofê ijm . Therefore, the adaptive kernel of a pair of points is designed to establish the relations of their current features (f i , f j ) in each layer. This is a more direct solution, similar to other convolution operators, that produces a new set of learned features from features in the preceding layer of the network. However, we recommend xyz rather than feature in that: (i) the point feature f j has been already included in the adaptive kernel and convolving again with f j leads to redundancy of feature information; (ii) it is easier to learn spatial relations through MLPs, instead of detecting feature correspondences in a high-dimensional space (e.g. 64, 128 dimensional features); (iii) the last reason is the memory cost and more specifically the large computational graph in the training stage which cannot be avoided. We evaluate all these choices in Sec. 4.4. Network architecture We design two network architectures for point cloud classification and segmentation tasks using the proposed AdaptConv layer. The network architectures are shown in Fig. 3. In our experiments, the AdaptConv kernel function is implemented as a two-layer MLP with residual connections to extract important geometric information. More details are available in the supplemental material. The standard graph convolution layer with a fixed kernel uses the same feature inputs ∆f ij as in the adaptive kernels. Graph pooling. For segmentation tasks, we reduce the number of points progressively in order to build the network in a hierarchical architecture. The point cloud is subsampled using furthest point sampling algorithm [27] with a sampling rate of 4, and is applied by a pooling layer to output aggregated features on the coarsened graph. In each graph pooling layer, a new graph is constructed corresponding to the sampled points. The feature pooled at each point in the subcloud can be simply obtained by a max-pooling function within its neighborhood. 
Alternatively, we can use an AdaptConv layer to aggregate these pooled features. To predict point-wise labels for segmentation, we need to propagate the subsampled features back to the original point resolution. Segmentation network. Our segmentation network architecture is illustrated in Fig. 3. The AdaptConv encoder includes 5 convolution layers, of which the last is a standard graph convolution layer, as well as several graph pooling layers. The subsampled features are interpolated and concatenated into the final point features, which are fed to the decoder part. Classification network. The classification network uses a similar encoder part as the segmentation model (see Fig. 3). For the sparser point clouds used in the ModelNet40 classification dataset, we simply apply dynamic graph structures [44] without pooling and interpolation. Specifically, the graph structure is updated in each layer according to the feature similarity among points, rather than being fixed by spatial positions. That is, in each layer, the edge set E l is recomputed such that the neighborhood of point x i is N (i) = {j 1 , j 2 , ..., j k }, where the corresponding features f j1 , f j2 , ..., f j k are closest to f i . This encourages the network to organize the graph semantically and expands the receptive field of the local neighborhood by grouping together similar points in the feature space. Evaluation. In this section, we evaluate our models using AdaptConv for point cloud classification, part segmentation and indoor segmentation tasks. Detailed network architectures and comparisons are provided. Classification. Data. We evaluate our model on the ModelNet40 [47] dataset for point cloud classification. This dataset contains 12,311 meshed CAD models from 40 categories, of which 9,843 models are used for training and 2,468 models for testing. We follow the experimental setting of [27]. 1024 points are uniformly sampled for each object and we only use the (x, y, z) coordinates of the sampled points as input.
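The dynamic graph update used in the classification network above — recomputing each layer's neighborhoods by feature similarity rather than spatial position, following [44] — amounts to a k-NN search in feature space. A minimal sketch with random stand-in features (sizes illustrative, brute-force distances):

```python
import numpy as np

def feature_knn(feats, k):
    """Edge set E_l: for each point i, the k points whose features are closest to f_i."""
    # pairwise squared Euclidean distances in feature space
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # exclude self-loops
    return np.argsort(d2, axis=1)[:, :k]    # neighborhood indices N(i), shape (N, k)

feats = np.random.default_rng(1).normal(size=(100, 64))  # per-point features in some layer
nbrs = feature_knn(feats, k=20)
print(nbrs.shape)  # (100, 20)
```

Because the distances are computed on the current layer's features, the same routine yields a different graph at every layer as the features evolve.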
The data augmentation procedure includes shifting, scaling and perturbing the points. Network configuration. The network architecture is shown in Fig. 3. Following [44], we recompute the graph based on feature similarity in each layer. The neighborhood size k is set to 20 for all layers. Shortcut connections are included, and one shared fully-connected layer (1024) is applied to aggregate the multi-scale features. The global feature is obtained using a max-pooling function. All layers use LeakyReLU and batch normalization. We use the SGD optimizer with momentum set to 0.9. The initial learning rate is 0.1 and decays to 0.001 using cosine annealing [23]. The batch size is set to 32 for all training models. We use a PyTorch [26] implementation and train the network on an RTX 2080 Ti GPU. The hyperparameters are chosen in a similar way for the other tasks. Results. We show the results for classification in Tab. 1. The evaluation metrics on this dataset are the mean class accuracy (mAcc) and the overall accuracy (OA). Our model achieves the best scores on this dataset. For a clear comparison, we show the input data types and the number of points for each method. Our AdaptConv only takes the point coordinates as input, with a relatively small size of 1k points, and already outperforms other methods that use larger inputs. Part segmentation. Data. We further test our model on the part segmentation task on the ShapeNetPart dataset [50]. This dataset contains 16,881 shapes from 16 categories, with 14,006 for training and 2,874 for testing. Each point is annotated with one label from 50 parts, and each point cloud contains 2-6 parts. We follow the experimental setting of [29] and use their provided data for benchmarking purposes. 2,048 points are sampled from each shape. The input attributes include the point normals in addition to the 3D coordinates. Network configuration.
Following [27], we include a one-hot vector representing the category type for each point. It is stacked with the point-wise features to compute the segmentation results. Other training parameters are set as in our classification task. Note that we use spatial positions (without normals) as ∆x ij , as discussed in Sec. 3.2. Other choices are evaluated later in Sec. 4.4. Results. We report the mean class IoU (mcIoU) and mean instance IoU (mIoU) in Tab. 2. Following the evaluation scheme of [27], the IoU of a shape is computed by averaging the IoU of each part. The mean IoU (mIoU) is computed by averaging the IoUs of all testing instances. The class IoU (mcIoU) is the mean IoU over all shape categories. We also show the class-wise segmentation results. Our model achieves state-of-the-art performance compared with other methods.
Table 3. Ablation studies on ShapeNetPart dataset for part segmentation.
Indoor scene segmentation. Data. Our third experiment shows the semantic segmentation performance of our model on the S3DIS dataset [1]. This dataset contains 3D RGB point clouds from six indoor areas of three different buildings, covering a total of 271 rooms. Each point is annotated with one semantic label from 13 categories. For a common evaluation protocol [35,27,15], we choose Area 5 as the test set, which is not in the same building as the other areas. Real scene segmentation. The large-scale indoor datasets reveal more challenges, covering larger scenes in a real-world environment with much more noise and outliers. Thus, we follow the experimental settings of KPConv [36] and train the network using randomly sampled clouds in spheres. The subclouds contain more points of varying sizes and are stacked into batches for training. In the test stage, spheres are uniformly picked in the scenes, and we ensure that each point is tested several times using a voting scheme. The input point attributes include the RGB colors and the original heights. Results.
We report the mean classwise intersection over union (mIoU), mean classwise accuracy (mAcc) and overall accuracy (OA) in Tab. 4. The IoU of each class is also provided. The proposed AdaptConv outperforms the state of the art in most of the categories, which further demonstrates the effectiveness of adaptive convolutions over fixed kernels. The qualitative results are visualized in Fig. 4, where we show rooms from different areas of the building. Our method can correctly detect less obvious edges of, e.g., pictures and boards on the wall. Ablation studies. In this section, we explain some of the architecture choices used in our network, and demonstrate the effectiveness of AdaptConv compared to several ablation networks. Adaptive convolution vs fixed kernels. We compare our AdaptConv with fixed-kernel convolutions, including methods using the attention mechanism and standard graph convolution (DGCNN [41]), as discussed in the introduction.
Table 4. Semantic segmentation results on S3DIS dataset evaluated on Area 5. We report the mean classwise IoU (mIoU), mean classwise accuracy (mAcc) and overall accuracy (OA). The IoU of each class is also provided.
We train these models on the ShapeNetPart dataset for segmentation, and design several ablation networks by replacing AdaptConv layers with fixed-kernel layers while keeping the rest of the architecture the same. Specifically, [38] assigns attentional weights to different neighboring points and [41] further designs a channel-wise attentional function. We use their layers and denote these two ablations as Attention Point and Attention Channel in Tab. 3, respectively. We only replace the AdaptConv layers in our network, and the feature inputs ∆f ij are the same as in our model. Besides, we also show the result of using standard graph convolutions (GraphConv), which can be seen as a similar version of DGCNN [44]. From the comparison, we see that our method achieves better results than the fixed-kernel graph convolutions.
In AdaptConv, the adaptive kernel is generated from the feature input ∆f ij and subsequently convolved with the corresponding ∆x ij . Note that, in our experiments, ∆x ij corresponds to the (x, y, z) spatial coordinates of the points. We have discussed several other choices of ∆x ij in Eq. 2 in Sec. 3.2, which can be evaluated by designing the following ablations: • Feature -In Eq. 2, we convolve the adaptive kernel ê ijm with the current point features. That is, ∆x ij is replaced with ∆f ij and the kernel function is g m : R 2D → R 2D . This makes the kernel learn to adapt to the features from the previous layer and extract feature relations. • Initial attributes -The point normals (n x , n y , n z ) are included in the part segmentation task on ShapeNetPart, leading to 6-dimensional initial feature attributes for each point. Thus, we design three ablations where we use only spatial inputs (Ours), only normal inputs (Normal) and both of them (Initial attributes). The kernel function is modified correspondingly. The resulting IoU scores are shown in Tab. 3. As one can see, (x, y, z) is the most critical initial attribute (probably the only attribute) in point clouds; thus it is recommended to use it in the convolution with adaptive kernels. Although it achieves a promising result, the computational cost of the Feature ablation is extremely high, since the network expands heavily when it is convolved with a high-dimensional feature. Robustness test. We further evaluate the robustness of our model to point cloud density and noise perturbation on ModelNet40 [47]. We compare our AdaptConv with several other graph convolutions, as discussed in Sec. 4.4. All the networks are trained with 1k points and the neighborhood size is set to k = 20. In order to test the influence of point cloud density, varying numbers of points are randomly dropped out during testing.
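The density test just described can be sketched as random test-time dropout of points. This is a toy illustration: `drop_points` is a hypothetical helper and the cloud is random, not part of the actual evaluation pipeline.

```python
import numpy as np

def drop_points(cloud, keep, rng):
    """Simulate a sparser test-time input by randomly keeping `keep` of the points."""
    idx = rng.choice(len(cloud), size=keep, replace=False)
    return cloud[idx]

rng = np.random.default_rng(2)
cloud = rng.normal(size=(1024, 3))           # a stand-in point cloud of 1k points
sparse_versions = [drop_points(cloud, n, rng) for n in (1024, 768, 512, 256)]
print([c.shape for c in sparse_versions])    # [(1024, 3), (768, 3), (512, 3), (256, 3)]
```

Each progressively sparser version would then be fed to the trained classifier to trace an accuracy-versus-density curve like the one in Fig. 5.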
For the noise test, we introduce additional Gaussian noise with standard deviations set relative to the point cloud radius. From Fig. 5, we can see that our method is robust to missing data and noise, thanks to the adaptive kernel, in which the structural connections can be extracted dynamically in a sparser area. Also, we examine the influence of different numbers k of nearest neighboring points in Tab. 5. We choose several typical sizes for testing. Reducing the number of neighboring points leads to a lower computational cost, while the performance degrades due to the limited receptive field. Our network still achieves a promising result when k is reduced to 5. On the other hand, at a given point density, a larger k does not improve the performance, since the local information dilutes within a larger neighborhood. Efficiency. To compare the complexity of our model with previous state-of-the-art methods, we show the numbers of parameters and the corresponding results of the networks in Tab. 6. These models are based on ModelNet40 for the classification task. From the table, we see that our model achieves the best performance of 93.4% overall accuracy while the model size is relatively small. Compared with DGCNN [44], which can be seen as a standard graph convolution version in our ablation studies, the proposed adaptive kernel performs better while being efficient. Visualization and learned features. To achieve a deeper understanding of AdaptConv, we explore the feature relations in several intermediate layers of the network to see how AdaptConv can distinguish points with similar spatial inputs. In this experiment, we train our model on the ShapeNetPart dataset for segmentation. In Fig. 6, two target points (blue and green stars in the first and second rows, respectively) are selected, which belong to different parts of the object. We then compute the Euclidean distances to other points in the feature space, and visualize them by coloring the points with similar learned features in red.
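The feature-space distances underlying this visualization are straightforward to compute. A minimal sketch with random stand-in features (in practice, the features would come from a trained intermediate layer of the network):

```python
import numpy as np

rng = np.random.default_rng(4)
feats = rng.normal(size=(2048, 128))   # stand-in per-point features from an intermediate layer
target = 7                             # index of a selected target point (the "star" in Fig. 6)

# Euclidean distance from the target's feature vector to every point's feature vector
dist = np.linalg.norm(feats - feats[target], axis=1)
order = np.argsort(dist)               # nearest-in-feature-space first (colored red in Fig. 6)
print(order[0], dist[order[0]])        # the target itself, at distance 0.0
```

Points early in `order` are those with the most similar learned features, regardless of how far away they lie in 3D space — which is what makes the non-local behavior in Fig. 6 visible.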
We can see that, although the two points are spatially close, our network can capture their different geometric characteristics and segment them properly. Also, from the second row of Fig. 6, points belonging to the same semantic part (the wings) share similar features even though they may not be spatially close. This shows that our model can extract valuable information in a non-local manner. Conclusion. In this paper, we propose a novel adaptive graph convolution (AdaptConv) for 3D point clouds. The main contribution of our method lies in the adaptive kernel in the convolution, which is dynamically generated according to the point features. Instead of using a fixed kernel that captures correspondences indistinguishably between points, our AdaptConv can produce learned features that are more flexible to the geometric structures of shapes. We have applied AdaptConv to train end-to-end deep networks for several point cloud analysis tasks, outperforming the state of the art on several public datasets. Further, AdaptConv can be easily integrated into existing graph CNNs to improve their performance by simply replacing the existing kernels with the adaptive kernels. Figure 1. Illustration of adaptive kernels and fixed kernels in the convolution. (a) The standard graph convolution applies a fixed/isotropic kernel (black arrow) to compute features for each point indistinguishably. (b) Based on these features, several attentional weights a i are assigned to determine their importance. (c) Differently, AdaptConv generates an adaptive kernel ê i that is unique to the learned features of each point. Figure 2. Illustration of AdaptConv processed in the neighborhood of a target point x i . An adaptive kernel ê ijm is generated from the feature input ∆f ij of a pair of points on the edge, which is then convolved with the corresponding spatial input ∆x ij . Concatenating h ijm of all dimensions yields the edge feature h ij .
Finally, the output feature f i of the central point is obtained through a pooling function. AdaptConv differs from other graph convolutions in that the convolution kernel is unique for each pair of points.
Figure 4. Visualization of semantic segmentation results on the S3DIS dataset. We show the input point cloud, and labelled points mapped to RGB colors.
Figure 5. Robustness test on ModelNet40 for classification. GraphConv indicates the standard graph convolution network. Attention indicates the ablation where we replace the AdaptConv layers with graph attention layers (point-wise
Figure 6. Visualization of the Euclidean distances between two target points (blue and green stars) and other points in the feature space (red: near, yellow: far).
Method mcIoU mIoU airplane bag cap car chair earphone guitar knife lamp laptop motorbike mug pistol rocket skateboard table
Kd-Net [14] 77.4 82.3 80.1 74.6 74.3 70.3 88.6 73.5 90.2 87.2 81.0 94.9 87.4 86.7 78.1 51.8 69.9 80.3
PointNet [27] 80.4 83.7 83.4 78.7 82.5 74.9 89.6 73.0 91.5 85.9 80.8 95.3 65.2 93.0 81.2 57.9 72.8 80.6
PointNet++ [29] 81.9 85.1 82.4 79.0 87.7 77.3 90.8 71.8 91.0 85.9 83.7 95.3 71.6 94.1 81.3 58.7 76.4 82.6
SO-Net [18] 81.0 84.9 82.8 77.8 88.0 77.3 90.6 73.5 90.7 83.9 82.8 94.8 69.1 94.2 80.9 53.1 72.9 83.0
DGCNN [44] 82.3 85.2 84.0 83.4 86.7 77.8 90.6 74.7 91.2 87.5 82.8 95.7 66.3 94.9 81.1 63.5 74.5 82.6
PointCNN [19] - 86.1 84.1 86.4 86.0 80.8 90.6 79.7 92.3 88.4 85.3 96.1 77.2 95.3 84.2 64.2 80.0 83.0
PointASNL [49] - 86.1 84.1 84.7 87.9 79.7 92.2 73.7 91.0 87.2 84.2 95.8 74.4 95.2 81.0 63.0 76.3 83.2
3D-GCN [21] 82.1 85.1 83.1 84.0 86.6 77.5 90.3 74.1 90.9 86.4 83.8 95.6 66.8 94.8 81.3 59.6 75.7 82.8
KPConv [36] 85.1 86.4 84.6 86.3 87.2 81.1 91.1 77.8 92.6 88.4 82.7 96.2 78.1 95.8 85.4 69.0 82.0 83.6
Ours 83.4 86.4 84.8 81.2 85.7 79.7 91.2 80.9 91.9 88.6 84.8 96.2 70.7 94.9 82.3 61.0 75.9 84.2
Table 2.
Part segmentation results on ShapeNetPart dataset, evaluated as the mean class IoU (mcIoU) and mean instance IoU (mIoU).
Ablations mcIoU(%) mIoU(%)
GraphConv 81.9 85.5
Attention Point 78.0 83.3
Attention Channel 77.9 83.0
Feature 82.2 85.9
Normal 83.2 86.2
Initial attributes 83.2 86.1
Ours 83.4 86.4
Method OA mAcc mIoU ceiling floor wall beam column window door table chair sofa bookcase board clutter
PointNet [27] - 49.0 41.1 88.8 97.3 69.8 0.1 3.9 46.3 10.8 59.0 52.6 5.9 40.3 26.4 33.2
SegCloud [35] - 57.4 48.9 90.1 96.1 69.9 0.0 18.4 38.4 23.1 70.4 75.9 40.9 58.4 13.0 41.6
PointCNN [19] 85.9 63.9 57.3 92.3 98.2 79.4 0.0 17.6 22.8 62.1 74.4 80.6 31.7 66.7 62.1 56.7
PCCN [43] - 67.0 58.3 92.3 96.2 75.9 0.3 6.0 69.5 63.5 66.9 65.6 47.3 68.9 59.1 46.2
PointWeb [52] 87.0 66.6 60.3 92.0 98.5 79.4 0.0 21.1 59.7 34.8 76.3 88.3 46.9 69.3 64.9 52.5
HPEIN [11] 87.2 68.3 61.9 91.5 98.2 81.4 0.0 23.3 65.3 40.0 75.5 87.7 58.5 67.8 65.6 49.4
GAC [41] 87.7 - 62.8 92.2 98.2 81.9 0.0 20.3 59.0 40.8 78.5 85.8 61.7 70.7 74.6 52.8
KPConv [36] - 72.8 67.1 92.8 97.3 82.4 0.0 23.9 58.0 69.0 81.5 91.0 75.4 75.3 66.7 58.9
PointASNL [49] 87.7 68.5 62.6 94.3 98.4 79.1 0.0 26.7 55.2 66.2 83.3 86.8 47.6 68.3 56.4 52.1
Ours 90.0 73.2 67.9 93.9 98.4 82.2 0.0 23.9 59.1 71.3 91.5 81.2 75.5 74.9 72.1 58.6
). From the comparison, we can see that our model is more robust to point density and noise perturbation.
Table 5. Results of our classification network with different numbers k of nearest neighbors.
Number k mAcc(%) OA(%)
5 89.4 92.8
10 90.7 93.2
20 90.7 93.4
40 90.4 93.0
Table 6. The number of parameters and overall accuracy of different models.
Method #parameters OA(%)
PointNet [27] 3.5M 89.2
PointNet++ [29] 1.48M 91.9
DGCNN [44] 1.81M 92.9
KPConv [36] 14.3M 92.9
Ours 1.85M 93.4
References
[1] Iro Armeni, Ozan Sener, Amir R. Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3D semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1534-1543, 2016.
[2] Matan Atzmon, Haggai Maron, and Yaron Lipman. Point convolutional neural networks by extension operators. arXiv preprint arXiv:1803.10091, 2018.
[3] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[4] Kent Fujiwara and Taiichi Hashimoto. Neural implicit embedding for point cloud analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11734-11743, 2020.
[5] Jonas Gehring, Michael Auli, David Grangier, and Yann N. Dauphin. A convolutional encoder model for neural machine translation. arXiv preprint arXiv:1611.02344, 2016.
[6] Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R. Martin, and Shi-Min Hu. PCT: Point cloud transformer. arXiv preprint arXiv:2012.09688, 2020.
[7] William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. arXiv preprint arXiv:1706.02216, 2017.
[8] Binh-Son Hua, Minh-Khoi Tran, and Sai-Kit Yeung. Pointwise convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 984-993, 2018.
[9] Qiangui Huang, Weiyue Wang, and Ulrich Neumann. Recurrent slice networks for 3D segmentation of point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2626-2635, 2018.
[10] Xu Jia, Bert De Brabandere, Tinne Tuytelaars, and Luc V. Gool. Dynamic filter networks. Advances in Neural Information Processing Systems, 29:667-675, 2016.
[11] Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, and Jiaya Jia. Hierarchical point-edge interaction network for point cloud semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10433-10441, 2019.
[12] Evangelos Kalogerakis, Melinos Averkiou, Subhransu Maji, and Siddhartha Chaudhuri. 3D shape segmentation with projective convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3779-3788, 2017.
[13] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
[14] Roman Klokov and Victor Lempitsky. Escape from cells: Deep kd-networks for the recognition of 3D point cloud models. In Proceedings of the IEEE International Conference on Computer Vision, pages 863-872, 2017.
[15] Loic Landrieu and Martin Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4558-4567, 2018.
[16] Truc Le and Ye Duan. PointGrid: A deep network for 3D shape understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9204-9214, 2018.
[17] Huan Lei, Naveed Akhtar, and Ajmal Mian. Spherical kernel for efficient graph convolution on 3D point clouds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
[18] Jiaxin Li, Ben M. Chen, and Gim Hee Lee. SO-Net: Self-organizing network for point cloud analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9397-9406, 2018.
[19] Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. PointCNN: Convolution on χ-transformed points. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 828-838, 2018.
[20] Ming Liang, Bin Yang, Shenlong Wang, and Raquel Urtasun. Deep continuous fusion for multi-sensor 3D object detection. In Proceedings of the European Conference on Computer Vision, pages 641-656, 2018.
[21] Zhi-Hao Lin, Sheng-Yu Huang, and Yu-Chiang Frank Wang. Convolution in the cloud: Learning deformable kernels in 3D graph convolution networks for point cloud analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1800-1809, 2020.
[22] Yongcheng Liu, Bin Fan, Shiming Xiang, and Chunhong Pan. Relation-shape convolutional neural network for point cloud analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8895-8904, 2019.
[23] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
[24] Daniel Maturana and Sebastian Scherer. VoxNet: A 3D convolutional neural network for real-time object recognition. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 922-928. IEEE, 2015.
[25] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5115-5124, 2017.
[26] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703, 2019.
[27] Charles Ruizhongtai Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 77-85, 2017.
[28] Charles R. Qi, Hao Su, Matthias Nießner, Angela Dai, Mengyuan Yan, and Leonidas J. Guibas. Volumetric and multi-view CNNs for object classification on 3D data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5648-5656, 2016.
[29] Charles R. Qi, Li Yi, Hao Su, and Leonidas J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. arXiv preprint arXiv:1706.02413, 2017.
[30] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. OctNet: Learning deep 3D representations at high resolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3577-3586, 2017.
[31] Radu Bogdan Rusu, Zoltan Csaba Marton, Nico Blodow, Mihai Dolha, and Michael Beetz. Towards 3D point cloud based object maps for household environments. Robotics and Autonomous Systems, 56(11):927-941, 2008.
[32] Yiru Shen, Chen Feng, Yaoqing Yang, and Dong Tian. Mining point cloud local structures by kernel correlation and graph pooling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4548-4557, 2018.
[33] Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3693-3702, 2017.
[34] Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik Learned-Miller. Multi-view convolutional neural networks for 3D shape recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 945-953, 2015.
[35] Lyne Tchapmi, Christopher Choy, Iro Armeni, JunYoung Gwak, and Silvio Savarese. SEGCloud: Semantic segmentation of 3D point clouds. In 2017 International Conference on 3D Vision (3DV), pages 537-547. IEEE, 2017.
[36] Hugues Thomas, Charles R. Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, François Goulette, and Leonidas J. Guibas. KPConv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6411-6420, 2019.
[37] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
[38] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
[39] Nitika Verma, Edmond Boyer, and Jakob Verbeek. FeaStNet: Feature-steered graph convolutions for 3D shape analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2598-2606, 2018.
[40] Chu Wang, Babak Samari, and Kaleem Siddiqi. Local spectral graph convolution for point set feature learning. In Proceedings of the European Conference on Computer Vision (ECCV), pages 52-66, 2018.
[41] Lei Wang, Yuchun Huang, Yaolin Hou, Shenman Zhang, and Jie Shan. Graph attention convolution for point cloud semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10296-10305, 2019.
[42] Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, and Xin Tong. O-CNN: Octree-based convolutional neural networks for 3D shape analysis. ACM Transactions on Graphics (TOG), 36(4):1-11, 2017.
[43] Shenlong Wang, Simon Suo, Wei-Chiu Ma, Andrei Pokrovsky, and Raquel Urtasun. Deep parametric continuous convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2589-2597, 2018.
[44] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, and Justin M. Solomon. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics (TOG), 38(5):1-12, 2019.
Pay less attention with lightweight and dynamic convolutions.
Felix Wu, Angela Fan, Alexei Baevski, Michael Yann N Dauphin, Auli, arXiv:1901.10430arXiv preprintFelix Wu, Angela Fan, Alexei Baevski, Yann N Dauphin, and Michael Auli. Pay less attention with lightweight and dy- namic convolutions. arXiv preprint arXiv:1901.10430, 2019. 2 Pointconv: Deep convolutional networks on 3d point clouds. Wenxuan Wu, Zhongang Qi, Li Fuxin, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionWenxuan Wu, Zhongang Qi, and Li Fuxin. Pointconv: Deep convolutional networks on 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, pages 9621-9630, 2019. 3 3d shapenets: A deep representation for volumetric shapes. Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, Jianxiong Xiao, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognition57Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Lin- guang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1912-1920, 2015. 2, 5, 7 Spidercnn: Deep learning on point sets with parameterized convolutional filters. Yifan Xu, Tianqi Fan, Mingye Xu, Long Zeng, Yu Qiao, Proceedings of the European Conference on Computer Vision (ECCV). the European Conference on Computer Vision (ECCV)25Yifan Xu, Tianqi Fan, Mingye Xu, Long Zeng, and Yu Qiao. Spidercnn: Deep learning on point sets with parameterized convolutional filters. In Proceedings of the European Con- ference on Computer Vision (ECCV), pages 87-102, 2018. 2, 3, 5 Pointasnl: Robust point clouds processing using nonlocal neural networks with adaptive sampling. 
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern Recognition67Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, and Shuguang Cui. Pointasnl: Robust point clouds processing using nonlocal neural networks with adaptive sampling. In Proceedings of the IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition, pages 5589-5598, 2020. 5, 6, 7 Alla Sheffer, and Leonidas Guibas. A scalable active framework for region annotation in 3d shape collections. Li Yi, G Vladimir, Duygu Kim, I-Chao Ceylan, Mengyan Shen, Hao Yan, Cewu Su, Qixing Lu, Huang, ACM Transactions on Graphics (ToG). 356Li Yi, Vladimir G Kim, Duygu Ceylan, I-Chao Shen, Mengyan Yan, Hao Su, Cewu Lu, Qixing Huang, Alla Shef- fer, and Leonidas Guibas. A scalable active framework for region annotation in 3d shape collections. ACM Transactions on Graphics (ToG), 35(6):1-12, 2016. 5 Shellnet: Efficient point cloud convolutional neural networks using concentric shells statistics. Zhiyuan Zhang, Binh-Son, Sai-Kit Hua, Yeung, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionZhiyuan Zhang, Binh-Son Hua, and Sai-Kit Yeung. Shellnet: Efficient point cloud convolutional neural networks using concentric shells statistics. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1607- 1616, 2019. 2 Pointweb: Enhancing local neighborhood features for point cloud processing. Hengshuang Zhao, Li Jiang, Chi-Wing Fu, Jiaya Jia, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionHengshuang Zhao, Li Jiang, Chi-Wing Fu, and Jiaya Jia. Pointweb: Enhancing local neighborhood features for point cloud processing. 
In Proceedings of the IEEE/CVF Con- ference on Computer Vision and Pattern Recognition, pages 5565-5573, 2019. 7 . Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, Vladlen Koltun, arXiv:2012.09164Point transformer. arXiv preprintHengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, and Vladlen Koltun. Point transformer. arXiv preprint arXiv:2012.09164, 2020. 2
[]
[ "Non-Integrability of the Trapped Ionic System II", "Non-Integrability of the Trapped Ionic System II" ]
[ "Georgi Georgiev \nFaculty of Mathematics and Informatics\nSofia University \"St. Kl. Ohridski\"\n5 James Bourchier Blvd1164SofiaBulgaria\n" ]
[ "Faculty of Mathematics and Informatics\nSofia University \"St. Kl. Ohridski\"\n5 James Bourchier Blvd1164SofiaBulgaria" ]
[]
The idea of this paper is to analyze the differences between the results in [1] and [4], to correct the errors in [1], and to withdraw the inappropriate comments in [1] about the results in [2]. It should also be noted that the inaccuracies in [2] have been removed.
10.2139/ssrn.4325734
[ "https://export.arxiv.org/pdf/2209.13810v6.pdf" ]
252,567,786
2209.13810
77cd9e77a08462740accd69aa73113e53c1fb2cd
Non-Integrability of the Trapped Ionic System II

Dec 2022

Georgi Georgiev
Faculty of Mathematics and Informatics, Sofia University "St. Kl. Ohridski", 5 James Bourchier Blvd, 1164 Sofia, Bulgaria

Keywords: Hamiltonian system, Meromorphic non-integrability, Variational equation, Monodromy group, Differential Galois group

1 Introduction

This paper is a continuation of [1], and for this reason we omit the introduction and the research background, referring to the original article (see also [2] and [5] for details). I will not review the classical and modern theory of meromorphic non-integrability here; I simply refer to [6], [21], [22], [26], [10], [11], [9], [8], [23], [27] and [7]. We study the two-dimensional model

H = (1/2)(p_r^2 + p_z^2) + Ar^2 + Bz^2 + Cz^3 + Dr^2 z + Ez^4 + Fr^2 z^2 + Gr^4,   (1.1)

where A, B, C, D, E, F and G are real constants, and we ask for which values of these constants an additional meromorphic integral of motion exists. Let us denote p := sqrt(1 + 4F/E), and let z_i, i = 1, ..., 4, be the roots of the polynomial Ez^4 + Cz^3 + Bz^2 + h = 0, where h is a constant (we suppose that z_i ≠ z_j for i ≠ j). The result of this article is as follows:

Theorem 1.
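As a quick sanity check (illustrative only, not part of the paper), Hamilton's equations for the model (1.1) can be generated symbolically; the sketch below assumes nothing beyond the Hamiltonian as written above.

```python
# Sketch: derive Hamilton's equations for the model (1.1) with sympy
# and inspect the force terms on the right-hand sides.
import sympy as sp

r, z, p_r, p_z = sp.symbols('r z p_r p_z')
A, B, C, D, E, F, G = sp.symbols('A B C D E F G')

H = sp.Rational(1, 2)*(p_r**2 + p_z**2) + A*r**2 + B*z**2 + C*z**3 \
    + D*r**2*z + E*z**4 + F*r**2*z**2 + G*r**4

# Hamilton's equations: rdot = dH/dp_r, p_r dot = -dH/dr, etc.
r_dot, z_dot = sp.diff(H, p_r), sp.diff(H, p_z)
pr_dot, pz_dot = -sp.diff(H, r), -sp.diff(H, z)

print(sp.expand(pr_dot))  # -2*A*r - 2*D*r*z - 2*F*r*z**2 - 4*G*r**3
print(sp.expand(pz_dot))  # -2*B*z - 3*C*z**2 - D*r**2 - 4*E*z**3 - 2*F*r**2*z
```

These expressions agree term by term with the equations of motion quoted later as (2.1).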
a) Assume that p ∉ Q; then the system (1.1) has no additional analytic first integral.

b) Let p ∈ Q. If N(p) ≥ 4, then the system (1.1) has no additional meromorphic first integral.

c) Let p ∈ Q and N(p) ≤ 3. Then the system (1.1) has no additional meromorphic first integral if at least one of the following conditions is true:

c.0) if V_0 = 0, (E = F = G = 0), then
c.4) if V_3 = r^4 + 6r^2 z^2 + z^4, (E = G, F = 6E), then
c.41) if D ≠ 0, C/D ∉ {1/3, 2, 16/3}, or;
c.42) if D ≠ 0, C/D = 1/3, then A ≠ B;
c.43) if D ≠ 0, C/D = 2, then D^2 ≠ B − 2A;
c.44) if D ≠ 0, C/D = 16/3, then 16A ≠ 5B;
c.45) if D ≠ 0, C/D = 16/3, 16A = 5B, then 20A ≠ −63D^2;
c.5) if V_4 = z^4 + (1/4)α r^4, (E/G = 4/α, α ≠ 0), then D ≠ 0;
c.6) if V_4 = r^4 + (1/4)α z^4, (E/G = α/4, α ≠ 0), then D ≠ 0;
c.7) if V_5 = 16r^4 + 12r^2 z^2 + z^4, (G = 16E, 3G = 4F), then
c.71) if D ≠ 0, C/D ∉ {1/3, 16/3}, or;
c.72) if D ≠ 0, C/D = 1/3, then A ≠ B;
c.73) if D ≠ 0, C/D = 16/3, 16A ≠ 5B;
c.74) if D ≠ 0, C/D = 16/3, 16A = 5B, then 16A ≠ −15D^2;
c.75) if D = 0 and C ≠ 0, then A ≠ 4B;
c.8) if V_5 = 16z^4 + 12r^2 z^2 + r^4, (E = 16G, 3E = 4F), then
c.81) if D ≠ 0, C/D ∉ {1/3, 2, 16/3}, or;
c.82) if D ≠ 0, C/D = 1/3, then A ≠ B;
c.83) if D ≠ 0, C/D = 1/3, A = B, then 8A ≠ 5D^2;
c.84) if D ≠ 0, C/D = 2, then B ≠ 4A;
c.85) if D ≠ 0, C/D = 16/3, 16A ≠ 5B;
c.86) if D ≠ 0, C/D = 16/3, 16A = 5B, then 16A ≠ −15D^2;
c.87) if D = 0 and C ≠ 0, then A ≠ 0 and A ≠ 3B;
c.9) if V_6 = z^4 + 6r^2 z^2 + 8r^4, (G = 8E, F = 6E), then
c.91) if D ≠ 0, or;
c.92) if D = 0 and C ≠ 0, A ≠ 4B;
c.10) if V_6 = r^4 + 6r^2 z^2 + 8z^4, (F = 6G, 3E = 4F), then
c.101) if D ≠ 0, or;
c.102) if D = 0 and C ≠ 0, 4A ≠ B.

Here N(r) is the positive denominator of the irreducible fraction r ∈ Q, and V_i, i = 1, ..., 6, denote the integrable homogeneous potentials of the fourth degree (V_0 stands for the case when there is no such part). It should be noted that the notations in this paper are not quite the same as in [9].
The study of exactly these specific integrable potentials does not restrict the generality of the problem, by Remark 1 in [8]. This is probably the longest theorem that I have ever formulated; unfortunately, this is what the study of the problem produces. The idea of the proof is to test for integrability the sums of the degree-two potential (in our case integrable) with all integrable potentials of degree three, and then to add to the integrable sums all possible integrable potentials of degree four and test for integrability again. The homogeneous integrable potentials of degrees three and four have been studied exhaustively in [10], [11], [12], [8], [9], [14], [13].

2 VE1

The Hamiltonian equations are

ṙ = p_r,  ṗ_r = −(2Ar + 2Drz + 2Frz^2 + 4Gr^3),
ż = p_z,  ṗ_z = −(2Bz + 3Cz^2 + Dr^2 + 4Ez^3 + 2Fr^2 z)   (2.1)

(here, as usual, ˙ = d/dt). In our study we suppose that (F, D) ≠ (0, 0): if we assume the opposite, the variables in the system separate, i.e. the system is integrable. First we find a partial solution of (2.1). Putting r = p_r = 0 in (2.1) we have z̈ = −(2Bz + 3Cz^2 + 4Ez^3); multiplying by ż and integrating in the time t we obtain

ż^2 = −2(Ez^4 + Cz^3 + Bz^2 + h),   (2.2)

where h is a constant. Following the procedure of the Ziglin and Morales-Ramis theory, we thus have the invariant manifold (r, p_r, z, p_z) = (0, 0, z, ż), where z is a solution of (2.2). According to the theory, the solution of (2.2) must be a rational function of the Weierstrass ℘-function, but this is not important for us right now. For the variational equations (VE) we write ξ_11 = dr, η_11 = dp_r, ξ_12 = dz, η_12 = dp_z, and obtain

ξ̈_11 = −2(A + Dz + Fz^2) ξ_11,
ξ̈_12 = −2(B + 3Cz + 6Ez^2) ξ_12.   (2.3)

Next we change the variable in (2.3) by ξ_1i(t) = ξ_1i(z(t)), where z(t) is a solution of (2.2), and we have dξ_1i(t)/dt = (dξ_1i/dz) ·
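The reduction (2.2) can be verified mechanically: the sketch below (an illustration, assuming only (2.1) restricted to r = p_r = 0) checks that ż^2 + 2(Ez^4 + Cz^3 + Bz^2 + h) is conserved along the reduced flow.

```python
# Check that (2.2) is a first integral of the reduced equation
# z'' = -(2Bz + 3Cz^2 + 4Ez^3) on the invariant manifold r = p_r = 0.
import sympy as sp

z, v, B, C, E, h = sp.symbols('z v B C E h')   # v stands for zdot

I = v**2 + 2*(E*z**4 + C*z**3 + B*z**2 + h)    # (2.2) rewritten as I = 0
acc = -(2*B*z + 3*C*z**2 + 4*E*z**3)           # reduced acceleration zddot

# derivative of I along the flow (zdot = v, vdot = acc)
dI = sp.diff(I, z)*v + sp.diff(I, v)*acc
assert sp.expand(dI) == 0   # I is conserved; h labels its level sets
```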
(dz(t)/dt), for i = 1, 2, and

d²ξ_11/dt² = (d²ξ_11/dz²)(dz/dt)² + (dξ_11/dz)(d²z/dt²),

which gives

−2(Ez^4 + Cz^3 + Bz^2 + h) d²ξ_11/dz² − (4Ez^3 + 3Cz^2 + 2Bz) dξ_11/dz + 2(A + Dz + Fz^2) ξ_11 = 0,

and, in the same way,

−2(Ez^4 + Cz^3 + Bz^2 + h) d²ξ_12/dz² − (4Ez^3 + 3Cz^2 + 2Bz) dξ_12/dz + 2(B + 3Cz + 6Ez^2) ξ_12 = 0.

If we denote ′ = d/dz, we obtain for the VE two Fuchsian linear differential equations:

ξ′′_11 + [(4Ez^3 + 3Cz^2 + 2Bz) / (2(Ez^4 + Cz^3 + Bz^2 + h))] ξ′_11 − [(A + Dz + Fz^2) / (Ez^4 + Cz^3 + Bz^2 + h)] ξ_11 = 0,

ξ′′_12 + [(4Ez^3 + 3Cz^2 + 2Bz) / (2(Ez^4 + Cz^3 + Bz^2 + h))] ξ′_12 − [(B + 3Cz + 6Ez^2) / (Ez^4 + Cz^3 + Bz^2 + h)] ξ_12 = 0.   (2.4)

The equations (2.4) are Fuchsian and have five regular singularities: z_i, i = 1, ..., 4, and z = ∞. Here we study the equations (2.4) for Liouvillian solutions; the existence of such solutions of a linear differential equation is equivalent to the solvability of the identity component of its Galois group. First we find conditions for branching of the solutions of these equations. We use Lyapunov's idea of proving non-integrability via the branching of the solutions of the variational equations around an appropriate partial solution: if the solutions of the VE branch, then the Hamiltonian system has no additional first integral (see [6] for details). In this section we assume that B ≠ 0 and E ≠ 0. We find the indicial equations at the singular points z_i and z = ∞. At z_i we have λ² − (1/2)λ = 0, with roots λ_1 = 0, λ_2 = 1/2; at z = ∞ we have ρ² − ρ − F/E = 0, with roots ρ_k = (1 ± sqrt(1 + 4F/E))/2, k = 1, 2. The solutions around the z_i have no branching, while near z = ∞ there is branching when ρ_k ∉ Q. This is an application of the Frobenius method: the fundamental system of solutions around a singular point is of the form Φ_j(z) = z^{λ_j} Ψ_j(z), where the Ψ_j(z) are (locally) holomorphic functions, j = 1, 2.
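The exponents at z = ∞ quoted above can be reproduced directly; the sketch below assumes only the indicial equation ρ² − ρ − F/E = 0 and checks the closed-form roots.

```python
# Roots of the indicial equation at z = infinity and their relation to
# p = sqrt(1 + 4F/E) (the difference of the two exponents).
import sympy as sp

rho, F, E = sp.symbols('rho F E')
roots = sp.solve(rho**2 - rho - F/E, rho)

# Vieta: the two exponents sum to 1 and multiply to -F/E,
# hence they equal (1 ± sqrt(1 + 4F/E))/2.
assert sp.simplify(roots[0] + roots[1] - 1) == 0
assert sp.simplify(roots[0]*roots[1] + F/E) == 0

p = sp.sqrt(1 + 4*F/E)
cand = (1 + p)/2
assert sp.simplify(cand**2 - cand - F/E) == 0
```

The difference of the two roots is exactly p, which is why branching at infinity is governed by whether p is rational.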
Then the solutions of (2.4) branch when λ_j ∉ Q, j = 1, 2; for z = ∞ we argue in the same way with ρ_k ∉ Q, k = 1, 2. It should be noted that if λ_j ∈ Q and ρ_k ∈ Q, we can also remove the branching by a standard change of variables; this change of variables does not affect the commutativity of the identity component of the Galois group. We have proved the following proposition.

Proposition 1. Let p = sqrt(1 + 4F/E) ∉ Q. Then the system (2.1) has no additional holomorphic first integral.

This proves a) of Theorem 1. Now we write F = ((p² − 1)/4)E, where p ∈ Q. For the first equation of (2.4) we put

c_1(z) := (4Ez^3 + 3Cz^2 + 2Bz) / (2(Ez^4 + Cz^3 + Bz^2 + h)),  c_2(z) := −(A + Dz + ((p² − 1)/4)Ez^2) / (Ez^4 + Cz^3 + Bz^2 + h).

In the terminology of [27] we get

a_∞ = lim_{z→∞} z c_1(z) = 2,  b_∞ = lim_{z→∞} z² c_2(z) = (1 − p²)/4,  Δ_∞ = sqrt((1 − a_∞)² − 4b_∞) = ±p,

and

a_i = lim_{z→z_i} (z − z_i) c_1(z) = 1/2,  b_i = lim_{z→z_i} (z − z_i)² c_2(z) = 0,  Δ_i = sqrt((1 − 1/2)² − 4·0) = 1/2.

Then we get t_i = 2cos(πΔ_i) = 2cos(π/2) = 0 and t_∞ = 2cos(πΔ_∞) = 2cos(πp), where the values Δ_i correspond to the points z_i, i = 1, ..., 4. The first result of our exposition is the following proposition:

Proposition 2. Let p = sqrt(1 + 4F/E) be rational. Then the system (2.1) has no additional meromorphic first integral when N(p) ≥ 4.

Proof: We obtain that t_∞ ∉ Q[t_1, t_2, t_3, t_4] = Q (see [27] for details). This finishes the proof of b) of Theorem 1.

Hamiltonian systems with homogeneous potentials. The study of the integrability of two-dimensional systems with homogeneous potentials of different degrees turns out to be a very interesting and important problem for many researchers (see [8], [9], [10], [11], [12], [13], [15], [16], [17], [18], [19], [20]). In this subsection we investigate how integrable potentials interact with each other, i.e.
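Proposition 2 rests on t_∞ = 2cos(πp) leaving Q once the denominator of p is at least 4. This can be probed with minimal polynomials (an illustration only, not the argument of [27]):

```python
# 2*cos(pi*p) for rational p: irrational when the denominator N(p) >= 4.
import sympy as sp

x = sp.symbols('x')
for p in (sp.Rational(1, 4), sp.Rational(3, 5), sp.Rational(2, 7)):
    t_inf = 2*sp.cos(sp.pi*p)
    # minimal polynomial over Q has degree > 1, so t_inf is irrational
    assert sp.degree(sp.minimal_polynomial(t_inf, x), x) > 1

# small denominators can land in Q, which is why N(p) <= 3 needs the
# separate case analysis of part c): e.g. p = 1/3 gives t_inf = 1.
assert 2*sp.cos(sp.pi/3) == 1
```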
when the sum of several integrable potentials is again an integrable potential. To proceed with the study of integrability in the two-dimensional problem, a different approach is needed, which is well developed in [8] and [9]: the problem reduces to Hamiltonian systems with homogeneous potentials, for which there is a well-developed research scheme. Let us explain this method and how we can use it for a limited number of integrable cases. We consider a Hamiltonian system

H = (1/2)(p_r² + p_z²) + V(r, z),   (2.5)

with potential V(r, z) = V_min(r, z) + ··· + V_med(r, z) + ··· + V_max(r, z), a sum of homogeneous potentials. Here V_min(r, z) (respectively V_med(r, z) and V_max(r, z)) denotes the homogeneous part of V(r, z) of smallest (some intermediate, and largest) degree. A potential V(r, z) is called integrable if its corresponding Hamiltonian system (2.5) is integrable. As noted in [11], [12] and [9], if V(r, z) is an integrable potential, then V_min(r, z), V_med(r, z) and V_max(r, z) are also integrable (for each V_med(r, z)). This observation gives us a chance to reduce the number of free parameters. In fact, we test for integrability sums proportional to integrable homogeneous potentials of degrees two, three and four: we start from the sum of integrable potentials of degrees two and three, and then add to them the integrable potentials of degree four. In our cases, at least for degree two (where the potential is integrable) and degree three, the potentials are not full as in the general case, which simplifies the study a little; for degree four, however, we are not so lucky. In our case V_min(r, z) = Ar² + Bz², V_med(r, z) = Cz³ + Dr²z, V_max(r, z) = Ez⁴ + Fr²z² + Gr⁴, and since V_min(r, z) is integrable, the possible integrable cases are those for which V_max(r, z) is integrable.
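The decomposition into homogeneous parts used here is easy to automate. The sketch below (an illustration for our V with its seven constants) extracts V_min, V_med, V_max and checks Euler's identity for the top part.

```python
# Split the potential into homogeneous components by total degree in (r, z).
import sympy as sp

r, z = sp.symbols('r z')
A, B, C, D, E, F, G = sp.symbols('A B C D E F G')
V = A*r**2 + B*z**2 + C*z**3 + D*r**2*z + E*z**4 + F*r**2*z**2 + G*r**4

def homogeneous_part(V, deg):
    """Collect the monomials of V of total degree `deg` in (r, z)."""
    return sp.Add(*[c*r**i*z**j
                    for (i, j), c in sp.Poly(V, r, z).terms() if i + j == deg])

V_min, V_med, V_max = (homogeneous_part(V, k) for k in (2, 3, 4))
assert sp.expand(V - V_min - V_med - V_max) == 0
# Euler's identity confirms V_max is homogeneous of degree 4
assert sp.expand(r*sp.diff(V_max, r) + z*sp.diff(V_max, z) - 4*V_max) == 0
```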
These potentials are fully investigated in [9], and in the notation of that paper the possible integrable cases of V_max(r, z) are V_0, V_1, V_3, V_4, V_5 and V_6. Let us consider these cases in the context of the fourth-degree potential in our problem. We start with the case c.0, V_0 := 0. The integrable homogeneous potentials of degree 3 in our case are z³ + 3r²z, 2z³ + r²z, and 16z³ + 3r²z (see [8] for details). The Hamiltonians in these cases are

H = (1/2)(p_r² + p_z²) + Ar² + Bz² + z³ + 3r²z,
H = (1/2)(p_r² + p_z²) + Ar² + Bz² + 2z³ + r²z,
H = (1/2)(p_r² + p_z²) + Ar² + Bz² + 16z³ + 3r²z;

the equations of motion for the first Hamiltonian are

ṙ = p_r,  ṗ_r = −(2Ar + 6rz),  ż = p_z,  ṗ_z = −(2Bz + 3r² + 3z²).   (2.6)

We find a partial solution of (2.6): putting r = p_r = 0 in (2.6) we obtain

z̈ = −(2Bz + 3z²),  and  ż² = −2(z³ + Bz² + h),   (2.7)

where h is a constant. Next we change the variable t to z in the NVE and obtain

ξ′′_11 + [(3z² + 2Bz) / (2(z³ + Bz² + h))] ξ′_11 − [(A + 3z) / (z³ + Bz² + h)] ξ_11 = 0.   (2.8)

The equation (2.8) is Fuchsian (a Lamé equation) with regular singular points at the roots z_i of z³ + Bz² + h = 0 (z_i ≠ z_j for i ≠ j) and at infinity. The indicial equation at the points z_i is λ² − (1/2)λ = 0, with roots λ_1 = 0, λ_2 = 1/2; at z = ∞ it is 2ρ² − ρ − 6 = 0, with roots ρ_1 = 2 and ρ_2 = −3/2. At infinity (changing x := 1/z) equation (2.8) takes the form

d²ξ_11/dx² + [(4hx³ + 2Bx + 1) / (2x(hx³ + Bx + 1))] dξ_11/dx − [(Ax + 3) / (x²(hx³ + Bx + 1))] ξ_11 = 0,   (2.9)

with local solutions at x = 0

ξ⁽¹⁾_11(x) = x^(−3/2) (1 + (−2A/5 + 9B/10)x + ((2/15)A² − (1/3)AB + (3/40)B)x² + ...),

together with a second solution corresponding to the exponent ρ_1 = 2. For ξ_12 one obtains

d²ξ_12/dx² + [(4hx³ + 2Bx + 1) / (2x(hx³ + Bx + 1))] dξ_12/dx − [(Bx + 3) / (x²(hx³ + Bx + 1))] ξ_12 = 0,   (2.11)

with local solutions at x = 0

ξ⁽¹⁾_12(x) = x^(−3/2) (1 + (B/2)x − (1/8)B²x² + ...),  ξ⁽²⁾_12(x) = x² (1 − (2B/3)x + (16/33)B²x² + ...).
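The passage from (2.8) to (2.9) under x = 1/z can be verified symbolically; in the sketch below C1 and C2 are the transformed coefficient functions of the equation for η(x) = ξ(1/x).

```python
# Verify the change of variables z = 1/x taking (2.8) into (2.9).
import sympy as sp

x, A, B, h = sp.symbols('x A B h')

P  = lambda s: s**3 + B*s**2 + h
c1 = lambda s: (3*s**2 + 2*B*s)/(2*P(s))   # xi' coefficient in (2.8)
c2 = lambda s: -(A + 3*s)/P(s)             # xi  coefficient in (2.8)

# under z = 1/x, eta(x) = xi(1/x) satisfies eta'' + C1 eta' + C2 eta = 0 with
C1 = 2/x - c1(1/x)/x**2
C2 = c2(1/x)/x**4

Q = h*x**3 + B*x + 1
assert sp.simplify(C1 - (4*h*x**3 + 2*B*x + 1)/(2*x*Q)) == 0   # matches (2.9)
assert sp.simplify(C2 + (A*x + 3)/(x**2*Q)) == 0               # matches (2.9)
```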
For VE2 we perform the same transformation of variables t → z → x and obtain

d²ξ_11/dx² + [(4hx³ + 2Bx + 1) / (2x(hx³ + Bx + 1))] dξ_11/dx − [(Ax + 3) / (x²(hx³ + Bx + 1))] ξ_11 = (ξ_11 ξ_12) / (2x(hx³ + Bx + 1)),

d²ξ_12/dx² + [(4hx³ + 2Bx + 1) / (2x(hx³ + Bx + 1))] dξ_12/dx − [(Bx + 3) / (x²(hx³ + Bx + 1))] ξ_12 = (3ξ²_11 + 3ξ²_12) / (2x(hx³ + Bx + 1)).

The necessary condition for integrability is the absence of a residue in the expressions ξ_11 · (ξ_11 ξ_12) / (2x(hx³ + Bx + 1)) and ξ_12 · (3ξ²_11 + 3ξ²_12) / (2x(hx³ + Bx + 1)).

For the first Hamiltonian with B = 0 we have

H = (1/2)(p_r² + p_z²) + Ar² + z³ + 3r²z,
ṙ = p_r,  ṗ_r = −(2Ar + 6rz),  ż = p_z,  ṗ_z = −(3z² + 3r²).

For a partial solution in this case we can choose p_r = r = 0, obtaining ż² = −2z³ − 2h (h = const ≠ 0). After the change of variables z = −2z̃ we get ż̃² = 4z̃³ − h/2, which can be written in the form z̃ = ℘(t, 0, h/2) using the Weierstrass ℘-function. The variational equations VE1, VE2 and VE3 are obtained as follows:

ξ̈_11 = (12℘(t, 0, h/2) − 2A) ξ_11,
ξ̈_12 = 12℘(t, 0, h/2) ξ_12,

ξ̈_21 = (12℘(t, 0, h/2) − 2A) ξ_21 − 6ξ_11ξ_12,
ξ̈_22 = 12℘(t, 0, h/2) ξ_22 − 3((ξ_11)² + (ξ_12)²),

and

ξ̈_31 = (12℘(t, 0, h/2) − 2A) ξ_31 − 6(ξ_11ξ_22 + ξ_12ξ_21),
ξ̈_32 = 12℘(t, 0, h/2) ξ_32 − 6(ξ_11ξ_21 + ξ_12ξ_22).

For the solutions of VE1 around t = 0 we have

ξ⁽¹⁾_11(t) = t⁴ − (A/9)t⁵ + ...,  ξ⁽²⁾_11(t) = t⁻³ + (A/5)t⁻¹ + ...,
ξ⁽¹⁾_12(t) = t⁴ − (h/364)t¹⁰ + ...,  ξ⁽²⁾_12(t) = t⁻³ + (h/28)t³ + ....

Let us use the notation

K⁽¹⁾_2 = −6ξ_11ξ_12,  K⁽²⁾_2 = −3((ξ_11)² + (ξ_12)²),
K⁽¹⁾_3 = −6(ξ_11ξ_22 + ξ_12ξ_21),  K⁽²⁾_3 = −6(ξ_11ξ_21 + ξ_12ξ_22),

f_2 = (0, K⁽¹⁾_2, 0, K⁽²⁾_2)ᵀ,  f_3 = (0, K⁽¹⁾_3, 0, K⁽²⁾_3)ᵀ.

Without loss of generality we can assume that the Wronskians of the solution pairs of VE1 equal 1. Then the fundamental matrix of VE1 and its inverse are

X(t) =
[ ξ⁽¹⁾_11   ξ⁽²⁾_11   0        0       ]
[ ξ̇⁽¹⁾_11   ξ̇⁽²⁾_11   0        0       ]
[ 0        0        ξ⁽¹⁾_12   ξ⁽²⁾_12  ]
[ 0        0        ξ̇⁽¹⁾_12   ξ̇⁽²⁾_12  ],   (2.12)

X⁻¹(t) =
[ ξ̇⁽²⁾_11   −ξ⁽²⁾_11   0        0       ]
[ −ξ̇⁽¹⁾_11  ξ⁽¹⁾_11    0        0       ]
[ 0        0        ξ̇⁽²⁾_12   −ξ⁽²⁾_12 ]
[ 0        0        −ξ̇⁽¹⁾_12  ξ⁽¹⁾_12  ]
(2.13)

We examine the expression X⁻¹f_2 for a non-zero residue: there is none, in all possibilities. We do the same for the expression X⁻¹f_3: there is a non-zero residue for A ≠ 0 in one of its ξ-entries. For the third Hamiltonian in this case we obtain, successively,

ṙ = p_r,  ṗ_r = −(2Ar + 6rz),  ż = p_z,  ṗ_z = −(2Bz + 48z² + 3r²).

The partial solution in this case is r = p_r = 0 with z̈ = −(2Bz + 48z²), ż² = −2(16z³ + Bz² + h), and the NVE at ∞ is

d²ξ_11/dx² + [(2hx³ + Bx + 8) / (x(hx³ + Bx + 16))] dξ_11/dx − [(Ax + 3) / (x²(hx³ + Bx + 16))] ξ_11 = 0.   (2.14)

By Fuchs's theorem, if one of the solutions of a second-order linear differential equation contains a logarithm, then the second local solution has the form ξ⁽²⁾_11(x) = ξ⁽¹⁾_11(x) ln(x) + g(x), where g(x) is a holomorphic function (in this case; meromorphic in the general case). We denote by Γ the Riemann surface {(x, y) : y² = 2hx⁴ + 2Bx² + 16x}. The field of coefficients of (2.14) is K := M(Γ) (the meromorphic functions on Γ). Let us also denote by L := K[ξ⁽¹⁾_11, ξ⁽²⁾_11] the Picard–Vessiot extension of K. Let us formulate the following

Lemma 1. In the above notation for (2.14), if σ ∈ Gal(L/K), then σ(ln(x)) = δ ln(x) + γ, for some δ, γ ∈ C.

Proof: We have dσ(ln(x))/dx = σ(d ln(x)/dx) = σ(1/x). Writing x = e^t, we get σ(e^(−t)) = δe^(−t) for some δ ∈ C, hence dσ(t)/dt = δ, σ(t) = δt + γ, and therefore σ(ln(x)) = δ ln(x) + γ, with δ, γ ∈ C.

For the local Galois group of (2.14) we have, for σ ∈ Gal(L/K): σ(ξ⁽¹⁾_11) = (1/δ) ξ⁽¹⁾_11 with δ ∈ C*, and

σ(ξ⁽²⁾_11) = (1/δ) ξ⁽¹⁾_11 σ(ln(x)) + g(x) = (1/δ) ξ⁽¹⁾_11 (δ ln(x) + γ) + g(x) = ξ⁽²⁾_11 + (γ/δ) ξ⁽¹⁾_11.

Let X(x) be the fundamental matrix of (2.14); then σ(X(x)) = X(x)R, where R is the triangular matrix [1/δ, γ/δ; 0, 1] determined by this action, and for γ ≠ 0 the group obtained is solvable but non-commutative. It is not difficult to observe that the condition for non-commutativity of the local Galois group is equivalent to the existence of a logarithm in one of the solutions of (2.14). In the present case, the condition for the existence of a logarithm is 16A = 5B.
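The non-commutativity invoked here can be illustrated concretely. In the sketch below the σ-action at infinity is written as a matrix R_inf (an assumption consistent with the formulas above, since the printed matrix is incomplete in the source), and the second factor is the local matrix at the finite singular point.

```python
# Two local Galois matrices of the kinds described in the text do not
# commute in general, so the global group is non-commutative.
import sympy as sp

delta, gamma, mu = sp.symbols('delta gamma mu', nonzero=True)

# action at infinity in the logarithmic case:
# xi1 -> xi1/delta, xi2 -> xi2 + (gamma/delta)*xi1
R_inf = sp.Matrix([[1/delta, gamma/delta],
                   [0,       1          ]])
# local group at the finite singular point z1
R_z1 = sp.Matrix([[1,  0],
                  [mu, 1]])

comm = sp.simplify(R_inf*R_z1 - R_z1*R_inf)
assert comm != sp.zeros(2, 2)   # non-zero commutator for mu, gamma != 0
```

The (1,1) entry of the commutator is γμ/δ, which vanishes only when the logarithm is absent (γ = 0) or the finite local group is trivial (μ = 0).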
In order to prove the non-commutativity of the global Galois group of (2.14), we need to study the local group near the regular singular point z_1 (z_1 being a root of z⁴ + Cz³ + Bz² + h = 0). We know the roots of the indicial equation (λ_1 = 0 and λ_2 = 1/2), and this allows us to conclude that one local solution lies in the field of constants and the other does not. From this we obtain that the local Galois group consists of matrices of the type

[ 1  0 ]
[ µ  1 ],  µ ≠ 0,

which in the general case do not commute with the matrices already found at infinity. The case c.0) is proved. Unfortunately, the proof of the points of c) of Theorem 1 is quite extensive (a full treatment of each case deserves a separate study), so we only mention that all the methods of study used here (second and third variations, and the existence of a logarithm) were already used in the proof of the case c.0).

Remarks and Comments

Given the above statement, we can easily conclude that the integrable cases in [4] are exceptions to the non-integrability conditions obtained in Theorem 1 (Cases 1, 2 and 3 of [4] correspond to c.3), c.8) and c.4), respectively). Now, to clarify: where do such significant differences between the results of [4] and [1] come from? After a careful analysis, it turns out that there are three reasons. The first is that [1] considers the very special case h = 0 (the zero energy level), which, although quite interesting as a study, distorts the non-integrability result. The second reason is that [1] ignores the logarithmic terms in the local solutions; it turns out that they play an important role in the study of the Galois group of the variational differential equations under consideration. The third reason is the behaviour of sums of different integrable homogeneous potentials: it turns out that a sum of integrable homogeneous potentials is not always an integrable potential.
I would also like to note that studying the cases with repeated roots has proven (after a large number of computations) to be quite a challenge, and we may frame them as open problems. From what is shown in [1] and the examples in [4], we can also draw a conclusion that I think is important: non-integrability at the zero level (h = 0) does not necessarily imply non-integrability in the general case (h ≠ 0). It remains to notice that the first integrals found in [2] and [4] are very limited in number against the background of the integrals that could be found (if we search for them in polynomial form, since there exists a technique for this, although not such a simple one). The search for such integrals is the next open question of this study.

The remaining cases in the list of Theorem 1 read:

..., or; c.02) D ≠ 0, C/D = 1/3, A ≠ B; c.03) D ≠ 0, C/D = 16/3, and 16A ≠ 5B;
c.1) if V_1 = z⁴, (F = G = 0), then D ≠ 0;
c.2) if V_1 = r⁴, (E = F = 0), then D ≠ 0;
c.3) if V_2 = (r² + z²)², (E = G = F/2), then
c.31) if D ≠ 0, C/D ∉ {1/3, 2, 16/3}, or;
c.32) if D ≠ 0, C/D = 1/3, then A ≠ B;
c.33) if D ≠ 0, C/D = 1/3, A = B, then A ≠ 0 and 4A ≠ 15D²;
c.34) if D ≠ 0, C/D = 2, then D² ≠ B − 2A;
c.35) if D ≠ 0, C/D = 16/3, then 16A ≠ 5B;
c.36) if D ≠ 0, C/D = 16/3, 16A = 5B, then 20A ≠ −63D².

Then we get three cases of possible integrability: A = B, B = 0, and A² − (17/8)AB − (9/512)B² = 0. By using third variations it turns out that the second and third cases are non-integrable for A ≠ 0. Let us look at the main fragments of the proof in the case B = 0: the Hamiltonian and equations of motion are ...; K⁽²⁾_3 = −6(ξ_11ξ_21 + ξ_12ξ_22), and f_2 = (0, K ..., X⁻¹f_3. For the second Hamiltonian in this case, no integrability constraints are obtained under the second variations. ... local solutions of the last equation at x ... From Fuchs's theorem, if there is a logarithm in one of the solutions of a second-order linear differential equation, then ξ ... 16A = 5B (see [22] for details).
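The reduction to Weierstrass form used in the B = 0 branch of the proof (ż² = −2z³ − 2h with the substitution z = −2w) can be checked mechanically; the sketch below also recovers the local exponents of VE1 at t = 0.

```python
# Check the Weierstrass normalization of the reduced equation and the
# indicial exponents of xi'' = (12/t^2 + ...) xi.
import sympy as sp

t, h = sp.symbols('t h')
w = sp.Function('w')(t)
z = -2*w   # substitution used in the text

# z'^2 + 2z^3 + 2h equals 4*(w'^2 - 4w^3 + h/2): w satisfies
# w'^2 = 4w^3 - h/2, i.e. w = wp(t; g2 = 0, g3 = h/2).
lhs = sp.diff(z, t)**2 + 2*z**3 + 2*h
rhs = 4*(sp.diff(w, t)**2 - 4*w**3 + sp.Rational(1, 2)*h)
assert sp.expand(lhs - rhs) == 0

# leading behaviour of VE1 at t = 0: n(n - 1) = 12 gives the exponents
n = sp.symbols('n')
assert set(sp.solve(n*(n - 1) - 12, n)) == {-3, 4}
```

The exponents 4 and −3 are exactly the leading powers of the local solutions ξ⁽¹⁾ and ξ⁽²⁾ quoted in the proof.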
Acknowledgments

The author would like to express special thanks to Dr. Idriss Elfakkousy for useful discussions. The author has been partially supported by grant 80-10-53/10.05.2022 from Sofia University "St. Kliment Ohridski".

References

[1] Georgiev G. (2021), Non-integrability of the Trapped Ionic System, Chaos, Solitons and Fractals 147:110994, doi:10.1016/j.chaos.2021.110994.
[2] Benkhali M., Kharbach J., El Fakkousy I., Chatar W., Rezzouk A., Ouazzani-Jamil M. (2018), Painleve analysis and integrability of the trapped ionic system, Physics Letters A 382(36), 2515-2525, doi:10.1016/j.physleta.2018.06.034.
[3] Georgiev G. (2020), Comment on "Painleve analysis and integrability of the trapped ionic system" by M. Benkhali, J. Kharbach, I. El Fakkousy, W. Chatar, A. Rezzouk, and M. Ouazzani-Jamil, Physics Letters A 384(36):126932, doi:10.1016/j.physleta.2020.126932.
[4] El Fakkousy I., Zouhairi B., Kharbach J. (2022), Comment on "Non-integrability of the Trapped Ionic System" by Georgi Georgiev, Chaos, Solitons and Fractals 156(1):111815, doi:10.1016/j.chaos.2022.111815.
[5] El Fakkousy I., Zouhairi B., Benmaleka M., Kharbach J., Rezzouk A., Ouazzani-Jamil M. (2022), Classical and quantum integrability of the three-dimensional generalized trapped ion Hamiltonian, Chaos, Solitons and Fractals 161:112361, doi:10.1016/j.chaos.2022.112361.
[6] Lyapunov A. (1954), On certain property of the differential equations of the problem of motion of a heavy rigid body having a fixed point, Soobshch. Kharkov Math. Obshch., Ser. 2, 4, 1894, 123-140 (in Russian).
[7] Berger A. (2018), On linear independence of trigonometric numbers, Carpathian Journal of Mathematics 34(2), 157-166, www.jstor.org/stable/26898724.
[8] Maciejewski A. J., Przybylska M. (2004), All meromorphically integrable 2D Hamiltonian systems with homogeneous potential of degree 3, Physics Letters A 327, 461-473, doi:10.1016/j.physleta.2004.05.042.
[9] Maciejewski A. J., Przybylska M. (2005), Darboux points and integrability of Hamiltonian systems with homogeneous polynomial potential, Journal of Mathematical Physics 46(6), 062901, doi:10.1063/1.1917311.
J Hietarinta, 10.1016/0375-9601Physics Letters A. 9683Hietarinta J. (1983) A search for integrable two-dimensional hamiltonian systems with polynomial potential, Physics Letters A 96 , 6, https://doi.org/10.1016/0375- 9601(83)90178-0, pp. 273-278. Direct methods for the search of the second invariant. J Hietarinta, 10.1016/0370-1573(87)90089-5Physics Reports. 147Hietarinta J. (1987) Direct methods for the search of the second invariant, Physics Re- ports, v.147, 2, https://doi.org/10.1016/0370-1573(87)90089-5, pp. 87-154. A list of all integrable two-dimensional homogeneous polynomial potentials with a polynomial integral of order at most four in the momenta. K Nakagawa, H Yoshida, doi10.1088/0305-4470/34/41/316J. Phys. A: Math. Gen. 34Nakagawa K., Yoshida H.(2001) A list of all integrable two-dimensional homogeneous polynomial potentials with a polynomial integral of order at most four in the momenta, J. Phys. A: Math. Gen. 34, doi 10.1088/0305-4470/34/41/316 pp. 8611-8630 A new necessary condition for the integrability of Hamiltonian systems with a two-dimensional homogeneous potential. H Yoshida, 10.1016/S0167-2789(98)00313-3Physica D: Nonlinear Phenomena. 128Yoshida H., (1999) A new necessary condition for the integrability of Hamiltonian systems with a two-dimensional homogeneous potential, Physica D: Nonlinear Phenomena, 128, 1, https://doi.org/10.1016/S0167-2789(98)00313-3, pp. 56-69. On the dynamics aspects for the plane motion of a particle under the action of potential forces in the presence of a magnetic field. C Mnasri, A A Elmandouh, 10.1016/j.rinp.2018.03.025Results in Physics, v.9Mnasri C., Elmandouh A. A. (2018) On the dynamics aspects for the plane motion of a particle under the action of potential forces in the presence of a magnetic field, Results in Physics, v.9, https://doi.org/10.1016/j.rinp.2018.03.025, pp. 825-831. Analytic integrability of Hamiltonian systems with exceptional potentials. 
J Libre, C Valls, 10.1016/j.physleta.2015.07.034Physics Letters A, v. 379Libre J., Valls C.,(2015) Analytic integrability of Hamiltonian sys- tems with exceptional potentials, Physics Letters A, v. 379 , 38, 9, https://doi.org/10.1016/j.physleta.2015.07.034, pp. 2295-2299. Integrability of Hamiltonian systems with homogeneous potentials of degree zero. G Casale, G Duval, A J Maciejewski, M Przybylska, 10.1016/j.physleta.2009.11.018Physics Letters A, v. 374Casale G., Duval G., Maciejewski A. J., Przybylska M.,(2010) Integrability of Hamilto- nian systems with homogeneous potentials of degree zero, Physics Letters A, v. 374 , 3, 4, https://doi.org/10.1016/j.physleta.2009.11.018, pp. 448-452. Third order integrability conditions for homogeneous potentials of degree -1. T Combot, K Koutschan, 10.1063/1.4746691J. Math. Phys. 5382704Combot T., Koutschan K.,(2012) Third order integrability conditions for homogeneous potentials of degree -1, J. Math. Phys. 53, 8, 082704, https://doi.org/10.1063/1.4746691. Polynomial integrability of the Hamiltonian systems with homogeneous potential of degree -3. J Libre, A Mahdi, C Valls, 10.1016/j.physd.2011.09.003Physica D. v. 240 , 24, 1Libre J., Mahdi A., Valls C.,(2011) Polynomial integrability of the Hamiltonian systems with homogeneous potential of degree -3, Physica D, v. 240 , 24, 1, https://doi.org/10.1016/j.physd.2011.09.003, pp. 1928-1935. Note on integrability of certain homogeneous Hamiltonian systems. W Szuminski, A J Maciejewski, M Przybylska, 10.1016/j.physleta.2015.08.032Physics Letters A, v. 379Szuminski W., Maciejewski A. J., Przybylska M.,(2015) Note on integrability of certain homogeneous Hamiltonian systems, Physics Letters A, v. 379 , 35-46, 4, https://doi.org/10.1016/j.physleta.2015.08.032, pp. 2970-2976. Polynomial integrability of Hamiltonian systems with homogeneous potentials of degree -k. R Oliveira, C Valls, 10.1016/j.physleta.2016.09.033Physics Letters A, v. 
380Oliveira R., Valls C.,(2016) Polynomial integrability of Hamiltonian systems with homogeneous potentials of degree -k, Physics Letters A, v. 380 , 46, 1, https://doi.org/10.1016/j.physleta.2016.09.033, pp. 3876-3880. Non-integrability Criteria for Hamiltonians in the case of Lamé Normal Variational Equations. J Morales-Ruiz, C Simó, 10.1006/jdeq.1996.0113J Diff Eq. 129Morales-Ruiz J, Simó C. (1996) Non-integrability Criteria for Hamiltoni- ans in the case of Lamé Normal Variational Equations, J Diff Eq; 129: https://doi.org/10.1006/jdeq.1996.0113, 111-135. J Morales-Ruiz, Differential Galois Theory and Non-integrability of Hamiltonian Systems. BirkhäuserMorales-Ruiz J., (1999), Differential Galois Theory and Non-integrability of Hamiltonian Systems, Birkhäuser . Dynamics and integrability analysis of two pendulums coupled by a spring. W Szuminski, D Wozniak, 10.1016/j.cnsns.2019.105099.105099Commun Nonlinear Sci Numer Simul. 834Szuminski W., Wozniak D. (2020) Dynamics and integrability analysis of two pendu- lums coupled by a spring, Commun Nonlinear Sci Numer Simul 2020; vol. 83(4): doi: 10.1016/j.cnsns.2019.105099.105099,1-16. Picard -Vessiot Theory and integrability. J Morales-Ruiz, doi.org/10.1016/j.geomphys.2014.07.006Journal of Geometry and Physics. 87Morales-Ruiz J., (2015), Picard -Vessiot Theory and integrability, Journal of Geometry and Physics, 87, January 2015, doi.org/10.1016/j.geomphys.2014.07.006, 314-343. Integrability of Hamiltonian systems and differential Galois groups of higher variational equations. J Morales-Ruiz, J-P Ramis, C Simó, doi.org/10.1016/j.ansens.2007.09.002Ann Scient Ec Norm Sup. 40Morales-Ruiz J, Ramis J-P, Simó C. (2007) Integrability of Hamiltonian systems and differential Galois groups of higher variational equations. Ann Scient Ec Norm Sup; 40: doi.org/10.1016/j.ansens.2007.09.002, pp. 845-884. Integrability of Dynamical systems through Differential Galois Theory: practical guide. Contemporary Math; dx. 
J Morales-Ruiz, J-P Ramis, doi.org/10.1090/conm/509509Morales-Ruiz J., Ramis J-P.(2010) Integrability of Dynamical systems through Differen- tial Galois Theory: practical guide. Contemporary Math; dx.doi.org/10.1090/conm/509, p. 509. On Monodromy Groups of Second-Order Fuchsian Equations. A Baider, R C Churchill, doi.org/10.1137/0521090SIAM J. Math. Anal. 21Baider A., Churchill R. C., (1990), On Monodromy Groups of Second-Order Fuchsian Equations, SIAM J. Math. Anal., 21, 6, doi.org/10.1137/0521090, pp. 1642-1652.
Asymptotic Distribution-Free Independence Test for High Dimension Data

Zhanrui Cai, Jing Lei, Kathryn Roeder

arXiv:2110.07652; doi: 10.1080/01621459.2023.2218030

Abstract. Test of independence is of fundamental importance in modern data analysis, with broad applications in variable selection, graphical models, and causal inference. When the data is high dimensional and the potential dependence signal is sparse, independence testing becomes very challenging without distributional or structural assumptions. In this paper, we propose a general framework for independence testing by first fitting a classifier that distinguishes the joint and product distributions, and then testing the significance of the fitted classifier. This framework allows us to borrow the strength of the most advanced classification algorithms developed from the modern machine learning community, making it applicable to high dimensional, complex data. By combining a sample split and a fixed permutation, our test statistic has a universal, fixed Gaussian null distribution that is independent of the underlying data distribution. Extensive simulations demonstrate the advantages of the newly proposed test compared with existing methods. We further apply the new test to a single cell data set to test the independence between two types of single cell sequencing measurements, whose high dimensionality and sparsity make existing methods hard to apply.
Key words and phrases: test of independence, sample splitting, neural network.

1 Introduction

Test of independence is a fundamental question in data analysis and statistical inference. Considering two multivariate random vectors X and Y, we are interested in testing whether the two random vectors are independent, namely, H_0 : X ⊥⊥ Y.
Such testing problems are relevant in many statistical learning problems, including variable selection in regression, Gaussian graphical models, Markov random fields, and causal inference (Fan et al., 2020; Maathuis et al., 2018; Imbens and Rubin, 2015). In the traditional statistical literature, one may choose the Pearson correlation to measure the independence between X and Y when the data has a jointly normal distribution, or opt for the rank correlation when both X and Y are univariate. With the development of information technology, researchers are now able to collect complex and potentially high dimensional data with potentially highly nonlinear dependence. How to perform tests of independence for modern data is a challenging and important problem in the contemporary statistical community. In the past two decades, there have been a series of substantial developments in the testing of independence for general X and Y without assuming their parametric distributions. A natural starting point is to study the difference between P_{X,Y}, the joint measure of (X, Y), and P_X × P_Y, the product measure of X and Y. In one of the most well-known papers on this topic, Székely et al. (2007) proposed the distance correlation by measuring the weighted integrated squared difference between the characteristic functions of P_{X,Y} and P_X × P_Y, which was later shown to be equivalent to the maximum mean discrepancy in the machine learning community (Sejdinovic et al., 2013), and is closely related to the Hilbert-Schmidt independence criterion (Gretton et al., 2005). Extensions of distance correlation have been widely discussed (Székely and Rizzo, 2013; Huo and Székely, 2016; Yao et al., 2018). Zhu et al. (2017) relaxed the moment constraint in distance correlation by combining the Hoeffding coefficient with projection pursuit.
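As a concrete point of reference for the methods reviewed above, the sample distance correlation of Székely et al. (2007) can be computed in a few lines. The following is a minimal sketch with our own helper names (established packages such as the Python `dcor` library provide production implementations):

```python
import numpy as np

def _double_centered_dist(z):
    """Pairwise Euclidean distance matrix, double-centered:
    A_ij = a_ij - (row mean)_i - (col mean)_j + grand mean."""
    z = np.atleast_2d(z.T).T                       # ensure shape (n, d)
    a = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    return a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()

def distance_correlation(x, y):
    """Sample distance correlation between paired samples x and y."""
    A, B = _double_centered_dist(x), _double_centered_dist(y)
    dcov2 = (A * B).mean()                         # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))
```

The statistic equals 1 under exact linear dependence and approaches 0 for independent samples as n grows; its null distribution, however, depends on the underlying data distribution, which is one motivation for the classification-based approach developed below.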
Other than comparing characteristic functions, there are also novel methods that compare the density functions and the cumulative distribution functions (Heller et al., 2012; Moon and Chen, 2020). Kong et al. (2019), among others, used the appealing idea of conditional mean variance to evaluate the dependence between two random variables. More recently, Shi et al. (2020), among others, developed the first distribution-free independence tests for multivariate random vectors. They define multivariate ranks using the theory of measure transportation and propose (multivariate) rank versions of the distance covariance and energy statistic for independence testing. But in practice, the computation for measure transportation grows quickly with the sample size and dimension, which restricts the application of those two tests to large-scale datasets. High dimensional independence testing has recently been studied by Zhu et al. (2020b) and Gao et al. (2021). In comparison, our work is more generally applicable, as we allow the dependence signal in high dimensional vectors to be very sparse, which is a benefit of implementing advanced machine learning algorithms. Our work is motivated by challenges arising in single-cell multimodal omics, a research area labeled 'Method of the Year 2019' by Nature Methods. This technological advance builds on the recent breakthroughs in sequencing the RNA of single cells and promises greater insights into gene regulatory networks, cell lineages, and trajectories by permitting the measurement of multiple omics on the same cell (Zhu et al., 2020a; Schier, 2020). Of particular interest are simultaneous measurements of gene expression (RNA-seq) and chromatin accessibility (ATAC-seq). ATAC-seq identifies active regulatory sequences in the genome by finding open chromatin, which determines whether a gene will be actively transcribed. For this reason, it is widely assumed that RNA-seq and ATAC-seq will co-vary.
But both data sources tend to be high dimensional and extremely sparse, posing great challenges to performing statistical independence tests for the two random vectors. For example, the data we analyze consists of 11,188 blood cells, each with RNA-seq and ATAC-seq read counts. The dimension of RNA-seq is 29,717 and the dimension of ATAC-seq is 143,887. Only 6.35% of the entries in the RNA-seq and 5.66% of the entries in the ATAC-seq are non-zero, making all current independence testing methods practically infeasible. The purpose of this paper is to build a distribution-free test of independence that is powerful even under high dimensional, complex data. Existing methods use U-statistics to directly estimate the integrated squared difference between the joint distribution and the product distribution, in the form of characteristic functions, density functions, or cumulative distributions. Such U-statistics often fail to pick up the hidden signal when there are many noise dimensions in the data, and often require cumbersome resampling procedures to calibrate the null distribution. Our proposal deviates from these methods by aiming at a different and adaptive quantity: instead of the integrated squared difference between distribution functions, our method seeks to find any potential difference between the joint and product distributions by constructing a classification problem between these two distributions. By leveraging recent developments in two-sample testing and sample splitting (Kim et al., 2019; Hu and Lei, 2020; Kim et al., 2021), we develop a test that is more flexible and can borrow strength from the most powerful classification tools, such as deep neural networks, from the machine learning community. It is particularly powerful for high dimensional data when proper regularizations (such as sparsity) are enforced on the classifier. The proposed method consists of three steps: sample splitting, classification, and rank-sum comparison.
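The first two steps can be sketched in a few lines; this is our own illustrative code (the helper names are not from the paper), written for any classifier with a scikit-learn-style `fit` interface:

```python
import numpy as np

def cyclic_permute(X, Y):
    # Replace (X_i, Y_i) by (X_i, Y_{i+1}), wrapping the last index around.
    # The permuted pairs have marginals P_X x P_Y and only a weak, sparse
    # dependence on the original pairs.
    return X, np.roll(Y, -1, axis=0)

def split_and_permute(X, Y, n1):
    # Step 1: sample splitting into a training half and a testing half.
    D1A = (X[:n1], Y[:n1])          # joint-distribution sample for training
    D2A = (X[n1:], Y[n1:])          # joint-distribution sample for testing
    D1B = cyclic_permute(*D1A)      # approx. product-distribution sample
    D2B = cyclic_permute(*D2A)
    return D1A, D1B, D2A, D2B

def fit_classifier(clf, D1A, D1B):
    # Step 2: label the joint sample 1 and the permuted sample 0, then fit.
    Z = np.vstack([np.column_stack(D1A), np.column_stack(D1B)])
    K = np.r_[np.ones(len(D1A[0])), np.zeros(len(D1B[0]))]
    return clf.fit(Z, K)
```

The fitted classifier's predicted class probability then plays the role of the estimated label probability in the rank-sum comparison step described below.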
We first split the index set I = {1, . . . , n} into two subsets I_1 = {1, 2, . . . , n_1} and I_2 = {n_1 + 1, . . . , n}. Let D_1A = {(X_i, Y_i), i ∈ I_1} and D_2A = {(X_i, Y_i), i ∈ I_2} be the two subsets of the data. Then we generate two correspondingly permuted datasets by cyclically permuting Y in each of the two subsets. Let D_1B = {(X'_i, Y'_i), i ∈ I_1} and D_2B = {(X'_i, Y'_i), i ∈ I_2}, where X'_i = X_i for all i, Y'_i = Y_{i+1} for i ∉ {n_1, n}, and Y'_{n_1} = Y_1, Y'_n = Y_{n_1+1}. In the classification step, we train a classifier that aims to distinguish D_1A from D_1B, because the sample points in D_1A are generated from P_{X,Y}, while those in D_1B have marginal distribution P_X × P_Y and weak dependency between sample points. Next, in the rank-sum comparison step we compare the predicted class probabilities in D_2A and D_2B. Under H_0, the predicted class probabilities of D_2A and D_2B should have the same distribution, while under H_1, the predicted probabilities of D_2A and D_2B should differ if the classifier is able to pick up the difference between P_{X,Y} and P_X × P_Y. This intuition motivates a rank-sum test to compare the predicted class probabilities of the two samples. The main technical challenge is that the sample points in D_2A and D_2B are dependent, so classical U-statistic theory cannot be directly applied. Our theoretical development uses Hoeffding's projection to decompose the test statistic into sums of sparsely dependent random variables, and uses a version of Stein's method for sparsely dependent data to establish the normal approximation of the test statistic. To sum up, the proposed method has the following advantages. (i) Completely nonparametric. We require very few assumptions on the data to ensure the test's validity. Under H_0, the type I error control is automatically guaranteed by sample splitting and the single permutation.
Under H_1, the test will have good power as long as the classifier is better than a random guess, which is practically feasible given the powerful neural networks. (ii) Asymptotically distribution-free and computationally efficient. Our test statistic has a standard normal asymptotic null distribution. This is in critical contrast to other current independence tests, which have non-explicit null distributions and require computationally expensive bootstraps to obtain p-values (Székely et al., 2007; Heller et al., 2012). For the most recent distribution-free independence tests (Shi et al., 2020), the limiting null distributions are still weighted sums of χ²(1) variables, without an analytic form. Although Shi et al. (2020) listed the thresholds for some combinations of dimensions of X and Y, one still needs at least one round of numerical approximation when the dimensions exceed those in Shi et al. (2020). Such improved computational efficiency makes our method particularly appealing for the aforementioned single-cell sequencing data. (iii) Applicability to high dimensional data. The test is suitable for high dimensional data. Existing tests based on degenerate U-statistics are hard to apply and have limited power when the data dimension is high and the dependence signal is very sparse. By taking the classification perspective, we can take advantage of adaptive and structured classifiers to pick up weak signals from high dimensional data. Moreover, our framework allows X and Y to take values in infinite-dimensional spaces, as long as the likelihood ratio is well defined. (iv) Flexibility and generality. The method described in this paper is just one example from a general framework. All three steps (permutation, classification, and calibration) can be carried out with other variants that are more suitable to the problem at hand.
For example, one can use other dimension reduction or variable selection methods when distinguishing the two distributions, and/or use different two-sample testing methods, such as the two-sample t-test, to calibrate the significance of the classification. When the original sample (X_i, Y_i) has a time-series or random-field structure as the index i changes from 1 to n, one can also consider other types of permutations that are more suitable for the particular dependence structure across sample points.

2 Test of Independence by Sample Splitting and Classification

2.1 Preliminaries and basic ideas

Consider independent observations {(X_i, Y_i) : 1 ≤ i ≤ n} of a pair of random variables X and Y with joint distribution P_{X,Y} on a space X × Y. Let P_X and P_Y be the marginal distributions of X and Y, respectively. We are interested in testing

H_0 : P_{X,Y} = P_X × P_Y   versus   H_1 : P_{X,Y} ≠ P_X × P_Y,

where P_X × P_Y denotes the product distribution. Most existing methods for independence testing focus on a quantity of the form

∫∫ w(x, y) φ(G(x, y), G_1(x)G_2(y)) dx dy,

where G(·), G_1(·), G_2(·) are the joint and marginal distribution functions, w is a weight function, and φ is a discrepancy measure. This framework covers nearly all of the popularly studied independence testing methods, including distance correlation (Székely et al., 2007), the Hilbert-Schmidt independence criterion (Gretton et al., 2005, 2007), rank-correlation based methods (Heller et al., 2012; Moon and Chen, 2020), and mutual-information based methods. While enjoying elegant theoretical properties, these methods rely on specific choices of the w, φ, and G functions, making them hard to apply to high-dimensional, complex data. Moreover, the null distributions of the corresponding test statistics usually depend on the unknown underlying distribution P_{X,Y} and must be approximated using resampling methods. The key feature of our method is that it does not rely on a pre-chosen set of functions (w, φ, G).
Instead, our method begins with fitting a flexible classifier to distinguish P_{X,Y} and P_X × P_Y, and then tests whether the fitted classifier does anything different from random guessing. Suppose we have two equal-sized samples, one from P_{X,Y} and one from P_X × P_Y, and we associate a label K = 1 (resp. K = 0) with each sample point from P_{X,Y} (resp. P_X × P_Y). We will discuss how to obtain these samples in the next subsection. Under H_0, the two samples have the same distribution P_X × P_Y, so any classifier trying to distinguish these two samples would behave like a random guess. On the other hand, under H_1, any classifier that can detect the difference between these two distributions should do better than a random guess, which can be tested on a holdout pair of samples from the two distributions. More specifically, the conditional label probability θ(x, y) = P(K = 1 | x, y) is related to the likelihood ratio through

L(x, y) := θ(x, y) / {1 − θ(x, y)} = dP_{X,Y} / d(P_X × P_Y).   (2.1)

Therefore, θ(x, y) reduces the data dimension to 1, while largely capturing the difference between P_{X,Y} and P_X × P_Y, as guaranteed by the following result. Under the null hypothesis, θ(x, y) ≡ 1/2 and the likelihood ratio L(x, y) ≡ 1, which corresponds to a degenerate case.

Proposition 2.1. Let P, Q be two probability distributions on a common measurable space such that P ≪ Q and the Radon-Nikodym derivative dP/dQ has a continuous distribution under Q. Let V ∼ P and W ∼ Q be independent, and let d_tv(·, ·) be the total variation distance between two probability measures. Then

(1/4) d_tv(P, Q) ≤ 1/2 − P{(dP/dQ)(V) < (dP/dQ)(W)} ≤ (1/2) d_tv(P, Q).

Remark 1 (Dropping the continuity assumption). If (dP/dQ)(W) has point mass, then it is possible to have (dP/dQ)(V) = (dP/dQ)(W). In this case one can associate each of V and W with an independent U(0, 1) random variable, ζ and η, and rank them with the randomized tie-breaking

I{(dP/dQ)(V) < (dP/dQ)(W)} + I(ζ < η) I{(dP/dQ)(V) = (dP/dQ)(W)}.
All the theory, including Proposition 2.1, goes through the same for such a random tie-breaking ranking scheme with more careful bookkeeping. Therefore, in the rest of this paper, we will proceed under the assumption that θ(X, Y) and its estimate θ̂(X, Y) are continuous under P_X × P_Y, for notational simplicity. Such a classification-testing procedure consists of a fitting part and a testing part, which need to be carried out on separate subsamples. Splitting the sample reduces the sample size used for both classification and testing. But the benefits are quite substantial. First, in high-dimensional data, the signal is often quite weak and concentrates on a low-dimensional subspace or submanifold hidden in the high-dimensional ambient space. It is often more efficient to find out the direction of the signal and then conduct hypothesis tests targeted specifically in that signal direction. The reduced sample sizes can be viewed as our investment in finding the most promising direction of the signal. Second, sample splitting provides great flexibility in the choice of classification algorithms, such as black-box methods and deep neural networks, which are particularly powerful in handling complex data. Even if we split the sample to carry out the classification and test, another challenge remains: How do we obtain samples from the two distributions P_{X,Y} and P_X × P_Y, as required by both the classification and the testing steps? We provide a sample-size efficient answer to this question in the next subsection.

2.2 Sample Splitting and Cyclic Permutation

As discussed in the previous subsection, the classification and testing procedures need to be carried out on separate subsamples to ensure validity. Suppose we split the index set I = {1, . . . , n} into two subsets I_1 = {1, 2, . . . , n_1} and I_2 = {n_1 + 1, . . . , n}, with n_2 = n − n_1, so that the subsample I_1 is used for classification and I_2 is used for testing.
However, after such a sample split we still do not have a sample from P_X × P_Y for classification or testing. A simple idea is to further split I_1 into I_11 and I_12, and permute the sample pairs in I_12 to form a sample from P_X × P_Y. A similar second split and permutation can be applied to I_2 for the testing purpose. Although this approach is simple and straightforward to implement, it further splits an already reduced sample size. A natural question is whether one can avoid such a second split and use the sample more efficiently. We provide a positive answer below. To avoid the second split, denote by D_1A = {(X_i, Y_i), i ∈ I_1} the subsample in I_1, and by D_1B = {(X'_i, Y'_i), i ∈ I_1} its cyclically permuted version, where X'_i = X_i for all i, Y'_i = Y_{i+1} for 1 ≤ i ≤ n_1 − 1, and Y'_{n_1} = Y_1. Similarly, D_2A = {(X_i, Y_i) : i ∈ I_2} denotes the subsample in I_2, and D_2B = {(X'_i, Y'_i), i ∈ I_2} its cyclically permuted version, with X'_i = X_i for all i, Y'_i = Y_{i+1} for n_1 + 1 ≤ i ≤ n − 1, and Y'_n = Y_{n_1+1}. Our plan is to treat D_jA, D_jB as approximately independent samples from P_{X,Y} and P_X × P_Y for classification (j = 1) and two-sample testing (j = 2), because the dependence between the original and cyclically permuted samples is very sparse. Suppose we apply a classification algorithm to D_1A, D_1B, with labels K = 1 for sample points in D_1A and K = 0 for those in D_1B, resulting in a function estimate θ̂(x, y) of θ(x, y) = P(K = 1 | x, y) as defined in (2.1). To test the significance of the classifier, we use the rank-sum statistic

R = (1/n_2²) Σ_{i,j ∈ I_2} I{θ̂(X_i, Y_i) < θ̂(X'_j, Y'_j)}.   (2.2)

If θ̂ is close to θ under H_1, then Proposition 2.1 suggests we should reject H_0 if R is too small.
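The double sum in (2.2) does not require an O(n_2²) loop: since R depends only on how the two sets of fitted probabilities interleave, it can be computed from a single sort, as for a Mann-Whitney statistic. A small sketch (the function name is ours), assuming the fitted values θ̂ on D_2A and D_2B are given as arrays:

```python
import numpy as np

def rank_sum_R(theta_joint, theta_perm):
    """R = (1/n2^2) * #{(i, j) : theta_joint[i] < theta_perm[j]},
    computed in O(n2 log n2) by sorting the permuted-sample scores."""
    n2 = len(theta_joint)
    perm_sorted = np.sort(theta_perm)
    # For each joint-sample score, count the permuted-sample scores above it.
    n_greater = n2 - np.searchsorted(perm_sorted, theta_joint, side="right")
    return n_greater.sum() / n2**2
```

Under H_0 the two score samples interleave uniformly and R concentrates around 1/2; evidence against independence pushes R below 1/2.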
As detailed in the next subsection, by combining two-sample U-statistic theory and Stein's method for sparsely dependent random variables, we obtain the following asymptotic scaling of R under H_0: Var(√n_2 R) ≈ σ̂², with

σ̂² = 1/6 − (2/n_2) Σ_{i=n_1+1}^{n} ĥ_1(X_i, Y_i) ĥ_1(X'_i, Y'_i) − (2/n_2) Σ_{i=n_1+1}^{n} ĥ_1(X_{i+1}, Y_{i+1}) ĥ_1(X'_i, Y'_i),   (2.3)

where ĥ_1(x, y) = 1/2 − F̂_2*(θ̂(x, y)), with F̂_2* the empirical distribution function of {θ̂(X'_i, Y'_i) : i ∈ I_2}, and using the convention (X_{n+1}, Y_{n+1}) = (X_{n_1+1}, Y_{n_1+1}). Thus we arrive at the following split-permute-classify-test procedure.

Algorithm 1 Test of independence via classification significance
1. Input data D = {(X_1, Y_1), . . . , (X_n, Y_n)} and a classification algorithm A.
2. Split {1, . . . , n} into subsets I_1 and I_2 to form subsamples D_1A, D_2A and cyclically permuted subsamples D_1B, D_2B, as described above.
3. Apply A to D_1A and D_1B to obtain the estimated class probability function θ̂(·, ·).
4. Calculate the p-value := Φ{√n_2 (R − 1/2)/σ̂}, with R and σ̂ given by (2.2) and (2.3), where Φ(·) is the standard normal distribution function.

Remark 1: split ratio. To implement Algorithm 1, one needs to choose the sizes of I_1 and I_2. While a large I_1 will train a more accurate classifier, it also leaves a smaller testing data set I_2. Thus it is important to balance the trade-off between classification and testing data. In our simulations, we found that an equal split performs very well. Unless otherwise noted, we assume |I_1| = |I_2| throughout the paper.

Remark 2: choice of the classifier. In principle, our method can work with any classification algorithm A. However, the classification problem in our method is quite challenging. By construction, each coordinate in the two populations D_1A, D_1B has the same mean value, and the only difference is the dependence structure among the columns.
Therefore, linear methods such as logistic regression cannot perform very well, and nonlinear methods such as the support vector machine would require a good choice of kernel. In practice, we choose neural networks due to their great flexibility and adaptivity to complex structures in the data.

3 Theoretical Justifications

In the split-permute testing procedure described in Algorithm 1, both the classifier and the two-sample test are obtained using an originally paired subsample together with its cyclically permuted version. Therefore the samples are not completely independent, and the theoretical properties of the resulting test statistic deserve careful analysis. We first establish the asymptotic conditional distribution of the test statistic conditioning on a given fitted label probability function θ̂. It turns out that the asymptotic conditional null distribution does not depend on θ̂ and is asymptotically distribution-free, while under the alternative the estimated likelihood ratio needs to be better than a random guess. We will discuss the performance of classification using the cyclically permuted data in Section 3.2.

3.1 Asymptotic distribution of the test statistic

Before presenting the theoretical results, we describe some necessary notation. Let F_1*(·), F_2*(·) be the cumulative distribution functions of θ̂(X, Y) under P_{X,Y} and P_X × P_Y, respectively. Let E_*(·), P_*(·), Cov_*(·) and Var_*(·) denote the conditional expectation, probability, covariance and variance given θ̂ (or equivalently, given the first subsample). For k = 3, 4, define

A_k = 6 / {n_2² Var_*(R̄)}^{k/2} [ Σ_{i∈I_2} E_* |F_2*{θ̂(X_i, Y_i)} − 1/2|^k + Σ_{i∈I_2} E_* |F_1*{θ̂(X'_i, Y'_i)} − 1/2|^k ].   (3.4)
Then we prove the conditional Berry-Essen bound ofR and the unconditional asymptotic normality of R. The theoretical results under H 0 are summarized in Theorem 3.1. Theorem 3.1. Under H 0 , assume (X i , Y i ), i ∈ I 2 are i.i.d samples from P X,Y , and θ(x, y) is a function such that θ(X 1 , Y 1 ) is continuous, and F 2 * { θ(X, Y )} = g 1 (X) + g 2 (Y ) for any g 1 (·) and g 2 (·). Then sup s∈R P * √ n 2R σ * ≤ s − Φ(s) ≤ c( A 3 + A 4 ) where 0 ≤ c < 8 is a constant, A 3 and A 4 are defined in (3.4), and σ 2 * := 1 6 − 2 Cov * F 2 * { θ(X 2 , Y 2 )}, F 1 * { θ(X 1 , Y 2 )} − 2 Cov * F 2 * { θ(X 1 , Y 1 )}, F 1 * { θ(X 1 , Y 2 )} . Under the additional assumption of n 1/3 2 σ 2 * → ∞, we have σ/σ * − 1 = o P (1) and the test statistic √ n 2 (R − 1/2)/ σ converges in distribution to N (0, 1) as n 1 and n 2 → ∞. We discuss the convergence rate and conditions for Theorem 3.1 in the following remarks. Remark 2. The right hand side of the Berry-Essen bound in Theorem 3.1 consists of two terms: √ A 3 and √ A 4 . Here √ A 3 is the dominating term, and is of order n −1/4 2 when σ 2 * is of constant order. We can further improve the bound rate to the classical n −1/2 2 and relax the condition on σ 2 * to n 1/2 2 σ 2 * → ∞ by applying Theorem 2.2 of Jirak (2016). The cost is a slightly more complicated condition on the constant term in the Berry-Essen bound. Remark 3. Conditioning on the estimated probability function θ, our test statistic R is a twosample U -statistic. Its asymptotic normality requires its kernel to be non-degenerate, such that the asymptotic variance σ 2 * > 0. This non-degeneracy condition is further equivalent to F 2 * { θ(X, Y )} cannot be written in the form of g 1 (X) + g 2 (Y ) for any functions g 1 , g 2 , which is mild because F 2 * { θ(X, Y )} = g 1 (X) + g 2 (Y ) is equivalent to 1) g 1 (X) + g 2 (Y ) follows U (0, 1) and 2) θ(X, Y ) = W {g 1 (X) + g 2 (Y )}, for some strictly monotone increasing W : [0, 1] → [0, 1]. 
Common classifiers (logistic regression, random forest, SVM, neural network) can be easily verified to satisfy this non-degeneracy condition. Theorem 3.2. Under H 1 , assume (X i , Y i ), i ∈ I 2 are i.i.d samples from P X,Y , and there exists a strictly monotone function g such that E * g( θ(X , Y )) 1 − g( θ(X , Y )) − θ(X , Y ) 1 − θ(X , Y ) < 1/4 − µ/2 − c (3.5) holds with probability tending to 1 for some positive constant c. Here µ = P{θ(X, Y ) < θ(X , Y )}, with (X, Y ), (X , Y ) independently generated from P X,Y and P X × P Y , respectively. Then, as n 1 and n 2 → ∞, the test statistic √ n 2 (R − 1/2)/ σ p → ∞. The condition required for the power guarantee under the alternative is substantially weaker than that for the asymptotic normality under the null. This is because we no longer need to lower bound the variance term. Remarkably, we do not need to assume the classifier to be consistent in order to have valid type-I and type-II error control. The type-I error control is automatically guaranteed by the cyclic permutation and holds for arbitrary classifiers, because under H 0 , (X 1 , Y 1 ) and (X 1 , Y 2 ) have the same distribution and no classifier will be able to distinguish the two samples. For the type-II error control, equation (3.5) is much weaker than consistency, as it only requires θ to be close to θ up to a monotone transform and within some constant error bound. These properties are especially appealing in practice. For example, many nonparametric tests that rely on kernel density estimation need to carefully choose the kernel bandwidth to guarantee the correct type-I error rate. In our case, even though the classifier (such as a neural network) may have many tuning parameters to choose from, the test is always valid, and the power is non-trivial whenever the classifier can pick up even only a part of the difference between the joint and product distributions.
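The role of the monotone transform in (3.5) can be seen numerically: the rank statistic depends on the fitted scores only through their ordering, so any strictly monotone transform of an imperfect, merely better-than-random classifier leaves the statistic exactly unchanged, and it still diverges. A small sketch of our own (we standardize with the idealized variance 1/6; under our orientation of the inequality the divergence here is to the negative side):

```python
import numpy as np

def standardized_stat(paired, permuted):
    """sqrt(n2) * (R - 1/2) / sigma, using the idealised sigma_*^2 = 1/6."""
    R = np.mean(paired[:, None] < permuted[None, :])
    return np.sqrt(len(paired)) * (R - 0.5) / np.sqrt(1.0 / 6.0)

rng = np.random.default_rng(8)
n2 = 500
s_paired = rng.normal(0.5, 1.0, n2)    # imperfect classifier: only a location
s_permuted = rng.normal(0.0, 1.0, n2)  # shift separates the two score samples

z_raw = standardized_stat(s_paired, s_permuted)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))        # strictly monotone map
z_mono = standardized_stat(sigmoid(s_paired), sigmoid(s_permuted))
```

z_raw is far in the tail of N(0, 1) even though the scores are far from the true probabilities, and z_mono coincides with z_raw because the ordering of the scores is preserved.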
Next, we present a local alternative analysis where the dependence signal changes with the sample size. To quantify the signal, we use the likelihood ratio defined in (2.1). Specifically, consider (X , Y ) and (X , Y ) independently drawn from P X × P Y . We define δ = E {|L(X , Y ) − L(X , Y )|} . (3.6) By Proposition 2.1, we know that δ ≍ d tv (P X,Y , P X × P Y ). Thus δ measures the distance between the null hypothesis and the local alternative. Note that E{L(X , Y )} = 1, and δ = 0 if and only if P{L(X , Y ) = 1} = 1, which is equivalent to H 0 . Our local alternative analysis focuses on the case δ → 0 as (n 1 , n 2 ) → ∞. We introduce extra notation to analyze the local alternative. Let µ = P{θ(X, Y ) < θ(X , Y )}, µ * = P * { θ(X, Y ) < θ(X , Y )} and F 1 (·), F 2 (·) be the cumulative distribution functions of θ(X, Y ) under P X,Y and P X × P Y , respectively. Define R = 1 n 2 2 i,j∈I 2 I θ(X i , Y i ) < θ(X j , Y j ) . (3.7) Based on equation (D.12) in Lemma D.1, one can easily calculate the variance of the projection of √ n 2 R to be σ 2 0 := Cov(V 1 + V 2 + V 3 , V 2 ), where V i = F 1 {θ(X i , Y i+1 )} − F 2 {θ(X i , Y i )}. While σ 2 0 is complicated and hard to interpret, we also define σ 2 and show that σ 2 0 is sufficiently close to σ 2 under the local alternative hypothesis. Specifically, σ 2 0 can be approximated by σ 2 = 1 6 − 2 Cov [F 2 {θ(X 2 , Y 2 )}, F 1 {θ(X 1 , Y 2 )}] − 2 Cov [F 2 {θ(X 1 , Y 1 )}, F 1 {θ(X 1 , Y 2 )}] , because the joint distribution P X,Y grows increasingly close to the product distribution P X × P Y . For the same reason, σ 2 further converges to a quantity depending only on the product distribution P X × P Y . Thus it is reasonable to assume that the variance term σ 2 0 is bounded away from zero along the local asymptotic sequence: σ 2 0 ≥ c > 0 for some constant c not depending on the sample size. Theorem 3.3.
Under the local alternative with (3.6) for a sequence δ = o(1), assume (X i , Y i ), i ∈ I 2 are i.i.d samples from P X,Y , θ(·) has a continuous distribution under both P X,Y and P X × P Y , µ * − µ = o p (n −1/2 2 ), σ 2 0 ≥ c for some constant c > 0, and E * I{ θ(X, Y ) < θ(X , Y )} − I{θ(X, Y ) < θ(X , Y )} = o p (1). (3.8) Then √ n 2 (R − 1/2)/ σ = Z − √ n 2 δ/(4σ) + o p (1), where Z d → N (0, 1) as n 1 and n 2 go to infinity. As a consequence, when the distance between the local alternative and the null vanishes at the same or a slower rate than n −1/2 2 , the limiting distribution of the test statistic under the local alternative becomes a location-shifted normal distribution with unit variance. We now discuss the condition µ * − µ = o p (n −1/2 2 ). Let ∆ i = I{ θ(X i , Y i ) < θ(X i , Y i )} − I{θ(X i , Y i ) < θ(X i , Y i )}. Then µ * − µ = E * ∆ i . If a parametric estimate θ is used, then typically ∆ i = O P (n −1/2 1 ). So the required condition holds if n 1 ≫ n 2 . In the practically preferred case of n 1 ≍ n 2 , we have ∆ i ≍ n −1/2 1 , but µ * − µ = E * ∆ i can still be much smaller than ∆ i if the random variable ∆ i is centered around zero and not highly skewed. We also provide a simple numerical example that verifies the condition µ * − µ = o p (n −1/2 2 ) in Section C of the supplement. Classification accuracy under cyclic permutation A remaining question regarding the procedure is whether we have any formal guarantees on the estimator θ, because it is not obtained from standard independent two-sample data, but from a single sample, with the second sample obtained by cyclically permuting the original one. The quality of such a θ would depend on the particular form of the estimator and the data distribution. Intuitively, the weak dependence caused by the cyclic permutation among the sample points should be negligible, and the resulting estimator should behave similarly to those obtained from genuine independent two-sample data.
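The construction discussed here — pairing each half of the data with its own cyclically permuted version — takes only a few lines (an illustrative numpy sketch of our own; the function name is ours):

```python
import numpy as np

def split_permute(X, Y, rng):
    """Randomly split the pairs into I1 (to fit the classifier) and I2 (to
    test); in each half, (X_i, Y_i) plays the joint sample and the cyclic
    shift (X_i, Y_{i+1}) stands in for a draw from P_X x P_Y."""
    idx = rng.permutation(len(X))
    halves = []
    for I in (idx[: len(X) // 2], idx[len(X) // 2:]):
        Xs, Ys = X[I], Y[I]
        halves.append((Xs, Ys, np.roll(Ys, -1, axis=0)))  # Y_{i+1}, wrapping
    return halves

rng = np.random.default_rng(2)
X = rng.standard_normal((10, 3))
Y = rng.standard_normal((10, 2))
(X1, Y1, Y1s), (X2, Y2, Y2s) = split_permute(X, Y, rng)
```

A classifier is then fit on the first half with label 1 for the rows (X1, Y1) and label 0 for the rows (X1, Y1s).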
Here, for illustrative purposes, we prove the consistency of the classifier obtained under (1) a classical low-dimensional M-estimation and (2) a high-dimensional lasso-based sparse regression. Note that both the low-dimensional and high-dimensional models are trained on the first subset of data I 1 with n 1 = |I 1 |. For notational simplicity of the consistency analysis, we drop the subscript and use n instead of n 1 in Section 3.2 and its proofs only. Low-dimensional M-estimation Define the objective function as M (X, Y, X , Y ; β) def = M 1 (X, Y ; β) + M 2 (X , Y ; β), where β ∈ R p is the unknown parameter in the classifier. Here (X, Y ) and (X , Y ) are independent realizations from P X,Y and P X × P Y , respectively. We use P to denote the joint distribution of (X, Y, X , Y ). Then the population objective function is E{M (X, Y, X , Y ; β)}, where the expectation is taken with respect to P. For example, we can choose M 1 (x, y; β) = −ℓ 1 (x, y; β) and M 2 (x , y ; β) = −ℓ 2 (x , y ; β), with some class-specific binary classification loss functions ℓ 1 (·), ℓ 2 (·), such as the hinge loss or the logistic loss function. Let β 0 be the true parameter that maximizes the objective function. Using the cyclically permuted data, the classifier is trained by maximizing the empirical criterion function M n (β) def = 1 n n i=1 {M 1 (X i , Y i ; β) + M 2 (X i , Y i ; β)} . Denote by β n the maximizer of M n (β). The consistency of β n is established in Theorem 3.4. Theorem 3.4. Suppose (X i , Y i ), i ∈ I, are independent observations drawn from P X,Y . Let M = {M (x, y, x , y ; β) : β ∈ B} be a class of measurable functions such that N [ ] (ε, M, P) < ∞ for every ε > 0, and E [{M (X, Y, X , Y ; β)} 4 ] < ∞. Suppose the true parameter β 0 is identifiable, i.e., sup β:d(β,β 0 )≥ε E{M (X, Y, X , Y ; β)} < E{M (X, Y, X , Y ; β 0 )}, where d(·, ·) is a distance measure. Then any sequence of estimators β n with M n ( β n ) ≥ M n (β 0 ) − o p (1) converges in probability to β 0 .
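As a toy instance of this M-estimation framework with the logistic loss (our own sketch, not the paper's implementation): for jointly Gaussian data the likelihood ratio between the joint and product laws is driven by the product XY, so we include it as a feature; a linear boundary in (X, Y) alone could not separate the two classes, since they share the same marginals:

```python
import numpy as np

def fit_logistic(F1, F0, n_iter=400, lr=0.5):
    """Minimise the logistic loss by gradient descent: class 1 = paired rows,
    class 0 = cyclically permuted rows. Returns (weights, intercept)."""
    F = np.vstack([F1, F0])
    lab = np.r_[np.ones(len(F1)), np.zeros(len(F0))]
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # fitted class-1 probabilities
        w -= lr * F.T @ (p - lab) / len(F)
        b -= lr * np.mean(p - lab)
    return w, b

rng = np.random.default_rng(3)
n = 2000
x = rng.standard_normal(n)
y = x + 0.5 * rng.standard_normal(n)      # strongly dependent pair
y_shift = np.roll(y, -1)                  # cyclic permutation: (X_i, Y_{i+1})
feats = lambda u, v: np.column_stack([u, v, u * v])  # include the product term
w, b = fit_logistic(feats(x, y), feats(x, y_shift))

p1 = 1.0 / (1.0 + np.exp(-(feats(x, y) @ w + b)))
p0 = 1.0 / (1.0 + np.exp(-(feats(x, y_shift) @ w + b)))
acc = 0.5 * (np.mean(p1 > 0.5) + np.mean(p0 < 0.5))  # training accuracy
```

The coefficient on the interaction feature comes out positive and the classifier beats random guessing, which is all Theorem 3.2 asks of it.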
High-dimensional regression We consider a scenario where the dimension can be large compared to the sample size. Denote the dimension of X by d 1 and the dimension of Y by d 2 , and let d = d 1 + d 2 . Denote Z = (X, Y ) ∼ P X,Y and Z = (X , Y ) ∼ P X × P Y . We define g(z) = dP X,Y / d(P X,Y + P X × P Y )(z). Our goal is to estimate g(z) while keeping in mind that d 1 and d 2 may be comparable to or larger than the sample size n. In order to cope with high dimensionality, we assume that g(z) has a sparse representation in a certain basis. This would be particularly reasonable, for example, when only a few coordinates of X and Y are dependent. Assume that s 1 out of the d coordinates of Z = (X, Y ) are dependent. Then g(z) is essentially a function of s 1 variables instead of d variables. Consider all the s 1 -way combinations of coordinates of Z, and use K n basis functions for each combination. Specifically, let ξ 1 , ξ 2 , ... be basis functions of the L 2 space of functions R s 1 → R. Let K n be a slowly growing number. We consider the basis ξ (Z) = {ξ k (Z j 1 , Z j 2 , ..., Z js 1 ) : 1 ≤ k ≤ K n , 1 ≤ j 1 < j 2 < ... < j s 1 } with dimensionality m ∝ d s 1 K n , and assume that the function g(z) = ξ(z) T β * with ‖β * ‖ 0 ≤ s 2 ≪ n. Such a hard sparsity assumption makes the presentation simpler and can be relaxed using a standard oracle-inequality argument. Our starting point is that the function g is the minimizer of the following problem min h E (1 − h(Z)) 2 + (0 − h(Z )) 2 , since we associate a label K = 1 (K = 0) with each sample point from P X,Y (P X × P Y ). As a result, under the assumed basis expansion and sparse representation of g, β * is the minimizer of the problem min β β T Γβ − 2γ T β, (3.9) where Γ = Eξ(Z)ξ(Z) T + Eξ(Z )ξ(Z ) T , γ = E [ξ(Z)] . Now consider the empirical version with cyclically permuted Z 1 , . . . , Z n : we estimate β by optimizing the regularized quadratic form min β β T Γβ − 2 γ T β + λ‖β‖ 1 , (3.10) Denote Ξ Z = (ξ(Z 1 ) T , . . . , ξ(Z n ) T ) T and Ξ Z = (ξ(Z 1 ) T , . . .
, ξ(Z n ) T ) T , Ξ = (Ξ Z , Ξ Z ), then Γ = n −1 Ξ T Ξ, γ = n −1 Ξ T Z 1 n×1 . Let G = (g(Z 1 ) T , . . . , g(Z n ) T , g(Z 1 ) T , . . . , g(Z n ) T ) T ∈ R 2n×1 . Define the set C α (S) = {∆ ∈ R m : ∆ S c 1 ≤ α ∆ S 1 }. We assume the matrix Ξ satisfies the restricted eigenvalue (RE) condition over S with parameters (κ, α) if 1 n Ξ∆ 2 2 ≥ κ ∆ 2 2 , for all ∆ ∈ C α (S). (3.11) We also define the residual with respect to the minimization problem (3.9). Note that the response vector is (1, . . . , 1, 0 . . . , 0) T ∈ R 2n and the design matrix is Ξ, with parameter β * . Thus we let w 1 (z) = 1 − ξ(z) T β * with z = Z 1 , . . . , Z n , and let w 0 (z) = −ξ(z) T β * with z = Z 1 , . . . , Z n . Denote w = (w 1 (Z 1 ), . . . , w n (Z n ), w 0 (Z 1 ), . . . , w 0 (Z n )) T . Theorem 3.5. Assume that β * is supported on a subset S ⊂ {1, 2, . . . , m} with |S| = s 2 , and each basis function is bounded on [−B, B]. Further assume the matrix Ξ satisfies the restricted eigenvalue condition (3.11) with parameters (κ, 3), and λ satisfies that λ ≥ 4 Ξw/n ∞ . Then the solution to the optimization problem (3.10) satisfies β − β * 2 ≤ 3 2κ √ s 2 λ. In particular, when taking λ = C log m/n, we have β −β * 2 ≤ C s 2 log m/n with probability no less than 1 − m −1 for some constant C depending only on κ and B. The proof of Theorem 3.5 is given in the supplement. We can also relax the hard sparsity assumption on β and use the oracle inequality version of the proof (Theorem 7.19 in Wainwright (2019)) to prove the finite bound on β. The restricted eigenvalue condition is a standard one in the lasso literature. Here we directly assume the random design matrix Ξ satisfies a restricted eigenvalue condition, which can hold with high probability if the population version Γ satisfies the same condition with slightly different constants. Recall that m ∝ d s 1 K n . Thus the error bound for β is of order √ s 1 s 2 log d/n. 
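The penalized problem (3.10) is an ordinary lasso-type quadratic program and can be solved by proximal gradient descent; a minimal solver sketch of our own (with a separable toy check in place of the basis construction above):

```python
import numpy as np

def lasso_quadratic(Gamma, gamma, lam, n_iter=200):
    """Proximal gradient (ISTA) for  min_b  b' Gamma b - 2 gamma' b + lam*||b||_1,
    i.e. problem (3.10) with Gamma = Xi'Xi/n and gamma the empirical moment."""
    beta = np.zeros_like(gamma)
    L = 2.0 * np.linalg.eigvalsh(Gamma).max() + 1e-12  # Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = beta - 2.0 * (Gamma @ beta - gamma) / L    # gradient step
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return beta

# Toy check: with Gamma = I the objective separates coordinate-wise, and the
# exact solution is the soft-thresholding soft(gamma_j, lam/2).
Gamma = np.eye(5)
gamma = np.array([1.0, 0.2, 0.0, 0.0, 0.0])
beta_hat = lasso_quadratic(Gamma, gamma, lam=0.5)
```

Here the exact minimizer is (0.75, 0, 0, 0, 0): the first coordinate is shrunk by λ/2 = 0.25 and the second, below the threshold, is set to zero, illustrating how the penalty enforces the sparsity assumed on β*.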
When assuming s 1 and s 2 are constants, the dimension of the data is allowed to grow exponentially with the sample size. Numerical Validation In this section, we conduct numerical simulations to illustrate the performance of our method. For brevity, we focus on the more challenging and interesting cases where both X and Y are high dimensional and the dependence signal is sparse. Specifically, we assume only the first elements of X and Y are related: Y 1 = a × g(X 1 ) + ε, where the signal a varies from 0 to 1. Here Y 2 , . . . , Y d 2 , X 1 , X 2 , . . . , X d 1 , and ε all follow N (0, 1) and are independent. The following models are considered:
• M1: Y 1 = a × X 1 + ε;
• M2: Y 1 = a × sin(X 1 ) + ε;
• M3: Y 1 = a × exp(X 1 ) + ε;
• M4: Y 1 = a × {I(X 1 < 0)N (1, 1) + I(X 1 > 0)N (−1, 1)} + ε;
• M5: Y 1 = a × log(4X 2 1 ) + ε;
• M6: Y 1 = a × 5 |X 1 | + ε.
Our simulation models are similar to a variety of models that have been considered in the literature, though mostly in the less challenging case where X and Y are both low dimensional. As mentioned in the previous section, we choose a neural network to train the classifier and implement it with TensorFlow (Abadi et al., 2015). We use three layers of nodes (one input layer, one hidden layer, and one output layer). The number of nodes in the input layer is the dimension of the training data, and the number of nodes in the hidden layer is proportional to the data dimension. The output layer contains only one node since the task is binary classification. We further apply L 1 kernel regularization to the hidden layer, with the regularization parameter varying from 10 −4 to 10 −3 . The dropout rate (Srivastava et al., 2014) for the hidden nodes also varies from 0.1 to 0.3. Details about the algorithm can be found in the supplemental code written in Python. We compare the proposed method with other popular statistical independence tests, including the distance correlation (Székely et al.
(2007), denoted by "DC"), the ranks of distances test (Heller et al. (2012), denoted by "HHG"), and mutual information (denoted by "MI"). These competing tests are implemented with popular R packages: energy, HHG, and IndepTest, respectively. Because the proposed method is a Circularly Permuted Classification based independence test, we name it the CPC test. We first look into the effect of high dimensionality on the independence tests by considering the linear model (M1), where a is set to 1. The performance of the tests as the dimension increases is summarized in Figure 1. The proposed method can detect the sparse dependence even when the dimension increases up to 500. The main reason is that we implement L 1 penalization for the hidden layer, which greatly reduces the noise in the data and preserves the desired sparse dependence signal. For comparison, the HHG and MI methods suffer significantly from high dimensionality, while the distance correlation has surprisingly high power when the dimensions d 1 and d 2 are less than 200, but its power still decreases dramatically as the dimension further increases. Next, we focus on the fixed dimension d 1 = d 2 = 100 to ensure a relatively fair comparison, because otherwise all current methods tend to have inferior power. We report the performance of all six models. While existing tests based on sample splitting tend to cause non-ignorable power loss in practice (Wasserman et al., 2020; Kim and Ramdas, 2020), this phenomenon is much weaker in our test. In the simulations, the newly proposed test outperforms other tests that use the whole dataset. This is because half of the data is "invested" to find the most promising dimension-reduction directions, which improves the power performance under H 1 . Lastly, we compare the computing time of the tests. We still use the linear model in (M1) for simplicity.
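Before turning to timings, note that the six models above are cheap to generate; a numpy sketch of our own (we read the noise term as ε ∼ N(0, 1); the printed form of M6 is ambiguous in the source, so we implement it literally):

```python
import numpy as np

def simulate(model, n, d1, d2, a, rng):
    """Draw (X, Y) under models M1-M6: only X_1 and Y_1 are dependent."""
    X = rng.standard_normal((n, d1))
    Y = rng.standard_normal((n, d2))
    x1 = X[:, 0]
    link = {
        "M1": lambda x: x,
        "M2": np.sin,
        "M3": np.exp,
        "M4": lambda x: np.where(x < 0, rng.normal(1.0, 1.0, n),
                                 rng.normal(-1.0, 1.0, n)),
        "M5": lambda x: np.log(4.0 * x**2),
        # taken literally from the text; the exact form of M6 is ambiguous
        "M6": lambda x: 5.0 * np.abs(x),
    }[model]
    Y[:, 0] = a * link(x1) + rng.standard_normal(n)   # Y_1 = a*g(X_1) + eps
    return X, Y

X, Y = simulate("M1", 500, 100, 100, 1.0, np.random.default_rng(6))
r = np.corrcoef(X[:, 0], Y[:, 0])[0, 1]   # population value 1/sqrt(2) under M1
```

Under M1 with a = 1 the correlation between the two informative coordinates is 1/√2, while all the remaining coordinates are pure noise.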
We restrict the computation memory to 16 GB and compute the average computing time in two settings: 1) the sample size n is fixed at 1000, and the dimensions d 1 = d 2 increase linearly from 100 to 500; 2) the dimensions d 1 = d 2 are fixed at 100, and the sample size n increases from 1000 to 5000. The time costs, measured in minutes, are reported in Tables 1 and 2, respectively. We used permutation tests for distance correlation, HHG, and mutual information to obtain p-values, with the number of permutation replicates set to 200. We observe that the computation time of the proposed test grows almost linearly with the dimension and the sample size. For distance correlation, HHG, and mutual information, the computation costs grow linearly with the dimension but at least quadratically with the sample size. The HHG method exceeds the memory constraint (16GB) when the sample size n is larger than 2000, and we are unable to obtain its corresponding computation times in Table 2. In general, the proposed test is much faster than the other methods for large-scale data sets. Finally, we only used regular CPU cores for the entire simulation; the computing time of our test can be further reduced by using GPU cores. Application to Single Cell Data The analysis of single cell sequencing data has fueled much discovery and innovation over recent years (Kulkarni et al., 2019), and recent advances in multimodal omics promise further progress. In this section, we apply the proposed test to a single cell dataset consisting of measurements of peripheral blood mononuclear cells (PBMCs), publicly available on the 10X Genomics website (10x Genomics, 2021). The data contain measurements of ATAC-seq and RNA-seq in 11,898 cells, and we are interested in testing whether the two modes of measurement are independent. It has been widely assumed that ATAC-seq and RNA-seq are dependent because ATAC-seq identifies open chromatin sites that are available for transcription. For example, Eltager et al.
(2021) proposed to identify cell clusters using the co-measurements of RNA-seq and ATAC-seq from the same cell. In this section, we aim to provide solid statistical evidence for the dependence relationship between the two random vectors. Each record in the dataset corresponds to a single cell. We perform quality control on these data before analysis. The RNA-seq data initially consist of a vector of counts that we preprocess following the Seurat 3.0 pipeline (Stuart and Satija, 2019). We retain cells that have counts from 50 to 10,000 genes to exclude almost-empty and noisy cells. We set the minimum cells per gene to 1 to remove genes that are detected in fewer cells than this threshold. RNA-seq counts are then normalized by dividing each count by the total count for each cell and then scaling up to counts per million. The ATAC-seq data are also derived from counts; however, because these fragments are distributed across the entire genome, the data were pre-processed to identify peaks, which are clusters of fragments inferred to indicate a single region of open chromatin; all of the fragments in the locality of a peak are counted and attributed to the peak location (Yan et al., 2020). We retain cells whose peaks include from 50 to 15,000 counts. The minimum cells per peak is set to 1. Peak counts are normalized by dividing each count by the total count for each cell and then scaling up to counts per million. Overall, 11,188 cells passed the quality control for both RNA-seq and ATAC-seq. The dimension of the RNA-seq data is 29,717 genes, of which only 6.35% of the entries in the data matrix have non-zero values. For the ATAC-seq data, the dimension is 143,887 peaks and only 5.66% of the entries have non-zero values. To achieve fast computation, we store the data in a sparse matrix and run the proposed algorithm and the competing algorithms in Python and R, respectively.
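The count preprocessing described above — filtering cells by the number of detected features and then counts-per-million (CPM) scaling — amounts to the following (a simplified dense-array sketch of our own; the real pipeline runs on sparse matrices through Seurat):

```python
import numpy as np

def qc_and_cpm(counts, min_detected, max_detected):
    """Keep cells whose number of detected features lies in
    [min_detected, max_detected], then scale each kept cell's counts so
    they sum to one million (CPM normalisation)."""
    detected = (counts > 0).sum(axis=1)           # features detected per cell
    keep = (detected >= min_detected) & (detected <= max_detected)
    kept = counts[keep].astype(float)
    totals = kept.sum(axis=1, keepdims=True)      # > 0 for every kept cell
    return 1e6 * kept / totals, keep

rng = np.random.default_rng(5)
counts = rng.poisson(0.2, size=(50, 300))   # toy cells-by-genes count matrix
cpm, keep = qc_and_cpm(counts, min_detected=10, max_detected=250)
```

After normalisation every retained cell's expression vector sums to exactly one million, making cells with different sequencing depths comparable.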
However, the distance correlation, HHG, and mutual information all reported errors because they exceeded the memory constraint of 16GB. This suggests that substantial adaptations may be necessary before these existing tests of independence, which are unsuitable for such high dimensional sparse datasets, can be applied. For the proposed method, we use the neural network with 3 layers, where the hidden layer contains 2000 nodes. We only used CPU cores to train the algorithm, and it takes about 13.89 minutes to run the test. The test statistic is −80.95 and the corresponding p-value is practically 0. This strongly confirms that RNA-seq and ATAC-seq are indeed dependent on each other. Discussion In this paper, we proposed a general framework for independence testing that is powerful in detecting sparse dependence signals in high dimensional data. We borrow strength from the most powerful classification tools, such as neural networks, to boost power when the dependence between X and Y is sparse and weak. The proposed test statistic has a standard normal asymptotic distribution when the sample size is large. In addition to such a distribution-free asymptotic null distribution, the new test has several advantages over existing work in both power performance and computing efficiency. We applied the new test to a single cell data set and confirmed a widely believed hypothesis in the multimodal omics literature. There are several potential directions to follow up, and the idea in this paper can be readily extended. References Moon, H. and Chen, K. (2020). "Interpoint-ranking sign covariance for test of independence." Biometrika, 103(1), 1-14. Schier, A.F. (2020). "Single-cell biology: beyond the sum of its parts." Nature Methods, 17(1), 17-20. Sejdinovic, D., Sriperumbudur, B., Gretton, A., and Fukumizu, K. (2013). "Equivalence of distance-based and RKHS-based statistics in hypothesis testing." The Annals of Statistics, pages 2263-2291. Shi, H., Drton, M., and Han, F. (2020). "Distribution-free consistent independence tests via center-outward ranks and signs." Journal of the American Statistical Association, pages 1-16. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). "Dropout: a simple way to prevent neural networks from overfitting." Journal of Machine Learning Research, 15(1), 1929-1958. A Additional Simulations Additional simulations for M1-M6 when α = 0.01 and α = 0.1 are given in Figures 3 and 4. We also report simulations in which the data are correlated in Figure 5. Consider model M1, where we still assume only the first elements of X and Y are related, and let the signal a vary from 0 to 1. We assume that X follows a multivariate normal distribution with mean vector 0 and covariance matrix Σ, where Σ i,j = ρ |i−j| . Let ρ ∈ {0.25, 0.5, 0.75}. We report the power curves for all the methods when α = 0.05 and 0.1. Lastly, we report simulations in which the data are correlated and heavy tailed in Figure 6. We assume that both X and Y follow a multivariate t-distribution with 2 degrees of freedom. B Other Choices of Test Statistics In the paper, we have focused on using the rank-sum test to distinguish θ(X, Y ) and θ(X , Y ). In fact, one can use other two-sample tests under the same framework. For example, one may use a version of the two-sample t-statistic 1 n 2 i∈I 2 θ(X i , Y i ) − θ(X i , Y i ) and reject for large values of the test statistic. One may also estimate the KL-divergence 1 n 2 i∈I 2 log θ(X i , Y i ) 1 − θ(X i , Y i ) − log θ(X i , Y i ) 1 − θ(X i , Y i ) since θ(X i , Y i )/{1 − θ(X i , Y i )} is an estimate of the likelihood ratio. However, these test statistics would require additional assumptions on the distributions of θ(X, Y ), θ(X , Y ), and are more likely to be sensitive to outliers, which may be undesirable in practice, especially when the quality of θ(·) is not fully guaranteed.
C Numerical Verification of Conditions In this section, we numerically verify the condition µ * − µ = o p (n −1/2 2 ) in a bivariate normal model with correlation ρ/ √ n, in which θ(X, Y ) = 1 √ 2π|Σ| exp − 1 2 (X, Y )Σ −1 (X, Y ) 1 √ 2π|Σ| exp − 1 2 (X, Y )Σ −1 (X, Y ) + 1 √ 2π exp − 1 2 (X, Y )(X, Y ) = D 1 D 1 + D 2 where D 1 = (1 − ρ 2 /n) −1/2 exp −(1 − ρ 2 /n) −1 (X 2 − 2ρXY / √ n + Y 2 )/2 , D 2 = exp −(X 2 + Y 2 )/2 . θ(X, Y ) can be obtained by replacing ρ with its sample estimator ρ. For example, we may use the maximum likelihood estimator ρ = n −1/2 1 n 1 i=1 X i Y i . We numerically evaluate the functions θ(·) and θ(·) with a sequence of sample sizes, and calculate µ * − µ. The results are summarized in Table 3. As we can see, as the sample size increases, n 1/2 2 (µ * − µ) goes to zero. This numerically verifies that the condition is satisfied for large samples. D Technical Lemmas The analysis of the asymptotic distribution relies on the following crucial lemma. Lemma D.1. The rank-sum test statistic satisfies R − µ * = 1 n 2 i∈I 2 1 − F 2 * { θ(X i , Y i )} − µ * + 1 n 2 j∈I 2 F 1 * { θ(X j , Y j )} − µ * + O p (n −1 2 ), R − µ = 1 n 2 i∈I 2 [1 − F 2 {θ(X i , Y i )} − µ] + 1 n 2 j∈I 2 F 1 {θ(X j , Y j )} − µ + O p (n −1 2 ). (D.12) Recall the definition of the likelihood ratio (2.1). Let L 1 = L(X, Y ), L 2 = L(X , Y ), where (X, Y ), (X , Y ) are independent realizations from P X,Y and P X × P Y , respectively. Let F 1 * (x, y) be the density function of P X,Y and F 2 * (x, y) be the density function of P X × P Y . We further define the estimated version of L: L(x, y) = θ(x, y) 1 − θ(x, y) , L 1 = L(X, Y ), L 2 = L(X , Y ). (D.13) Lemma D.2. Let (X, Y ) be an independent realization from P X,Y and (X , Y ), (X , Y ) be another two independent realizations from P X × P Y . Let L 2 = L(X , Y ) and L 2 = L(X , Y ). Then P * { θ(X, Y ) < θ(X , Y )} = P * ( L 1 < L 2 ) = E * {L 2 I( L 2 < L 2 )}, P{θ(X, Y ) < θ(X , Y )} = P(L 1 < L 2 ) = E{L 2 I(L 2 < L 2 )}. Lemma D.3.
Let (X, Y ), (X , Y ) be independent realizations from P X,Y and P X × P Y , and let θ be any estimated classifier of the true classifier θ. Then P * { θ(X, Y ) < θ(X , Y )} − P * {θ(X, Y ) < θ(X , Y )} ≤ 2E * | L 2 − L 2 | . We state two definitions for the readers' convenience. Definition 1. (ε-bracket) Let U ∈ R m be a random vector. Given two functions l(·) and u(·), the bracket [l, u] is the set of all functions f ∈ F with l(U ) ≤ f (U ) ≤ u(U ), for all U ∼ P U . An ε-bracket is a bracket [l, u] with E U |l(U ) − u(U )| < ε. To prove (E.14), it suffices to show that | 1 n 2 i∈I 2 M (X i , Y i , X i , Y i+1 ; β) − E{M (X, Y, X , Y ; β)}| → 0, almost surely. E.1 Proof of Proposition 2.1 Proof We have E|r(W ) − r(W ′ )| = E(r(W ) − r(W ′ ))1I{r(W ) > r(W ′ )} + E(r(W ′ ) − r(W ))1I{r(W ′ ) > r(W )} = 2E{r(W ) − r(W ′ )}1I{r(W ) > r(W ′ )} = 2 [Er(W )1I{r(W ) > r(W ′ )} − 1/2 + 1/2 − Er(W )1I{r(W ) < r(W ′ )}] = 4 [1/2 − Er(W )1I{r(W ) < r(W ′ )}] , where the last equality follows from Er(W ) = 1. For the lower bound, E|r(W ) − r(W ′ )| = E E{|r(W ) − r(W ′ )| | W ′ } ≥ E|r(W ′ ) − 1| = d tv (P, Q) . For the upper bound, E|r(W ) − r(W ′ )| ≤ E|r(W ) − 1| + E|r(W ′ ) − 1| = 2d tv (P, Q) . E.2 Proof of Lemma D.1 Proof It suffices to prove the first equation; the other equation follows for similar reasons. For notational simplicity, define h{(X i , Y i ), (X j , Y j )} = I θ(X i , Y i ) < θ(X j , Y j ) . Then we have R − µ * = 1 n 2 2 i,j∈I 2 h{(X i , Y i ), (X j , Y j )} − µ * . (E.16) The terms with j ∈ {i − 1, i} contribute only 2n 2 terms to the sum, and their total is O(1/n 2 ) after dividing by n 2 2 because h{( X i , Y i ), (X j , Y j )} − µ ∈ [−1, 1]. For the other terms, we consider the marginal projection of the two-sample kernel h. Let (X, Y ), (X , Y ) be independent samples from P X,Y and P X × P Y , respectively. Then, by continuity of F 1 * and F 2 * , E * [h{(X, Y ), (X , Y )} − µ | X, Y ] =1 − F 2 * { θ(X, Y )} − µ * , E * [h{(X, Y ), (X , Y )} − µ | X , Y ] =F 1 * { θ(X , Y )} − µ * .
Define h † {(X i , Y i ), (X j , Y j )} def = h{(X i , Y i ), (X j , Y j )} − µ * − 1 − F 2 * { θ(X i , Y i )} − µ * − F 1 * { θ(X j , Y j )} − µ * , so that h{(X i , Y i ), (X j , Y j )} − µ * =h † {(X i , Y i ), (X j , Y j )} + 1 − F 2 * { θ(X i , Y i )} − µ * + F 1 * { θ(X j , Y j )} − µ * . (E.17) Plugging (E.17) into (E.16) for the pairs j / ∈ {i − 1, i}, each F 2 * { θ(X i , Y i )} and F 1 * { θ(X j , Y j )} appear exactly n 2 − 2 times in the sum. Thus (E.16) and (E.17) imply R − µ * = O(n −1 2 ) + 1 n 2 i∈I 2 1 − F 2 * { θ(X i , Y i )} − µ * + 1 n 2 j∈I 2 F 1 * { θ(X j , Y j )} − µ * + 1 n 2 It suffices to show that 1 n 2 2 j / ∈{i−1,i} h † {(X i , Y i ), (X j , Y j )} = O P (n −1 2 ) . (E.18) Consider E *     1 n 2 2 j / ∈{i−1,i} h † {(X i , Y i ), (X j , Y j )}   2   = 1 n 4 2 (i,j),(i ,j ) E * h † {(X i , Y i ), (X j , Y j )}h † {(X i , Y i ), (X j , Y j )} (E.19) where the sum is over all pairs (i, j) and (i , j ) such that j / ∈ {i − 1, i} and j / ∈ {i − 1, i }. Consider the following two scenarios: (a) i = i or i ∈ {j, j + 1}; (b) j ∈ {i, i − 1} or j ∈ {j − 1, j, j + 1}. Then (E.18) follows by combining the following two facts: (i) If at most one of (a), (b) holds, then E * h † {(X i , Y i ), (X j , Y j )}h † {(X i , Y i ), (X j , Y j )} = 0 because at least one of (X i , Y i ), (X j , Y j ), (X i , Y i ), (X j , Y j +1 ) is independent of the other three and the conditional expectation of h † {(X i , Y i ), (X j , Y j )}h † {(X i , Y i ), (X j , Y j )} E.3 Proof of Lemma D.2 Proof The proof of the two results are identical and it suffices to show the first one. P * { θ(X, Y ) < θ(X , Y )} = P * ( L 1 < L 2 ) follows trivially because L(x, y) is a monotone increasing transforma-tion of θ(x, y). Furthermore, by definition P * ( L 1 < L 2 ) = E * {I( L 1 < L 2 )} = E * { F 2 * (x, y) F 1 * (x, y) F 1 * (x, y) F 2 * (x, y) I( L(x, y) < L(x , y ))} = I{ L(x,y)< L(x ,y )} F 2 * (x, y)L(x, y)F 2 * (x , y )dxdydx dy = E * {L 2 I( L 2 < L 2 )}. 
The second to the last equality follows because L(x, y) = F 1 * (x, y)/F 2 * (x, y). The last equality follows by replacing the notation (x, y) with (x , y ). E.4 Proof of Lemma D.3 Proof By Lemma D.2, using the notation in (D.13), we have P * { θ(X, Y ) < θ(X , Y )} − P{θ(X, Y ) < θ(X , Y )} = P * { L 1 < L 2 } − P{L 1 < L 2 } = E * {L 2 I( L 2 < L 2 )} − E{L 2 I(L 2 < L 2 )} = E * {L 2 I( L 2 < L 2 )} − E * {L 2 I(L 2 < L 2 )} . The last equality follows because L 2 I(L 2 < L 2 ) is independent of the first subset of the data. Now we study E * {L 2 I( L 2 < L 2 )}. Let γ = L 2 − L 2 + L 2 − L 2 . Thus we have E * {L 2 I( L 2 < L 2 )} = E * {L 2 I(L 2 < L 2 )} + E * {L 2 I(L 2 ≤ L 2 ≤ L 2 + γ, γ > 0)} −E * {L 2 I(L 2 + γ ≤ L 2 ≤ L 2 , γ < 0)} = E * {L 2 I(L 2 < L 2 )} + E * {L 2 I(L 2 ≤ L 2 ≤ L 2 + γ, γ > 0)} −E * {L 2 I(L 2 < L 2 ≤ L 2 + γ, γ > 0)}, where the last equality follows by changing the roles of (X , Y ) and (X , Y ) in the expectation. Applying this result we have E * {L 2 I( L 2 < L 2 )} − E * {L 2 I(L 2 < L 2 )} = |E * {(L 2 − L 2 )I(L 2 ≤ L 2 ≤ L 2 + γ, γ > 0)}| ≤ E * {|L 2 − L 2 |I(|L 2 − L 2 | ≤ |γ|)} ≤ E * |γ| ≤ 2E * | L 2 − L 2 | . E.5 Proof of Lemma D.4 By Proposition 2.1, we know that d tv (F 1 , F 2 ) ≤ d tv (P X,Y , P X × P Y ) δ, where " " means "upper bounded up to a constant factor". Thus sup t |F 1 (t) − F 2 (t)| ≤ Cδ , where C is a constant. Thus Var(F 1 {θ(X 2 , Y 3 )}) = Var(F 2 {θ(X 2 , Y 3 )} + Cδ) = 1/12 + O(δ). Similarly, Var(F 2 {θ(X 2 , Y 2 )}) = 1/12 + O(δ). Furthermore, by definition of the total variation distance, we can construct (X 2 ,Ỹ 2 ) ∼ P X × P Y such that P((X 2 ,Ỹ 2 ) = (X 2 , Y 2 )) ≤ d tv (P X,Y , P X × P Y ) ≤ Cδ. Hence |Cov(F 1 {θ(X 1 , Y 2 )}, F 1 {θ(X 2 , Y 3 )})| ≤ Cov(F 1 {θ(X 1 ,Ỹ 2 )}, F 1 {θ(X 2 , Y 3 )}) + O(δ) = O(δ) . With the preparations above, we are ready to calculate the Cov(V 1 + V 2 + V 3 , V 2 ). Note that V i = F 1 {θ(X i , Y i )} − F 2 {θ(X i , Y i )}. 
We have Cov(V 1 + V 2 + V 3 , V 2 ) = Cov(F 1 {θ(X 1 , Y 2 )} − F 2 {θ(X 1 , Y 1 )}, F 1 {θ(X 2 , Y 3 )} − F 2 {θ(X 2 , Y 2 )}) + Cov(F 1 {θ(X 2 , Y 3 )} − F 2 {θ(X 2 , Y 2 )}, F 1 {θ(X 2 , Y 3 )} − F 2 {θ(X 2 , Y 2 )}) + Cov(F 1 {θ(X 3 , Y 4 )} − F 2 {θ(X 3 , Y 3 )}, F 1 {θ(X 2 , Y 3 )} − F 2 {θ(X 2 , Y 2 )}) := B 1 + B 2 + B 3 . We calculate each term separately. B 1 = Cov(F 1 {θ(X 1 , Y 2 )}, F 1 {θ(X 2 , Y 3 )}) − Cov(F 1 {θ(X 1 , Y 2 )}, F 2 {θ(X 2 , Y 2 )}) = O(δ) − Cov(F 1 {θ(X 1 , Y 2 )}, F 2 {θ(X 2 , Y 2 )}). The other two terms in B 1 are equal to 0 because (X 1 , Y 1 ) is independent of (X 2 , Y 3 ) and (X 2 , Y 2 ). B 2 = Cov(F 1 {θ(X 2 , Y 3 )}, F 1 {θ(X 2 , Y 3 )}) + Cov(F 2 {θ(X 2 , Y 2 )}, F 2 {θ(X 2 , Y 2 )}) − Cov(F 1 {θ(X 2 , Y 3 )}, F 2 {θ(X 2 , Y 2 )}) − Cov(F 2 {θ(X 2 , Y 2 )}, F 1 {θ(X 2 , Y 3 )}) = 1/6 − 2 Cov(F 2 {θ(X 2 , Y 2 )}, F 1 {θ(X 2 , Y 3 )}) + O(δ). Now we deal with B 3 . B 3 = Cov(F 1 {θ(X 3 , Y 4 )}, F 1 {θ(X 2 , Y 3 )}) − Cov(F 2 {θ(X 3 , Y 3 )}, F 1 {θ(X 2 , Y 3 )}) = Cov(F 1 {θ(X 1 , Y 2 )}, F 1 {θ(X 2 , Y 3 )}) − Cov(F 2 {θ(X 2 , Y 2 )}, F 1 {θ(X 1 , Y 2 )}) = O(δ) − Cov(F 2 {θ(X 2 , Y 2 )}, F 1 {θ(X 1 , Y 2 )}). The two terms in B 3 are also zero because (X 3 , Y 3 ) and (X 3 , Y 4 ) are independent of (X 2 , Y 2 ). Combining B 1 , B 2 and B 3 , we get σ 2 − σ 2 0 = O(δ). E.6 Proof of Lemma D.5 Proof By the Chebyshev inequality, ∀ > 0, P(|S n | > n ) ≤ 1 (n ) 4 E(S 4 n ). Now we study the upper bound for E(S 4 n ). For simplicity, we call the pair of index (i, j) dependent pair if |i − j| ∈ {0, 1, n − 1}. Note that E(S 4 n ) = E( i,j,k,l Z i Z j Z k Z l ) = i,j,k,l E(Z i Z j Z k Z l ). where the sum is over all 1 ≤ i, j, k, l ≤ n. Denote A * = {(i, j), (i, k), (i, l), (j, k), (j, l), (k, l)}. Consider the following scenarios: (a) A * contains at most one dependent pairs. Then E(Z i Z j Z k Z l ) = 0. (b) A * contains at least two dependent pairs. E(Z i Z j Z k Z l ) may not be 0. 
The number of terms of type (b) is of order $O(n^2)$. Thus there exists a constant $C > 0$ such that $E(S_n^4) \le C n^2$ for all positive integers $n$. It follows that
\[
\sum_{n \ge 1} P(|S_n| > n\varepsilon) \le \sum_{n \ge 1} \frac{C}{n^2 \varepsilon^4} < \infty.
\]
The claimed result follows from the Borel–Cantelli lemma.

E.7 Proof of Lemma D.6

Proof. Let $\varepsilon > 0$ be a fixed number. We begin by choosing finitely many $\varepsilon$-brackets $[l_i, u_i]$ whose union covers $\mathcal{M}$. For simplicity, let $Z_j^*$ denote $(X_j, Y_j, X_j, Y_{j+1})$ and $Z^*$ denote $(X, Y, X', Y')$. Then for every $M \in \mathcal{M}$ there exists a bracket such that
\[
\frac{1}{n}\sum_{j=1}^n M(Z_j^*; \beta) - E M(Z^*; \beta)
\le \Big\{\frac{1}{n}\sum_{j=1}^n u_i(Z_j^*) - E u_i(Z^*)\Big\} + E u_i(Z^*) - E M(Z^*; \beta)
\le \Big\{\frac{1}{n}\sum_{j=1}^n u_i(Z_j^*) - E u_i(Z^*)\Big\} + \varepsilon.
\]
Thus we have
\[
\sup_{M \in \mathcal{M}} \Big[\frac{1}{n}\sum_{j=1}^n M(Z_j^*; \beta) - E M(Z^*; \beta)\Big]
\le \max_i \Big\{\frac{1}{n}\sum_{j=1}^n u_i(Z_j^*) - E u_i(Z^*)\Big\} + \varepsilon,
\]
and by Lemma D.5 the right-hand side converges almost surely to $\varepsilon$. Similarly,
\[
\frac{1}{n}\sum_{j=1}^n M(Z_j^*; \beta) - E M(Z^*; \beta)
\ge \Big\{\frac{1}{n}\sum_{j=1}^n l_i(Z_j^*) - E l_i(Z^*)\Big\} + E l_i(Z^*) - E M(Z^*; \beta)
\ge \Big\{\frac{1}{n}\sum_{j=1}^n l_i(Z_j^*) - E l_i(Z^*)\Big\} - \varepsilon,
\]
so that
\[
\inf_{M \in \mathcal{M}} \Big[\frac{1}{n}\sum_{j=1}^n M(Z_j^*; \beta) - E M(Z^*; \beta)\Big]
\ge \min_i \Big\{\frac{1}{n}\sum_{j=1}^n l_i(Z_j^*) - E l_i(Z^*)\Big\} - \varepsilon,
\]
where the right-hand side converges to $-\varepsilon$ almost surely. Since the supremum of the absolute deviation is the maximum of the supremum above and the negative of the infimum above, it follows that
\[
\limsup_{n\to\infty}\ \sup_{M \in \mathcal{M}} \Big|\frac{1}{n}\sum_{j=1}^n M(Z_j^*; \beta) - E M(Z^*; \beta)\Big| \le \varepsilon
\]
almost surely for every $\varepsilon > 0$. Thus it holds almost surely that
\[
\sup_{M \in \mathcal{M}} \Big|\frac{1}{n_2}\sum_{i \in I_2} M(X_i, Y_i, X_i, Y_{i+1}; \beta) - E\{M(X, Y, X', Y'; \beta)\}\Big| \to 0.
\]

E.8 Proof of Theorem 3.1

Proof. Under $H_0$ we have $F_{1*}(\cdot) = F_{2*}(\cdot)$. By Lemma D.1,
\[
R - \frac12 = \frac{1}{n_2}\sum_{i \in I_2}\Big[\frac12 - F_{2*}\{\hat\theta(X_i, Y_i)\}\Big]
+ \frac{1}{n_2}\sum_{i \in I_2}\Big[F_{2*}\{\hat\theta(X_i, Y_{i+1})\} - \frac12\Big] + O_p(n_2^{-1}). \tag{E.20}
\]
Thus $R - 1/2 = \tilde R + O_p(n_2^{-1})$.
Let $g_1(X) = E[F_{2*}\{\hat\theta(X, Y)\}\,|\,X] - 1/2$, $g_2(Y) = E[F_{2*}\{\hat\theta(X, Y)\}\,|\,Y] - 1/2$, and $\tilde g(X, Y) = F_{2*}\{\hat\theta(X, Y)\} - g_1(X) - g_2(Y) - 1/2$. Then
\[
\tilde R = \frac{1}{n_2}\sum_{i \in I_2}\big[\tilde g(X_i, Y_{i+1}) + g_1(X_i) + g_2(Y_{i+1}) - \tilde g(X_i, Y_i) - g_1(X_i) - g_2(Y_i)\big]
= \frac{1}{n_2}\sum_{i \in I_2}\tilde g(X_i, Y_{i+1}) - \frac{1}{n_2}\sum_{j \in I_2}\tilde g(X_j, Y_j).
\]
Note that $E^*\{\tilde g(X_j, Y_j)\,\tilde g(X_i, Y_{i+1})\} = 0$ for all $i, j$. This is because when $i \neq j$ and $j \neq i+1$ the two terms are independent and $\tilde g(X, Y)$ has mean 0. When $i = j$, it reduces to
\[
E^*\{\tilde g(X, Y)\tilde g(X, Y')\}
= E^*\big[E^*\{\tilde g(X, Y)\tilde g(X, Y')\,|\,X\}\big]
= E^*\big[E^*\{\tilde g(X, Y)|X\}\,E^*\{\tilde g(X, Y')|X\}\big] = 0.
\]
The case $j = i+1$ is similar. Therefore we have
\[
\mathrm{Var}^*(\tilde R) = \frac{2}{n_2}\,\mathrm{Var}^*\{\tilde g(X_1, Y_1)\}.
\]
By assumption, $\tilde g(X, Y) = F_{2*}\{\hat\theta(X, Y)\} - g_1(X) - g_2(Y) - 1/2$ is non-degenerate and $\mathrm{Var}^*\{\tilde g(X_1, Y_1)\} := \sigma_*^2/2 > 0$. Moreover, $F_{1*}\{\hat\theta(X, Y)\}$ and $F_{2*}\{\hat\theta(X, Y)\}$ both follow the uniform distribution and have variance $1/12$. By Lemma D.1, we calculate that
\[
\mathrm{Var}^*(\tilde R) = \frac{1}{6 n_2}
- \frac{2}{n_2}\,\mathrm{Cov}^*\big(F_{2*}\{\hat\theta(X_2, Y_2)\},\ F_{1*}\{\hat\theta(X_1, Y_2)\}\big)
- \frac{2}{n_2}\,\mathrm{Cov}^*\big(F_{2*}\{\hat\theta(X_2, Y_2)\},\ F_{1*}\{\hat\theta(X_2, Y_3)\}\big),
\]
which shows that $\mathrm{Var}^*(\tilde R) = n_2^{-1}\sigma_*^2$. By construction and the null hypothesis, the $2n_2$ random variables $\tilde g(X_i, Y_i)$ and $\tilde g(X_i, Y_{i+1})$ have a 3-regular dependency graph; therefore, by Theorem 2.2 of
\[
\sup_{s \in \mathbb{R}} \Big|P^*\Big(\frac{\sqrt{n_2}\,\tilde R}{\sigma_*} \le s\Big) - \Phi(s)\Big| \le c\big(\sqrt{A_3} + A_4\big). \tag{E.21}
\]
Now we proceed under the assumption that $n_2^{1/6}\sigma_* \to_p \infty$ (note that in this case we are assuming that $(n_1, n_2)$ change simultaneously). Fix an $\varepsilon > 0$; the assumption $n_2^{1/6}\sigma_* \to_p \infty$ guarantees there exist $(n_{1,0}, n_{2,0})$ such that $P(n_2^{1/6}\sigma_* \le \varepsilon^{-2/3}) \le \varepsilon$ whenever $n_1 \ge n_{1,0}$ and $n_2 \ge n_{2,0}$. Let $T = \sqrt{n_2}\,\tilde R/\sigma_*$. Now the $\sqrt{A_3}$ term dominates the right-hand side of (E.21), which can be bounded by $C/(n_2^{1/4}\sigma_*^{3/2})$ for some universal constant $C$. Then
\[
P(T \le s) \le P(T \le s \,|\, n_2^{1/6}\sigma_* \ge \varepsilon^{-2/3})\,P(n_2^{1/6}\sigma_* \ge \varepsilon^{-2/3}) + P(n_2^{1/6}\sigma_* < \varepsilon^{-2/3})
\le (\Phi(s) + C\varepsilon) + \varepsilon = \Phi(s) + (1 + C)\varepsilon.
\]
On the other hand,
\[
P(T \le s) \ge P(T \le s \,|\, n_2^{1/6}\sigma_* \ge \varepsilon^{-2/3})\,P(n_2^{1/6}\sigma_* \ge \varepsilon^{-2/3})
\ge (\Phi(s) - C\varepsilon)(1 - \varepsilon) \ge \Phi(s) - (1 + C)\varepsilon.
\]
This establishes that $T$ converges in distribution to $N(0, 1)$ unconditionally.

Now we analyze $\hat\sigma^2$. Using the Dvoretzky–Kiefer–Wolfowitz inequality we have $\|\hat F - F\|_\infty = O_P(1/\sqrt{n_2})$, so
\[
\hat F_{1*}\{\hat\theta(X_i, Y_i)\}\,\hat F_{1*}\{\hat\theta(X_i, Y_{i+1})\} - F_{1*}\{\hat\theta(X_i, Y_i)\}\,F_{1*}\{\hat\theta(X_i, Y_{i+1})\}
\]
\[
= \hat F_{1*}\{\hat\theta(X_i, Y_i)\}\big[\hat F_{1*}\{\hat\theta(X_i, Y_{i+1})\} - F_{1*}\{\hat\theta(X_i, Y_{i+1})\}\big]
+ F_{1*}\{\hat\theta(X_i, Y_{i+1})\}\big[\hat F_{1*}\{\hat\theta(X_i, Y_i)\} - F_{1*}\{\hat\theta(X_i, Y_i)\}\big]
= O_p(n_2^{-1/2}).
\]
Combining this with the fact that the difference between $\sigma_*^2$ and its empirical version based on the true $F$ is just the difference between a sample mean and a population mean for a random variable uniformly bounded by 1, we have $\hat\sigma^2 - \sigma_*^2 = O_P(1/\sqrt{n_2})$, so that
\[
\frac{\hat\sigma^2}{\sigma_*^2} - 1 = O_P(1)\,\frac{1}{\sqrt{n_2}\,\sigma_*^2} = o_P(1),
\]
because by assumption $n_2^{1/2}\sigma_*^2 \ge n_2^{1/3}\sigma_*^2 \to_p \infty$. Finally,
\[
\frac{\sqrt{n_2}(R - 1/2)}{\hat\sigma}
= \frac{\sigma_*}{\hat\sigma}\cdot\frac{\sqrt{n_2}(R - 1/2)}{\sigma_*}
= \frac{\sigma_*}{\hat\sigma}\cdot\frac{\sqrt{n_2}\,\tilde R}{\sigma_*} + \frac{\sigma_*}{\hat\sigma}\cdot\frac{\sqrt{n_2}(R - 1/2 - \tilde R)}{\sigma_*}
\rightsquigarrow N(0, 1),
\]
because $\hat\sigma/\sigma_* = 1 + o_P(1)$ and $\sqrt{n_2}(R - 1/2 - \tilde R)/\sigma_* = O_P(1/(n_2^{1/2}\sigma_*)) = o_P(1)$.

E.9 Proof of Theorem 3.2

Proof. We first show that $R = \mu_* + O_p(n_2^{-1/2})$. Let
\[
R_\mu = \frac{1}{n_2}\sum_{i \in I_2}\big[1 - F_{2*}\{\hat\theta(X_i, Y_i)\} - \mu_*\big]
+ \frac{1}{n_2}\sum_{j \in I_2}\big[F_{1*}\{\hat\theta(X_j, Y_{j+1})\} - \mu_*\big].
\]
By Lemma D.1, $R - \mu_* = R_\mu + O_p(n_2^{-1})$. Note that $E^* R_\mu = 0$ and, arguing as in the proof of Theorem 3.1, $\mathrm{Var}^*(R_\mu) = \sigma_*^2/n_2 = \mathrm{Cov}^*(\hat V_1 + \hat V_2 + \hat V_3, \hat V_2)/n_2$, where $\hat V_i = F_{1*}\{\hat\theta(X_i, Y_{i+1})\} - F_{2*}\{\hat\theta(X_i, Y_i)\}$. Thus $R_\mu = O_p(n_2^{-1/2})$ and $R = \mu_* + O_p(n_2^{-1/2})$.

Note that the rank-sum comparison is invariant with respect to any monotone transformation of $\hat\theta$: one can replace $\hat\theta$ with $g(\hat\theta)$, where $g(\cdot)$ is a strictly monotone function. By Lemma D.3 and condition (3.5), we have
\[
|\mu_* - P\{\theta(X, Y) < \theta(X', Y')\}| \le 1/2 - \mu - 2c.
\]
Because $P\{\theta(X, Y) < \theta(X', Y')\} < 1/2$ under $H_1$, we have that $\mu_* < 1/2 - 2c$ holds with probability tending to 1.
Thus, as $n_1, n_2 \to \infty$,
\[
\sqrt{n_2}(R - 1/2) = \sqrt{n_2}(\mu_* - 1/2) + O_p(1) \to -\infty
\]
holds in probability. The result follows because $\hat\sigma^2$ is upper bounded by the constant $7/6$.

E.10 Proof of Theorem 3.3

Proof. Because
\[
\frac{\sqrt{n_2}(R - 1/2)}{\hat\sigma}
= \frac{\sqrt{n_2}(R' - \mu)}{\hat\sigma} + \frac{\sqrt{n_2}(R - R')}{\hat\sigma} + \frac{\sqrt{n_2}(\mu - 1/2)}{\hat\sigma}
= \frac{\sqrt{n_2}(R' - \mu)}{\sigma_0}\cdot\frac{\sigma_0}{\hat\sigma}
+ \frac{\sqrt{n_2}(R - R')}{\sigma}\cdot\frac{\sigma}{\hat\sigma}
+ \frac{\sqrt{n_2}(\mu - 1/2)}{\hat\sigma},
\]
we can divide our proof into four steps, one for each term.

Step 1: We begin by showing that $\sigma/\hat\sigma = 1 + o_p(1)$ and $\sigma_0/\hat\sigma = 1 + o_p(1)$. Following a similar argument as in the proof of Theorem 3.1, we can show that $\sigma_*^2/\hat\sigma^2 - 1 = o_p(1)$. By Lemma D.4, $\sigma_0^2/\sigma^2 - 1 = o(1)$. Note that
\[
\frac{\sigma}{\hat\sigma} = \frac{\sigma}{\sigma_*}\times\frac{\sigma_*}{\hat\sigma},
\qquad
\frac{\sigma_0}{\hat\sigma} = \frac{\sigma_0}{\sigma}\times\frac{\sigma}{\sigma_*}\times\frac{\sigma_*}{\hat\sigma}.
\]
Thus it suffices to show that $\sigma^2/\sigma_*^2 - 1 = o_p(1)$. We first have
\[
\Big|E^*\big[F_{2*}\{\hat\theta(X_2, Y_2)\}F_{1*}\{\hat\theta(X_1, Y_2)\}\big] - E\big[F_2\{\theta(X_2, Y_2)\}F_1\{\theta(X_1, Y_2)\}\big]\Big|
\]
\[
= \Big|E^*\big[F_{2*}\{\hat\theta(X_2, Y_2)\}F_{1*}\{\hat\theta(X_1, Y_2)\} - F_{2*}\{\hat\theta(X_2, Y_2)\}F_1\{\theta(X_1, Y_2)\}
+ F_{2*}\{\hat\theta(X_2, Y_2)\}F_1\{\theta(X_1, Y_2)\} - F_2\{\theta(X_2, Y_2)\}F_1\{\theta(X_1, Y_2)\}\big]\Big|
\]
\[
\le E^*\big|F_{1*}\{\hat\theta(X_1, Y_2)\} - F_1\{\theta(X_1, Y_2)\}\big|
+ E^*\big|F_{2*}\{\hat\theta(X_2, Y_2)\} - F_2\{\theta(X_2, Y_2)\}\big| = o_p(1),
\]
where the last equation holds by condition (3.8). By the assumption $\mu_* - \mu = o_p(n_2^{-1/2})$, we have $E[F_1\{\theta(X_i, Y_{i+1})\}] = E^*[F_{1*}\{\hat\theta(X_i, Y_{i+1})\}] + o_p(1)$ and, similarly, $E[F_2\{\theta(X_i, Y_i)\}] = E^*[F_{2*}\{\hat\theta(X_i, Y_i)\}] + o_p(1)$. It follows that
\[
\mathrm{Cov}^*\big(F_{2*}\{\hat\theta(X_2, Y_2)\},\ F_{1*}\{\hat\theta(X_1, Y_2)\}\big) - \mathrm{Cov}\big(F_2\{\theta(X_2, Y_2)\},\ F_1\{\theta(X_1, Y_2)\}\big)
\]
\[
= E^*\big[F_{2*}\{\hat\theta(X_2, Y_2)\}F_{1*}\{\hat\theta(X_1, Y_2)\}\big] - E\big[F_2\{\theta(X_2, Y_2)\}F_1\{\theta(X_1, Y_2)\}\big]
\]
\[
- E^*\big[F_{2*}\{\hat\theta(X_2, Y_2)\}\big]\,E^*\big[F_{1*}\{\hat\theta(X_1, Y_2)\}\big]
+ E\big[F_2\{\theta(X_2, Y_2)\}\big]\,E\big[F_1\{\theta(X_1, Y_2)\}\big] = o_p(1).
\]
Thus we have shown that $\sigma_*^2 = \sigma^2 + o_p(1)$.
Under the condition $\sigma^2 \ge C > 0$, it follows that
\[
\frac{\sigma_*^2}{\sigma^2} - 1 = \frac{\sigma_*^2 - \sigma^2}{\sigma^2} = o_p(1).
\]

Step 2: We then deal with $\{\sqrt{n_2}(R' - \mu)/\sigma_0\}(\sigma_0/\hat\sigma)$. By Lemma D.1, $R' - \mu = R'_\mu + O_p(n_2^{-1})$, where
\[
R'_\mu = \frac{1}{n_2}\sum_{i \in I_2}\big[1 - F_2\{\theta(X_i, Y_i)\} - \mu\big]
+ \frac{1}{n_2}\sum_{j \in I_2}\big[F_1\{\theta(X_j, Y_{j+1})\} - \mu\big].
\]
The dependency graph of the $2n_2$ random variables $\{1 - F_2\{\theta(X_i, Y_i)\} - \mu,\ i \in I_2\} \cup \{F_1\{\theta(X_j, Y_{j+1})\} - \mu,\ j \in I_2\}$ is 3-regular. Note that $\sigma_0^2 \ge c > 0$, thus $n_2^{1/3}\sigma_0^2 \to \infty$. Similarly as in the proof of Theorem 3.1,
\[
\frac{\sqrt{n_2}(R' - \mu)}{\sigma_0} \xrightarrow{d} N(0, 1).
\]
It follows that
\[
\frac{\sqrt{n_2}(R' - \mu)}{\hat\sigma} = \frac{\sqrt{n_2}(R' - \mu)}{\sigma_0}\cdot\frac{\sigma_0}{\hat\sigma} = Z\,(1 + o_p(1)),
\]
where $Z$ converges to a standard normal distribution as $n_1$ and $n_2$ go to infinity.

Step 3: We now deal with $\sqrt{n_2}(R - R')$. By Lemma D.1, we know that
\[
R - \mu_* = \frac{1}{n_2}\sum_{i \in I_2}\big[1 - F_{2*}\{\hat\theta(X_i, Y_i)\} - \mu_*\big]
+ \frac{1}{n_2}\sum_{j \in I_2}\big[F_{1*}\{\hat\theta(X_j, Y_{j+1})\} - \mu_*\big] + O_p(n_2^{-1}),
\]
\[
R' - \mu = \frac{1}{n_2}\sum_{i \in I_2}\big[1 - F_2\{\theta(X_i, Y_i)\} - \mu\big]
+ \frac{1}{n_2}\sum_{j \in I_2}\big[F_1\{\theta(X_j, Y_{j+1})\} - \mu\big] + O_p(n_2^{-1}).
\]
Thus $\sqrt{n_2}(R - R')$ is equal to
\[
\sqrt{n_2}(R - R') = \frac{\sqrt{n_2}}{n_2}\sum_{i \in I_2}\big[F_2\{\theta(X_i, Y_i)\} - F_{2*}\{\hat\theta(X_i, Y_i)\}\big]
+ \frac{\sqrt{n_2}}{n_2}\sum_{i \in I_2}\big[F_{1*}\{\hat\theta(X_i, Y_{i+1})\} - F_1\{\theta(X_i, Y_{i+1})\}\big]
+ \sqrt{n_2}(\mu - \mu_*) + O_p(n_2^{-1/2})
\]
\[
:= A_1 + A_2 + A_3 + O_p(n_2^{-1/2}).
\]
First of all, we know that $A_3 = o_p(1)$ by assumption. To deal with $A_1$, note that the conditional expectation of each term in $A_1$ is
\[
E^*\big[F_2\{\theta(X_i, Y_i)\} - F_{2*}\{\hat\theta(X_i, Y_i)\}\big]
= P^*\{\theta(X', Y') < \theta(X, Y)\} - P^*\{\hat\theta(X', Y') < \hat\theta(X, Y)\}
= (1 - \mu) - (1 - \mu_*) = \mu_* - \mu = o_p(n_2^{-1/2}),
\]
where the last equality follows from the assumption. Thus $E(A_1) = o_p(1)$. Now consider the conditional variance:
\[
\mathrm{Var}^*\big[F_2\{\theta(X_i, Y_i)\} - F_{2*}\{\hat\theta(X_i, Y_i)\}\big]
\le E^*\big[F_2\{\theta(X_i, Y_i)\} - F_{2*}\{\hat\theta(X_i, Y_i)\}\big]^2
\le E^*\big|F_2\{\theta(X_i, Y_i)\} - F_{2*}\{\hat\theta(X_i, Y_i)\}\big| = o_p(1),
\]
where the equation holds by condition (3.8). Thus we have $\mathrm{Var}^*(A_1) = o_p(1)$. It follows that $A_1 = o_p(1)$.
To deal with $A_2$, we need to consider the dependence between samples, because the pairs $(X_i, Y_{i+1})$ are no longer independent. First, it follows similarly that
\[
E^*\big[F_{1*}\{\hat\theta(X_i, Y_{i+1})\} - F_1\{\theta(X_i, Y_{i+1})\}\big] = o_p(n_2^{-1/2}),
\qquad
\mathrm{Var}^*\big[F_{1*}\{\hat\theta(X_i, Y_{i+1})\} - F_1\{\theta(X_i, Y_{i+1})\}\big] = o_p(1).
\]
When $i$ and $j$ form a dependent pair (as defined in Lemma D.5),
\[
\mathrm{Cov}^*\big(F_{1*}\{\hat\theta(X_i, Y_{i+1})\} - F_1\{\theta(X_i, Y_{i+1})\},\ F_{1*}\{\hat\theta(X_j, Y_{j+1})\} - F_1\{\theta(X_j, Y_{j+1})\}\big) = o_p(1),
\]
and when $i$ and $j$ do not form a dependent pair, the covariance is 0. It follows that $\mathrm{Var}(A_2) = o_p(1)$. Because $E(A_2) = o_p(1)$ by the assumption $\mu_* - \mu = o_p(n_2^{-1/2})$, we have $A_2 = o_p(1)$. Thus we have shown that $\sqrt{n_2}(R - R') = o_p(1)$, and it follows that
\[
\frac{\sqrt{n_2}(R - R')}{\hat\sigma} = \frac{\sqrt{n_2}(R - R')}{\sigma}\cdot\frac{\sigma}{\hat\sigma} = o_p(1)(1 + o_p(1)) = o_p(1).
\]

Step 4: By Lemma D.2 and the continuity assumption on $\theta$, we know that
\[
\mu = E\{L_2 I(L_2 < L_2')\}
= \frac12\big[E\{L_2 I(L_2 < L_2')\} + E\{L_2' I(L_2' < L_2)\}\big]
= \frac12\big[1 - E\{L_2' I(L_2 < L_2')\} + E\{L_2 I(L_2 < L_2')\}\big]
\]
\[
= \frac12\big[1 - E\{(L_2' - L_2) I(L_2 < L_2')\}\big]
= \frac12\Big[1 - \frac12 E\{|L_2 - L_2'|\}\Big]
= \frac12 - \frac14 E\{|L_2 - L_2'|\},
\]
where $L_2'$ denotes an independent copy of $L_2$. Finally, with $\delta = E\{|L_2 - L_2'|\}$,
\[
\frac{\sqrt{n_2}(R - 1/2)}{\hat\sigma}
= \frac{\sqrt{n_2}(R' - \mu)}{\hat\sigma}(1 + o_p(1)) + o_p(1)
= Z - \frac{\sqrt{n_2}\,\delta}{4\sigma} + o_p(1),
\]
where $Z$ converges to a standard normal distribution as the sample size goes to infinity.

E.11 Proof of Theorem 3.4

Proof. Because $\hat\beta_n$ is the maximizer of $M_n(\beta)$, we know that $M_n(\hat\beta_n) \ge M_n(\beta_0) - o_p(1)$. By the uniform consistency in Lemma D.6, we have that $M_n(\beta_0) = E\{M(X, Y, X', Y'; \beta_0)\} + o_p(1)$. Thus $M_n(\hat\beta_n) \ge E\{M(X, Y, X', Y'; \beta_0)\} - o_p(1)$. It follows that
\[
E\{M(X, Y, X', Y'; \beta_0)\} - E\{M(X, Y, X', Y'; \hat\beta_n)\}
\le M_n(\hat\beta_n) - E\{M(X, Y, X', Y'; \hat\beta_n)\} + o_p(1) \le o_p(1) \xrightarrow{p} 0,
\]
where the last inequality follows by Lemma D.6. By the identifiability condition, for every $\varepsilon > 0$ there exists an $\eta > 0$ such that $d(\beta, \beta_0) \ge \varepsilon$ implies $E\{M(X, Y, X', Y'; \beta)\} < E\{M(X, Y, X', Y'; \beta_0)\} - \eta$. Thus
\[
P\{d(\hat\beta_n, \beta_0) \ge \varepsilon\}
\le P\big[E\{M(X, Y, X', Y'; \hat\beta_n)\} < E\{M(X, Y, X', Y'; \beta_0)\} - \eta\big] \to 0.
\]
This completes the proof.
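The strong law of large numbers for dependent variables (Lemma D.5), on which the consistency argument above rests, can be checked numerically. The sketch below is ours and not part of the paper: it builds a mean-zero sequence Z_i = U_i U_{(i+1) mod n} from i.i.d. Rademacher variables U_i, so that Z_i and Z_j are independent unless |i - j| is in {0, 1, n - 1}, exactly the dependence structure assumed in the lemma, and verifies that |S_n|/n shrinks as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_dependent_mean(n):
    # U_i i.i.d. Rademacher; Z_i = U_i * U_{(i+1) mod n} has mean zero,
    # bounded fourth moments, and Z_i, Z_j are independent unless
    # |i - j| is in {0, 1, n-1} -- the structure assumed in Lemma D.5.
    u = rng.choice([-1.0, 1.0], size=n)
    z = u * np.roll(u, -1)
    return z.mean()

# |S_n| / n shrinks as n grows, as the Borel-Cantelli argument predicts.
errs = [abs(one_dependent_mean(n)) for n in (10**3, 10**4, 10**5)]
```

For n = 10^5 the realized |S_n|/n is on the order of n^{-1/2}, consistent with the variance bound used in the proof of Lemma D.5.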
E.12 Proof of Theorem 3.5

Proof. Empirically, we solve the regularized optimization problem (3.10). Because $\hat\beta$ is optimal, we have
\[
\hat\beta^T\hat\Gamma\hat\beta - 2\hat\gamma^T\hat\beta + \lambda\|\hat\beta\|_1
\le \beta^{*T}\hat\Gamma\beta^* - 2\hat\gamma^T\beta^* + \lambda\|\beta^*\|_1.
\]
Let $\hat\Delta = \hat\beta - \beta^*$. After some basic algebra, we obtain
\[
\frac{1}{n}\|\Xi\hat\Delta\|_2^2 \le \frac{2}{n}\,w^T\Xi\hat\Delta + \lambda\big(\|\beta^*\|_1 - \|\hat\beta\|_1\big).
\]
Because $\beta^*$ is supported on a subset $S \subset \{1, 2, \ldots, m\}$ with $|S| = s_2$, we can write
\[
\|\beta^*\|_1 - \|\hat\beta\|_1 = \|\beta^*_S\|_1 - \|\beta^*_S + \hat\Delta_S\|_1 - \|\hat\Delta_{S^c}\|_1.
\]
Thus we have
\[
0 \le \frac{1}{n}\|\Xi\hat\Delta\|_2^2
\le \frac{2}{n}\,w^T\Xi\hat\Delta + \lambda\big(\|\beta^*_S\|_1 - \|\beta^*_S + \hat\Delta_S\|_1 - \|\hat\Delta_{S^c}\|_1\big)
\le 2\Big\|\frac{\Xi^T w}{n}\Big\|_\infty \|\hat\Delta\|_1 + \lambda\big(\|\hat\Delta_S\|_1 - \|\hat\Delta_{S^c}\|_1\big)
\le \frac{\lambda}{2}\big(3\|\hat\Delta_S\|_1 - \|\hat\Delta_{S^c}\|_1\big).
\]
The second inequality follows from Hölder's inequality and the triangle inequality. The last inequality follows from the assumption on $\lambda$. Thus we have $\hat\Delta \in \mathcal{C}_3(S)$. By the restricted eigenvalue condition,
\[
\kappa\|\hat\Delta\|_2^2 \le \frac{1}{n}\|\Xi\hat\Delta\|_2^2 \le \frac{3\lambda}{2}\|\hat\Delta_S\|_1 \le \frac{3\lambda}{2}\sqrt{s_2}\,\|\hat\Delta_S\|_2.
\]
Thus we have $\|\hat\Delta\|_2 \le \frac{3}{2\kappa}\sqrt{s_2}\,\lambda$.

It now suffices to derive an upper bound for $\|\Xi^T w/n\|_\infty$. For notational simplicity, let $p_1(Z)$ be the density of $Z$ and $p_0(Z')$ the density of $Z'$. By assumption, we have
\[
\xi(Z)^T\beta^* = g(Z) = \frac{p_1(Z)}{p_1(Z) + p_0(Z)}.
\]
Thus for any function $q(z) \in \mathbb{R}$, we have
\[
E[w_1(Z)q(Z) + w_0(Z')q(Z')]
= \int\Big(1 - \frac{p_1(Z)}{p_1(Z) + p_0(Z)}\Big)q(Z)p_1(Z)\,dZ
+ \int\Big(-\frac{p_1(Z)}{p_1(Z) + p_0(Z)}\Big)q(Z)p_0(Z)\,dZ = 0. \tag{E.22}
\]
Let $\Xi^T w \overset{\mathrm{def}}{=} (\zeta_1, \ldots, \zeta_m)^T$. By (E.22), we have $E\zeta_j = 0$. Moreover, $\zeta_j$ is only a function of $Z = (Z_1, \ldots, Z_n)$, written as $\zeta_j(Z)$. Let $\tilde Z^i = (Z_1, \ldots, Z_{i-1}, Z_i', Z_{i+1}, \ldots, Z_n)$, where $Z_i'$ is an independent copy of $Z_i$. Following a similar reasoning as in the proof of Theorem 3.1, the dependency graph of $(Z_1, \ldots, Z_n, Z_1', \ldots, Z_n')$ is 3-regular. Thus $|\zeta_j(Z) - \zeta_j(\tilde Z^i)| \le 6B$. By McDiarmid's inequality, we have
\[
P(|\zeta_j| \ge nt) \le 2\exp\Big(-\frac{2nt^2}{36B^2}\Big).
\]
Thus
\[
P\Big(\Big\|\frac{\Xi^T w}{n}\Big\|_\infty \ge t\Big) \le \sum_{j=1}^m P(|\zeta_j| \ge nt) \le 2m\exp\Big(-\frac{2nt^2}{36B^2}\Big).
\]
Replacing $t$ with $C_1\sqrt{\log m/n}$, we obtain
\[
P\Big(\Big\|\frac{\Xi^T w}{n}\Big\|_\infty \ge C_1\sqrt{\frac{\log m}{n}}\Big) \le m^{-1},
\]
where $C_1$ is a constant related to $B$. This completes the proof.
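The $\sqrt{s_2\log m/n}$ error scale established in Theorem 3.5 can be illustrated on synthetic data. The minimal coordinate-descent lasso below is our own sketch: the i.i.d. Gaussian design, the function name, and all constants are illustrative simplifications of the dependent setting treated in the theorem.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s2 = 400, 50, 3                      # samples, dimension, sparsity

beta_star = np.zeros(m)                    # sparse truth, support {0, 1, 2}
beta_star[:s2] = 5.0
X = rng.standard_normal((n, m))
y = X @ beta_star + rng.standard_normal(n)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - X b||_2^2 + lam * ||b||_1."""
    n, m = X.shape
    b = np.zeros(m)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ b
    for _ in range(n_iter):
        for j in range(m):
            r = r + X[:, j] * b[j]         # drop coordinate j from the fit
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r = r - X[:, j] * b[j]         # put the updated coordinate back
    return b

lam = 2.0 * np.sqrt(np.log(m) / n)         # the sqrt(log m / n) scale of the theorem
beta_hat = lasso_cd(X, y, lam)
err = np.linalg.norm(beta_hat - beta_star)
```

With n = 400, m = 50 and three strong coefficients, the estimate recovers the support and the l2 error stays well below the coefficient scale, consistent with the lambda * sqrt(s_2) bound in the proof.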
Remark 4. The conditions in Theorem 3.3 are stronger than those required in the fixed-population versions in Theorems 3.1 and 3.2. This is because the local alternative hypothesis can be close to the null as fast as $n_2^{-1/2}$, so a more delicate treatment of the estimation error is needed to establish the asymptotic distribution. In particular, equation (3.8) typically holds when $\hat\theta$ is a consistent estimate of $\theta$ up to a strictly monotone transform, whereas equation (3.5) only requires a constant error accuracy. The most stringent condition is $\mu_* - \mu = o_p(n_2^{-1/2})$.

The condition on the class of objective functions, $N_{[\,]}(\varepsilon, \mathcal{M}, L_1) < \infty$, is relatively standard for classical M-estimators. See Van der Vaart (2000) for several examples.

A detailed proof of Theorem 3.4 is given in Appendix E.11. The key of our proof is a strong law of large numbers for the dependent data, proved by carefully decomposing the variance of the sum of dependent variables and applying the Borel–Cantelli lemma. We are then able to show the uniform consistency of $M_n(\beta)$ in Lemma D.6, which further implies consistency of $\hat\beta_n$ when combined with standard empirical-process and M-estimation results (Van der Vaart, 2000).

For example, (M1) is one of the most popular models and has been considered in Székely et al. (2007); Huo and Székely (2016); Shi et al. (2020); Deb and Sen (2021), among others. Functional transformations similar to (M2) and (M3) have been considered in Zhu et al. (2017) and Zhu et al. (2020b). (M4) is the mixture model and was used in Heller et al. (2012); Biswas et al. (2016); Deb and Sen (2021). (M5) was previously used in Székely et al. (2007), and (M6) has also been considered in Huo and Székely (2016) and Zhu et al. (2020b).

We report the power under (M1)-(M6) in Figure 2, with sample size n = 1000 and significance level α = 0.05. Additional simulations for α = 0.1 and 0.01 are given in the supplementary material. All results are averaged over 1000 repetitions.
As expected, the proposed test has increasing power as the signal becomes stronger, with correct type-I error under the null hypothesis and high power when the signal exceeds a certain threshold. It performs particularly well for (M5), where all other tests have very low power even when the signal increases. The distance correlation also has considerable power, especially when the signal is strong and the dependence relationship is linear. The ranks of distance test and mutual information do not suit the high-dimensional setting and have very low power in almost all settings.

[Figure 1: The power versus dimension of the proposed test ("CPC") compared with distance correlation ("DC"), ranks of distance test ("HHG"), and mutual information ("MI") when n = 1000, α = 0.05.]

[Figure 2: The increasing power versus the signal a of the proposed test ("CPC") compared with distance correlation ("DC"), ranks of distance test ("HHG"), and mutual information ("MI") when n = 1000, d1 = d2 = 100, α = 0.05.]

We also report the average computing time for one run of each test based on 1000 repetitions. Two settings are considered: increasing dimension d1 = d2 from 100 to 500 with n = 1000 (Table 1), and increasing sample size n from 1000 to 5000 with d1 = d2 = 100 (Table 2).

The proposed framework can also be applied in other related testing problems, including the test of mutual independence and the test of conditional independence. By constructing two samples that have the same distribution under H0 but different distributions under H1, one can always transform those tests into a classification problem. Another interesting and unsolved problem is how to avoid the power loss caused by data splitting. One may switch the roles of I1 and I2 and obtain another test statistic and p-value, which is dependent on the original one. Another choice is to perform multiple sample splits and obtain a sequence of test statistics and p-values, which are statistically dependent. Existing methods such as the Cauchy combination test (Liu and Xie, 2020) and averaging p-values (Vovk and Wang, 2020) could be applied to combine the results under certain restrictive conditions.
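As a concrete illustration of the combination rules just mentioned, the Cauchy combination test of Liu and Xie (2020) aggregates possibly dependent p-values through a tangent transform; the helper below is our own sketch of the published formula, not code from the paper:

```python
import numpy as np

def cauchy_combine(pvals, weights=None):
    """Cauchy combination of p-values (Liu and Xie, 2020):
    T = sum_i w_i * tan{(1/2 - p_i) * pi}; combined p-value = 1 - CauchyCDF(T)."""
    p = np.asarray(pvals, dtype=float)
    w = (np.full(p.size, 1.0 / p.size) if weights is None
         else np.asarray(weights, dtype=float))
    t = np.sum(w * np.tan((0.5 - p) * np.pi))
    # Standard Cauchy survival function: 1/2 - arctan(t)/pi.
    return 0.5 - np.arctan(t) / np.pi
```

A single very small p-value dominates the average of the transformed values, which is what makes the rule attractive under sparse alternatives and heavy dependence.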
It will be very rewarding to study how to efficiently combine those dependent statistics and p-values in high dimensional independence testing problems.Eltager, M., Abdelaal, T., Mahfouz, A., and Reinders, M.J. (2021). "scmoc: Single-cell multiomics clustering." bioRxiv. Fan, J., Li, R., Zhang, C.H., and Zou, H. (2020). Statistical foundations of data science. CRC Press. Gao, L., Fan, Y., Lv, J., and Shao, Q.M. (2021). "Asymptotic distributions of high-dimensional distance correlation inference." The Annals of Statistics, 49(4), 1999-2020. Gretton, A., Bousquet, O., Smola, A., and Schölkopf, B. (2005). "Measuring statistical dependence with hilbert-schmidt norms." In "International Conference on Algorithmic Learning Theory," pages 63-77. Springer. Gretton, A., Fukumizu, K., Teo, C.H., Song, L., Schölkopf, B., and Smola, A.J. (2007). "A kernel statistical test of independence." In "Advances in Neural Information Processing Systems," pages 585-592. Heller, R., Heller, Y., and Gorfine, M. (2012). "A consistent multivariate test of association based on ranks of distances." Biometrika, 100(2), 503-510.Hu, X. and Lei, J. (2020). "A distribution-free test of covariate shift using conformal prediction." arXiv preprint arXiv:2010.07147.Huo, X. andSzékely, G.J. (2016). "Fast computing for distance covariance." Technometrics, 58(4), 435-447.Imbens, G.W. and Rubin, D.B. (2015). Causal inference in statistics, social, and biomedical sciences. Cambridge University Press.Jirak, M. (2016). "Berry-esseen theorems under weak dependence." The Annals of Probability, 44(3), 2024-2063.Kim, I., Lee, A.B., Lei, J., et al. (2019). "Global and local two-sample tests via regression."Electronic Journal of Statistics, 13(2), 5253-5305.Kim, I. and Ramdas, A. (2020)."Dimension-agnostic inference." arXiv preprint arXiv:2011.05068.Kim, I., Ramdas, A., Singh, A., and Wasserman, L. (2021). "Classification accuracy as a proxy for two-sample testing." The Annals of Statistics, 49(1), 411-434. 
Kong, E., Xia, Y., and Zhong, W. (2019). "Composite coefficient of determination and its application in ultrahigh dimensional variable screening." Journal of the American Statistical Association.

Kulkarni, A., Anderson, A.G., Merullo, D.P., and Konopka, G. (2019). "Beyond bulk: a review of single cell transcriptomics methodologies and applications." Current Opinion in Biotechnology, 58, 129-136.

Liu, Y. and Xie, J. (2020). "Cauchy combination test: a powerful test with analytic p-value calculation under arbitrary dependency structures." Journal of the American Statistical Association, 115(529), 393-402.

Maathuis, M., Drton, M., Lauritzen, S., and Wainwright, M. (2018). Handbook of Graphical Models. CRC Press.

Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). "Dropout: a simple way to prevent neural networks from overfitting." The Journal of Machine Learning Research, 15(1), 1929-1958.

Stuart, T. and Satija, R. (2019). "Integrative single-cell analysis." Nature Reviews Genetics, 20(5), 257-272.

Székely, G.J. and Rizzo, M.L. (2013). "The distance correlation t-test of independence in high dimension." Journal of Multivariate Analysis, 117, 193-213.

Székely, G.J., Rizzo, M.L., and Bakirov, N.K. (2007). "Measuring and testing dependence by correlation of distances." The Annals of Statistics, 35(6), 2769-2794.

Van der Vaart, A.W. (2000). Asymptotic Statistics. Cambridge University Press.

Vovk, V. and Wang, R. (2020). "Combining p-values via averaging." Biometrika, 107(4), 791-808.

Wainwright, M.J. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press.

Wasserman, L., Ramdas, A., and Balakrishnan, S. (2020). "Universal inference." Proceedings of the National Academy of Sciences, 117(29), 16880-16890.

Yan, F., Powell, D.R., Curtis, D.J., and Wong, N.C. (2020). "From reads to insight: a hitchhiker's guide to atac-seq data analysis." Genome Biology, 21(1), 1-16.

Yao, S., Zhang, X., and Shao, X. (2018). "Testing mutual independence in high dimension via distance covariance."
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(3), 455-480.

Zhu, C., Preissl, S., and Ren, B. (2020a). "Single-cell multimodal omics: the power of many." Nature Methods, 17(1), 11-14.

Zhu, C., Zhang, X., Yao, S., and Shao, X. (2020b). "Distance-based and RKHS-based dependence metrics in high dimension." The Annals of Statistics, 48(6), 3366-3394.

Zhu, L., Xu, K., Li, R., and Zhong, W. (2017). "Projection correlation between two random vectors." Biometrika, 104(4), 829-843.

[Figures 3-6: additional power curves. The multivariate t-distribution has location parameter 0 and scale matrix Σ, where Σij = ρ^{|i-j|}, with ρ ∈ {0.25, 0.5, 0.75}. We report the power curves for all the methods when α = 0.05 and 0.1, still working with the linear model Y1 = aX1 + ε with Gaussian ε. Figure 3: the power versus the signal of the competing tests when n = 1000, d1 = d2 = 100, α = 0.01. Figure 4: the power versus the signal of the competing tests when n = 1000, d1 = d2 = 100. Figure 5: the power versus the signal of the competing tests when the data is correlated, n = 1000, d1 = d2 = 500. Figure 6: the power versus the signal of the competing tests when the data has a heavy-tailed distribution and is correlated, n = 1000, d1 = d2 = 200.]

Consider the example where $(X, Y) \sim N(0, \Sigma)$ with correlation $\rho > 0$ and $(X', Y') \sim N(0, I_2)$, where $I_2$ is the $2 \times 2$ identity matrix. Then, using Quadratic Discriminant Analysis (QDA), we have

[Table 3: Numerical validation of the condition $\mu_* - \mu = o_p(n_2^{-1/2})$.]

Lemma D.4. Assume the distance between the null and the local alternative, as defined in (3.6), is a sequence $\delta = o(1)$. Then $\sigma^2 - \sigma_0^2 = O(\delta)$.

Lemma D.5. (Strong law of large numbers for dependent variables) Let $(Z_i : i \ge 1)$ be a sequence of random variables with mean 0 and $\sup_{i \ge 1} E(Z_i^4) < \infty$. Assume that $Z_i$ and $Z_j$ are independent whenever $|i - j| \notin \{0, 1, n-1\}$. Let $S_n = \sum_{i=1}^n Z_i$. Then $\lim_{n\to\infty} S_n/n = 0$ almost surely.

Definition 2.
(Bracketing number) When $U \sim P_U$, the bracketing number $N_{[\,]}(\varepsilon, \mathcal{F}, P_U)$ is the minimum number of $\varepsilon$-brackets needed to cover $\mathcal{F}$.

Lemma D.6. Let $\mathcal{M} = \{M(X, Y, X', Y'; \beta),\ \beta \in B\}$ be a class of measurable functions such that $N_{[\,]}(\varepsilon, \mathcal{M}, P) < \infty$ for every $\varepsilon > 0$, and $E[\{M(X, Y, X', Y'; \beta)\}^4] < \infty$. Then
\[
\sup_{M \in \mathcal{M}} \Big|\frac{1}{n_2}\sum_{i \in I_2} M(X_i, Y_i, X_i, Y_{i+1}; \beta) - E\{M(X, Y, X', Y'; \beta)\}\Big| \to 0
\]
almost surely.

\[
d_{tv}(P, Q) \le E|r(W) - r(W')| \le 2\,d_{tv}(P, Q). \tag{E.15}
\]
Moreover, $E r(W) I\{r(W) > r(W')\} = E r(W') I\{r(W') > r(W)\}$ and $E r(W) I\{r(W) > r(W')\} + E r(W) I\{r(W) < r(W')\} = E r(W) = 1$ by the construction of $r(W)$ and its continuity. To prove (E.15), observe that $d_{tv}(P, Q) = E|r(W) - 1|$. For the lower bound we have, by Jensen's inequality,
\[
\sum_{j \notin \{i-1,\, i\}} h^\dagger\{(X_i, Y_i), (X_j, Y_j)\}.
\]

Table 1: The average computing time measured in minutes of the proposed test and distance correlation, rank of distance test, and mutual information when d1 increases from 100 to 500; d2 = d1, n = 1000.

d1, d2   CPC    DC     HHG    MI
100      0.025  0.009  0.105  0.425
200      0.052  0.011  0.107  0.765
300      0.086  0.015  0.108  1.104
400      0.138  0.017  0.113  1.448
500      0.144  0.020  0.122  1.837

Table 2: The average computing time measured in minutes of the proposed test and distance correlation, rank of distance test, and mutual information when n increases from 1000 to 5000; d1 = d2 = 100.

n      CPC    DC     HHG    MI
1000   0.023  0.009  0.105  0.417
2000   0.040  0.046  0.470  1.785
3000   0.055  0.099  -      3.676
4000   0.086  0.150  -      6.499
5000   0.086  0.201  -      9.849

Genomics (2021). "PBMC from a healthy donor - granulocytes removed through cell sorting (10k)." https://cf.10xgenomics.com/samples/cell-arc/2.0.0/pbmc_granulocyte_sorted_10k/pbmc_granulocyte_sorted_10k_web_summary.html
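The bivariate normal example above can be made concrete: for (X, Y) jointly normal with correlation ρ, the log-likelihood ratio log{p_XY(x, y)/(p_X(x) p_Y(y))} has a closed form, and scoring dependent pairs (X_i, Y_i) against product-like pairs (X_i, Y_{i+1}) separates the two populations, which is the source of power in Theorem 3.2. The following numerical sketch is ours, with illustrative choices of ρ and n:

```python
import numpy as np

rng = np.random.default_rng(2)
rho, n = 0.8, 1000

def log_ratio(x, y, rho):
    # Oracle log of L(x, y) = p_XY(x, y) / {p_X(x) p_Y(y)} for a bivariate
    # normal with unit variances and correlation rho.
    return (-0.5 * np.log(1 - rho ** 2)
            - (rho ** 2 * x ** 2 + rho ** 2 * y ** 2 - 2 * rho * x * y)
            / (2 * (1 - rho ** 2)))

z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
x, y = z[:, 0], z[:, 1]
theta_joint = log_ratio(x, y, rho)              # scores of dependent pairs
theta_prod = log_ratio(x, np.roll(y, -1), rho)  # scores of shuffled pairs

# Under dependence the joint pairs score higher, so the rank-sum quantity
# P{theta(X', Y') < theta(X, Y)} exceeds 1/2.
frac = np.mean(theta_prod[:, None] < theta_joint[None, :])
```

With ρ = 0.8 the estimated probability is well above 1/2, so the rank-sum statistic drifts away from its null value, mirroring the divergence argument in the proof of Theorem 3.2.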
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X. (2015). "TensorFlow: Large-scale machine learning on heterogeneous systems." Software available from tensorflow.org.

Baldi, P. and Rinott, Y. (1989). "On normal approximations of distributions in terms of dependency graphs." The Annals of Probability, pages 1646-1650.

Berrett, T.B. and Samworth, R.J. (2019). "Nonparametric independence testing via mutual information." Biometrika, 106(3), 547-566.

Biswas, M., Sarkar, S., and Ghosh, A.K. (2016). "On some exact distribution-free tests of independence between two random vectors of arbitrary dimensions." Journal of Statistical Planning and Inference, 175, 78-86.

Cai, Z., Lei, J., and Roeder, K. (2022a). "Model-free prediction test with application to genomics data." Proceedings of the National Academy of Sciences, 119(34), e2205518119. doi:10.1073/pnas.2205518119.

Cai, Z., Li, R., and Zhang, Y. (2022b). "A distribution free conditional independence test with applications to causal discovery." Journal of Machine Learning Research, 23(85), 1-41.

Chatterjee, S. (2021). "A new coefficient of correlation." Journal of the American Statistical Association, 116(536), 2009-2022.

Cui, H. and Zhong, W. (2019). "A distribution-free test of independence based on mean variance index." Computational Statistics & Data Analysis, 139, 117-133.

Deb, N. and Sen, B. (2021). "Multivariate rank-based distribution-free nonparametric testing using measure transportation." Journal of the American Statistical Association, pages 1-45.
"Cauchy combination test: a powerful test with analytic p-value calculation under arbitrary dependency structures." Journal of the American Statistical As- sociation, 115(529), 393-402. Handbook of graphical models. M Maathuis, M Drton, S Lauritzen, Wainwright , M , CRC PressMaathuis, M., Drton, M., Lauritzen, S., and Wainwright, M. (2018). Handbook of graphical models. CRC Press. Interpoint-ranking sign covariance for test of independence. H Moon, K Chen, Biometrika. 1031Moon, H. and Chen, K. (2020). "Interpoint-ranking sign covariance for test of independence." Biometrika, 103(1), 1-14. Single-cell biology: beyond the sum of its parts. A F Schier, Nature Methods. 171Schier, A.F. (2020). "Single-cell biology: beyond the sum of its parts." Nature Methods, 17(1), 17-20. Equivalence of distance-based and rkhs-based statistics in hypothesis testing. D Sejdinovic, B Sriperumbudur, A Gretton, K Fukumizu, The Annals of Statistics. Sejdinovic, D., Sriperumbudur, B., Gretton, A., and Fukumizu, K. (2013). "Equivalence of distance-based and rkhs-based statistics in hypothesis testing." The Annals of Statistics, pages 2263-2291. Distribution-free consistent independence tests via center-outward ranks and signs. H Shi, M Drton, F Han, Journal of the American Statistical Association. Shi, H., Drton, M., and Han, F. (2020). "Distribution-free consistent independence tests via center-outward ranks and signs." Journal of the American Statistical Association, pages 1-16. Dropout: a simple way to prevent neural networks from overfitting. N Srivastava, G Hinton, A Krizhevsky, I Sutskever, R Salakhutdinov, The Journal of Machine Learning Research. 151Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). "Dropout: a simple way to prevent neural networks from overfitting." The Journal of Machine Learning Research, 15(1), 1929-1958. Integrative single-cell analysis. T Stuart, R Satija, Nature Reviews Genetics. 205Stuart, T. and Satija, R. (2019). 
"Integrative single-cell analysis." Nature Reviews Genetics, 20(5), 257-272. The distance correlation t-test of independence in high dimension. G J Székely, M L Rizzo, Journal of Multivariate Analysis. 117Székely, G.J. and Rizzo, M.L. (2013). "The distance correlation t-test of independence in high dimension." Journal of Multivariate Analysis, 117, 193-213. Measuring and testing dependence by correlation of distances. G J Székely, M L Rizzo, N K Bakirov, The Annals of Statistics. 356Székely, G.J., Rizzo, M.L., and Bakirov, N.K. (2007). "Measuring and testing dependence by correlation of distances." The Annals of Statistics, 35(6), 2769-2794. A W Van Der Vaart, Asymptotic statistics. Cambridge University pressVan der Vaart, A.W. (2000). Asymptotic statistics. Cambridge University press. Combining p-values via averaging. V Vovk, R Wang, Biometrika. 1074Vovk, V. and Wang, R. (2020). "Combining p-values via averaging." Biometrika, 107(4), 791-808. High-dimensional statistics: A non-asymptotic viewpoint. M J Wainwright, Cambridge University Press48Wainwright, M.J. (2019). High-dimensional statistics: A non-asymptotic viewpoint, volume 48. Cambridge University Press. Universal inference. L Wasserman, A Ramdas, S Balakrishnan, Proceedings of the National Academy of Sciences. 11729Wasserman, L., Ramdas, A., and Balakrishnan, S. (2020). "Universal inference." Proceedings of the National Academy of Sciences, 117(29), 16880-16890. From reads to insight: a hitchhiker's guide to atac-seq data analysis. F Yan, D R Powell, D J Curtis, N C Wong, Genome Biology. 211Yan, F., Powell, D.R., Curtis, D.J., and Wong, N.C. (2020). "From reads to insight: a hitch- hiker's guide to atac-seq data analysis." Genome Biology, 21(1), 1-16. Testing mutual independence in high dimension via distance covariance. S Yao, X Zhang, X Shao, Journal of the Royal Statistical Society: Series B (Statistical Methodology). 803Yao, S., Zhang, X., and Shao, X. (2018). 
"Testing mutual independence in high dimension via distance covariance." Journal of the Royal Statistical Society: Series B (Statistical Methodol- ogy), 80(3), 455-480. Single-cell multimodal omics: the power of many. C Zhu, S Preissl, B Ren, Nature Methods. 171Zhu, C., Preissl, S., and Ren, B. (2020a). "Single-cell multimodal omics: the power of many." Nature Methods, 17(1), 11-14. Distance-based and rkhs-based dependence metrics in high dimension. C Zhu, X Zhang, S Yao, X Shao, The Annals of Statistics. 486Zhu, C., Zhang, X., Yao, S., and Shao, X. (2020b). "Distance-based and rkhs-based dependence metrics in high dimension." The Annals of Statistics, 48(6), 3366-3394. Projection correlation between two random vectors. L Zhu, K Xu, R Li, W Zhong, Biometrika. 1044Zhu, L., Xu, K., Li, R., and Zhong, W. (2017). "Projection correlation between two random vectors." Biometrika, 104(4), 829-843.
[]